
Google researchers find novel way of turning a single photo of a human into AI-generated video good enough to make you think 'this might go badly'

Google researchers have found a way to create video versions of humans generated from just a single still image. This enables the system to do things like generate a video of someone speaking from input text, or change a person's mouth movements to match an audio track in a different language from the one originally spoken. It also feels like a slippery slope into identity theft and misinformation, but what's AI without a hint of frightening consequences?

The tech itself is rather interesting: it's called VLOGGER by the Google researchers who published the paper. In it, the authors (Enric Corona et al.) offer up various examples of how the AI takes a single input image of a human (in this case, I believe, mostly AI-generated humans) and, given an audio file, produces both facial and bodily movements to match.

That's just one of a few potential use cases for the tech. Another is editing video, specifically a video subject's facial expressions. In one example, the researchers show various versions of the same clip: one has a presenter speaking to camera, another has the presenter's mouth held closed in an eerie fashion, and another has their eyes closed. My favourite is the version in which the AI holds the presenter's eyes artificially open, unblinking. Huge serial killer vibes. Thanks, AI.

The most useful feature, in my opinion, is the ability to swap a video's audio track for a dubbed foreign-language version and have the AI lip-sync the person's facial movements to the new audio.

It works in two stages: "1) a stochastic human-to-3d-motion diffusion model, and 2) a novel diffusion based architecture that augments text-to-image models with both temporal and spatial controls. This approach enables the generation of high quality videos of variable length, that are easily controllable through high-level representations of human faces and bodies," the GitHub page says.
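
To make that two-stage description a little more concrete, here is a minimal, hypothetical sketch of how such a pipeline could be wired together in Python. Every class, method, and parameter name below is an assumption for illustration only, not VLOGGER's actual code or API, and the models are stubbed out with placeholder arrays.

```python
# Hypothetical sketch of a two-stage photo+audio -> video pipeline.
# Names and shapes are illustrative assumptions, not VLOGGER's real API.
import numpy as np


class MotionDiffusion:
    """Stage 1 (assumed): stochastic audio-to-3D-motion diffusion model.

    Samples per-frame 3D face and body pose parameters conditioned on a
    driving audio waveform.
    """

    def sample(self, audio: np.ndarray, num_frames: int) -> np.ndarray:
        # Placeholder: a real model would iteratively denoise random latents
        # conditioned on audio features. Here we just return zeros with an
        # assumed shape of (num_frames, 128) pose/expression parameters.
        return np.zeros((num_frames, 128))


class TemporalImageDiffusion:
    """Stage 2 (assumed): a text-to-image diffusion model augmented with
    temporal and spatial controls, conditioned on the reference photo and
    the stage-1 motion to render the final frames.
    """

    def render(self, reference_image: np.ndarray, motion: np.ndarray) -> np.ndarray:
        num_frames = motion.shape[0]
        # Placeholder: a real model would repaint the reference image frame
        # by frame according to the motion controls. Here we simply tile it
        # into a (frames, height, width, 3) array.
        return np.repeat(reference_image[np.newaxis], num_frames, axis=0)


def photo_plus_audio_to_video(reference_image: np.ndarray,
                              audio: np.ndarray,
                              fps: int = 25,
                              seconds: float = 4.0) -> np.ndarray:
    """End-to-end sketch: one still photo plus an audio track -> video frames."""
    num_frames = int(fps * seconds)
    motion = MotionDiffusion().sample(audio, num_frames)               # stage 1
    frames = TemporalImageDiffusion().render(reference_image, motion)  # stage 2
    return frames


if __name__ == "__main__":
    photo = np.zeros((256, 256, 3), dtype=np.uint8)   # stand-in reference image
    waveform = np.zeros(16000 * 4, dtype=np.float32)  # stand-in 4 s of audio
    video = photo_plus_audio_to_video(photo, waveform)
    print(video.shape)  # (100, 256, 256, 3)
```

The point of the split is that the first stage only has to decide how the person should move given the audio, while the second stage only has to render those movements onto the reference photo, frame by frame, with temporal controls keeping the frames consistent.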

(The GitHub page also embeds demo videos under the heading "Generation of Moving and Talking People", including a talking-face example.)

Read more on pcgamer.com