
Google demos new Lumiere text-to-video engine. Results are a huge leap forward from previous engines.

Google’s new video-generation AI model Lumiere uses a new diffusion model called Space-Time U-Net, or STUNet, that figures out where things are in a video (space) and how they simultaneously move and change (time). Ars Technica reports that this method lets Lumiere create the video in one process instead of stitching smaller still frames together.

Lumiere starts by creating a base frame from the prompt. Then it uses the STUNet framework to approximate where objects within that frame will move, generating more frames that flow into one another for the appearance of seamless motion. Lumiere also generates 80 frames, compared to 25 from Stable Video Diffusion.
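The distinguishing idea reported above is that STUNet treats the clip as a single space-time volume, downsampling it along the time axis as well as the spatial axes, rather than handling frames independently. As a loose analogy (a toy sketch, not Google's code or architecture), joint space-time pooling of a `(T, H, W)` array looks like this:

```python
import numpy as np

def spacetime_downsample(video, t=2, s=2):
    """Average-pool a video tensor of shape (T, H, W) jointly over time
    and space. This mimics, very roughly, how a space-time U-Net shrinks
    the whole clip as one volume instead of pooling each frame alone.
    Toy illustration only; the real STUNet uses learned convolutions."""
    T, H, W = video.shape
    # Crop so each axis divides evenly into pooling blocks.
    v = video[: T - T % t, : H - H % s, : W - W % s]
    # Group into (t, s, s) blocks and average each block.
    return v.reshape(T // t, t, H // s, s, W // s, s).mean(axis=(1, 3, 5))

# A tiny 8-frame, 16x16 "video" of random pixels.
video = np.random.rand(8, 16, 16)
low = spacetime_downsample(video)
print(low.shape)  # (4, 8, 8): the time axis shrinks along with space
```

The point of the sketch is the shape change: a frame-by-frame model would keep all 8 frames and only shrink height and width, while a space-time operation halves the frame count too, letting later layers reason about motion directly.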

Beyond text-to-video generation, Lumiere also supports image-to-video generation; stylized generation, which lets users make videos in a specific style; cinemagraphs, which animate only a portion of a video; and inpainting, which masks out an area of the video to change its color or pattern.

Google’s Lumiere paper, though, noted that “there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases to ensure a safe and fair use.” The paper’s authors didn’t explain how this can be achieved.

Synopsis excerpted from The Verge article.

JudiDench ,

Will never be usable by the public

FaceDeer ,
@FaceDeer@kbin.social avatar

It's still driving the state of the art forward, which will result in models that will be used by the public.

peopleproblems ,

Right? Once the model and training methods are published in some journal, the only barrier becomes the hardware to use it.

Which, given Stable Diffusion and the like, is really a matter of VRAM. Have enough of that, and this should be possible.

FaceDeer ,
@FaceDeer@kbin.social avatar

Indeed. Often the hardest part of an invention is the discovery that a thing is actually possible. Even if nobody knows how it was done they can now justify throwing resources into figuring it out and know what results to keep an eye out for.

WHYAREWEALLCAPS ,

It's almost like, for most of history, cutting-edge tech tended to be unusable by the public until it matured enough to get businesses interested. Then they'd invest in a usability layer that was unimportant to the cutting-edge research.

AtmaJnana ,

Having used diffusion a bit for static images, I can only look forward to the eldritch horrors it will inevitably create.
