Abstract

We present Stable Video Diffusion — a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into generative video models by inserting temporal layers and finetuning them on small, high-quality video datasets. However, training methods in the literature vary widely, and the field has yet to agree on a unified strategy for curating video data. In this paper, we identify and evaluate three different stages for successful training of video LDMs: text-to-image pretraining, video pretraining, and high-quality video finetuning. Furthermore, we demonstrate the necessity of a well-curated pretraining dataset for generating high-quality videos and present a systematic curation process to train a strong base model, including captioning and filtering strategies. We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation. We also show that our base model provides a powerful motion representation for downstream tasks such as image-to-video generation and adaptability to camera motion-specific LoRA modules. Finally, we demonstrate that our model provides a strong multi-view 3D prior and can serve as a base to finetune a multi-view diffusion model that jointly generates multiple views of objects in a feedforward fashion, outperforming image-based methods at a fraction of their compute budget. We release code and model weights at https://github.com/Stability-AI/generative-models

Blog: https://stability.ai/news/stable-video-diffusion-open-ai-video-model

Paper: https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf

Code: https://github.com/Stability-AI/generative-models

Waitlist: https://stability.ai/contact

Model: https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt/tree/main

  • Scew@lemmy.world · 1 year ago

    I’m just a lowly image-generation hobbyist able to run some decent models on my 2060 Super, lol. I had the highest tier of Colab for a while, which was nice, but I didn’t feel like learning how to create Jupyter notebooks, so I was at the mercy of people keeping their dependencies up to date and would more often sit down to a broken notebook than anything else. My whole rig is probably achievable for less than the price of one 3090 q.q

    Edit: it took 5 seconds to do a search, and I was low-balling my rig. Haven’t looked at prices in a while.

    • ffhein@lemmy.world · 1 year ago

      Definitely not cheap, but at least not as bad as having to buy an A100 for €7000 to get 40 GB of VRAM. I’m hoping second-hand GPU prices will plummet after Christmas.