Stable Diffusion with Deforum Hybrid Video
How to Create Hybrid Videos with Stable Diffusion and Deforum
Creating WarpFusion-quality videos inside Stable Diffusion with Deforum might seem complex, but with this technique you’ll find it’s surprisingly simple.
Key Takeaways:
- How to create WarpFusion-quality videos inside Stable Diffusion.
- The step-by-step process for installing and using Deforum.
- How to use multiple models and embeddings for specific video effects.
- Expert tips for improving video consistency and resolving common issues like flickering and inconsistent details.
In this tutorial, I’ll walk you through the steps to create high-quality hybrid videos using Stable Diffusion and Deforum. Whether you’re familiar with AI video creation or just starting, this guide will teach you how to achieve impressive results.
Introduction to Deforum Hybrid Video Technique
Stable Diffusion is an incredible tool for creating AI-generated art, but did you know you can use it to create amazing hybrid videos? This tutorial introduces a technique discovered by Unreal Unit that allows you to produce WarpFusion-quality videos directly inside Stable Diffusion using Deforum.
Unreal Unit, a talented creator, has moved from WarpFusion to this new method, producing videos with remarkable consistency. I’ll also showcase some fantastic videos by creators like Stable Swirls and ReallyBigName, highlighting what you can achieve using this method.
Step 1: Installing Deforum
To begin, you’ll need to install Deforum in Stable Diffusion. Follow these steps:
- Go to the Extensions tab in Stable Diffusion.
- Click on “Available,” then “Load from:” and search for “Deforum.”
- If it isn’t installed yet, click the install button.
Important Tip: Disable any other large extensions in Stable Diffusion, as they can conflict with ControlNet. Leave only the built-in ones enabled.
Once installed, restart the UI. You’re now ready to begin.
Step 2: Setting Up Deforum for Video Creation
After installation, there are a few settings you need to configure:
- Go to the Settings tab, navigate to “User Interface,” and find the “Initial Noise Multiplier” slider.
- The slider won’t go below 0.5 by default, so to set the multiplier to zero you first need to edit the file shared.py in the Stable Diffusion folder and lower the slider’s minimum. Setting it to zero is crucial: the extra noise otherwise adds detail on every frame, which causes flickering in your video.
- Save the changes and restart Stable Diffusion.
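The reason the file edit is needed is that the WebUI slider for the img2img noise multiplier is capped at a minimum of 0.5. In modules/shared.py the option definition looks roughly like the fragment below (the exact line varies between WebUI versions, so treat this as a sketch); lowering the slider’s minimum to 0.0 lets you then set the value to zero in the Settings tab:

```python
# modules/shared.py (AUTOMATIC1111 WebUI) -- approximate; varies by version.
# Lower the slider's "minimum" from 0.5 to 0.0 so the UI can reach zero:
"initial_noise_multiplier": OptionInfo(
    1.0, "Noise multiplier for img2img",
    gr.Slider, {"minimum": 0.0, "maximum": 1.5, "step": 0.01}
),
```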
Step 3: Creating Prompts and Using Img2Img
Now, let’s move to the Img2Img tab to create the prompts for your video. This is a key part of getting consistent results:
- Enter your prompt and negative prompt in the Img2Img tab.
- Export the frame you want to transform from DaVinci Resolve via File -> Export -> Current Frame as Still, then upload it in the Img2Img tab.
For my superhero video, I used the ReV Animated model and the Bad Hand V4 embedding to enhance quality. You’ll find all the necessary links below to download these models and embeddings.
Step 4: Applying the ControlNet Settings
Next, we’ll apply ControlNet settings to achieve the best results:
- Enable ControlNet in the Img2Img tab and activate Pixel Perfect.
- Set the ControlNet preprocessor to None and control weight to 1.75.
- Add a second ControlNet unit with Open Pose and set the control weight to 1.0.
By using ControlNet in this way, you ensure your video maintains consistency, especially in areas like facial details and movement.
Step 5: Using LoRAs for Enhanced Style
To add even more stylistic elements to your video, you can use LoRAs (Low-Rank Adaptations). For my LEGO-style video, I used the Protogen V2.2 model and the LEGO LoRA. Simply:
- Download and install the LoRA.
- Add it to your prompt in the Img2Img tab.
- Set the control weight to 1.5 for more precise results.
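In the AUTOMATIC1111 WebUI, a LoRA is activated by adding a tag of the form `<lora:filename:weight>` to the prompt text. The file name `lego_person` below is a placeholder; use whatever name the downloaded LoRA file actually has:

```
a superhero as a lego minifigure, highly detailed <lora:lego_person:1.0>
```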
Step 6: Transferring Settings to Deforum
Once you’re satisfied with the results in Img2Img, it’s time to transfer these settings to Deforum. This allows you to render the full video sequence:
- In the Deforum tab, select your prompts, settings, and ControlNet configurations from Img2Img.
- Set the animation mode to 3D for the keyframes.
- Adjust the settings, including frame cadence, translation, and border mode, to suit your video.
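Deforum’s keyframe fields take schedules written as frame:(value) pairs, so the settings listed above map onto fields like the ones below. The specific values are my own illustration of the format, not a prescription — start from them and tune per project:

```
animation_mode: 3D
diffusion_cadence: 2        # "frame cadence" -- frames generated between diffusions
border: replicate           # border mode for pixels pushed in from the edge
translation_z: 0:(1.5)      # slow push into the scene, starting at frame 0
strength_schedule: 0:(0.65) # how strongly each frame follows the previous one
```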
For my LEGO and steampunk cat videos, these settings provided amazing results, and I’ll share more detailed settings for each project below.
Step 7: Finalizing and Optimizing the Video
After generating the video in Deforum, you can further optimize it:
- Use Topaz Labs or Flowframes for frame interpolation and upscaling.
- If you’re working with DaVinci Resolve, follow the tips for framing your 9×16 video in a 1280 timeline.
- Apply dirt removal and deflicker nodes to smooth out the final product.
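Resolve’s deflicker node is a built-in effect, but the underlying idea is easy to sketch: smooth each frame’s average brightness toward a running average and rescale the frame to match. The snippet below is an illustrative NumPy sketch of that idea, not Resolve’s actual algorithm; the `deflicker` function and its `alpha` parameter are my own names:

```python
import numpy as np

def deflicker(frames, alpha=0.3):
    """Dampen frame-to-frame brightness flicker by pulling each frame's
    mean luminance toward an exponential moving average.
    Illustrative sketch only -- not DaVinci Resolve's deflicker node."""
    out = []
    ema = None  # running average of mean brightness
    for frame in frames:
        mean = float(frame.mean())
        ema = mean if ema is None else alpha * mean + (1 - alpha) * ema
        gain = ema / max(mean, 1e-6)  # rescale toward the smoothed brightness
        out.append(np.clip(frame * gain, 0.0, 1.0))
    return out

# Demo on synthetic flickering footage: gray frames alternating dark/bright.
frames = [np.full((8, 8), 0.5 + (0.1 if i % 2 else -0.1)) for i in range(10)]
smoothed = deflicker(frames)
orig_swing = max(abs(frames[i].mean() - frames[i - 1].mean()) for i in range(1, 10))
new_swing = max(abs(smoothed[i].mean() - smoothed[i - 1].mean()) for i in range(1, 10))
```

After smoothing, the frame-to-frame brightness swing shrinks substantially, which is exactly the flicker reduction the deflicker step is after.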
The combination of Deforum’s hybrid video capabilities and these post-processing steps ensures your video looks professional and consistent.
Conclusion
By following this step-by-step process, you’ll be able to create high-quality, consistent videos using Stable Diffusion and Deforum. Whether you’re transforming characters into LEGO figures or creating stylized steampunk cats, these techniques give you full control over the final result.
If you found this tutorial helpful, consider subscribing to my YouTube channel for more tips and tutorials. Let me know in the comments if you have any questions!
Links Mentioned in the Video:
- Andrey – Unreal Unit YouTube: YouTube Channel
- Andrey – Unreal Unit Instagram: unreal unit
- Stable Swirls YouTube: YouTube Channel
- ReallyBigName YouTube: YouTube Channel
- Pexels Cat Video: Pexels
- Insta: Manoletyet: Instagram
- ReV Animated Model: Civitai
- Protogen V2.2 Model: Civitai
- Bad Hand V4 Embedding: Civitai
- Easy Negative Embedding: Civitai
- LoRA LEGO Person: Civitai
- LoRA Robot: Civitai
- LoRA Mechanical Cat: Civitai
- LowRA LoRA: Civitai
- Sebastian Kamph’s Installation of Stable Diffusion: YouTube Tutorial
- Flowframes Interpolation Tool: Flowframes
- Topaz Labs: Topaz Labs
- DaVinci Resolve: Blackmagic Design