
MISHANM/video_generation

The MISHANM/video_generation model is a diffusion-based video generation model, designed to generate high-quality videos from textual prompts.

Model Details

  1. Language: English
  2. Tasks: Video Generation

Model Example Output

(Example inference output: a short video clip generated from a text prompt.)

How to Get Started with the Model

Diffusers

pip install git+https://github.com/huggingface/diffusers.git
pip install imageio imageio-ffmpeg

Use the code below to get started with the model.

import imageio            # used by export_to_video to write video frames
import imageio_ffmpeg     # provides the ffmpeg backend for MP4 output
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Load the pre-trained video generation pipeline in bfloat16,
# balancing its components across the available devices
model = MochiPipeline.from_pretrained(
    "MISHANM/video_generation",
    variant="bf16",
    torch_dtype=torch.bfloat16,
    device_map="balanced"
)

# Enable memory savings by tiling the VAE during decoding
model.enable_vae_tiling()

# Define the prompt and number of frames
prompt = "A cow drinking water on the surface of Mars."
num_frames = 20

# Generate the frames and take the first video in the batch
frames = model(prompt, num_frames=num_frames).frames[0]

# Write the frames to an MP4 file at 30 frames per second
export_to_video(frames, "video.mp4", fps=30)

print("Video generation complete. Saved as 'video.mp4'.")
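
If device_map="balanced" is not usable in your setup (for example, a single GPU with limited memory), the variant below is a minimal sketch of a lower-memory loading path using the standard Diffusers offloading helpers; it is an assumption about your hardware, not part of the official usage above.

import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Assumed lower-memory variant: keep weights on the CPU and move each
# submodule to the GPU only while it is running
pipe = MochiPipeline.from_pretrained(
    "MISHANM/video_generation",
    variant="bf16",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()  # decode the latents in tiles to reduce VRAM use

frames = pipe("A cow drinking water on the surface of Mars.", num_frames=20).frames[0]
export_to_video(frames, "video_low_memory.mp4", fps=30)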

Uses

Direct Use

The model is intended for generating videos from textual descriptions. It can be used in creative applications, content generation, and artistic exploration.
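
For content-generation workflows, a common pattern is to render several prompts in one session and save one clip per prompt. The loop below is a minimal sketch that reuses the model pipeline created in the quick-start code above; the prompt strings are hypothetical.

from diffusers.utils import export_to_video

# Hypothetical prompts for a small content-generation batch
prompts = [
    "A timelapse of clouds drifting over a mountain lake.",
    "A paper boat floating down a rainy street.",
]

# Reuses the `model` pipeline loaded in the quick-start example
for i, prompt in enumerate(prompts):
    frames = model(prompt, num_frames=20).frames[0]
    export_to_video(frames, f"video_{i}.mp4", fps=30)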

Out-of-Scope Use

The model is not suitable for generating videos with explicit or harmful content. It may not perform well with highly abstract or nonsensical prompts.

Bias, Risks, and Limitations

The model may reflect biases present in the training data. It may generate stereotypical or biased videos based on the input prompts.

Recommendations

Users should be aware of potential biases and limitations. It is recommended to review generated content for appropriateness and accuracy.
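
As a very simple illustration of reviewing inputs before generation, the sketch below screens prompts against a small, hypothetical keyword blocklist and only then calls the pipeline; real deployments would need much more thorough moderation of both prompts and generated videos.

from diffusers.utils import export_to_video

# Hypothetical, minimal prompt screen; not a substitute for real content moderation
BLOCKLIST = {"violence", "gore", "explicit"}

def is_prompt_allowed(prompt: str) -> bool:
    words = set(prompt.lower().split())
    return BLOCKLIST.isdisjoint(words)

prompt = "A cow drinking water on the surface of Mars."
if is_prompt_allowed(prompt):
    frames = model(prompt, num_frames=20).frames[0]  # `model` from the quick-start code
    export_to_video(frames, "reviewed_video.mp4", fps=30)
else:
    print("Prompt rejected by blocklist; please revise it.")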

Citation Information

@misc{MISHANM/video_generation,
  author    = {Mishan Maurya},
  title     = {Introducing Video Generation model},
  year      = {2025},
  publisher = {Hugging Face},
  journal   = {Hugging Face repository}
}