Habr | Project Page | Technical Report (soon) | Github

Kandinsky 5.0: A family of diffusion models for Video & Image generation

In this repository, we provide a family of diffusion models for generating a video or an image (coming soon) from a textual prompt, along with distilled models for faster generation.

Project Updates

  • 🔥 2025/09/29: We have open-sourced Kandinsky 5.0 T2V Lite, a lite (2B-parameter) version of the Kandinsky 5.0 Video text-to-video generation model. Released checkpoints: kandinsky5lite_t2v_pretrain_5s, kandinsky5lite_t2v_pretrain_10s, kandinsky5lite_t2v_sft_5s, kandinsky5lite_t2v_sft_10s, kandinsky5lite_t2v_nocfg_5s, kandinsky5lite_t2v_nocfg_10s, kandinsky5lite_t2v_distilled16steps_5s, kandinsky5lite_t2v_distilled16steps_10s, covering weights from pretraining, supervised fine-tuning, CFG distillation, and diffusion distillation into 16 steps. The 5s checkpoints generate videos up to 5 seconds long; the 10s checkpoints are faster models trained with the NABLA algorithm and can generate videos up to 10 seconds long.

Kandinsky 5.0 T2V Lite

Kandinsky 5.0 T2V Lite is a lightweight video generation model (2B parameters) that ranks #1 among open-source models in its class. It outperforms larger Wan models (5B and 14B) and offers the best understanding of Russian concepts in the open-source ecosystem.

We provide 8 model variants, each optimized for different use cases:

  • SFT model — delivers the highest generation quality;

  • CFG-distilled — runs 2× faster;

  • Diffusion-distilled — enables low-latency generation with minimal quality loss (6× faster);

  • Pretrain model — designed for fine-tuning by researchers and enthusiasts.

All models are available in two versions: for generating 5-second and 10-second videos.

Pipeline

Latent diffusion pipeline with Flow Matching.

Diffusion Transformer (DiT) as the main generative backbone with cross-attention to text embeddings:

  • Qwen2.5-VL and CLIP provide the text embeddings.

  • HunyuanVideo 3D VAE encodes/decodes video into a latent space.

  • DiT is the main generative module, using cross-attention to condition on text.
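
For intuition, the sketch below shows the kind of sampling loop such a pipeline runs: encode the prompt, integrate a learned velocity field from noise to data in the VAE latent space, then decode. All names (dit, vae, text_embedder), the latent shape, and the Euler integration are illustrative assumptions, not the repository's actual API.

import torch

def sample_video(dit, vae, text_embedder, prompt, num_steps=50, guidance_scale=5.0):
    # Encode the prompt (in Kandinsky 5.0: Qwen2.5-VL + CLIP embeddings).
    text_emb = text_embedder(prompt)
    uncond_emb = text_embedder("")  # empty prompt for classifier-free guidance

    # Start from Gaussian noise in the VAE latent space.
    latents = torch.randn(1, 16, 20, 64, 96)  # illustrative (B, C, T, H, W) shape

    # Flow matching: integrate the velocity field with Euler steps
    # (time/sign conventions vary between implementations).
    ts = torch.linspace(0.0, 1.0, num_steps + 1)
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        v_cond = dit(latents, t_cur, text_emb)      # conditional pass
        v_uncond = dit(latents, t_cur, uncond_emb)  # unconditional pass (2nd NFE per step)
        v = v_uncond + guidance_scale * (v_cond - v_uncond)
        latents = latents + (t_next - t_cur) * v

    # Decode latents back to pixels with the HunyuanVideo 3D VAE.
    return vae.decode(latents)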


Model Zoo

Model                                  Config                            Video duration  NFE  Checkpoint  Latency*
Kandinsky 5.0 T2V Lite SFT 5s          configs/config_5s_sft.yaml        5 s             100  🤗 HF       139 s
Kandinsky 5.0 T2V Lite SFT 10s         configs/config_10s_sft.yaml       10 s            100  🤗 HF       224 s
Kandinsky 5.0 T2V Lite pretrain 5s     configs/config_5s_pretrain.yaml   5 s             100  🤗 HF       139 s
Kandinsky 5.0 T2V Lite pretrain 10s    configs/config_10s_pretrain.yaml  10 s            100  🤗 HF       224 s
Kandinsky 5.0 T2V Lite no-CFG 5s       configs/config_5s_nocfg.yaml      5 s             50   🤗 HF       77 s
Kandinsky 5.0 T2V Lite no-CFG 10s      configs/config_10s_nocfg.yaml     10 s            50   🤗 HF       124 s
Kandinsky 5.0 T2V Lite distill 5s      configs/config_5s_distil.yaml     5 s             16   🤗 HF       35 s
Kandinsky 5.0 T2V Lite distill 10s     configs/config_10s_distil.yaml    10 s            16   🤗 HF       55 s

*Latency was measured after the second inference run; the first run can be slower due to compilation. Inference was measured on an NVIDIA H100 GPU with 80 GB of memory, using CUDA 12.8.1 and PyTorch 2.8. Flash Attention 3 was used for the 5-second models. NFE = number of function evaluations (DiT forward passes during sampling): the no-CFG models skip the second, classifier-free-guidance pass on each step, and the distilled models sample in only 16 steps.

Examples:

Kandinsky 5.0 T2V Lite SFT

Kandinsky 5.0 T2V Lite Distill

Results:

Side-by-Side evaluation

The evaluation is based on the expanded prompts from the Movie Gen benchmark, which are available in the expanded_prompt column of the benchmark/moviegen_bench.csv file.

Distill Side-by-Side evaluation

VBench results

Quickstart

Installation

Clone the repo:

git clone https://github.com/ai-forever/Kandinsky-5.git
cd Kandinsky-5

Install dependencies:

pip install -r requirements.txt

To improve inference performance on NVIDIA Hopper GPUs, we recommend installing Flash Attention 3.

Model Download

python download_models.py

Run Kandinsky 5.0 T2V Lite SFT 5s

python test.py --prompt "A dog in red hat"

Run Kandinsky 5.0 T2V Lite SFT 10s

python test.py --config ./configs/config_10s_sft.yaml --prompt "A dog in red hat" --video_duration 10

Run Kandinsky 5.0 T2V Lite pretrain 5s

python test.py --config ./configs/config_5s_pretrain.yaml --prompt "A dog in red hat"

Run Kandinsky 5.0 T2V Lite pretrain 10s

python test.py --config ./configs/config_10s_pretrain.yaml --prompt "A dog in red hat" --video_duration 10

Run Kandinsky 5.0 T2V Lite no-CFG 5s

python test.py --config ./configs/config_5s_nocfg.yaml --prompt "A dog in red hat"

Run Kandinsky 5.0 T2V Lite no-CFG 10s

python test.py --config ./configs/config_10s_nocfg.yaml --prompt "A dog in red hat" --video_duration 10

Run Kandinsky 5.0 T2V Lite distill 5s

python test.py --config ./configs/config_5s_distil.yaml --prompt "A dog in red hat"

Run Kandinsky 5.0 T2V Lite distill 10s

python test.py --config ./configs/config_10s_distil.yaml --prompt "A dog in red hat" --video_duration 10
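
The flags above can presumably be combined with the optimization options described below (assuming the flags compose; this exact combination is not shown in the repository's examples), e.g. running the 10-second SFT model with offloading:

python test.py --config ./configs/config_10s_sft.yaml --prompt "A dog in red hat" --video_duration 10 --offload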

Inference

import torch
from IPython.display import Video

from kandinsky import get_T2V_pipeline

device_map = {
    "dit": torch.device('cuda:0'),
    "vae": torch.device('cuda:0'),
    "text_embedder": torch.device('cuda:0'),
}

pipe = get_T2V_pipeline(device_map, conf_path="configs/config_5s_sft.yaml")

images = pipe(
    seed=42,
    time_length=5,
    width=768,
    height=512,
    save_path="./test.mp4",
    text="A cat in a red hat",
)

Video("./test.mp4")

Please refer to the inference_example.ipynb notebook for more usage details.
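
The device_map dictionary also suggests how to spread the components across GPUs when a single card is tight on memory. A hypothetical two-GPU placement (assuming the pipeline supports cross-device placement; this split is not shown in the repository's examples):

import torch

from kandinsky import get_T2V_pipeline

# Hypothetical split: keep the DiT alone on one GPU, put the VAE and
# text encoders on another. Support for this depends on the pipeline internals.
device_map = {
    "dit": torch.device('cuda:0'),
    "vae": torch.device('cuda:1'),
    "text_embedder": torch.device('cuda:1'),
}
pipe = get_T2V_pipeline(device_map, conf_path="configs/config_5s_sft.yaml")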

Distributed Inference

For faster inference, we also provide the capability to run the pipeline in a distributed way:

NUMBER_OF_NODES=1
NUMBER_OF_DEVICES_PER_NODE=1  # or 2, or 4
python -m torch.distributed.launch --nnodes $NUMBER_OF_NODES --nproc-per-node $NUMBER_OF_DEVICES_PER_NODE test.py
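
For example, on a single node with 4 GPUs, combining the launcher above with the test.py flags shown earlier:

python -m torch.distributed.launch --nnodes 1 --nproc-per-node 4 test.py --prompt "A dog in red hat"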

Optimized Inference

Offloading

To reduce memory consumption, you can offload the models:

python test.py --prompt "A dog in red hat" --offload
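
Conceptually, offloading keeps each component in CPU memory and moves it to the GPU only while its stage of the pipeline runs. A minimal sketch of the general technique (not the repository's actual implementation):

import torch

def run_offloaded(module, *inputs, device="cuda:0"):
    # Move the component to the GPU only for its stage of the pipeline...
    module.to(device)
    with torch.no_grad():
        out = module(*inputs)
    # ...then return it to CPU and release the cached VRAM for the next stage.
    module.to("cpu")
    torch.cuda.empty_cache()
    return out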

Magcache

We also provide MagCache inference for faster generation (currently available for the SFT 5s and SFT 10s checkpoints), a training-free cache that skips redundant DiT computations on sampling steps where the output changes little.

python test.py --prompt "A dog in red hat" --magcache

ComfyUI

See the instructions here.

Beta testing

You can apply to participate in the beta testing of Kandinsky Video Lite via the Telegram bot.

📑 Todo List

  • Kandinsky 5.0 Lite Text-to-Video
    • Multi-GPU Inference code of the 2B models
    • Checkpoints 2B models
      • pretrain
      • sft
      • rl
      • cfg distil
      • distil 16 steps
      • autoregressive generation
    • ComfyUI integration
    • Diffusers integration
    • Caching acceleration support
  • Kandinsky 5.0 Lite Image-to-Video
    • Multi-GPU Inference code of the 2B model
    • Checkpoints of the 2B model
    • ComfyUI integration
    • Diffusers integration
  • Kandinsky 5.0 Pro Text-to-Video
    • Multi-GPU Inference code of the models
    • Checkpoints of the model
    • ComfyUI integration
    • Diffusers integration
  • Kandinsky 5.0 Pro Image-to-Video
    • Multi-GPU Inference code of the model
    • Checkpoints of the model
    • ComfyUI integration
    • Diffusers integration
  • Technical report

Authors

Project Leader: Denis Dimitrov

Team Leads: Vladimir Arkhipkin, Vladimir Korviakov, Nikolai Gerasimenko, Denis Parkhomenko

Core Contributors: Alexey Letunovskiy, Maria Kovaleva, Ivan Kirillov, Lev Novitskiy, Denis Koposov, Dmitrii Mikhailov, Anna Averchenkova, Andrey Shutkin, Julia Agafonova, Olga Kim, Anastasiia Kargapoltseva, Nikita Kiselev

Contributors: Anna Dmitrienko, Anastasia Maltseva, Kirill Chernyshev, Ilia Vasiliev, Viacheslav Vasilev, Vladimir Polovnikov, Yury Kolabushin, Alexander Belykh, Mikhail Mamaev, Anastasia Aliaskina, Tatiana Nikulina, Polina Gavrilova

Citation

@misc{kandinsky2025,
    author = {Alexey Letunovskiy and Maria Kovaleva and Ivan Kirillov and Lev Novitskiy and Denis Koposov and Dmitrii Mikhailov and Anna Averchenkova and Andrey Shutkin and Julia Agafonova and Olga Kim and Anastasiia Kargapoltseva and Nikita Kiselev and Vladimir Arkhipkin and Vladimir Korviakov and Nikolai Gerasimenko and Denis Parkhomenko and Anna Dmitrienko and Anastasia Maltseva and Kirill Chernyshev and Ilia Vasiliev and Viacheslav Vasilev and Vladimir Polovnikov and Yury Kolabushin and Alexander Belykh and Mikhail Mamaev and Anastasia Aliaskina and Tatiana Nikulina and Polina Gavrilova and Denis Dimitrov},
    title = {Kandinsky 5.0: A family of diffusion models for Video \& Image generation},
    howpublished = {\url{https://github.com/ai-forever/Kandinsky-5}},
    year = {2025}
}

@misc{mikhailov2025nablanablaneighborhoodadaptiveblocklevel,
    title = {$\nabla$NABLA: Neighborhood Adaptive Block-Level Attention},
    author = {Dmitrii Mikhailov and Aleksey Letunovskiy and Maria Kovaleva and Vladimir Arkhipkin and Vladimir Korviakov and Vladimir Polovnikov and Viacheslav Vasilev and Evelina Sidorova and Denis Dimitrov},
    year = {2025},
    eprint = {2507.13546},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV},
    url = {https://arxiv.org/abs/2507.13546}
}
