📍 Visit QingYing and API Platform to experience commercial video generation models.
CogVideoX is an open-source video generation model originating from QingYing, and CogVideoX1.5 is its upgraded version.
The CogVideoX1.5-5B series supports 10-second videos and higher resolutions; the CogVideoX1.5-5B-I2V variant supports video generation at any resolution.
This repository contains the SAT-weight version of the CogVideoX1.5-5B model, with weights for both the I2V and T2V variants. Specifically, it includes the following modules:
```
├── transformer_i2v
│   ├── 1000
│   │   └── mp_rank_00_model_states.pt
│   └── latest
└── transformer_t2v
    ├── 1000
    │   └── mp_rank_00_model_states.pt
    └── latest
```
Please select the corresponding weights when performing inference.
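As a sketch of how a SAT/DeepSpeed-style layout like the one above is typically resolved, note that the `latest` file conventionally holds the name of the newest iteration directory (assumed here to be `1000`, matching the tree shown; this convention is an assumption, not something this card states):

```python
import os
import tempfile


def resolve_checkpoint(root: str) -> str:
    """Resolve the newest checkpoint in a SAT/DeepSpeed-style layout:
    `root/latest` names the iteration subdirectory that contains
    `mp_rank_00_model_states.pt`."""
    with open(os.path.join(root, "latest")) as f:
        tag = f.read().strip()  # e.g. "1000"
    return os.path.join(root, tag, "mp_rank_00_model_states.pt")


# Demonstrate on a throwaway copy of the layout shown above.
with tempfile.TemporaryDirectory() as tmp:
    root = os.path.join(tmp, "transformer_t2v")
    os.makedirs(os.path.join(root, "1000"))
    open(os.path.join(root, "1000", "mp_rank_00_model_states.pt"), "w").close()
    with open(os.path.join(root, "latest"), "w") as f:
        f.write("1000")
    ckpt = resolve_checkpoint(root)
    print(os.path.basename(ckpt))  # mp_rank_00_model_states.pt
```

The same function works for `transformer_i2v` by passing that directory as `root`.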
The VAE part is consistent with the CogVideoX-5B series and does not require updating. You can also download it directly from here. Specifically, it includes the following modules:
```
└── vae
    └── 3d-vae.pt
```
The text encoder is consistent with the diffusers version of CogVideoX-5B and requires no updates. You can also download it directly from here. Specifically, it includes the following modules:
```
t5-v1_1-xxl
├── added_tokens.json
├── config.json
├── model-00001-of-00002.safetensors
├── model-00002-of-00002.safetensors
├── model.safetensors.index.json
├── special_tokens_map.json
├── spiece.model
└── tokenizer_config.json

0 directories, 8 files
```
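The two `model-*.safetensors` files are shards of a single checkpoint, and `model.safetensors.index.json` maps each parameter name to the shard that stores it. A minimal sketch of reading such an index (the JSON below is a hypothetical stand-in with two example keys, not the real T5 index):

```python
import io
import json

# Hypothetical miniature of model.safetensors.index.json; the real file
# maps every T5 parameter name to one of the two shards listed above.
index_json = """
{
  "metadata": {"total_size": 0},
  "weight_map": {
    "shared.weight": "model-00001-of-00002.safetensors",
    "encoder.final_layer_norm.weight": "model-00002-of-00002.safetensors"
  }
}
"""

index = json.load(io.StringIO(index_json))

# Group parameter names by the shard file that stores them, so a loader
# knows which shard(s) to open for the tensors it needs.
shards = {}
for name, shard in index["weight_map"].items():
    shards.setdefault(shard, []).append(name)

print(sorted(shards))
```

In practice, libraries such as `safetensors` and `transformers` handle this index transparently when loading the model.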
This model is released under the CogVideoX LICENSE.
```
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```