
DeepSeek-V3.1-Terminus-W4AFP8

This model is a mixed-precision quantized version of DeepSeek-V3.1-Terminus: the dense layers keep the FP8 quantization of the original model, while the MoE layers use INT4 weights with FP8 activations, a scheme also called W4AFP8.
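To make the W4 half of the scheme concrete, here is a minimal sketch of symmetric per-channel INT4 weight quantization. The function names, group granularity, and NumPy implementation are illustrative only; they are not the actual DeepSeek or SGLang kernels, which operate on packed INT4 tensors with fused dequantization.

```python
# Illustrative INT4 weight quantization (the "W4" in W4AFP8).
# Assumption: symmetric per-output-channel scales; real kernels differ.
import numpy as np

def quantize_int4(w: np.ndarray):
    # Map each output channel to the signed INT4 range [-8, 7].
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    # Recover an approximation of the original FP weights.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16)).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
print("max reconstruction error:", np.abs(w - w_hat).max())
```

At inference time the quantized weights are dequantized (or consumed directly by mixed-precision GEMM kernels) while activations stay in FP8, which is what keeps the MoE layers both small on disk and fast to execute.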

Benchmark

The accuracy below was obtained with SGLang V0.5.3 in non-thinking mode.

| Model | math_500 | gpqa | aime2024 | mmlu-pro |
|-------|----------|------|----------|----------|
| DeepSeek-V3.1-Terminus-W4AFP8 | 89.83 | 78.28 | 80.0 | 83.66 |

Inference with SGLang

Tensor-parallel deployment of this model is supported in SGLang for better performance. The related PR https://github.com/sgl-project/sglang/pull/8118 was merged in SGLang v0.5.2, so you can deploy this model with tensor parallelism using SGLang 0.5.2 or later.

```shell
python3 -m sglang.launch_server --model-path /path/to/DeepSeek-V3.1-Terminus-W4AFP8 --tp 8 --trust-remote-code --host 0.0.0.0 --port 8000
```
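Once the server is up, SGLang exposes an OpenAI-compatible HTTP API, so a plain JSON POST to `/v1/chat/completions` works. The sketch below builds such a request; the URL assumes the `--host`/`--port` flags shown above, and the model name in the payload is illustrative.

```python
# Hypothetical client call against the server launched above.
# Assumption: server reachable at localhost:8000 with the OpenAI-compatible API.
import json
import urllib.request

payload = {
    "model": "DeepSeek-V3.1-Terminus-W4AFP8",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client (e.g. the `openai` Python SDK pointed at `base_url="http://localhost:8000/v1"`) can be used the same way.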
