Enable PTPC FP8 for CompressedTensorsW8A8Fp8MoEMethod (triton fused_moe) #16537

Merged

merged 1 commit into vllm-project:main on Apr 13, 2025

Conversation

mgoin
Member

@mgoin mgoin commented Apr 12, 2025

Basically a port of the logic from CompressedTensorsW8A8Fp8MoECutlassMethod, utilizing the new fused_moe support from #16366. This should enable Llama-4 FP8 on AMD, so I'm asking @tjtanaa to test on MI300X.
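
For context, "PTPC FP8" means dynamic per-token scales on the activations combined with static per-channel scales on the weights. Below is a minimal plain-PyTorch sketch of that scaling math; the function names are illustrative, not vLLM's actual API.

```python
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3

def quantize_per_token(x: torch.Tensor):
    # One dynamic scale per token (row): shape [num_tokens, 1].
    scale = x.abs().amax(dim=-1, keepdim=True) / FP8_MAX
    scale = scale.clamp(min=1e-12)
    return (x / scale).to(torch.float8_e4m3fn), scale

def quantize_per_channel(w: torch.Tensor):
    # One static scale per output channel (row of w): shape [out_features, 1].
    scale = w.abs().amax(dim=-1, keepdim=True) / FP8_MAX
    scale = scale.clamp(min=1e-12)
    return (w / scale).to(torch.float8_e4m3fn), scale

x = torch.randn(4, 128)    # activations: [tokens, hidden]
w = torch.randn(256, 128)  # expert weight: [out_features, hidden]
x_q, s_x = quantize_per_token(x)
w_q, s_w = quantize_per_channel(w)
# Reference dequantized GEMM: y ~= x @ w.T, rescaled per token and per channel.
y = (x_q.float() @ w_q.float().t()) * s_x * s_w.t()
```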

Manually tested by disabling the CUTLASS pathway on H100:

Processed prompts: 100%|█████████| 1319/1319 [00:48<00:00, 27.46it/s, est. speed input: 23890.23 toks/s, output: 2823.34 toks/s]
vllm (pretrained=RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic,tensor_parallel_size=4,max_model_len=10000,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.9189|±  |0.0075|
|     |       |strict-match    |     5|exact_match|↑  |0.9045|±  |0.0081|

For reference, here is the CUTLASS result:

Processed prompts: 100%|█████████| 1319/1319 [00:36<00:00, 36.27it/s, est. speed input: 31560.01 toks/s, output: 3705.61 toks/s]
vllm (pretrained=RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic,tensor_parallel_size=4,max_model_len=10000,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.9136|±  |0.0077|
|     |       |strict-match    |     5|exact_match|↑  |0.8961|±  |0.0084|
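
Both tables are standard lm-eval harness output. The exact invocation is not shown in the log, but a hedged reproduction sketch using lm-eval's Python API (the CLI with --model vllm behaves equivalently):

```python
# Hedged reproduction sketch; the PR log shows only the harness output,
# not the exact command that was run.
import lm_eval

results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic,"
        "tensor_parallel_size=4,max_model_len=10000,trust_remote_code=True"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```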

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run the remaining CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mgoin mgoin marked this pull request as draft April 12, 2025 16:37
@mgoin mgoin marked this pull request as ready for review April 12, 2025 17:18
@mgoin mgoin added the rocm (Related to AMD ROCm) and quantization labels Apr 12, 2025
@tjtanaa
Contributor

tjtanaa commented Apr 12, 2025

🙌 meta-llama/Llama-4-Maverick-17B-128E-Instruct runs on MI300X as well.

Server

VLLM_USE_V1=1 \
VLLM_USE_TRITON_FLASH_ATTN=1 \
VLLM_WORKER_MULTIPROC_METHOD=spawn \
VLLM_ROCM_FP8_PADDING=1 \
SAFETENSORS_FAST_GPU=1 \
vllm serve meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 -tp 8

Client

examples/online_serving# python openai_chat_completion_client.py 
Chat completion results:
ChatCompletion(id='chatcmpl-a8ebf395ece5492bba48434232f257d3', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The 2020 World Series was played at Globe Life Field in Arlington, Texas. The Dodgers defeated the Tampa Bay Rays in the series, winning 4 games to 2. Globe Life Field was the home stadium of the Texas Rangers, but it was used as a neutral site due to COVID-19 pandemic travel restrictions.', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=[], reasoning_content=None), stop_reason=None)], created=1744478804, model='meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=66, prompt_tokens=59, total_tokens=125, completion_tokens_details=None, prompt_tokens_details=None), prompt_logprobs=None)

examples/online_serving# python openai_completion_client.py
(with logprob disabled)
Completion results:
Completion(id='cmpl-bbb9b46207a543eb89e448f755150972', choices=[CompletionChoice(finish_reason='length', index=0, logprobs=None, text=' or, through inaction, allow a human being to come to harm.\nA', stop_reason=None, prompt_logprobs=None), CompletionChoice(finish_reason='length', index=1, logprobs=None, text=' or, through inaction, allow a human being to come to harm. \n', stop_reason=None, prompt_logprobs=None)], created=1744478823, model='meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8', object='text_completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=32, prompt_tokens=10, total_tokens=42, completion_tokens_details=None, prompt_tokens_details=None)) 
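
The scripts referenced above live in vLLM's examples directory; a minimal stand-in using the official openai client (assuming the server above on the default port 8000):

```python
# Minimal equivalent of examples/online_serving/openai_chat_completion_client.py,
# pointed at the vllm serve instance started above (port assumed to be 8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
    messages=[{"role": "user", "content": "Where was the 2020 World Series played?"}],
)
print(resp.choices[0].message.content)
```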

GSM8K lm-eval results

vllm (pretrained=RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic,tensor_parallel_size=4,max_model_len=10000,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.9098|±  |0.0079|
|     |       |strict-match    |     5|exact_match|↑  |0.8954|±  |0.0084|

@tlrmchlsmth tlrmchlsmth added the ready (ONLY add when PR is ready to merge/full CI is needed) label Apr 12, 2025
@tlrmchlsmth tlrmchlsmth enabled auto-merge (squash) April 12, 2025 18:42
@tlrmchlsmth tlrmchlsmth merged commit d085a44 into vllm-project:main Apr 13, 2025
67 checks passed
erdaltoprak pushed a commit to erdaltoprak/vllm that referenced this pull request Apr 14, 2025
Enable PTPC FP8 for CompressedTensorsW8A8Fp8MoEMethod (triton fused_moe) (vllm-project#16537)

Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Erdal Toprak <contact@erdaltoprak.com>
Chenyaaang pushed a commit to Chenyaaang/vllm that referenced this pull request Apr 16, 2025
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
Enable PTPC FP8 for CompressedTensorsW8A8Fp8MoEMethod (triton fused_moe) (vllm-project#16537)

Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Yang Wang <elainewy@meta.com>