
[MISC] Dump model runner inputs when crashing #8305


Merged: 3 commits merged into vllm-project:main on Sep 12, 2024

Conversation

@comaniac (Collaborator) commented on Sep 9, 2024

To make it easier to reproduce model runner crashes caused by illegal memory access (and possibly other errors), this PR introduces a utility that dumps the model runner inputs when a crash occurs. Since the model runner inputs can be long, they are dumped with pickle for now. Any suggestions or better ideas are welcome.

cc @robertgshaw2-neuralmagic @simon-mo @DarkLight1337
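
For illustration, here is a minimal sketch of the idea, not the actual vLLM implementation (the decorator name, file path, and dump format below are all assumptions): wrap model-runner execution so that, if it raises, the inputs are pickled to disk for later reproduction before the exception propagates.

```python
import os
import pickle
import tempfile
import time
from functools import wraps


def dump_inputs_on_error(fn):
    """If `fn` raises, pickle its arguments to a temp file before re-raising."""

    @wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception as err:
            # Hypothetical dump location; the real utility chooses its own path.
            path = os.path.join(
                tempfile.gettempdir(), f"model_runner_inputs_{int(time.time())}.pkl"
            )
            with open(path, "wb") as f:
                pickle.dump({"args": args, "kwargs": kwargs}, f)
            raise RuntimeError(f"Model runner crashed; inputs dumped to {path}") from err

    return wrapper
```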

github-actions bot commented on Sep 9, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337 (Member)

Can we add instructions to the GitHub issue template so users can share their logs upon encountering such errors?

@comaniac (Collaborator, Author)

Can we add instructions to the GitHub issue template so users can share their logs upon encountering such errors?

Good point. Will do.

@youkaichao (Member)

Do we need to add a flag for this? It looks like a debugging feature that could also be documented in https://docs.vllm.ai/en/latest/getting_started/debugging.html

@robertgshaw2-redhat (Collaborator)

Do we need to add a flag for this? It looks like a debugging feature that could also be documented in https://docs.vllm.ai/en/latest/getting_started/debugging.html

The goal is to be able to get logs from production usage to help track down hard-to-replicate bugs (like the illegal memory access in prefix caching), so gating this behind a flag would defeat the purpose.

@youkaichao (Member)

Makes sense then. Please ignore my comment.

@comaniac (Collaborator, Author)

@DarkLight1337 I added this to the issue template. PTAL.

@DarkLight1337 (Member) left a comment

LGTM. We should be careful when loading untrusted pickle files though.

@comaniac (Collaborator, Author)

It should be fine since we never load it automatically? But yeah, you could get a virus if someone posts a malicious pickle file to an issue...
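
As a usage note, loading such a dump is a plain `pickle.load`, which is exactly why files attached by strangers to an issue should not be opened blindly: pickle can execute arbitrary code on load. A minimal sketch, assuming a dump file you generated yourself (the path below is illustrative):

```python
import pickle

# Only load dump files you produced locally; pickle.load() can execute
# arbitrary code embedded in a malicious file.
dump_path = "/tmp/model_runner_inputs_1726000000.pkl"  # illustrative path

with open(dump_path, "rb") as f:
    dumped_inputs = pickle.load(f)

print(type(dumped_inputs))
```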

comaniac added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Sep 11, 2024
comaniac enabled auto-merge (squash) on September 11, 2024 at 15:59
comaniac merged commit a65cb16 into vllm-project:main on Sep 12, 2024
67 of 68 checks passed
comaniac deleted the dump_inputs branch on September 12, 2024 at 16:11
njhill added a commit to njhill/vllm that referenced this pull request Sep 16, 2024
vllm-project#8305 was recently added to dump model runner inputs when encountering a fatal error.

If this happens during decode, however, the dump will include the KV cache tensors, which are typically huge (~60GB in the case I was testing) and can therefore take minutes to write to disk.

While this happens, the engine loop is blocked and health checks time out, causing the server to be killed.

This change replaces the KV cache tensors with their dtype + shape. With this, pickling is sub-second and the file size in my test case was 7KB.
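
A rough sketch of that workaround, not the exact vLLM code (the placeholder class and helper below are illustrative): before pickling, each KV cache tensor is swapped for a lightweight record of its dtype and shape.

```python
from dataclasses import dataclass
from typing import List, Union

import torch


@dataclass
class TensorPlaceholder:
    """Stand-in for a KV cache tensor: keeps dtype and shape, drops the data."""
    dtype: torch.dtype
    shape: torch.Size


def strip_kv_caches(
    kv_caches: List[Union[torch.Tensor, "TensorPlaceholder"]]
) -> List["TensorPlaceholder"]:
    # Replace each tensor with its metadata so the pickled dump stays tiny.
    return [
        TensorPlaceholder(dtype=t.dtype, shape=t.shape)
        if isinstance(t, torch.Tensor) else t
        for t in kv_caches
    ]
```
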
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025
Signed-off-by: LeiWang1999 <leiwang1999@outlook.com>
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)
Projects: None yet
5 participants