[Misc] Consolidate LRUCache implementations #15481
Conversation
Signed-off-by: Bella kira <2374035698@qq.com>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run the full CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Btw you can run
Thanks for your suggestion. However, in my local environment, `pre-commit run --all-files` reports MyPy errors only on the first run; when I run the command a second time without modifying any code, the errors are no longer reported. Could you provide some help with resolving this problem?
I haven't encountered this. Did you stage any changes after you ran the first pre-commit? |
No, so maybe I should stage changes before I run the pre-commit? |
Normally pre-commit only runs on staged changes, while
@hmellor any idea? |
Sounds like you've got it right @DarkLight1337. With pre-commit installed with
When running with
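For reference, the two invocation modes being discussed can be sketched as follows. This is standard pre-commit CLI usage (nothing specific to this PR), and it assumes pre-commit is installed in your environment; the `mypy` hook id is an illustrative example.

```shell
# Register pre-commit as a git hook: hooks then run automatically on
# `git commit`, and see only the staged changes.
pre-commit install

# Run every configured hook against the entire repository, regardless of
# what is staged or committed.
pre-commit run --all-files

# Run a single hook (hook id as defined in .pre-commit-config.yaml)
# against the whole repository.
pre-commit run mypy --all-files
```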
Thank you for your response, which helped me understand how
Hi @DarkLight1337 , I have fixed the pre-commit errors and recommitted the code. Could you please help review it? |
Looks good now, thanks for your effort and patience!
Signed-off-by: Bella kira <2374035698@qq.com> Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: Bella kira <2374035698@qq.com> Signed-off-by: xinyuxiao <xinyuxiao2024@gmail.com>
Signed-off-by: Bella kira <2374035698@qq.com> Signed-off-by: Louis Ulmer <ulmerlouis@gmail.com>
Fix #14927
To unify the functionality of the different LRUCache implementations, I designed `vllm.utils.LRUCache` based on `cachetools.LRUCache`, incorporating the features of `cachetools.LRUCache`. Notably, `cachetools.Cache` inherits from `collections.abc.MutableMapping`, allowing `LRUCache` to seamlessly integrate into scenarios utilizing `cachetools.cached`, ensuring thread safety.

I have run the tests involving `LRUCache` in `tests/models/multimodal/processing/test_common.py` and `tests/lora/test_utils.py`, all of which passed. However, due to hardware limitations, I only tested the Qwen2-VL-2B-Instruct model in `test_common.py` and have not verified compatibility with other models.
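To illustrate the design idea described above: the PR builds `vllm.utils.LRUCache` on the third-party `cachetools.LRUCache`, whose `MutableMapping` interface is what lets it plug into `cachetools.cached`. The sketch below mimics that key idea using only the standard library so it is self-contained; `SketchLRUCache` is a hypothetical name, not the actual class in the PR.

```python
# Illustrative sketch only: the real vllm.utils.LRUCache in this PR derives
# from cachetools.LRUCache. Here we mimic the same behavior with the stdlib.
from collections import OrderedDict
from collections.abc import MutableMapping


class SketchLRUCache(MutableMapping):
    """A minimal LRU mapping: reads refresh recency, writes evict the LRU item.

    Because it implements MutableMapping, a decorator like cachetools.cached
    can use an instance directly as its cache (optionally with a lock for
    thread safety), which is the integration point the PR description makes.
    """

    def __init__(self, maxsize: int):
        self.maxsize = maxsize
        self._data: OrderedDict = OrderedDict()

    def __getitem__(self, key):
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def __setitem__(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict the least recently used

    def __delitem__(self, key):
        del self._data[key]

    def __len__(self):
        return len(self._data)

    def __iter__(self):
        return iter(self._data)


cache = SketchLRUCache(maxsize=2)
cache["a"] = 1
cache["b"] = 2
_ = cache["a"]        # touch "a" so "b" becomes least recently used
cache["c"] = 3        # exceeds maxsize, evicting "b"
print(sorted(cache))  # ['a', 'c']
```

With the real class, `@cachetools.cached(cache=LRUCache(maxsize), lock=threading.Lock())` would memoize a function on top of this mapping while keeping access thread-safe.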