# llama-cpp
Here are 7 public repositories matching this topic...
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level (a usage sketch follows this entry).
nodejs cmake ai metal json-schema gpu vulkan grammar cuda self-hosted bindings llama embedding cmake-js prebuilt-binaries llm llama-cpp catai function-calling gguf
Updated Mar 28, 2025 - TypeScript
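The sketch below illustrates the kind of usage the description refers to: loading a local GGUF model through Node.js bindings and constraining generation with a JSON schema. It assumes a node-llama-cpp v3-style API (`getLlama`, `LlamaChatSession`, `createGrammarForJsonSchema`) and a placeholder model path; verify the exact function names and options against the repository's own documentation.

```typescript
// Minimal sketch (assumed node-llama-cpp v3-style API; verify against the repo's docs).
import {getLlama, LlamaChatSession} from "node-llama-cpp";

const llama = await getLlama();                      // selects a local backend (Metal/CUDA/Vulkan/CPU)
const model = await llama.loadModel({
    modelPath: "models/model.gguf"                   // placeholder: path to a local GGUF model
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Constrain decoding to a JSON schema so the output is guaranteed to parse.
const grammar = await llama.createGrammarForJsonSchema({
    type: "object",
    properties: {
        answer: {type: "string"},
        confidence: {type: "number"}
    }
});

const response = await session.prompt("Summarize llama.cpp in one sentence.", {grammar});
console.log(JSON.parse(response));                   // object conforming to the schema
```

Enforcing the schema "at the generation level" means the sampler is restricted by a grammar derived from the schema, so tokens that would break the structure are never produced, rather than invalid output being filtered or retried afterwards.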
🎓 Showcase project from the 2024 Google Machine Learning Bootcamp - 🏆🤖 Award-Factory: awards lovingly crafted for you by a hilariously talented generative AI! #Google #Gemma:2b #fine-tuning #quantization
docker google docker-compose nextjs quantization fine-tuning fastapi large-language-model llama-cpp gemma-2b
Updated Feb 5, 2025 - TypeScript
Evaluate Hacker News predictions with LLMs
Updated Jul 6, 2024 - TypeScript