Custom GGUF quants of deepseek-ai/DeepSeek-R1-Distill-Llama-8B, where the output tensors are quantized to Q8_0 or upcast to F32, while the embeddings are kept at F32. Enjoy! 🧠🔥🚀
Available quantizations: 4-bit, 6-bit, 8-bit