# MiniMax-M2.1-REAP-40-GGUF
This model was converted to GGUF format from [0xSero/MiniMax-M2.1-REAP-40](https://huggingface.co/0xSero/MiniMax-M2.1-REAP-40) using GGUF Forge.
## Quants
The following quants are available: Q3_K_L, Q4_K_S, Q4_K_M, Q5_K_M, Q6_K, Q8_0
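A minimal sketch of fetching one of these quants and running it locally with llama.cpp. The exact `.gguf` file names and the `Q4_K_M` choice here are assumptions; check the repository's file list for the real names before downloading.

```shell
# Hypothetical repo and file name pattern -- verify against the actual file list.
REPO="0xSero/MiniMax-M2.1-REAP-40-GGUF"
QUANT="Q4_K_M"
FILE="MiniMax-M2.1-REAP-40-${QUANT}.gguf"

# Download a single quant with the Hugging Face CLI (pip install huggingface_hub):
#   huggingface-cli download "$REPO" "$FILE" --local-dir .

# Run it with llama.cpp's CLI:
#   llama-cli -m "$FILE" -p "Hello" -n 64

echo "$FILE"
```

Lower-bit quants (Q3_K_L, Q4_K_S) trade output quality for a smaller memory footprint; Q8_0 is closest to the FP16 original but the largest on disk.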
## Conversion Stats
| Metric | Value |
|---|---|
| Job ID | a9834b56-d9ba-457b-b5db-7b960a984439 |
| GGUF Forge Version | v6.0 |
| Total Time | 9.5h |
| Avg Time per Quant | 43.7min |
### Step Breakdown
- Download: 35.4min
- FP16 Conversion: 2.5h
- Quantization: 6.4h
## 🚀 Convert Your Own Models
Want to convert more models to GGUF?

👉 [gguforge.com](https://gguforge.com): a free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!
## Links
- 🌐 Free Hosted Service: gguforge.com
- 🛠️ Self-host GGUF Forge: GitHub
- 📦 llama.cpp (quantization engine): GitHub
- 💬 Community & Support: Discord
*Converted automatically by GGUF Forge v6.0*