GGUF-IQ-Imatrix quants for an #experimental model.
Read about the original model here:
[grimjim/fireblossom-32K-7B]
Available quantization levels:
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
