Active filters: quant
AngelSlim/Hy-MT1.5-1.8B-1.25bit • Translation • Updated • 17.6k downloads • 177 likes
AngelSlim/Hy-MT1.5-1.8B-2bit-GGUF • Translation • 2B • Updated • 5.54k downloads • 20 likes
AngelSlim/Hy-MT1.5-1.8B-1.25bit-GGUF • Translation • 2B • Updated • 8.6k downloads • 38 likes
tencent/Hy-MT1.5-1.8B-2bit-GGUF • Translation • 2B • Updated • 7.08k downloads • 26 likes
tencent/Hy-MT1.5-1.8B-2bit • Translation • 2B • Updated • 46.9k downloads • 33 likes
tencent/Hy-MT1.5-1.8B-1.25bit-GGUF • Translation • 2B • Updated • 5.63k downloads • 16 likes
tencent/Hy-MT1.5-1.8B-1.25bit • Translation • Updated • 330 downloads • 26 likes
eaddario/Qwen3.6-35B-A3B-GGUF • Image-Text-to-Text • 35B • Updated • 1.38k downloads • 2 likes
digitous/13B-HyperMantis_GPTQ_4bit-128g • Text Generation • Updated • 8 downloads • 12 likes
pszemraj/nougat-small-onnx-quant_avx2 • Image-Text-to-Text • Updated • 6 downloads
pszemraj/nougat-base-onnx-quant_avx2 • Image-Text-to-Text • Updated • 7 downloads
fhai50032/RolePlayLake-7B-GGUF • 7B • Updated • 34 downloads • 3 likes
oldbridge/latxa-7b-instruct-q8 • Text Generation • 7B • Updated • 15 downloads
pszemraj/nougat-small-onnx-quant_avx512_vnni • Image-Text-to-Text • Updated • 5 downloads
RDson/Llama-3-Magenta-Instruct-4x8B-MoE-GGUF • 25B • Updated • 181 downloads • 1 like
TroyDoesAI/Codestral-21B-Pruned • Text Generation • 21B • Updated • 9 downloads • 2 likes
mradermacher/Codestral-21B-Pruned-GGUF • 21B • Updated • 382 downloads
mradermacher/Codestral-21B-Pruned-i1-GGUF • 21B • Updated • 580 downloads
pszemraj/candle-flanUL2-quantized • Text Generation • 19B • Updated • 24 downloads
byroneverson/gemma-2-27b-it-abliterated-gguf • Text Generation • 27B • Updated • 275 downloads • 12 likes
QuantFactory/gemma-2-27b-it-abliterated-GGUF • Text Generation • 27B • Updated • 783 downloads • 7 likes
EmperorKronos/gemma-2-27b-it-abliterated-exl2 • Text Generation • Updated • 2 downloads
byroneverson/LongWriter-glm4-9b-abliterated-gguf • Text Generation • 9B • Updated • 12 downloads • 3 likes
Question Answering • 8B • Updated • 4 downloads • 4 likes
mradermacher/FinShibainu-GGUF • 8B • Updated • 130 downloads • 1 like
eaddario/Hammer2.1-7b-GGUF • Text Generation • 8B • Updated • 975 downloads • 2 likes
eaddario/DeepSeek-R1-Distill-Qwen-7B-GGUF • Text Generation • 8B • Updated • 1.11k downloads • 3 likes
eaddario/Watt-Tool-8B-GGUF • Text Generation • 8B • Updated • 1.07k downloads • 5 likes