Active filters: quantllm
QuantLLM/Meta-Llama-3-70B-Instruct-4bit-gguf • Text Generation • 71B • Updated • 33 • 1
codewithdark/Llama-3.2-3B-4bit • 3B • Updated • 5
codewithdark/Llama-3.2-3B-GGUF-4bit • 3B • Updated • 8
codewithdark/Llama-3.2-3B-4bit-mlx • Text Generation • 3B • Updated • 64
QuantLLM/Llama-3.2-3B-4bit-mlx • Text Generation • 3B • Updated • 22
QuantLLM/Llama-3.2-3B-2bit-mlx • Text Generation • 3B • Updated • 42
QuantLLM/Llama-3.2-3B-8bit-mlx • Text Generation • 3B • Updated • 27
QuantLLM/Llama-3.2-3B-5bit-mlx • Text Generation • 3B • Updated • 19
QuantLLM/Llama-3.2-3B-5bit-gguf • 3B • Updated • 13
QuantLLM/Llama-3.2-3B-2bit-gguf • 3B • Updated • 26
QuantLLM/functiongemma-270m-it-8bit-gguf • 0.3B • Updated • 10 • 1
QuantLLM/functiongemma-270m-it-4bit-gguf • 0.3B • Updated • 14
QuantLLM/functiongemma-270m-it-4bit-mlx • Text Generation • 0.3B • Updated • 9
QuantLLM/Qwen3-0.6B-2bit-gguf • 0.6B • Updated • 161
QuantLLM/Qwen3-0.6B-4bit-gguf • 0.6B • Updated • 145
QuantLLM/Qwen3-0.6B-8bit-gguf • 0.6B • Updated • 7
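Most of the QuantLLM repos above appear to follow an `org/model-<bits>bit-<format>` naming convention, encoding the bit width and export format (`gguf` or `mlx`) in the repo id. A minimal sketch of parsing that convention, assuming the pattern inferred from this list (the `parse_repo_id` helper and the regex are illustrative, not part of any library; repos that deviate from the pattern, such as codewithdark/Llama-3.2-3B-GGUF-4bit, simply return `None`):

```python
import re
from typing import Optional

# Assumed convention inferred from the listing: org/model-<bits>bit-<gguf|mlx>
PATTERN = re.compile(r"^(?P<org>[^/]+)/(?P<model>.+)-(?P<bits>\d+)bit-(?P<fmt>gguf|mlx)$")

def parse_repo_id(repo_id: str) -> Optional[dict]:
    """Split a quantized-model repo id into org, base model, bit width, and format."""
    m = PATTERN.match(repo_id)
    if m is None:
        # Repo id does not follow the assumed naming convention.
        return None
    return {
        "org": m["org"],
        "model": m["model"],
        "bits": int(m["bits"]),
        "fmt": m["fmt"],
    }

print(parse_repo_id("QuantLLM/Qwen3-0.6B-4bit-gguf"))
# → {'org': 'QuantLLM', 'model': 'Qwen3-0.6B', 'bits': 4, 'fmt': 'gguf'}
```

Grouping by the parsed `model` field makes it easy to see which bit widths and formats exist for each base model in the list.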