Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
Paper: arXiv:2404.14219
4-bit quantized GGUF weights of phi-3-mini-4k-instruct, compatible with MLX.
Official model: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf (not supported by MLX).
Please note that the official phi-3-mini-4k-instruct.gguf model uses the same block structure as Llama-2, as stated in the paper (https://huggingface.co/papers/2404.14219).