My own (ZeroWw) quantizations. Output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.
Result: both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization, and they perform as well as pure f16.
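The exact commands used to produce these files are not given here. As a rough sketch, the tensor-type overrides described above can be expressed with llama.cpp's llama-quantize tool (flag names as in recent llama.cpp builds; all paths and output filenames below are placeholders, not the actual files in this repo):

```python
# Sketch only: assumes a local llama.cpp build with the llama-quantize binary on PATH
# and a pure-f16 GGUF export of the model; filenames are illustrative placeholders.
import subprocess

subprocess.run(
    [
        "llama-quantize",
        "--output-tensor-type", "f16",          # keep the output tensor at f16
        "--token-embedding-type", "f16",        # keep the token embeddings at f16
        "internlm2_5-7b-chat-1m.f16.gguf",      # placeholder: input f16 GGUF
        "internlm2_5-7b-chat-1m.f16.q6.gguf",   # placeholder: output file
        "q6_k",                                 # remaining tensors quantized to q6_k
    ],
    check=True,
)
```

Swapping `q6_k` for `q5_k` would give the f16.q5 variant in the same way.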
Updated on: Wed Jul 03, 15:19:29
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ZeroWw/internlm2_5-7b-chat-1m-GGUF",
    filename="",  # set this to one of the GGUF files in this repo
)
```
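The original snippet also called `create_chat_completion`, but its `messages` placeholder was left empty. A minimal sketch of a complete call follows, with an illustrative prompt and a hypothetical filename standing in for one of the repo's GGUF files (check the file list for the real name):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ZeroWw/internlm2_5-7b-chat-1m-GGUF",
    filename="internlm2_5-7b-chat-1m.f16.q6.gguf",  # hypothetical filename, verify against the repo
)

# llama-cpp-python's chat API expects messages as a list of role/content dicts.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])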