TheBloke / Mistral-7B-Instruct-v0.2-GPTQ

Text Generation · Transformers · Safetensors · mistral · finetuned · conversational · text-generation-inference · 4-bit precision · gptq
Community (7 discussions)
  • Issue: Dependency Conflicts When Using GPTQ Model with transformers Pipeline (#7, opened 5 months ago by taviez)
  • weights not used when initializing MistralForCausalLM (#6, opened 12 months ago by iproskurina, 1 reply)
  • Are Bloke's models usually slow on Kaggle? (#4, opened over 1 year ago by fahim9778)
  • Strange response (#3, opened almost 2 years ago by JoaoCP, 3 replies)
  • Does it support the chat template? (#1, opened almost 2 years ago by DayiTokat, 1 reply)
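
Two of the threads above (#7 on dependency conflicts with the transformers pipeline and #1 on chat template support) concern how this GPTQ checkpoint is typically loaded. A minimal sketch is shown below, assuming a CUDA GPU and the optional GPTQ backend packages (e.g. optimum together with auto-gptq or gptqmodel) are installed; the prompt and generation settings are illustrative only, not taken from the model card.

```python
# Minimal sketch: load the 4-bit GPTQ checkpoint and query it through the
# transformers text-generation pipeline. Assumes the GPTQ backend packages
# (optimum plus auto-gptq or gptqmodel) are installed; versions may need
# pinning to avoid the dependency conflicts discussed in #7.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The instruct tokenizer typically carries a chat template (the subject of
# discussion #1), so messages can be formatted with apply_chat_template
# instead of hand-writing [INST] tags.
messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
output = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```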