amd/Mistral-7B-Instruct-v0.3-awq-g128-int4-asym-fp16-onnx-hybrid

Tags: ONNX, ryzenai-hybrid
License: apache-2.0

Branch: main
Repository size: 7.85 GB
Contributors: 4
History: 14 commits
Latest commit: "Update README.md" by dhndhn (47473a9, verified), 3 months ago
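
Given the 7.85 GB footprint, the simplest way to work with the repository locally is to pull every file in one pass. A minimal sketch using huggingface_hub's snapshot_download follows; the local_dir path is a placeholder chosen for this example, not something specified by the repo.

```python
# Minimal sketch: download the full repository (about 7.85 GB) into one local directory.
# The local_dir value is a placeholder path chosen for this example.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="amd/Mistral-7B-Instruct-v0.3-awq-g128-int4-asym-fp16-onnx-hybrid",
    local_dir="./Mistral-7B-Instruct-v0.3-awq-g128-int4-asym-fp16-onnx-hybrid",
)
print("Model files downloaded to:", local_dir)
```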

File                                      Size       Last commit              Last modified
.gitattributes                            1.59 kB    Upload 9 files           about 1 year ago
Mistral-7B-Instruct-v0.3_jit.bin          3.87 GB    Upload 9 files           about 1 year ago
Mistral-7B-Instruct-v0.3_jit.onnx         294 kB     Upload 9 files           about 1 year ago
Mistral-7B-Instruct-v0.3_jit.onnx.data    3.97 GB    Upload 9 files           about 1 year ago
Mistral-7B-Instruct-v0.3_jit.pb.bin       7.7 kB     Upload 9 files           about 1 year ago
README.md                                 2.4 kB     Update README.md         3 months ago
config.json                               2 Bytes    Create config.json       about 1 year ago
genai_config.json                         1.74 kB    Upload 9 files           about 1 year ago
rai_config.json                           138 Bytes  Upload rai_config.json   6 months ago
special_tokens_map.json                   551 Bytes  Upload 9 files           about 1 year ago
tokenizer.json                            3.67 MB    Upload 9 files           about 1 year ago
tokenizer.model                           587 kB     Upload 9 files           about 1 year ago
tokenizer_config.json                     141 kB     Upload 9 files           about 1 year ago
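
The genai_config.json alongside the ONNX graph and tokenizer files suggests the repository is packaged for ONNX Runtime GenAI, which the Ryzen AI hybrid flow builds on. The sketch below shows one way to load the downloaded directory and stream a completion. It assumes an onnxruntime-genai build suitable for Ryzen AI hybrid execution is installed; the model_dir path and prompt are placeholders, and the token-feeding call differs between onnxruntime-genai releases (older versions set params.input_ids instead of calling generator.append_tokens).

```python
# Minimal sketch, not AMD's official recipe: stream a completion with onnxruntime-genai.
# Assumes the repository has already been downloaded (see the snapshot_download sketch above)
# and that an onnxruntime-genai build suitable for Ryzen AI hybrid execution is installed.
import onnxruntime_genai as og

model_dir = "./Mistral-7B-Instruct-v0.3-awq-g128-int4-asym-fp16-onnx-hybrid"  # placeholder path
prompt = "[INST] Summarize what this model repository contains. [/INST]"      # example prompt

model = og.Model(model_dir)            # picks up genai_config.json from the directory
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()     # incremental detokenizer for streaming output

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode(prompt))  # newer API; older releases use params.input_ids

# Generate and print tokens until the generator signals completion.
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```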