Training dataset: kyujinpy/Ko-various-dataset
How to use PracticeLLM/Custom-KoLLM-13B-v3 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="PracticeLLM/Custom-KoLLM-13B-v3")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PracticeLLM/Custom-KoLLM-13B-v3")
model = AutoModelForCausalLM.from_pretrained("PracticeLLM/Custom-KoLLM-13B-v3")
```

How to use PracticeLLM/Custom-KoLLM-13B-v3 with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "PracticeLLM/Custom-KoLLM-13B-v3"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PracticeLLM/Custom-KoLLM-13B-v3",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
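The completions endpoint returns standard OpenAI-style JSON. A minimal sketch of pulling the generated text out of it (the `response` payload below is illustrative, showing only the field layout, not real server output):

```python
import json

# Illustrative response in the OpenAI-compatible completions format
# (field layout only; the text is made up, not real model output).
response = json.loads("""
{
  "id": "cmpl-123",
  "object": "text_completion",
  "model": "PracticeLLM/Custom-KoLLM-13B-v3",
  "choices": [
    {"index": 0, "text": " there was a kingdom by the sea.", "finish_reason": "length"}
  ],
  "usage": {"prompt_tokens": 5, "completion_tokens": 512, "total_tokens": 517}
}
""")

# The generated continuation lives in choices[0]["text"].
completion = response["choices"][0]["text"]
print(completion)  # →  there was a kingdom by the sea.
```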
How to use PracticeLLM/Custom-KoLLM-13B-v3 with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "PracticeLLM/Custom-KoLLM-13B-v3" \
  --host 0.0.0.0 \
  --port 30000
```

Or run the server in Docker instead:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "PracticeLLM/Custom-KoLLM-13B-v3" \
  --host 0.0.0.0 \
  --port 30000
```

Either way, call the server using curl (OpenAI-compatible API):

```shell
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PracticeLLM/Custom-KoLLM-13B-v3",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

How to use PracticeLLM/Custom-KoLLM-13B-v3 with Docker Model Runner:
```shell
docker model run hf.co/PracticeLLM/Custom-KoLLM-13B-v3
```
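All of the servers above expose the same OpenAI-compatible completions API, so a single client works with each. A minimal stdlib-only sketch mirroring the curl calls (the `base_url`, port, and prompt are placeholders; an actual call requires a running server):

```python
import json
import urllib.request

MODEL = "PracticeLLM/Custom-KoLLM-13B-v3"

def build_request(prompt: str, base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST request matching the curl examples above."""
    payload = {
        "model": MODEL,
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def complete(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """Send the request and return the generated text."""
    with urllib.request.urlopen(build_request(prompt, base_url)) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]

# Requires a running server (vLLM defaults to port 8000, SGLang to 30000):
# print(complete("Once upon a time,"))
```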
### Model Developers

### Model Architecture

### Base Model

### Training Dataset
Ko-LLM leaderboard (11/27; link):
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|---|
| ⭐My custom LLM 13B-v1⭐ | 50.19 | 45.99 | 56.93 | 41.78 | 41.66 | 64.58 |
| ⭐My custom LLM 13B-v2⭐ | 48.28 | 45.73 | 56.97 | 38.77 | 38.75 | 61.16 |
| ⭐My custom LLM 13B-v3⭐ | 46.40 | 44.71 | 56.89 | 40.86 | 44.22 | 45.34 |
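The Average column in the leaderboard table is the arithmetic mean of the five benchmark scores, which is easy to sanity-check from the per-task numbers:

```python
# Per-benchmark scores from the Ko-LLM leaderboard table above
# (Ko-ARC, Ko-HellaSwag, Ko-MMLU, Ko-TruthfulQA, Ko-CommonGen V2).
scores = {
    "My custom LLM 13B-v1": [45.99, 56.93, 41.78, 41.66, 64.58],
    "My custom LLM 13B-v2": [45.73, 56.97, 38.77, 38.75, 61.16],
    "My custom LLM 13B-v3": [44.71, 56.89, 40.86, 44.22, 45.34],
}

for name, vals in scores.items():
    avg = sum(vals) / len(vals)
    print(f"{name}: {avg:.2f}")
```

The recomputed means (50.19, 48.28, 46.40) match the Average column in the table.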
AI-Harness evaluation (link):

| Model | Copa (0-shot) | Copa (5-shot) | HellaSwag (0-shot) | HellaSwag (5-shot) | BoolQ (0-shot) | BoolQ (5-shot) | Sentineg (0-shot) | Sentineg (5-shot) |
|---|---|---|---|---|---|---|---|---|
| ⭐My custom LLM 13B-v1⭐ | 0.7987 | 0.8269 | 0.4994 | 0.5660 | 0.3343 | 0.5060 | 0.6984 | 0.9723 |
| ⭐My custom LLM 13B-v2⭐ | 0.7938 | 0.8209 | 0.4978 | 0.4893 | 0.3343 | 0.5614 | 0.6283 | 0.9773 |
| ⭐My custom LLM 13B-v3⭐ | 0.8107 | 0.8359 | 0.5176 | 0.5182 | 0.6702 | 0.7851 | 0.5241 | 0.9698 |
| beomi/llama-2-koen-13b | 0.7768 | 0.8128 | 0.4999 | 0.5127 | 0.3988 | 0.7038 | 0.5870 | 0.9748 |
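One pattern worth noting in the table: v3 makes a large BoolQ gain over the beomi/llama-2-koen-13b baseline while giving up some Sentineg 0-shot accuracy. A small sketch computing the 0-shot deltas from the numbers above:

```python
# 0-shot scores from the AI-Harness table above.
v3   = {"Copa": 0.8107, "HellaSwag": 0.5176, "BoolQ": 0.6702, "Sentineg": 0.5241}
base = {"Copa": 0.7768, "HellaSwag": 0.4999, "BoolQ": 0.3988, "Sentineg": 0.5870}

# Positive delta = v3 beats the baseline on that task.
for task in v3:
    delta = v3[task] - base[task]
    print(f"{task}: {delta:+.4f}")
```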
### KO-Platypus

Load the model in half precision, with the weights placed automatically across available devices:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/Custom-KoLLM-13B-v3"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
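The load above uses `torch.float16`, which matters for a model of this size. A rough back-of-the-envelope for weight memory alone (the 13B parameter count is approximate, and activations plus KV cache add more on top):

```python
# Rough weight-memory estimate for a ~13B-parameter model.
params = 13e9          # approximate parameter count
bytes_per = {"float32": 4, "float16": 2, "int8": 1}

for dtype, nbytes in bytes_per.items():
    gib = params * nbytes / 2**30
    print(f"{dtype}: ~{gib:.0f} GiB")
```

In float16 the weights alone need roughly 24 GiB, which is why `device_map="auto"` (sharding or offloading across available devices) is used rather than loading onto a single small GPU.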