Instructions to use MattyMroz/MangaShift with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use MattyMroz/MangaShift with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MattyMroz/MangaShift",
    filename="models/ocr/qianfan-ocr/gguf/qianfan-ocr-bf16.gguf",
)

# messages must be a list of chat messages, not a bare string
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Transcribe the text in the attached manga page."}
    ]
)
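The call above is text-only. Since qianfan-ocr is an OCR model, you will usually want to send an image. The sketch below is a hedged illustration, not a confirmed recipe: it assumes the repo ships a companion mmproj/projector GGUF (the filename used here is hypothetical) and that one of llama-cpp-python's multimodal chat handlers works with this model (Llava15ChatHandler is used purely as an example). Check the repo's gguf/ folder and model card before relying on it.

# Hedged sketch: OCR on a local page image via llama-cpp-python.
# Assumptions (not confirmed by the repo): a companion mmproj GGUF exists,
# and a LLaVA-style chat handler is compatible with qianfan-ocr.
import base64

from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler  # handler choice is illustrative

# Hypothetical projector filename; check the repo's gguf/ folder for the real one.
chat_handler = Llava15ChatHandler(clip_model_path="qianfan-ocr-mmproj.gguf")

llm = Llama.from_pretrained(
    repo_id="MattyMroz/MangaShift",
    filename="models/ocr/qianfan-ocr/gguf/qianfan-ocr-bf16.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,  # leave room for image tokens
)

# Encode a local manga page as a data URI and ask for a transcription.
with open("page.jpg", "rb") as f:
    data_uri = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_uri}},
                {"type": "text", "text": "Transcribe all text in this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])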
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use MattyMroz/MangaShift with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MattyMroz/MangaShift:BF16

# Run inference directly in the terminal:
llama-cli -hf MattyMroz/MangaShift:BF16
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf MattyMroz/MangaShift:BF16

# Run inference directly in the terminal:
llama-cli -hf MattyMroz/MangaShift:BF16
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf MattyMroz/MangaShift:BF16

# Run inference directly in the terminal:
./llama-cli -hf MattyMroz/MangaShift:BF16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf MattyMroz/MangaShift:BF16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf MattyMroz/MangaShift:BF16
Use Docker
docker model run hf.co/MattyMroz/MangaShift:BF16
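Whichever install path you pick, llama-server exposes an OpenAI-compatible API, by default on http://localhost:8080 (the same endpoint the Pi and Hermes configurations below point at). The sketch below is a minimal client using only the Python standard library; the model field is informational here, since llama-server answers with whatever model it was started with.

# Minimal sketch: query the local llama-server OpenAI-compatible endpoint.
# Assumes llama-server is already running on the default port 8080.
import json
import urllib.request

payload = {
    "model": "MattyMroz/MangaShift:BF16",  # informational; the server uses its loaded model
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])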
- LM Studio
- Jan
- Ollama
How to use MattyMroz/MangaShift with Ollama:
ollama run hf.co/MattyMroz/MangaShift:BF16
- Unsloth Studio
How to use MattyMroz/MangaShift with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for MattyMroz/MangaShift to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for MattyMroz/MangaShift to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for MattyMroz/MangaShift to start chatting
- Pi
How to use MattyMroz/MangaShift with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf MattyMroz/MangaShift:BF16
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "MattyMroz/MangaShift:BF16" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory: pi
- Hermes Agent
How to use MattyMroz/MangaShift with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf MattyMroz/MangaShift:BF16
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default MattyMroz/MangaShift:BF16
Run Hermes
hermes
- Docker Model Runner
How to use MattyMroz/MangaShift with Docker Model Runner:
docker model run hf.co/MattyMroz/MangaShift:BF16
- Lemonade
How to use MattyMroz/MangaShift with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull MattyMroz/MangaShift:BF16
Run and chat with the model
lemonade run user.MangaShift-BF16
List all available models
lemonade list
MangaShift External Resources
External models and code repositories for the MangaShift project.
🔗 Main Project: https://github.com/MattyMroz/MangaShift
🔗 Datasets: https://huggingface.co/datasets/MattyMroz/MangaShift
Naming Convention (models/ and .legacy/models/)
All models in external/models/ and the curated legacy payloads in external/.legacy/models/ use a single, consistent naming convention. Full brainstorm and rationale: temp/brain_storm/2026-04-27-naming-convention/.
11 Rules
- Casing: kebab-case, lowercase, ASCII only (no CamelCase, no "_").
- Format subfolders: each model has per-format pt/, onnx/, safetensors/, gguf/ subfolders inside external/models/<category>/<model-name>/, if that format exists.
- Single-file model: <model-name>.<ext> (e.g. big-lama.pt, comic-text-detector.pt).
- Multi-part model: subfolder per variant + file named after the part: <model-name>-<variant>/<part>.<ext> (e.g. hi-sam/onnx/hi-sam-b/encoder.onnx, paddle-ocr-manga/onnx/vision-encoder.onnx).
- Variant size: size suffix: <model-name>-<size>.<ext> (e.g. anime-text-l.pt, hi-sam-b.pth; sizes: n/s/m/l/x or b/l/h).
- Quantization: suffix only if ≠ FP32: <model-name>-<quant>.<ext>. Standard suffixes: fp16, bf16, int8, q4-k-m, q8-0. FP32 = no suffix (the default).
- Upscaling exception: keep the scale prefix (4x-, 2x-), the community standard from OpenModelDB.
- ComfyUI exception: models for the ComfyUI stack (flux, qwen) keep the diffusion_models/, text_encoders/, vae/ structure; the manifest classifies them as format comfy, even if the files use the .safetensors extension.
- Upstream hash: drop hashes from upstream names (e.g. sam_vit_b_01ec64.pth → sam-vit-b.pth; see the normalization sketch after this list). Versioning goes through bundle.yaml.
- HF model.safetensors default: rename to <model-name>.safetensors locally (we do not use generic names).
- Source placeholder: every model has src/ with a .gitkeep, even if the source/export scripts are not committed yet.
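To make a few of these rules concrete, here is a small hypothetical helper (not part of the MangaShift tooling) that applies the casing, quantization-suffix, and upstream-hash rules. The function name and the trailing-hash heuristic are illustrative only.

# Hypothetical helper (not part of MangaShift) illustrating the casing,
# quantization-suffix, and upstream-hash rules.
import re


def normalize_model_filename(upstream_name: str, quant: str = "fp32") -> str:
    stem, _, ext = upstream_name.rpartition(".")
    # Casing rule: kebab-case, lowercase, ASCII only (underscores become dashes).
    stem = re.sub(r"[^a-z0-9]+", "-", stem.lower()).strip("-")
    # Upstream-hash rule: drop a trailing hex hash (heuristic: 6+ hex chars).
    stem = re.sub(r"-[0-9a-f]{6,}$", "", stem)
    # Quantization rule: suffix only when the payload is not FP32.
    if quant.lower() != "fp32":
        stem = f"{stem}-{quant.lower()}"
    return f"{stem}.{ext}"


assert normalize_model_filename("sam_vit_b_01ec64.pth") == "sam-vit-b.pth"
assert normalize_model_filename("big-lama.onnx", quant="fp16") == "big-lama-fp16.onnx"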
Examples
external/models/
├── detection/
│ ├── anime-text/{pt,onnx}/anime-text-{n,s,m,l,x}.{pt,onnx}
│ ├── comic-text-detector/{pt,onnx}/comic-text-detector.{pt,onnx}
│ ├── hi-sam/
│ │ ├── pt/hi-sam-{b,l,h}.pth
│ │ └── onnx/hi-sam-{b,l,h}/{encoder,decoder,fg-decoder}.onnx
│ └── magi-v3/{pt,onnx}/<part>.{safetensors,onnx}
├── inpainting/
│ ├── big-lama/{pt,onnx}/big-lama.{pt,onnx} # + big-lama-fp16.onnx
│ └── flux2-klein/ # ComfyUI exception
│ ├── diffusion_models/flux-2-klein-{4b,9b}.safetensors
│ ├── text_encoders/qwen-3-{4b,8b}.safetensors
│ ├── vae/flux2-vae.safetensors
│ └── src/.gitkeep
├── ocr/
│ ├── paddle-ocr-manga/
│ │ ├── pt/paddle-ocr-manga.safetensors
│ │ └── onnx/{vision-encoder,text-decoder-prefill,text-decoder-decode,projector}.onnx
│ └── qianfan-ocr/gguf/qianfan-ocr-{bf16,q4-k-m,q8-0}.gguf
└── upscaling/
├── pt/4x-{anime-sharp-esrgan,...}.pth
├── onnx/4x-{anime-sharp-esrgan,...}.onnx
└── src/.gitkeep
external/.legacy/models/
└── detection/
├── magi/{pt,src}/
└── magi-v2/{pt,src}/
HF Publish Scope
- We publish: README.md, .gitattributes, model_hashes.json, fonts/, models/detection/, models/inpainting/, models/ocr/, models/upscaling/, .legacy/models/detection/{magi,magi-v2}/.
- We do not publish: code/, bin/, .deprecated/, .external_state.json, models/_fixtures/. code/ is a research/competitor-tracking folder and stays local, outside the HF model repo. .legacy/ contains only curated retired payloads, kept in the same bucket + format-dirs layout as the main models/. models/_fixtures/ is a local home for test fixtures/placeholders; it stays off HF and currently has no publishable payload. model_hashes.json uses schema v2: assets are grouped by bucket path and by asset format (pt, onnx, gguf, native, comfy).
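The list above only states that schema v2 groups assets by bucket path and by asset format; the exact JSON layout of model_hashes.json is not documented here. Purely as an illustration of that grouping, the sketch below walks a local models/ tree and builds such a mapping with SHA-256 digests. It is not the project's tooling, and the real schema may differ (for example in how ComfyUI-style comfy assets are classified).

# Illustrative only, not the MangaShift tooling: build a v2-style mapping of
# assets grouped by bucket path and by format dir, with SHA-256 per file.
import hashlib
import json
from pathlib import Path

FORMAT_DIRS = {"pt", "onnx", "safetensors", "gguf"}


def hash_file(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def collect_hashes(models_root: Path) -> dict:
    grouped: dict = {}
    for file in sorted(models_root.rglob("*")):
        if not file.is_file():
            continue
        parts = file.relative_to(models_root).parts
        idx = next((i for i, p in enumerate(parts) if p in FORMAT_DIRS), None)
        if idx is None:
            # ComfyUI-style layouts have no format dir; the real manifest
            # classifies these as "comfy" or "native", handled here as a fallback.
            bucket, fmt = "/".join(parts[:-1]), "native"
        else:
            bucket, fmt = "/".join(parts[:idx]), parts[idx]
        grouped.setdefault(bucket, {}).setdefault(fmt, {})["/".join(parts)] = hash_file(file)
    return grouped


if __name__ == "__main__":
    print(json.dumps(collect_hashes(Path("external/models")), indent=2))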