Gemma-3-27b-it-HERETIC-Gemini-Deep-Reasoning-q6-mlx

Quantized model performance

q6  0.594, 0.746, 0.881, 0.779, 0.464, 0.816, 0.751
q8  0.596, 0.748, 0.881, 0.779, 0.458, 0.819, 0.751

The q6 quant tracks q8 within ±0.006 on every metric, so the 6-bit conversion costs essentially nothing.

Brainwaves for regular vs Heretic models in the q8 quant

regular  0.590,0.742,0.883,0.781,0.458,0.822,0.751
heretic  0.596,0.748,0.881,0.779,0.458,0.819,0.751

Heretic ablation significantly improved the model's arc/arc_easy scores, with only minor drops on the other metrics.

Brainwaves for the baseline vs the Gemini-trained model

gemma-3-27b-it-heretic
q8  0.557,0.711,0.868,0.533,0.452,0.706,0.695

Gemma-3-27b-it-HERETIC-Gemini-Deep-Reasoning
q8  0.596,0.748,0.881,0.779,0.458,0.819,0.751 

DavidAU's Gemini training was very successful, raising the model's performance envelope on every metric.

-G
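
If you want to recompute the gaps discussed above, here is a small Python sketch over the score rows as printed on this card. Only the first two columns (arc, arc_easy) are named in the text; the remaining column labels are placeholders, not confirmed by the card.

# Score rows copied verbatim from this card (metric order as printed).
# Only arc and arc_easy are identified in the text; m3..m7 are placeholders.
labels = ["arc", "arc_easy", "m3", "m4", "m5", "m6", "m7"]

rows = {
    "q6 (heretic+gemini)": [0.594, 0.746, 0.881, 0.779, 0.464, 0.816, 0.751],
    "q8 (heretic+gemini)": [0.596, 0.748, 0.881, 0.779, 0.458, 0.819, 0.751],
    "q8 (regular gemini)": [0.590, 0.742, 0.883, 0.781, 0.458, 0.822, 0.751],
    "q8 (heretic base)":   [0.557, 0.711, 0.868, 0.533, 0.452, 0.706, 0.695],
}

def deltas(a, b):
    """Per-metric difference b - a, rounded to suppress float noise."""
    return [round(y - x, 3) for x, y in zip(rows[a], rows[b])]

# Heretic ablation: arc/arc_easy up, small drops elsewhere.
print(dict(zip(labels, deltas("q8 (regular gemini)", "q8 (heretic+gemini)"))))
# Gemini training: every metric improves over the heretic baseline.
print(dict(zip(labels, deltas("q8 (heretic base)", "q8 (heretic+gemini)"))))
# Quantization: q6 stays within ±0.006 of q8 on every metric.
print(dict(zip(labels, deltas("q8 (heretic+gemini)", "q6 (heretic+gemini)"))))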

This model Gemma-3-27b-it-HERETIC-Gemini-Deep-Reasoning-q6-mlx was converted to MLX format from DavidAU/Gemma-3-27b-it-Gemini-Deep-Reasoning using mlx-lm version 0.30.4.
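
The conversion can be reproduced with mlx-lm's Python API. A minimal sketch, assuming the convert() signature of recent mlx-lm releases (argument names and defaults may vary between versions):

from mlx_lm import convert

# Quantize the source checkpoint to 6-bit MLX weights.
# q_group_size=64 is the usual mlx-lm default; adjust as needed.
convert(
    "DavidAU/Gemma-3-27b-it-Gemini-Deep-Reasoning",
    mlx_path="Gemma-3-27b-it-HERETIC-Gemini-Deep-Reasoning-q6-mlx",
    quantize=True,
    q_bits=6,
    q_group_size=64,
)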

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

model, tokenizer = load("nightmedia/Gemma-3-27b-it-HERETIC-Gemini-Deep-Reasoning-q6-mlx")

prompt = "hello"

# Apply the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
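
generate stops after a default token budget, which can truncate long reasoning traces. A minimal sketch of raising the limit, assuming the max_tokens parameter of recent mlx-lm releases (older versions may differ):

response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=1024,  # leave room for long reasoning traces
    verbose=True,
)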