opus-k26-py-step150-2026-05-02
LoRA adapter trained with reinforcement learning (GRPO via Thinking Machines' Tinker SDK) on the Opus-Magnum puzzle-solving REPL benchmark, snapshotted at training step 150.
Training setup
- Base model: `moonshotai/Kimi-K2.6`
- Renderer: `kimi_k25`
- Representation: `python` (the action language the agent emits)
- Adapter: LoRA, rank 32
- RL recipe: GRPO via Tinker
- Hyperparameters (collected into a Python dict in the sketch below): `learning_rate = 1e-5`, `group_size = 8`, `groups_per_batch = 16`, `max_tokens = 1024`, `max_trajectory_tokens = 12000`, `distances = 1,2,3,4`, `max_steps_off_policy = None`, `save_every = 5`
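For reference, the hyperparameters above can be gathered into a plain Python dict, e.g. for logging or reproduction. This is only a sketch: the names are copied from the list above, the inline comments are interpretive, and the actual `tinker_cookbook` GRPO recipe config object may use different field names.

```python
# GRPO run hyperparameters as listed above; comments are interpretive notes,
# not definitions taken from the Tinker cookbook.
grpo_hparams = {
    "learning_rate": 1e-5,
    "group_size": 8,                  # rollouts per prompt group (GRPO advantage baseline)
    "groups_per_batch": 16,           # prompt groups per training batch
    "max_tokens": 1024,               # generation budget per model turn
    "max_trajectory_tokens": 12_000,  # cap on a full REPL trajectory
    "distances": [1, 2, 3, 4],        # puzzle distances sampled during training
    "max_steps_off_policy": None,     # no staleness limit on off-policy steps
    "save_every": 5,                  # checkpoint every 5 steps (hence step 150 here)
}
```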
Files
- `adapter_model.safetensors` — Tinker raw LoRA adapter weights
- `adapter_config.json` — adapter metadata (rank, alpha, target modules)
- `README.md` — this file
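For a quick sanity check of the downloaded files, the sketch below reads the adapter metadata and lists a few tensor names. It assumes the config keys follow PEFT-style naming (`r`, `lora_alpha`, `target_modules`); the raw Tinker format may use different keys.

```python
import json
from safetensors import safe_open

# Adapter metadata; key names assumed to follow PEFT conventions.
with open("adapter_config.json") as f:
    cfg = json.load(f)
print("rank:", cfg.get("r"), "alpha:", cfg.get("lora_alpha"))
print("target modules:", cfg.get("target_modules"))

# Peek at the first few LoRA tensors without loading everything into memory.
with safe_open("adapter_model.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:
        print(name, f.get_slice(name).get_shape())
```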
Provenance
- Tinker checkpoint: `tinker://0aedf8c7-c9ad-57de-b8d5-d451fd058fde:train:0/sampler_weights/000150`
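If you have Tinker access, the sampler weights can also be used directly from this checkpoint path without downloading the repo. The sketch below follows the publicly documented Tinker SDK client pattern; the exact method names and signatures are assumptions and may differ across SDK versions.

```python
import tinker

# Assumes Tinker API credentials are configured in the environment.
service_client = tinker.ServiceClient()

# Point a sampling client at the checkpoint this repo was exported from.
# (Method name per the public Tinker SDK docs; treat as an assumption.)
sampling_client = service_client.create_sampling_client(
    model_path="tinker://0aedf8c7-c9ad-57de-b8d5-d451fd058fde:train:0/sampler_weights/000150"
)
# See the Tinker docs for how to sample from sampling_client.
```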
Converting to PEFT format
The files above are in Tinker's raw adapter format. To convert them to a PEFT-format adapter suitable for direct loading via vLLM's `--lora-modules` flag, run the following on a machine that can host the base model:
```python
from tinker_cookbook.weights import build_lora_adapter

build_lora_adapter(
    base_model="moonshotai/Kimi-K2.6",
    adapter_path="./tinker_adapter",  # this repo's contents
    output_path="./peft_adapter",
)
```
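Once converted, the PEFT adapter can be exercised with vLLM's offline API as well as the `--lora-modules` server flag. A minimal sketch, assuming a host able to serve the base model; the adapter name `opus-k26`, the prompt, and the parallelism setting are illustrative.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# enable_lora lets vLLM apply the converted PEFT adapter at request time.
# tensor_parallel_size is illustrative; size it to your hardware.
llm = LLM(model="moonshotai/Kimi-K2.6", enable_lora=True, tensor_parallel_size=8)

outputs = llm.generate(
    ["Solve the next puzzle step:"],  # placeholder prompt
    SamplingParams(max_tokens=1024, temperature=0.7),
    lora_request=LoRARequest("opus-k26", 1, "./peft_adapter"),
)
print(outputs[0].outputs[0].text)
```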