🧠 LoRA Adapter for Llama-3.2-3B-Instruct (Multi-Class Classification)

📌 Model Overview

This repository contains a LoRA adapter fine-tuned on top of meta-llama/Llama-3.2-3B-Instruct for multi-class text classification.
The model predicts one of the following labels:

Yes, No, Neutral, (D), A, B, C, D, E, N, (C), (A)

This adapter can be applied to a range of text classification tasks. Because it was trained with Low-Rank Adaptation (LoRA), only the small set of low-rank adapter weights is stored here; they are loaded on top of the frozen base model at inference time.
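
A minimal inference sketch using transformers and peft is shown below. The adapter repo id is taken from this page; the exact prompt and label format used during training are not documented here, so the prompt wording below is an assumption.

```python
# Minimal inference sketch; the prompt wording and label handling are
# assumptions, not the documented training format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-3B-Instruct"
adapter_id = "SwashBuckler001/Llama-3.2-3B-Instruct-LoRA-GLoRE"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
model.eval()

labels = ["Yes", "No", "Neutral", "(D)", "A", "B", "C", "D", "E", "N", "(C)", "(A)"]

prompt = (
    "Classify the following input and answer with exactly one of these labels: "
    + ", ".join(labels)
    + "\n\nInput: <your text here>\nAnswer:"
)
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=5, do_sample=False)

prediction = tokenizer.decode(
    output[0, input_ids.shape[-1]:], skip_special_tokens=True
).strip()
print(prediction)
```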


🚀 Training Details

Training Command

The LoRA adapter was trained using the following command:

```bash
python peft_training.py \
  --model-name meta-llama/Llama-3.2-3B-Instruct \
  --train-file ../GLoRE/data/splits/train.jsonl \
  --output-dir lora-multi \
  --classes Yes No Neutral "(D)" A B C D E N "(C)" "(A)"
```
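
For orientation, here is a rough, hypothetical sketch of what a LoRA fine-tuning setup along these lines could look like with transformers and peft. The JSONL schema, prompt format, and hyperparameters are assumptions; the actual peft_training.py may differ.

```python
# Hypothetical LoRA fine-tuning sketch; NOT the actual peft_training.py.
# Assumes train.jsonl rows shaped like {"text": ..., "label": ...}.
import json
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Wrap the base model with LoRA adapters on the attention projections.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))

# Load the JSONL split and render each example as "prompt + gold label".
with open("../GLoRE/data/splits/train.jsonl") as f:
    rows = [json.loads(line) for line in f]
texts = [f"{r['text']}\nAnswer: {r['label']}{tokenizer.eos_token}" for r in rows]

dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-multi",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-multi")  # writes only the adapter weights
```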