# CDLI SLAM-ASR Luganda Atypical Speech LLM-LoRA Checkpoint (Epoch 2, Step 107)
LLM-LoRA plus projector atypical-speech adaptation checkpoint for SLAM-ASR on the CDLI Luganda atypical speech dataset. The Whisper encoder remains frozen; the linear projector and decoder LoRA adapters are updated from the ASR-adapted starting checkpoint.
## What this repository contains
This Hub repository stores a partial SLAM-ASR checkpoint for use with the SLAM-LLM codebase. It is not a standalone `transformers` checkpoint.
- Checkpoint type: `llm_lora_projector`
- Architecture: Whisper encoder (`Sunbird/asr-whisper-large-v3-salt`) + linear projector + Sunflower-14B decoder; encoder frozen; LLM base frozen; decoder LoRA on `q_proj`/`v_proj`
- Base encoder: `Sunbird/asr-whisper-large-v3-salt`
- Base LLM: `Sunbird/Sunflower-14B`
- Exported files: `model.pt`
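Because only the projector and the decoder LoRA adapters are trained, the exported file holds just those parameters rather than the full model. The sketch below illustrates the idea of filtering a state dict down to its trainable parts; the key names are made up for illustration and do not match SLAM-LLM's exact naming scheme.

```python
# Illustrative sketch of assembling a partial llm_lora_projector checkpoint:
# keep projector and LoRA adapter weights, drop the frozen encoder and LLM
# base weights. Key names here are hypothetical, not SLAM-LLM's real ones.

def extract_trainable_state(full_state_dict):
    """Keep only parameters whose names mark them as projector or LoRA weights."""
    return {
        name: tensor
        for name, tensor in full_state_dict.items()
        if "projector" in name or "lora_" in name
    }

# Toy state dict standing in for the real model (values would be tensors).
full_state = {
    "encoder.layers.0.self_attn.q_proj.weight": "frozen",
    "projector.linear.weight": "trainable",
    "llm.layers.0.self_attn.q_proj.lora_A.weight": "trainable",
    "llm.layers.0.self_attn.q_proj.weight": "frozen",
}

partial = extract_trainable_state(full_state)
print(sorted(partial))
```

Loading then means restoring the base encoder and LLM from their Hub repositories and applying `model.pt` on top.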
## Training / evaluation context
- Dataset: `cdli/ugandan_luganda_nonstandard_speech_v1.0`
- Evaluation split: `test`
- Training speakers: 36
- Validation speakers: 5
- Speaker overlap: none between train and validation/test
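The speaker-disjoint split above can be verified with a simple set check; the speaker IDs below are made up for illustration.

```python
# Minimal sketch of the speaker-disjointness check implied by the split
# description: train and validation speaker ID sets must not intersect.
# The ID scheme here is invented for the example.
train_speakers = {f"spk{i:02d}" for i in range(36)}      # 36 training speakers
eval_speakers = {f"spk{i:02d}" for i in range(36, 41)}   # 5 validation speakers

assert len(train_speakers) == 36 and len(eval_speakers) == 5
assert train_speakers.isdisjoint(eval_speakers), "speaker leakage between splits"
print("splits are speaker-disjoint")
```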
## Reported metrics
- Normalized WER (JiWER scorer): 58.63%
- Normalized CER (JiWER scorer): 22.91%
- Atypical overall normalized WER: 59.17%
- Atypical overall normalized CER: 22.98%
- Atypical averaged utterance WER: 54.42%
- Atypical averaged utterance CER: 19.10%
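The reported scores use the JiWER scorer on normalized text. A minimal sketch of what normalized WER and CER compute, using a plain edit-distance implementation instead of the JiWER package; the lowercasing/punctuation normalization here is an assumption about the protocol.

```python
# Sketch of normalized WER/CER: Levenshtein edit distance over words (WER)
# or characters (CER), divided by reference length, after text normalization.
# The normalization rule below (lowercase, strip punctuation) is an assumption.
import string

def edit_distance(a, b):
    """Levenshtein distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = curr
    return prev[-1]

def normalize(text):
    return text.lower().translate(str.maketrans("", "", string.punctuation))

def wer(ref, hyp):
    r, h = normalize(ref).split(), normalize(hyp).split()
    return edit_distance(r, h) / len(r)

def cer(ref, hyp):
    r, h = normalize(ref).replace(" ", ""), normalize(hyp).replace(" ", "")
    return edit_distance(r, h) / len(r)

print(wer("webale nnyo ssebo", "webale nyo sebo"))  # 2 substitutions / 3 words
```

The "overall" variants pool edits and reference lengths across the whole test set before dividing, while "averaged utterance" variants compute a per-utterance rate and take the mean, which is why the two numbers differ.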
## Decode settings used for the reported metrics
Test decoding used `MAX_NEW_TOKENS=200`, `NUM_BEAMS=4`, `REPETITION_PENALTY=2.0`, `NO_REPEAT_NGRAM_SIZE=2`, `USE_LLM_PEFT=true`, and `LLM_TARGET_MODULES=[q_proj,v_proj]`.
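These variables correspond to standard Hugging Face text-generation arguments; the mapping below is a sketch, assuming SLAM-LLM forwards them to `generate()` unchanged.

```python
# Sketch mapping the decode environment variables to Hugging Face
# `generate()` keyword arguments (assumed pass-through in SLAM-LLM).
decode_kwargs = {
    "max_new_tokens": 200,      # MAX_NEW_TOKENS: cap on generated tokens
    "num_beams": 4,             # NUM_BEAMS: beam search width
    "repetition_penalty": 2.0,  # REPETITION_PENALTY: discourage repeats
    "no_repeat_ngram_size": 2,  # NO_REPEAT_NGRAM_SIZE: block repeated bigrams
}
# With USE_LLM_PEFT=true, LoRA adapters on q_proj/v_proj are applied to the
# decoder before generation (integration details assumed).
print(decode_kwargs)
```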
## Additional results notes
Notebook-style subgroup breakdown (normalized WER) on the test split:
- By severity: Mild 48.94%, Moderate 52.78%, Severe 62.60%
- By disorder: Dysarthria 50.15%, Articulation Disorders 53.70%, Stuttering 54.28%, Voice Disorder 67.68%

This checkpoint decodes stably, with a hypothesis-to-reference length ratio of 92.88%.
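The hyp/ref ratio quoted above can be read as total hypothesis words over total reference words; values near 100% indicate decoding neither truncates nor loops. A minimal sketch with made-up data:

```python
# Sketch of the hypothesis-to-reference length ratio: total hypothesis word
# count divided by total reference word count, as a percentage.
# The (ref, hyp) pairs below are invented for illustration.
def hyp_ref_ratio(pairs):
    ref_words = sum(len(ref.split()) for ref, _ in pairs)
    hyp_words = sum(len(hyp.split()) for _, hyp in pairs)
    return 100.0 * hyp_words / ref_words

pairs = [
    ("webale nnyo ssebo", "webale nyo"),  # 3 ref words, 2 hyp words
    ("ogamba otya", "ogamba otya"),       # 2 ref words, 2 hyp words
]
print(hyp_ref_ratio(pairs))  # 4 hyp words / 5 ref words -> 80.0
```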
## Loading notes
Load through SLAM-LLM; this repository stores a partial SLAM-ASR checkpoint, not a standalone Transformers model.
Typical decode flow in this project uses:
- `examples/asr_luganda/scripts/decode_luganda_sunflower.sh`
- `USE_ENCODER_PEFT=true` for encoder-LoRA checkpoints
- matching LoRA target modules at decode time
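A hedged invocation sketch for this LLM-LoRA checkpoint: the script path and variable names come from this README, but the exact set of accepted variables and their defaults should be checked against the SLAM-LLM codebase.

```shell
# Illustrative decode invocation; verify variable handling in SLAM-LLM.
MAX_NEW_TOKENS=200 \
NUM_BEAMS=4 \
REPETITION_PENALTY=2.0 \
NO_REPEAT_NGRAM_SIZE=2 \
USE_LLM_PEFT=true \
LLM_TARGET_MODULES="[q_proj,v_proj]" \
bash examples/asr_luganda/scripts/decode_luganda_sunflower.sh
```

Note that `USE_ENCODER_PEFT=true` applies only to encoder-LoRA checkpoints, not to this `llm_lora_projector` checkpoint.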
## Caveats
- This repository stores SLAM-ASR training artifacts intended for research use.
- The checkpoint must be used with the matching SLAM-LLM model code and base components.
- Results can be sensitive to decode settings and evaluation protocol.