
omniASR Igbo Blind Spot Dataset

Research Questions

This dataset investigates three interrelated questions about multilingual ASR performance on tonal languages:

  1. Operational Definition: What does "language support" mean when a model lists 1,600+ languages? Does coverage imply functional accuracy on linguistically meaningful distinctions?

  2. Diagnostic Validity: Can tonal diacritic preservation serve as a diagnostic for acoustic competence vs. orthographic pattern matching in low-resource languages?

  3. Systematic Evaluation: Does facebook/omniASR-CTC-1B exhibit systematic tonal collapse in Igbo, and if so, what error patterns emerge?

Overview

This dataset provides a controlled diagnostic evaluation of tonal fidelity in facebook/omniASR-CTC-1B when processing Igbo (ibo_Latn), a tonal Niger-Congo language with ~45 million speakers. Through 21 systematically designed audio samples, we document a 75.5% diacritic loss rate on tonal markers (bootstrap 95% CI: [57.1%, 89.7%]; bootstrap mean estimate over utterance-level resampling; raw aggregate count: 30/49 = 61.2%) and present evidence consistent with probabilistic diacritic generation rather than robust acoustic conditioning.

Key Finding: The model exhibits a 75.5% diacritic loss rate on tonal markers, fails to distinguish tonal minimal pairs, and paradoxically hallucinates diacritics on monotone speech.

Motivation

Recent work on ASR fairness has documented systematic performance disparities across demographic groups (Koenecke et al., 2020) and languages (Ogueji et al., 2024). However, existing evaluations focus primarily on word error rates in high-resource languages. This dataset addresses three critical gaps:

  1. Tonal language evaluation: Most ASR benchmarks ignore whether models preserve linguistically meaningful tone distinctions
  2. Low-resource African languages: Igbo remains underrepresented in ML evaluation despite being a major world language
  3. Native speaker ground truth: As a native Igbo speaker, I provide authoritative ground truth for phonetic and tonal correctness that automated metrics cannot capture

The Paradox of "Supported" Languages

omniASR's model card lists Igbo (ibo_Latn) among its 1,600+ supported languages. However, as recent work on low-resource ASR demonstrates, nominal support does not guarantee functional accuracy (EMNLP 2024, "The Zeno's Paradox of 'Low-Resource' Languages").

The challenge is definitional: what does it mean for a language to be "low-resource"?

  • By training data: Igbo has fewer hours than English (low-resource)
  • By speaker population: 45 million speakers (NOT low-resource)
  • By model performance: Our findings show it behaves like a low-resource language despite being "supported"

This dataset reveals the gap between coverage (language is in the training set) and competence (model preserves linguistically meaningful distinctions). As the EMNLP paper argues, we risk creating a Zeno's paradox: models claim to support more and more languages, yet the quality asymptote never reaches parity with high-resource languages.

Our contribution: We provide native-speaker ground truth to quantify this gap for Igbo, moving beyond subjective impressions to measurable blind spots.

Dataset Structure

huggingface_dataset/
├── audio/               # 21 WAV files (16kHz mono)
├── metadata.csv         # Ground truth, model outputs, error metrics
└── README.md            # This file
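metadata.csv parses with the standard library's csv module. The sketch below reads a single in-memory row whose values come from this card's monotone example (the `audio/` path prefix follows the directory layout above and is illustrative):

```python
import csv
import io

# One metadata.csv row (values from the monotone example in this card);
# the header matches the schema documented in this README.
sample = io.StringIO(
    "file_name,ground_truth,model_output,category,subcategory,language,"
    "character_error_rate,diacritics_expected,diacritics_produced,diacritic_loss\n"
    "audio/09_tonal_flat.wav,O na-eri oji n'ututu.,ọne rị ọjí nụ tútú,"
    "tonal_diacritics,monotone,ibo_Latn,0.744,0,7,-7\n"
)
row = next(csv.DictReader(sample))
print(row["category"], float(row["character_error_rate"]), int(row["diacritic_loss"]))
# → tonal_diacritics 0.744 -7
```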

Metadata Schema

| Column | Description |
|---|---|
| file_name | Path to audio file |
| ground_truth | Correct transcription with tone marks |
| model_output | omniASR-CTC-1B prediction |
| category | Error category (see taxonomy below) |
| subcategory | Specific test condition |
| language | Language code (ibo_Latn, yor_Latn, fra_Latn, pcm_Latn, mixed) |
| character_error_rate | Character-level error rate (0-1) |
| diacritics_expected | Number of tone marks in ground truth |
| diacritics_produced | Number of tone marks in model output |
| diacritic_loss | Net diacritic difference (negative = hallucination) |
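The diacritic columns are consistent with counting Unicode combining marks after NFD normalization (both tone marks and the dot-below of ọ/ụ/ị decompose to combining characters). The exact counting script is not published, so this is a sketch of one plausible implementation:

```python
import unicodedata

def count_diacritics(text: str) -> int:
    """Count combining marks after NFD normalization (tone marks and dot-below)."""
    return sum(
        1
        for ch in unicodedata.normalize("NFD", text)
        if unicodedata.category(ch) == "Mn"
    )

gt = "Aha m bụ Chukwuemeka. Nna m bụ Obiora. Nne m bụ Ngozi."
hyp = "ahambụ cheku emeka nnam bụ ọbiọra nnem bụ ngọzi"
expected, produced = count_diacritics(gt), count_diacritics(hyp)
print(expected, produced, expected - produced)  # → 3 6 -3 (negative = hallucination)
```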

Error Taxonomy

1. Cross-lingual Orthographic Interference (5 samples)

Hypothesis: Model applies incorrect orthographic conventions from other languages to Igbo text.

Tests:

  • Personal names (01_script_names)
  • Formal greetings (02_script_formal)
  • Numeric sequences (03_script_numbers)
  • Proverbs (04_script_proverb)
  • Prosody variation (05_script_slow)

Finding: Model frequently adds incorrect diacritics where none exist (-38.9% net diacritic loss = 38.9% hallucination rate), suggesting cross-lingual interference from other supported languages.

2. Phonemic Tone Sensitivity (6 samples)

Hypothesis: Model cannot distinguish phonemically contrastive tones in Igbo.

Tests:

  • Minimal pairs: akwa/akwà/àkwà/ákwá (06_tonal_akwa)
  • Minimal pairs: oke/òkè/ọkè (07_tonal_oke)
  • Dense tone marks (08_tonal_dense)
  • Monotone control (09_tonal_flat)
  • Yoruba controls (10_tonal_yoruba, 21_tonal_yoruba_formal)

Finding:

  • 75.5% diacritic loss (bootstrap estimate; raw count: 30/49 tone marks)
  • Bootstrap 95% CI: [57.1%, 89.7%]
  • CER 74.4% on monotone speech where model ADDED tones that don't exist
  • Model outputs collapse multiple tonal minimal-pair forms into a shared orthographic representation, indicating weak tonal separability in this evaluation setup

Linguistic Impact: In Igbo, tone changes word meaning. Losing tone marks is equivalent to losing consonants in English (e.g., "bat" vs "hat" vs "cat" all transcribed as "at").
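The collapse can be shown directly: removing combining marks maps all four phonemically distinct akwa forms to a single string, which is effectively what diacritic loss does to meaning (a minimal stdlib sketch):

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Drop all combining marks after NFD normalization."""
    return "".join(
        ch
        for ch in unicodedata.normalize("NFD", text)
        if unicodedata.category(ch) != "Mn"
    )

# Four phonemically distinct Igbo forms from the minimal-pair test
forms = ["akwa", "akwà", "àkwà", "ákwá"]
print({strip_diacritics(f) for f in forms})  # → {'akwa'}
```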

3. Language Boundary Effects (5 samples)

Hypothesis: English-Igbo code-switching (extremely common in Nigerian speech) disrupts language-specific processing.

Tests:

  • English → Igbo embedding (11_codeswitch_en2ig)
  • Igbo → English embedding (12_codeswitch_ig2en)
  • Sentence-level alternation (13_codeswitch_alternate)
  • Diacritics in English context (14_codeswitch_embedded)
  • Nigerian Pidgin control (15_codeswitch_pidgin)

Finding: 14.3% diacritic loss. English portions transcribed perfectly while adjacent Igbo loses tone marks (e.g., "The ụlọ is beautiful" → "te ulọ is beautiful"), suggesting language detection boundaries affect orthographic fidelity.

4. Domain-Specific Lexical Coverage (5 samples)

Hypothesis: Model struggles with culturally specific terms, place names, and idiomatic expressions outside training distribution.

Tests:

  • Nigerian place names (16_context_places)
  • Igbo food terms (17_context_food)
  • Long proverbs (18_context_proverb)
  • French control (19_context_french)
  • Background noise robustness (20_context_noise)

Finding:

  • Best diacritic preservation (6.3% loss) but high word-level errors (30% CER)
  • Place names corrupted: "Owerri" → "weri" (missing syllable)
  • High-resource French performed unexpectedly poorly (Czech/Slavic character hallucinations)

Sample Audio Examples

You can listen to individual audio files in the Files tab or explore the full dataset with metadata in the Dataset Viewer.

Key examples:

  • 06_tonal_akwa.wav - Tonal minimal pairs (4 different words collapsed to random outputs)
  • 09_tonal_flat.wav - Monotone speech with hallucinated diacritics
  • 11_codeswitch_en2ig.wav - Code-switching (English perfect, Igbo loses tones)

Example 1: Tonal Minimal Pairs (06_tonal_akwa.wav)

  • Ground Truth: akwa, akwa, akwa. Akwà, akwà, akwà. Àkwà, àkwà, àkwà. Ákwá, ákwá, ákwá.
  • Model Output: akua akua akua akua akwa akwa akwa akua akwa ọkua ọkua ọkua
  • Error: Model collapses 4 distinct words into random variations


Example 2: Monotone Hallucination (09_tonal_flat.wav)

  • Ground Truth: O na-eri oji n'ututu. (spoken with FLAT intonation, no tones)
  • Model Output: ọne rị ọjí nụ tútú (model ADDED diacritics that weren't spoken)
  • Error: Evidence of orthographic bias, not acoustic perception


Quantitative Summary

| Category | Samples | Diacritic Loss | Avg CER |
|---|---|---|---|
| Phonemic Tone Sensitivity | 6 | 75.5% | 50.6% |
| Cross-lingual Orthographic Interference | 5 | -38.9% (hallucination) | 28.8% |
| Domain-Specific Lexical Coverage | 5 | 6.3% | 30.1% |
| Language Boundary Effects | 5 | 14.3% | 20.0% |
| Overall | 21 | 26.8% | 32.5% |

Statistical Analysis

Notation and Metric Definitions

Let:

  • E = total expected diacritics (ground truth)
  • P = total produced diacritics (model output)
  • D = dropped diacritics = max(0, E - P)
  • H = hallucinated diacritics = max(0, P - E)

We report three related but distinct metrics:

1. Raw Diacritic Drop Rate (RDD)

Measures proportion of expected tone marks not produced:

RDD = D / E

For the Phonemic Tone Sensitivity category:

  • Observed raw drop count: 30 / 49
  • Raw RDD = 61.2%

This is the direct event-level count across all tonal samples.

2. Diacritic Error Rate (DER)

Captures total deviation from expected tone inventory:

DER = (D + H) / E

This metric includes both:

  • Dropped tone marks
  • Hallucinated tone marks

DER is normalized by expected diacritics (E), not produced diacritics (P), to reflect total departure from the target tonal system. Note that DER can exceed 100% when hallucinations are substantial, as the denominator reflects ground truth expectations rather than produced output.

Results:

  • Overall DER (all categories): 26.8%
  • Phonemic Tone Sensitivity DER: 75.5%

DER differs from RDD because it includes hallucinations.
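Under the notation above (E and P as category totals), both metrics reduce to a few lines. This sketch recovers the tonal category's raw RDD from E = 49 expected marks and 30 drops (which implies P = 19):

```python
def rdd_and_der(expected_total: int, produced_total: int) -> tuple[float, float]:
    """RDD and DER from aggregate diacritic counts E and P."""
    dropped = max(0, expected_total - produced_total)        # D
    hallucinated = max(0, produced_total - expected_total)   # H
    rdd = dropped / expected_total
    der = (dropped + hallucinated) / expected_total
    return rdd, der

# Phonemic Tone Sensitivity: E = 49 expected tone marks, 30 dropped (P = 19)
rdd, der = rdd_and_der(49, 19)
print(f"RDD = {rdd:.1%}")  # → RDD = 61.2%
```

Note that with category totals only one of D or H can be nonzero; the reported per-category DER values are bootstrap means over utterance-level resampling, so they can differ from this raw aggregate.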

3. Bootstrap Uncertainty Estimation

To account for small sample size (N = 21 utterances), we computed 95% confidence intervals via bootstrap resampling (10,000 iterations).

Bootstrap resampling was performed at the utterance level, not event level. Therefore, bootstrap estimates may differ from raw aggregate counts due to unequal diacritic distribution across samples.

Phonemic Tone Sensitivity:

  • Bootstrap mean DER: 75.5%
  • 95% CI: [57.1%, 89.7%]

Overall Diacritic Loss (Drops Only):

  • Bootstrap mean RDD: 52.6%
  • 95% CI: [30.3%, 69.7%]

Character Error Rate (CER):

  • Overall CER: 0.333
  • 95% CI: [0.267, 0.402]
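CER is the standard Levenshtein edit distance normalized by ground-truth length; a minimal stdlib implementation (any text normalization applied before scoring is an assumption not specified here):

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: Levenshtein distance / len(ref)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            cur.append(min(prev[j] + 1,         # deletion
                           cur[j - 1] + 1,      # insertion
                           prev[j - 1] + cost)) # substitution
        prev = cur
    return prev[-1] / len(ref)

print(round(cer("akwa", "akua"), 2))  # → 0.25
```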

Interpretation

Even under conservative bootstrap lower bounds:

  • Tonal diacritic loss remains above 57%
  • Overall diacritic loss remains above 30%

This suggests that the observed tonal degradation is unlikely to be driven solely by sampling variability.

Because bootstrap operates over utterances rather than individual diacritic events, central estimates may differ from raw aggregate counts. Both values are reported for transparency.
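The utterance-level resampling can be sketched with the standard library alone (the per-utterance counts and the seed here are illustrative; the original analysis script is not published):

```python
import random
import statistics

def bootstrap_rdd(expected, produced, n_boot=10_000, seed=0):
    """95% percentile CI for the raw diacritic drop rate,
    resampling utterances (not diacritic events) with replacement."""
    rng = random.Random(seed)
    n = len(expected)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        e = sum(expected[i] for i in idx)
        p = sum(produced[i] for i in idx)
        if e > 0:  # skip degenerate resamples with no expected marks
            stats.append(max(0, e - p) / e)
    stats.sort()
    lo = stats[int(0.025 * len(stats))]
    hi = stats[int(0.975 * len(stats))]
    return statistics.mean(stats), (lo, hi)

# Illustrative per-utterance (expected, produced) diacritic counts
mean, (lo, hi) = bootstrap_rdd([15, 12, 9, 0, 7, 6], [3, 0, 3, 7, 3, 3])
print(f"mean {mean:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```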

Hallucination Rate (Diacritic Additions)

Bootstrap CIs for hallucinated diacritics, using the same resampling procedure:

  • Overall: 35.2% (95% CI: [18.2%, 53.3%])
  • Cross-lingual Orthographic Interference: 36.0% (95% CI: [8.7%, 68.0%])

Visualizations

Diacritic Loss by Category

Figure 1: Diacritic loss rates across error categories. The tonal category's raw drop rate (61.2%, red) shows severe loss compared to other categories. Negative values indicate diacritic hallucination (script interference).

CER vs. Diacritic Loss

Figure 2: Sample-level comparison of general transcription errors (CER) vs. tone-specific errors. Tonal category samples (red) show high diacritic loss even when CER is moderate (20-40%), demonstrating that tone errors are not simply a consequence of overall poor transcription quality.

Bootstrap Confidence Intervals

Figure 3: 95% confidence intervals from bootstrap resampling (10,000 iterations per category). The tonal category's worst-case lower bound (57.1%) still indicates severe degradation, supporting statistical robustness despite the small sample size (N=21). Error bars represent bootstrap percentile intervals over utterance-level resampling.

Scope and Limitations of Claims

This study demonstrates:

  • Systematic diacritic loss in omniASR-CTC-1B on Igbo audio (21 controlled samples)
  • Failure to preserve tonal minimal pair distinctions in this evaluation setup
  • Diacritic hallucination on monotone speech (evidence of orthographic bias)

This study does NOT claim:

  • That omniASR fails universally on all Igbo speech
  • That tone modeling is architecturally absent from the model
  • That Igbo is uniquely disadvantaged relative to all other low-resource languages
  • That the observed error rates generalize to all dialects or all speakers

What would be needed to strengthen these claims:

  • Multi-speaker evaluation (N=10+ speakers across dialects)
  • Acoustic analysis (F0 contour extraction, pitch tracking validation)
  • Comparative evaluation on other tonal African languages
  • Controlled resynthesis experiments isolating acoustic vs. lexical priors

Critical Insight: Evidence of Weak Tonal Conditioning

The clearest diagnostic signal comes from File 09 (monotone speech):

  • Setup: I spoke "O na-eri oji n'ututu" with deliberately FLAT intonation (no tonal variation)
  • Expected: If tonal diacritics were tightly conditioned on acoustics in this setting, the output would contain few or no added diacritics
  • Result: "ọne rị ọjí nụ tútú" - model ADDED random tone marks that I didn't produce

Interpretation: The observed behavior is consistent with probabilistic diacritic insertion driven primarily by lexical or orthographic priors, rather than robust conditioning on acoustic tone. Confirming this mechanism would require acoustic analysis (e.g., F0 contour statistics) and controlled resynthesis experiments.

Linguistic Error Analysis: When Tone Loss Changes Meaning

| File | Ground Truth | Model Output | Semantic Error |
|---|---|---|---|
| 06_tonal_akwa | akwà (cloth) | akwa | Could mean "crying" instead of "cloth" |
| 06_tonal_akwa | àkwà (egg) | akwa | Meaning completely lost |
| 06_tonal_akwa | ákwá (bridge) | akua | Wrong word + wrong tone |
| 07_tonal_oke | òkè (rat) | oke | Could mean "male/big" instead of "rat" |
| 08_tonal_dense | ọ̀jị̀ (kolanut) | ọjị | Partial tone loss, meaning ambiguous |
| 16_context_places | Owerri (city) | weri | Unrecognizable as place name |

Impact: These are not minor transcription errors. A voice assistant that transcribes "I need àkwà" (eggs) as "I need akwa" (crying) has produced semantically nonsensical output.

Performance Gap: Claimed vs. Measured

According to Meta's omnilingual ASR paper (arXiv:2511.09690):

  • omniASR achieves CER <10% for 78% of supported languages
  • Igbo (ibo_Latn) is listed among the 1,600+ supported languages

Our findings:

  • Overall CER: 32.5% (3.25× worse than claimed threshold)
  • Tonal category CER: 50.6% (5× worse than claimed threshold)
  • Worst sample CER: 74.4% (7.4× worse than claimed threshold)

Interpretation: Either (a) Igbo is in the bottom 22% of languages by performance, or (b) the published benchmarks use test sets that don't capture tonal accuracy. Our native-speaker evaluation is consistent with the latter possibility, but does not isolate whether the primary driver is benchmark construction, data domain mismatch, or evaluation protocol differences.

Implications for Low-Resource ASR

This dataset reveals that raw multilingual coverage (1,600+ languages) does not guarantee linguistic accuracy:

  1. Tonal languages require specialized evaluation: WER/CER metrics miss semantic errors when tones are lost. Recent work on extremely low-resource ASR demonstrates that models systematically fail on tonal distinctions even when the language is nominally "supported" (ACL 2025, "Breaking the Transcription Bottleneck").

  2. Native speaker validation is essential: Automated metrics cannot catch when "crying" (akwa) is transcribed as "cloth" (akwà). Following methodological frameworks from dialect bias research (EMNLP Findings 2024), we provide single-speaker ground truth to establish baseline performance before scaling to multi-speaker evaluation.

  3. Code-switching is not a solved problem: Real-world multilingual speech patterns break current ASR systems. Nigerian English-Igbo code-switching represents a common speech pattern that production systems must handle.

  4. "Supported" ≠ "Works well": As the EMNLP 2024 best paper on low-resource language paradoxes demonstrates, models can list languages in their documentation while providing functionally inadequate service. Our results indicate a substantial gap between nominal language coverage and functional performance on tone-sensitive orthography in Igbo.

Broader Implications: Linguistic Fairness in Multilingual ASR

Tonal diacritics in Igbo encode phonemic distinctions that alter lexical meaning. Systematic loss of these distinctions has implications beyond transcription accuracy.

Epistemic Distortion

When tonal contrasts are consistently omitted, model outputs may:

  • Collapse minimal pairs
  • Introduce semantic ambiguity
  • Normalize orthographic forms that diverge from standard tonal marking

Such distortions risk misrepresenting core structural features of the language.

Downstream System Impact

ASR increasingly serves as infrastructure for voice interfaces, accessibility tools, translation systems, and educational applications. Tone collapse can propagate semantic errors into downstream systems, particularly in contexts where tonal contrasts determine lexical identity.

Framing

This dataset does not claim quantified downstream harm. Rather, it provides a controlled diagnostic demonstrating that tonal fidelity can degrade substantially under current multilingual ASR evaluation protocols. Future work is required to measure real-world behavioral or allocative consequences.

Comparison to Related Work

| Study | Focus | Key Finding |
|---|---|---|
| Koenecke et al. (2020) | Racial disparities in commercial ASR | 2× higher WER for Black speakers |
| Ogueji et al. (2024) | African language ASR evaluation | Performance degrades severely on low-resource languages |
| ACL (2025) | Extremely low-resource ASR | Tonal distinctions fail even when language is "supported" |
| This work | Tonal distinctions in Igbo ASR | 75.5% loss of phonemically contrastive tone marks |

Use Cases

This dataset is designed for:

  • ASR developers: Benchmark tonal accuracy for African languages
  • Linguists: Document systematic biases in multilingual models
  • ML fairness researchers: Extend demographic fairness analysis to linguistic fairness
  • African NLP community: Provide native-speaker ground truth for Igbo

Recording Methodology

  • Speaker: Native Igbo speaker (Nigerian)
  • Dialect: Afikpo Igbo (Ebonyi State). Speaker grew up in multilingual Northern Nigerian environment; both parents from Afikpo. Recordings reflect a single-speaker variety and are not intended to represent all Igbo dialects.
  • Device: iPhone SE 2nd Generation Voice Memos app
  • Format: M4A (AAC codec) converted to 16kHz mono WAV
  • Duration: 4-15 seconds per sample
  • Environment: Quiet indoor setting (File 20 includes controlled background noise)
  • Speech style: Natural conversational pace unless otherwise noted (File 05 is deliberately slow)

Following methodological frameworks from dialect bias research (EMNLP Findings 2024), single-speaker recordings establish baseline performance before scaling to multi-speaker, multi-dialect evaluation.

Model Details

  • Model: facebook/omniASR-CTC-1B
  • Features: ASR (Automatic Speech Recognition)
  • Parameters: 975,065,300 (~975M)
  • Download Size: 3.7 GiB (FP32)
  • Inference VRAM: ~3 GiB
  • Architecture: CTC-based ASR (wav2vec2-style encoder with CTC head)
  • Training: Multilingual (1,600+ languages) on clean, spontaneous speech
  • Release: November 14, 2025
  • License: Apache 2.0

Reproducibility

All transcriptions generated using:

from omnilingual_asr.models.inference.pipeline import ASRInferencePipeline
pipeline = ASRInferencePipeline(model_card="omniASR_CTC_1B")
transcription = pipeline.transcribe(inp=[audio_path], lang=["ibo_Latn"])

Environment:

  • Google Colab (NVIDIA Tesla T4, 15GB VRAM)
  • omnilingual-asr==0.1.0
  • torch==2.1.0
  • Python 3.12
  • Date: March 1, 2026

Limitations and Scope

This dataset represents a proof-of-concept demonstration of native-speaker auditing for low-resource ASR. By design, it prioritizes:

  1. Depth over breadth: 21 carefully designed samples targeting specific failure modes rather than 1000s of random utterances
  2. Native-speaker authority: Single speaker provides unambiguous ground truth for initial blind spot discovery
  3. Systematic coverage: Four distinct categories of errors (orthographic, tonal, code-switching, lexical)

Known limitations:

  • Generalizability: Single speaker limits claims about model performance across all Igbo speakers
  • Dialectal coverage: Does not test all major Igbo dialects (Onitsha, Enugu, Nsukka, Afikpo, etc.)
  • Real-world conditions: Primarily clean audio; limited noise robustness testing
  • Sample size: 21 recordings establish blind spot existence but not prevalence rates

Why this scope is appropriate: Following established ASR fairness methodologies (Koenecke et al., 2020; EMNLP 2024), initial bias discovery uses controlled conditions and expert annotators before scaling to large-scale evaluation. This dataset serves as the foundation for future multi-speaker, multi-dialect studies.

Future Work: Research Agenda

Phase 1: Scale Current Approach (3-6 months)

  • Record 50+ samples per category (total: 200+ recordings)
  • Recruit 10 speakers across major dialects (Owerri, Onitsha, Enugu, Nsukka, Afikpo)
  • Add female/male speaker balance
  • Test age range effects (youth vs. elders)

Phase 2: Comparative Model Evaluation (6-12 months)

Audit the same test set on:

  • OpenAI Whisper (large-v3)
  • Meta MMS (1B-all)
  • Google USM
  • Microsoft Azure Speech

Research question: Is 75.5% tonal loss specific to omniASR or universal across multilingual ASR?

Phase 3: Intervention Studies (12-18 months)

Following ACL 2025 recommendations on fine-tuning for low-resource languages:

  • Fine-tune omniASR on Igbo data with tonal annotations
  • Measure pre/post diacritic accuracy
  • Publish open-source fine-tuning pipeline for other tonal African languages

Phase 4: Downstream Impact (18-24 months)

  • Partner with Nigerian voice assistant developers
  • Measure real-world consequences of tonal errors in deployed systems
  • User studies: Do Igbo speakers trust ASR that strips tones?

Fine-Tuning Strategy: Fixing the Blind Spots

What Kind of Dataset Is Needed?

To address the tonal collapse observed in omniASR-CTC-1B, a fine-tuning dataset should have:

  1. Tone-accurate transcriptions: Ground truth text must include all diacritics that mark phonemic tone distinctions (à, è, ì, ọ, ụ, etc.)

  2. Minimal pairs coverage: Explicitly include utterances with tonal minimal pairs (akwa/akwà/àkwà/ákwá) to force the model to learn acoustic tone distinctions rather than relying on lexical priors

  3. Speaker diversity:

    • 20+ speakers across major Igbo dialects (Owerri, Onitsha, Enugu, Nsukka, Afikpo, etc.)
    • Balanced male/female representation
    • Age range 18-65 to capture generational variation
  4. Domain coverage:

    • Natural conversational speech (not read text)
    • Code-switching scenarios (English-Igbo mixing)
    • Cultural terms, place names, and idiomatic expressions
    • Varying speech rates and prosodic patterns
  5. Audio quality:

    • Clean recordings (SNR >20dB) for initial fine-tuning
    • Noise-augmented variants for robustness testing
    • 16kHz sampling rate (matching omniASR's input format)

How to Assemble/Find Such a Dataset

Option 1: Crowdsourced Collection (Recommended)

Partner with Nigerian universities and Igbo language organizations to recruit native speakers:

  1. Recruitment:

    • Advertise through Igbo cultural associations, university linguistics departments
    • Compensation: $15-20/hour (competitive for Nigerian context)
    • Require: Native speakers with literacy in standard Igbo orthography
  2. Recording protocol:

    • Mobile app for audio capture (ensures quality control)
    • Prompt design: Mix of scripted (minimal pairs, proverbs) and spontaneous (describe your day, tell a story)
    • Quality assurance: Native speaker reviewers verify tone accuracy
  3. Annotation workflow:

    • Speakers self-transcribe with diacritics (who better than native speakers?)
    • Cross-validation: Second annotator reviews 20% of transcriptions
    • Dispute resolution: Linguist adjudicates disagreements

Option 2: Existing Resources + Enhancement

Leverage existing Igbo corpora but add tone annotations:

  1. Start with: IgboAPI, JW300 Igbo corpus, NSC Igbo Bible
  2. Problem: These lack audio or have incomplete diacritics
  3. Solution: Commission native speakers to:
    • Record audio for existing text
    • Add missing tone marks to transcriptions
    • Validate tone-text alignment

Option 3: Hybrid Approach

  • Core dataset: 100 hours crowdsourced (Option 1)
  • Augmentation: 50 hours from enhanced existing resources (Option 2)
  • Validation set: Held-out recordings from this diagnostic dataset

Dataset Size Estimates

Based on recent low-resource ASR literature (ACL 2025):

Minimum viable:

  • 10-20 hours of tone-annotated audio
  • ~5,000-10,000 utterances (average 5-10 seconds each)
  • Cost: ~$5,000-8,000 (recording + transcription + QA)

Recommended for robust performance:

  • 50-100 hours of tone-annotated audio
  • ~25,000-50,000 utterances
  • 20+ speakers (minimum 2 hours each)
  • Cost: ~$20,000-35,000
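The utterance estimates follow from simple duration arithmetic (a quick check; the 7.2 s average used here is an illustrative value within the stated 5-10 s range):

```python
def utterances_for(hours: float, avg_seconds: float) -> int:
    """Approximate utterance count for a given total duration."""
    return round(hours * 3600 / avg_seconds)

print(utterances_for(10, 7.2), utterances_for(100, 7.2))  # → 5000 50000
```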

Why this size?

  • ACL 2025 showed 50 hours sufficient for tonal distinctions in Yoruba
  • Meta's MMS used 10-40 hours per language but lacks tonal accuracy
  • Our diagnostic shows systematic failure, not just data scarcity, so targeted examples of minimal pairs are more important than raw hours

Bootstrapping strategy:

  1. Phase 1 (10 hours): Collect minimal pairs + high-frequency words
  2. Evaluate: Measure diacritic accuracy improvement
  3. Phase 2 (40 hours): Expand to conversational speech if Phase 1 shows promise
  4. Iterate: Continue only if gains justify cost

Expected Outcomes

A 10-20 hour targeted fine-tuning dataset may substantially reduce tonal diacritic loss; empirical validation would be required to confirm specific performance gains. Our monotone hallucination test (File 09) suggests the model has orthographic bias that fine-tuning alone may not fully address. Acoustic analysis (F0 tracking) would clarify this before investing in large-scale data collection.

Data Collection Ethics

  • Informed consent: Recordings made by the author with full knowledge of public release
  • Privacy: All recordings are self-recorded by the author. No third-party identifiable information included.
  • Cultural sensitivity: Proverbs and idioms are common knowledge, not sacred/restricted content
  • Community benefit: Dataset released open-source to benefit Igbo NLP research
  • No exploitation: Zero-compensation labor issue does not apply (self-recorded by community member)

This dataset follows guidelines from the ACM Code of Ethics and Professional Conduct for responsible AI research.

Citation

If you use this dataset, please cite:

@misc{obasi2026igbo,
  title={Igbo Blind Spot Dataset for omniASR-CTC-1B: Systematic Evaluation of Tonal Diacritic Loss},
  author={Obasi, Chizoba},
  year={2026},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/datasets/chiz/omniASR-igbo-blindspots}},
  note={Model evaluated: facebook/omniASR-CTC-1B (975M parameters)}
}

References

AAAI. (2025). Fairness of automatic speech recognition: Looking through a philosophical lens. Proceedings of the 39th AAAI Conference on Artificial Intelligence.

ACL. (2025). Breaking the transcription bottleneck: Fine-tuning ASR models for extremely low-resource languages. Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages.

EMNLP. (2024). The Zeno's paradox of 'low-resource' languages. Best Paper Award, 2024 Conference on Empirical Methods in Natural Language Processing.

EMNLP. (2024). Modeling gender and dialect bias in automatic speech recognition. Findings of the Association for Computational Linguistics: EMNLP 2024.

Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., ... & Goel, S. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684-7689.

Meta AI. (2025). Omnilingual ASR: Scaling automatic speech recognition to 1,600+ languages. arXiv preprint arXiv:2511.09690.

Ogueji, K., Gwadabe, T. R., & Zhang, Y. (2024). A systematic literature review on bias evaluation in automatic speech recognition for low-resource African languages. ACM Computing Surveys.

License

  • Audio recordings: CC-BY-4.0 (attribution required)
  • Metadata/annotations: CC0 (public domain)
  • Code: MIT License

Contact
