# FastMSS synthetic multi-speaker meetings - parquet edition
Streaming-friendly parquet shards of the FastMSS synthetic multi-speaker conversational corpus. Each row is one mixture with the audio bytes embedded inline (16 kHz mono WAV), plus per-segment diarization timestamps, a per-word transcript, and the full lhotse cut as a JSON blob. See `fastmss/hf_dataset.py` for the schema docstring.
## Subsets and splits

- `debug` - splits: `train` - 1 mixture, 1.6 min total, 6 unique speakers, 1 shard (3.1 MB).
- `v0.1` - splits: `train`, `val` - 1000 mixtures, 1546.0 min total, 40 unique speakers, 5 shards (2888.0 MB).
## Layout

```
<subset>/
  data/
    train-XXXXX-of-YYYYY.parquet
    val-XXXXX-of-YYYYY.parquet   # if subsplit
    split_assignment.json        # if subsplit
  provenance/
    all_cuts.jsonl.gz    # source utterance pool
    all_rooms.json       # RIR pool metadata
    noise_files.txt      # background noise pool
    sim.log              # generator log
```
## Per-row schema

| Field | Type | Source | Description |
|---|---|---|---|
| `audio` | `datasets.Audio` (16 kHz) | `audio/<id>.wav` | Mixture WAV, bytes embedded inline. |
| `timestamps_start` | `list[float]` | parsed from `rttm_word/` | Per-segment start times (s). |
| `timestamps_end` | `list[float]` | parsed from `rttm_word/` | Per-segment end times (s). |
| `speakers` | `list[str]` | parsed from `rttm_word/` | Per-segment speaker label. |
| `transcript` | `list[str]` | cut supervisions | Per-word tokens. |
| `word_speakers` | `list[str]` | cut supervisions | Per-word speakers (parallel to `transcript`). |
| `recording_id` | `str` | cut/recording | Lhotse recording id (also the WAV stem). |
| `duration` | `float` | cut/recording | Mixture length in seconds. |
| `sampling_rate` | `int` | cut/recording | Source rate of the WAV. |
| `num_samples` | `int` | cut/recording | Sample count of the WAV. |
| `num_speakers` | `int` | cut/supervisions | Distinct speakers active in the mixture. |
| `transition_type` | `list[str]` | supervision custom | `FIRST` / `TURN_SWITCH` / `BACKCHANNEL` / ... per word. |
| `original_cut_id` | `list[str]` | supervision custom | Source utterance id per word. |
| `speech_level_db` | `list[float]` | supervision custom | Per-word loudness target. |
| `word_index` | `list[int]` | supervision custom | Per-utterance word position. |
| `manifest_json` | `str` | cuts manifest | Full lhotse Cut (recording + supervisions) as JSON. |
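Because the per-word lists are index-aligned, a speaker-attributed, time-stamped transcript can be assembled by zipping them. A minimal sketch, assuming (as the `rttm_word` source suggests) that the timestamp and speaker lists are word-level and parallel to `transcript`; the toy row below uses made-up values in the schema's shape:

```python
# Build a speaker-attributed, time-stamped word list from one row.
# Assumption: the timestamp/speaker lists have one entry per token of
# `transcript`, in the same order.
def words_with_times(row):
    return [
        {"word": w, "speaker": spk, "start": s, "end": e}
        for w, spk, s, e in zip(
            row["transcript"],
            row["word_speakers"],
            row["timestamps_start"],
            row["timestamps_end"],
        )
    ]

# Toy row (illustrative values only):
row = {
    "transcript": ["NO", "LETTER", "HAD", "COME"],
    "word_speakers": ["2277", "2277", "2277", "2277"],
    "timestamps_start": [1.091, 1.291, 1.541, 1.711],
    "timestamps_end": [1.291, 1.541, 1.711, 2.031],
}
words = words_with_times(row)
# words[0] == {"word": "NO", "speaker": "2277", "start": 1.091, "end": 1.291}
```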
## Loading

With the YAML `configs` block above, Hugging Face `datasets` exposes each subset as a config and the train/val shards as proper splits:
```python
from datasets import load_dataset

# whole subset (default = train split):
ds = load_dataset("<user-or-org>/<repo-name>", "v0.1")

# explicit split:
train = load_dataset("<user-or-org>/<repo-name>", "v0.1", split="train")
val = load_dataset("<user-or-org>/<repo-name>", "v0.1", split="val")

# streaming:
stream = load_dataset(
    "<user-or-org>/<repo-name>", "v0.1", split="train", streaming=True
)
for sample in stream:
    sample["audio"]["array"]    # decoded float32 waveform
    sample["timestamps_start"]  # diarization segment starts
    sample["timestamps_end"]    # diarization segment ends
    sample["speakers"]          # one label per segment
    sample["transcript"]        # word tokens
    sample["word_speakers"]     # per-word speakers
```
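Simple corpus statistics fall straight out of these fields. For instance, total speech time per speaker label, assuming the `speakers` / `timestamps_start` / `timestamps_end` lists are index-aligned (the toy inputs below are made up):

```python
from collections import defaultdict

def speaker_speech_time(speakers, starts, ends):
    """Sum (possibly overlapping) segment durations per speaker label."""
    totals = defaultdict(float)
    for spk, s, e in zip(speakers, starts, ends):
        totals[spk] += e - s
    return dict(totals)

totals = speaker_speech_time(
    ["2277", "2277", "3000"], [1.0, 2.0, 1.5], [1.5, 2.5, 3.0]
)
# {"2277": 1.0, "3000": 1.5}
```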
Drop the lhotse JSON blob if you don't need it:

```python
ds = ds.remove_columns(["manifest_json"])
```
Rebuild a lhotse CutSet from any subset:

```python
import json

from lhotse import CutSet, MonoCut

cuts = CutSet.from_cuts(
    MonoCut.from_dict(json.loads(s["manifest_json"])) for s in ds
)
```
## Generating an HF-compatible dataset from scratch

The generation pipeline lives in the FastMSS repo. It produces lhotse manifests + audio first, then converts them into the parquet layout shipped here. Reproduce a subset with:
1. **Synthesize the lhotse split** - mixes utterances + RIRs + noise into `<dataset_root>/<subset>/` with `audio/`, `manifests/` and `rttm_word/` subfolders.
```bash
# Adjust config_name / dataset_root for the subset you want
python sim.py \
    --config-path config/table1 --config-name datagen_v0.1 \
    output_dir=generated_dataset/v0.1
```
2. **Convert to streamable parquet** - writes one parquet shard per `--shard-size` mixtures, embedding WAV bytes inline and computing every column above. `--subsplits` performs a deterministic train/val split with a reproducible seed.
```bash
python scripts/convert_to_parquet.py \
    --dataset-root generated_dataset \
    --output-root generated_dataset_parquet \
    --splits v0.1 \
    --subsplits train:800,val:200 \
    --subsplit-seed 42 \
    --shard-size 256

# Smaller subset that doesn't need a train/val split (e.g. debug):
python scripts/convert_to_parquet.py \
    --dataset-root generated_dataset \
    --output-root generated_dataset_parquet \
    --splits debug
```
3. **Upload to the Hub** - stages a `<subset>/data/` + `<subset>/provenance/` layout, generates this README's YAML `configs:` block automatically, and pushes via `HfApi.upload_large_folder` (resumable / parallel).
```bash
hf auth login   # or set HF_TOKEN
python scripts/upload_parquet_to_hf.py \
    --repo-id <user-or-org>/<dataset-name> \
    --parquet-root generated_dataset_parquet \
    --dataset-root generated_dataset
```
Useful flags:

- `--splits debug v0.1` - push only some subsets
- `--private` - only honored on first repo create
- `--dry-run` - stage the layout to a temp dir and print it without contacting the Hub
- `--no-provenance` - skip the `provenance/` sidecars
4. **Verify the round-trip locally:**

```bash
pytest tests/test_hf_parquet_conversion.py
```
These tests build a synthetic FastMSS split in a tmp dir, run the converter, and assert byte-for-byte equivalence between the lhotse manifests/RTTM/audio and the parquet rows (including a `json.loads(row["manifest_json"]) == cut` round-trip and a deterministic-shuffle subsplit check).

See `fastmss/hf_dataset.py` for the per-row schema and helper API; both scripts above are thin CLI wrappers over it.
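Beyond the test suite, a cheap spot check is that each row's metadata is self-consistent: `duration` should equal `num_samples / sampling_rate`. A sketch (the helper and tolerance are illustrative, not part of the shipped API):

```python
def row_is_consistent(row, tol=1e-3):
    """Check that the recorded duration matches the sample count."""
    return abs(row["duration"] - row["num_samples"] / row["sampling_rate"]) < tol

# Example row with plausible 16 kHz values:
row = {"duration": 97.38875, "num_samples": 1558220, "sampling_rate": 16000}
# 1558220 / 16000 = 97.38875, so this row passes.
```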