---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: Pile Preshuffled Seeds (Index Maps for Pythia)
size_categories:
- 10B<n<100B
tags:
- pythia
- pretraining
- the-pile
---

# Pile Preshuffled Seeds

This dataset contains precomputed index maps for loading the preshuffled Pile dataset with different random seeds. These index maps are used by GPT-NeoX's `MMapIndexedDataset` to control the order in which training data is presented to the model, enabling reproducible training with different data orders without reprocessing the underlying data.
These index maps were used to train the [PolyPythia](https://huggingface.co/collections/EleutherAI/polypythia-66c200a1dd6c9b8e16ebc917) model suite, which trains Pythia-scale models with multiple random seeds to study the effect of data order on learning dynamics.

## Contents

### Root Files
| File | Size | Description |
|------|------|-------------|
| `pile_20B_tokenizer_text_document.idx` | 4.21 GB | The base index file for the tokenized Pile |
| `dataset.py` | 11.1 KB | Example code for loading the dataset with these index maps |

### Seed Directories

There are 10 seed directories (`seed0` through `seed9`), each containing three NumPy index map files:
| File | Size | Description |
|------|------|-------------|
| `*_doc_idx.npy` | 842 MB | Document index mapping |
| `*_sample_idx.npy` | 1.3 GB | Sample index mapping |
| `*_shuffle_idx.npy` | 649 MB | Shuffle order mapping |
The filenames encode the index map parameters: 147,164,160 samples, 2048 sequence length, seed 1234 (the base seed used to generate the maps).
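
As a sketch, the maps can be memory-mapped with NumPy so the multi-gigabyte arrays are not read eagerly. The filename prefix below is a hypothetical reconstruction from the parameters above (following the Megatron/GPT-NeoX `{n_samples}ns_{seq_len}sl_{seed}s` naming convention); check the actual names inside each seed directory before relying on it:

```python
import numpy as np

# HYPOTHETICAL prefix, inferred from the parameters described above --
# verify against the real filenames in each seedN/ directory.
PREFIX = "pile_20B_tokenizer_text_document_train_indexmap_147164160ns_2048sl_1234s"

def index_map_paths(seed_dir: str, prefix: str = PREFIX) -> dict:
    """Paths of the three NumPy index maps inside one seed directory."""
    return {kind: f"{seed_dir}/{prefix}_{kind}.npy"
            for kind in ("doc_idx", "sample_idx", "shuffle_idx")}

def load_index_maps(seed_dir: str) -> dict:
    """Memory-map the index files; mmap_mode avoids loading GBs into RAM."""
    return {kind: np.load(path, mmap_mode="r")
            for kind, path in index_map_paths(seed_dir).items()}
```

For example, `load_index_maps("seed0")` returns three read-only memory-mapped arrays that can be indexed like ordinary NumPy arrays.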

**Total size:** 32.1 GB

## Usage
1. Download the preshuffled Pile data from either:
   - [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled) (deduplicated)
   - [EleutherAI/pile-standard-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-standard-pythia-preshuffled) (standard)

2. Download the seed directory you want to use.
3. Follow the instructions in the [Pythia repo README](https://github.com/EleutherAI/pythia/tree/main?tab=readme-ov-file#reproducing-training) to configure training with the index maps.
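
For step 2, one way to fetch a single seed directory (plus the base `.idx` file) without cloning all 32 GB is `huggingface_hub.snapshot_download` with `allow_patterns`. This is a sketch, not part of the official instructions; pass this dataset's actual repository id as `repo_id`:

```python
def seed_patterns(seed: int) -> list:
    """Glob patterns selecting one seed directory plus the base index file."""
    return [f"seed{seed}/*", "pile_20B_tokenizer_text_document.idx"]

def fetch_seed(seed: int, repo_id: str) -> str:
    """Download only the files needed for one training seed; returns the
    local snapshot path. `repo_id` is this dataset's id on the Hub."""
    # Imported lazily so the sketch is readable without huggingface_hub installed.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=repo_id, repo_type="dataset",
                             allow_patterns=seed_patterns(seed))
```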
See `dataset.py` in this repo for an example of how to load the dataset with these index maps.
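
Conceptually, the three maps cooperate per sample roughly as follows: `shuffle_idx` permutes the order samples are presented in, `sample_idx[j]` records the (document position, token offset) where raw sample `j` starts, and `doc_idx` gives the epoch-level document order. The toy sketch below is a simplified reading of the GPT-NeoX/Megatron lookup with tiny synthetic arrays, not the authoritative code in `dataset.py`:

```python
def get_sample(i, docs, doc_idx, sample_idx, shuffle_idx):
    """Gather the tokens of training sample i from per-document token lists.

    sample_idx[j] and sample_idx[j + 1] bound raw sample j: each entry is
    (position in doc_idx, token offset in that document), and the end bound
    is inclusive because consecutive samples share a boundary token.
    """
    j = shuffle_idx[i]                  # shuffled -> raw sample index
    doc_f, off_f = sample_idx[j]        # first document position + offset
    doc_l, off_l = sample_idx[j + 1]    # last document position + offset
    if doc_f == doc_l:                  # sample lies inside one document
        return docs[doc_idx[doc_f]][off_f:off_l + 1]
    out = list(docs[doc_idx[doc_f]][off_f:])      # tail of first document
    for d in range(doc_f + 1, doc_l):             # whole middle documents
        out.extend(docs[doc_idx[d]])
    out.extend(docs[doc_idx[doc_l]][:off_l + 1])  # head of last document
    return out

# Tiny synthetic setup: three "tokenized documents", presented in the
# order doc 2, doc 0, doc 1, cut into two raw samples of 5 tokens.
docs = [[0, 1, 2], [3, 4], [5, 6, 7, 8]]
doc_idx = [2, 0, 1]
sample_idx = [(0, 0), (1, 0), (2, 1)]
shuffle_idx = [1, 0]                    # seed-dependent presentation order

print(get_sample(0, docs, doc_idx, sample_idx, shuffle_idx))  # [0, 1, 2, 3, 4]
print(get_sample(1, docs, doc_idx, sample_idx, shuffle_idx))  # [5, 6, 7, 8, 0]
```

Because only `shuffle_idx` (and the shuffled portion of `doc_idx`) depend on the seed, each seed directory yields a different presentation order over the same underlying tokens.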

## Related Datasets

- [EleutherAI/pile-deduped-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-deduped-pythia-preshuffled) — deduplicated Pile in MMap format
- [EleutherAI/pile-standard-pythia-preshuffled](https://huggingface.co/datasets/EleutherAI/pile-standard-pythia-preshuffled) — standard Pile in MMap format

## Citation

```bibtex
@article{biderman2023pythia,
  title={Pythia: A suite for analyzing large language models across training and scaling},
  author={Biderman, Stella and Schoelkopf, Hailey and Anthony, Quentin Gregory and Bradley, Herbie and O'Brien, Kyle and Hallahan, Eric and Khan, Mohammad Aflah and Purohit, Shivanshu and Prashanth, USVSN Sai and Raff, Edward and others},
  journal={International Conference on Machine Learning},
  year={2023}
}
```