---
license: other
language:
- ja
size_categories:
- 1M<n<10M
extra_gated_prompt: "You agree that you will use the dataset solely for the purpose of JAPANESE COPYRIGHT ACT ARTICLE 30-4"
---

J-CHAT is a large-scale Japanese dialogue speech corpus.
For details, please see our [paper](https://arxiv.org/abs/2407.15828).
|
|
| # PLEASE READ THIS FIRST |
| >[!IMPORTANT] |
| > TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4. |
|
|
| # What's new? |
| >[!NOTE] |
> Added transcriptions of the corpus. Transcriptions were generated with [reazonspeech-nemo-v2](https://huggingface.co/reazon-research/reazonspeech-nemo-v2).
|
|
|
|
| # How can I use this data for commercial purposes? |
Commercial use is not permitted. ~~If you want to use this data for commercial purposes, please build a corpus yourself.
The corpus construction programs are distributed on [Github](https://github.com/sarulab-speech/J-CHAT).~~
|
|
The Github repository is not ready yet, but we will release the code soon. Stay tuned!
|
|
| # How to use |
| ## Requirements |
Loading the dataset requires the following Python libraries:
| * [lhotse](https://github.com/lhotse-speech/lhotse) |
| * [webdataset](https://github.com/webdataset/webdataset) |
* [smart-open](https://github.com/getcrest/smart-open), if you need the transcriptions
## Loading the dataset
The examples below load the dataset as a `lhotse.CutSet`.
|
|
| ### Without transcription |
| ```python |
import lhotse

# Change the path below to the data domain and subset you want.
# Available data domains: youtube, podcast.
# Available subsets: train, valid, test, others.
# For example, the filelist for the youtube test set is filelists/youtube_test.txt.
with open("filelists/podcast_train.txt") as f:
    urls = f.read().splitlines()

cutset = lhotse.CutSet.from_webdataset(urls)
| ``` |
|
|
| ### With transcription |
| ```python |
import json
import lhotse

with open("transcribed_jchat/podcast_train.json") as f:
    fields = json.load(f)

cuts = lhotse.CutSet.from_shar(fields=fields)
| ``` |
|
|
For more information about `lhotse.CutSet`, please see the [lhotse documentation](https://lhotse.readthedocs.io/en/latest/).
|
|
| # LICENSE |
| CC-BY-NC 4.0 |
|
|
| TO USE THIS DATASET, YOU MUST AGREE THAT YOU WILL USE THE DATASET SOLELY FOR THE PURPOSE OF JAPANESE COPYRIGHT ACT ARTICLE 30-4. |
|
|
| # Contact |
| We have ensured that our dataset does not infringe on any rights of the original data holders. |
| However, if you wish to request the removal of your data from the dataset, please feel free to contact us at the email address below: |
|
|
| shinnosuke_takamichi [*at*] keio.jp |
| # Other resources |
| * [Speech samples generated with dGSLM trained on J-CHAT](https://sarulab-speech.github.io/j-chat/dgslm_speech_sample/) |
| * [dGSLM model weights](https://github.com/sarulab-speech/j-chat/tree/master/weights) |
| # Contributors |
| * [Wataru Nakata/中田 亘](https://wataru-nakata.github.io) |
| * [Kentaro Seki/関 健太郎](https://trgkpc.github.io/) |
| * [Hitomi Yanaka/谷中 瞳](https://hitomiyanaka.mystrikingly.com/) |
| * [Yuki Saito/齋藤 佑樹](https://sython.org/) |
| * [Shinnosuke Takamichi/高道 慎之介](https://sites.google.com/site/shinnosuketakamichi/home) |
| * [Hiroshi Saruwatari/猿渡 洋](https://researchmap.jp/read0102891) |
# 謝辞/Acknowledgements
| 本研究は、国立研究開発法人産業技術総合研究所事業の令和5年度覚醒プロジェクトの助成を受けたものです。 |
| /This work was supported by AIST KAKUSEI project (FY2023). |