Emilia Dataset
The Emilia Dataset is a large-scale, multilingual, and diverse speech generation dataset derived from in-the-wild speech data. Its initial release contains over 101k hours of speech across six languages, covering a wide range of speaking styles to support more natural and spontaneous speech generation.
Overview
The Emilia dataset is constructed from a large collection of publicly available audio on the Internet, such as podcasts, debates, and audiobooks. It was built with Emilia-Pipe, an open-source preprocessing pipeline that standardizes, transcribes, and filters in-the-wild speech data.
Dataset Statistics
Original Emilia Dataset
| Language | Code | Duration (Hours) |
|----------|------|------------------|
| English  | EN   | 46.8k            |
| Chinese  | ZH   | 49.9k            |
| German   | DE   | 1.6k             |
| French   | FR   | 1.4k             |
| Japanese | JA   | 1.7k             |
| Korean   | KO   | 0.2k             |
| Total    | -    | 101.7k           |
Emilia-Large Dataset
The dataset has been expanded to Emilia-Large, which contains over 216k hours of speech, making it one of the largest openly available speech datasets. Emilia-Large combines the original 101k-hour Emilia dataset (licensed under CC BY-NC 4.0) with the new Emilia-YODAS dataset (licensed under CC BY 4.0).
The Emilia-YODAS dataset is based on the YODAS2 dataset, sourced from publicly available YouTube videos released under Creative Commons licenses.
Technical Specifications
- Sampling Rate: 24 kHz
- Audio Format: WAV files, mono channel
- Sample Width: 16-bit
- Audio Quality: DNSMOS P.835 OVRL score of 2.50
- Languages Supported: 6 (English, Chinese, German, French, Japanese, Korean)
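As a quick sanity check, a downloaded clip can be compared against these specifications with the `soundfile` library. A minimal sketch; the file path is a placeholder:

```python
import soundfile as sf

# Placeholder path to a clip extracted from the dataset.
info = sf.info("emilia_clip.wav")

assert info.samplerate == 24000, "expected 24 kHz sampling rate"
assert info.channels == 1, "expected mono audio"
assert info.subtype == "PCM_16", "expected 16-bit samples"
print(info)  # prints duration, format, and other file metadata
```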
Emilia-Pipe Processing Pipeline
Emilia-Pipe consists of six steps: Standardization, Source Separation, Speaker Diarization, Segmentation by VAD, ASR, and Filtering.
Processing Steps
1. Standardization
Audio files are converted to WAV format, resampled to 24 kHz, and downmixed to mono. Amplitudes are normalized to the range -1 to 1, and loudness is adjusted to a standard decibel level to minimize distortion.
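This step can be approximated with `librosa` and `soundfile`. A minimal sketch; the input filename is a placeholder, and simple peak normalization stands in for whatever loudness target the pipeline actually applies:

```python
import numpy as np
import librosa
import soundfile as sf

# Decode any supported input format, resampling to 24 kHz and downmixing to mono.
audio, sr = librosa.load("input.m4a", sr=24000, mono=True)

# Peak-normalize so amplitudes fall within [-1, 1].
peak = np.abs(audio).max()
if peak > 0:
    audio = audio / peak

# Write 16-bit mono WAV, matching the dataset's technical specifications.
sf.write("standardized.wav", audio, sr, subtype="PCM_16")
```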
2. Source Separation
This step extracts clean vocal tracks from audio that may contain background noise or music. The authors employ a pretrained Ultimate Vocal Remover (UVR) model to isolate the vocal elements.
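Emilia-Pipe's exact invocation is not reproduced here. Purely as an illustration, the community `audio-separator` package wraps UVR-family models; the model filename below is an assumption, not necessarily the checkpoint the authors used:

```python
from audio_separator.separator import Separator

# Illustrative only: a UVR-family model via the audio-separator package.
# The model filename is an assumption, not Emilia-Pipe's documented choice.
separator = Separator(output_dir="separated")
separator.load_model(model_filename="UVR-MDX-NET-Inst_HQ_3.onnx")

# Returns paths to the separated stems (vocals and instrumental).
output_files = separator.separate("standardized.wav")
print(output_files)
```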
3. Speaker Diarization
Speaker diarization partitions long-form speech into speaker-homogeneous utterances using the PyAnnote speaker-diarization-3.1 pipeline.
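A minimal sketch of running the named pyannote pipeline; downloading the model requires accepting its terms on Hugging Face, and the token below is a placeholder:

```python
from pyannote.audio import Pipeline

# Placeholder token: pyannote/speaker-diarization-3.1 is gated on Hugging Face.
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="hf_...",
)

diarization = pipeline("standardized.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")
```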
4. Segmentation (VAD)
Voice Activity Detection is used to further segment the audio into smaller, manageable chunks suitable for training.
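As one common way to realize this step, Silero VAD produces speech timestamps that can drive the chunking; a sketch for illustration, not necessarily the exact VAD model Emilia-Pipe ships with:

```python
import torch

# Load Silero VAD from torch.hub (shown as one common VAD choice).
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

# Silero VAD operates on 16 kHz audio.
wav = read_audio("standardized.wav", sampling_rate=16000)
timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
print(timestamps)  # e.g. [{'start': 0, 'end': 32000}, ...] in samples
```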
5. Automatic Speech Recognition (ASR)
ASR techniques transcribe the segmented speech data. The medium version of the Whisper model is employed, with batched inference for parallel processing.
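A comparable setup can be sketched with the `faster-whisper` library's batched pipeline; this illustrates Whisper-medium batched inference in general, not Emilia-Pipe's exact code:

```python
from faster_whisper import WhisperModel, BatchedInferencePipeline

model = WhisperModel("medium", device="cuda", compute_type="float16")
batched = BatchedInferencePipeline(model=model)

# batch_size controls how many audio windows are decoded in parallel.
segments, info = batched.transcribe("segment.wav", batch_size=16)
print(info.language, info.language_probability)
for seg in segments:
    print(f"[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}")
```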
6. Filtering
Segments that fail predetermined quality standards (e.g., a minimum DNSMOS P.835 score) or that have low language-identification confidence are discarded, yielding a refined dataset.
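The field names and thresholds below are assumptions chosen to illustrate the shape of this step, not Emilia-Pipe's published cutoffs:

```python
# Illustrative placeholders, not Emilia-Pipe's actual thresholds.
MIN_DNSMOS = 3.0
MIN_LANG_PROB = 0.8

def keep_segment(segment: dict) -> bool:
    """Keep a segment only if it clears both quality gates."""
    return (
        segment["dnsmos_ovrl"] >= MIN_DNSMOS
        and segment["language_probability"] >= MIN_LANG_PROB
    )

segments = [
    {"id": "seg_001", "dnsmos_ovrl": 3.4, "language_probability": 0.97},
    {"id": "seg_002", "dnsmos_ovrl": 2.1, "language_probability": 0.99},
]
filtered = [s for s in segments if keep_segment(s)]
print([s["id"] for s in filtered])  # seg_002 is dropped for low DNSMOS
```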
Licensing and Access
- Emilia Dataset: CC BY-NC 4.0 (Non-commercial use only)
- Emilia-YODAS Dataset: CC BY 4.0
- Emilia-Pipe Pipeline: Open-source
Users are permitted to use the Emilia dataset only for non-commercial purposes under the CC BY-NC 4.0 license. Emilia does not own the copyright to the audio files; copyright remains with the original owners of the videos or audio.
Usage
Loading the Dataset
```python
from datasets import load_dataset

dataset = load_dataset("amphion/Emilia-Dataset")
print(dataset)
```
Loading Specific Languages
```python
from datasets import load_dataset

path = "Emilia/DE/*.tar"
dataset = load_dataset(
    "amphion/Emilia-Dataset",
    data_files={"de": path},
    split="de",
    streaming=True,
)
```
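Streamed splits are iterable rather than indexable; `take()` bounds how many samples are actually downloaded:

```python
# Pull only the first few samples from the streamed split.
for sample in dataset.take(3):
    print(sample.keys())
```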