Emilia Dataset
The '''Emilia Dataset''' is a large-scale, multilingual, and diverse speech generation dataset derived from in-the-wild speech data. Emilia comprises over 101k hours of speech across six languages, covering a wide range of speaking styles to support more natural and spontaneous speech generation.

== Overview ==
The Emilia dataset is constructed from a large collection of publicly available audio on the Internet, such as podcasts, debates, and audiobooks. The dataset was created using '''Emilia-Pipe''', an open-source preprocessing pipeline that processes, transcribes, and filters the raw audio.

== Dataset Statistics ==

=== Original Emilia Dataset ===
{| class="wikitable"
! Language !! Code !! Duration (Hours)
|-
| English || EN || 46.8k
|-
| Chinese || ZH || 49.9k
|-
| German || DE || 1.6k
|-
| French || FR || 1.4k
|-
| Japanese || JA || 1.7k
|-
| Korean || KO || 0.2k
|-
| '''Total''' || - || '''101.7k'''
|}

=== Emilia-Large Dataset ===
The dataset has been expanded to Emilia-Large, which contains over 216k hours of speech, making it one of the largest openly available speech datasets. Emilia-Large combines the original 101k-hour Emilia dataset (licensed under CC BY-NC 4.0) with the new Emilia-YODAS dataset (licensed under CC BY 4.0). Emilia-YODAS is based on the YODAS2 dataset, which is sourced from publicly available YouTube videos released under a Creative Commons license.

== Technical Specifications ==
* '''Sampling Rate''': 24 kHz
* '''Audio Format''': WAV files, mono channel
* '''Sample Width''': 16-bit
* '''Audio Quality''': DNSMOS P.835 OVRL score of 2.50
* '''Languages Supported''': 6 (English, Chinese, German, French, Japanese, Korean)

== Emilia-Pipe Processing Pipeline ==
Emilia-Pipe consists of six steps: Standardization, Source Separation, Speaker Diarization, Segmentation by VAD, ASR, and Filtering.

=== Processing Steps ===

==== 1. Standardization ====
Audio files are converted to WAV format, resampled to 24 kHz, and downmixed to mono. Amplitudes are normalized to the range -1 to 1, targeting a standard decibel level to minimize distortion.

==== 2. Source Separation ====
Clean vocal tracks are extracted from audio that may contain background noise or music. A pretrained Ultimate Vocal Remover model is used to isolate the vocal elements.

==== 3. Speaker Diarization ====
Long-form speech is partitioned into per-speaker utterances using the PyAnnote speaker diarization 3.1 pipeline.

==== 4. Segmentation (VAD) ====
Voice Activity Detection further segments the audio into smaller, manageable chunks suitable for training.

==== 5. Automated Speech Recognition (ASR) ====
The segmented speech is transcribed with the medium version of the Whisper model, using batched inference for parallel processing.

==== 6. Filtering ====
Segments that fail predetermined quality standards (e.g., DNSMOS score) or language-identification confidence checks are discarded, yielding a refined dataset.

== Licensing and Access ==
* '''Emilia Dataset''': CC BY-NC 4.0 (non-commercial use only)
* '''Emilia-YODAS Dataset''': CC BY 4.0
* '''Emilia-Pipe Pipeline''': Open source

Users may use the Emilia dataset only for non-commercial purposes under the CC BY-NC 4.0 license. Emilia does not own the copyright to the audio files; copyright remains with the original owners of the videos or audio.
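== Pipeline Example ==
To illustrate how the six Emilia-Pipe stages fit together, below is a minimal Python sketch. The stage order and model choices (pyannote speaker-diarization-3.1, Whisper medium) follow the description above, but the helper functions, file paths, and threshold values are illustrative assumptions, not the actual Emilia-Pipe implementation (see the source code under External Links).
<pre>
# Sketch of an Emilia-Pipe-style processing chain. The librosa,
# soundfile, pyannote.audio, and openai-whisper calls are real APIs;
# the UVR and DNSMOS helpers are stubs, since neither ships a simple
# one-line Python interface.
import librosa
import numpy as np
import soundfile as sf
import whisper
from pyannote.audio import Pipeline

def standardize(path, out_path="standardized.wav"):
    """Step 1: resample to 24 kHz mono and peak-normalize to [-1, 1]."""
    audio, sr = librosa.load(path, sr=24000, mono=True)
    audio = audio / (np.abs(audio).max() + 1e-9)
    sf.write(out_path, audio, sr, subtype="PCM_16")  # 16-bit WAV
    return out_path

def separate_vocals(path):
    """Step 2: Emilia-Pipe uses a pretrained Ultimate Vocal Remover
    model here; this stub assumes the input is already clean speech."""
    return path

def diarize(path, hf_token):
    """Steps 3-4: per-speaker turns via pyannote speaker-diarization-3.1.
    (Emilia-Pipe additionally refines segment boundaries with VAD.)"""
    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.1", use_auth_token=hf_token
    )
    for turn, _, speaker in pipeline(path).itertracks(yield_label=True):
        yield speaker, turn.start, turn.end

def transcribe(path):
    """Step 5: ASR with the medium Whisper model."""
    model = whisper.load_model("medium")
    return model.transcribe(path)["text"]

def dnsmos_ovrl(path):
    """Hypothetical stub: a real pipeline would run Microsoft's DNSMOS
    P.835 model and return its OVRL score for the segment."""
    return 3.5

def keep_segment(path, threshold=2.50):
    """Step 6: drop segments below a quality threshold. 2.50 mirrors the
    figure quoted above; the actual Emilia-Pipe criteria also include
    language-identification confidence checks."""
    return dnsmos_ovrl(path) >= threshold
</pre>
A driver script would chain these stages per source file: standardize and separate the audio, diarize it into speaker turns, then transcribe and filter each resulting segment.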
== Usage ==

=== Loading the Dataset ===
<pre>
from datasets import load_dataset

# Loads the full dataset; the complete Emilia release is very large,
# so consider streaming=True to avoid downloading everything at once.
dataset = load_dataset("amphion/Emilia-Dataset")
print(dataset)
</pre>

=== Loading Specific Languages ===
<pre>
from datasets import load_dataset

# Stream only the German (DE) tar shards instead of the whole dataset.
path = "Emilia/DE/*.tar"
dataset = load_dataset(
    "amphion/Emilia-Dataset",
    data_files={"de": path},
    split="de",
    streaming=True,
)
</pre>

== External Links ==
* [https://huggingface.co/datasets/amphion/Emilia-Dataset Hugging Face Dataset Page]
* [https://emilia-dataset.github.io/Emilia-Demo-Page/ Demo Page]
* [https://github.com/open-mmlab/Amphion/tree/main/preprocessors/Emilia Emilia-Pipe Source Code]
* [https://arxiv.org/abs/2407.05361 Original Research Paper]
* [https://arxiv.org/abs/2501.15907 Extended Research Paper]

[[Category:Datasets]]
[[Category:Speech Datasets]]
[[Category:Open Source]]