Emilia Dataset

The Emilia Dataset is a large-scale, multilingual, and diverse speech generation dataset derived from in-the-wild speech data. The initial release comprises over 101k hours of speech across six languages, covering a wide range of speaking styles to support more natural and spontaneous speech generation.

Overview

The Emilia dataset is constructed from a large collection of publicly available audio on the Internet, such as podcasts, debates, and audiobooks. It was built with Emilia-Pipe, an open-source preprocessing pipeline that standardizes, transcribes, and filters the raw audio.

Dataset Statistics

Original Emilia Dataset

Language   Code   Duration (Hours)
English    EN     46.8k
Chinese    ZH     49.9k
German     DE     1.6k
French     FR     1.4k
Japanese   JA     1.7k
Korean     KO     0.2k
Total      -      101.7k

Emilia-Large Dataset

The dataset has been expanded to Emilia-Large, a dataset with over 216k hours of speech, making it one of the largest openly available speech datasets. Emilia-Large combines the original 101k-hour Emilia dataset (licensed under CC BY-NC 4.0) with the new Emilia-YODAS dataset (licensed under CC BY 4.0).

The Emilia-YODAS dataset is based on the YODAS2 dataset, which is sourced from publicly available YouTube videos published under a Creative Commons license.

Technical Specifications

  • Sampling Rate: 24 kHz
  • Audio Format: WAV files, mono channel
  • Sample Width: 16-bit
  • Audio Quality: DNSMOS P.835 OVRL score of 2.50
  • Languages Supported: 6 (English, Chinese, German, French, Japanese, Korean)
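
These properties can be spot-checked on a downloaded clip. A minimal sketch using the soundfile library (the library choice and file name are illustrative, not part of the original article):

import soundfile as sf

# Inspect a locally downloaded Emilia clip (path is illustrative)
info = sf.info("emilia_sample.wav")
assert info.samplerate == 24000   # 24 kHz sampling rate
assert info.channels == 1         # mono channel
assert info.subtype == "PCM_16"   # 16-bit sample width
print(info)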

Emilia-Pipe Processing Pipeline

Emilia-Pipe consists of six steps: Standardization, Source Separation, Speaker Diarization, Segmentation by VAD, ASR, and Filtering.

Processing Steps

1. Standardization

Audio files are converted to WAV format, resampled to 24 kHz, and downmixed to mono. Waveforms are then normalized so that amplitude values lie between -1 and 1, targeting a standard decibel level to minimize distortion.
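
A minimal sketch of this step, assuming librosa and soundfile as the audio libraries (the article does not specify which tools the pipeline uses here):

import librosa
import numpy as np
import soundfile as sf

# Decode any input format, resampling to 24 kHz mono on the fly
audio, sr = librosa.load("input.mp3", sr=24000, mono=True)

# Peak-normalize so amplitude values lie within [-1, 1]
peak = np.max(np.abs(audio))
if peak > 0:
    audio = audio / peak

# Write the result as a 16-bit mono WAV file
sf.write("standardized.wav", audio, sr, subtype="PCM_16")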

2. Source Separation

This step extracts clean vocal tracks from audio that may contain background noise or music. A pretrained Ultimate Vocal Remover (UVR) model is used to isolate the vocal elements.
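
Ultimate Vocal Remover is typically driven through its own application. As a stand-in, the same two-stem separation can be sketched with the Demucs command-line tool (a substitute separator, not the one the pipeline actually uses):

import subprocess

# Split the recording into "vocals" and "no_vocals" stems with Demucs;
# results are written under ./separated/<model_name>/<track_name>/
subprocess.run(["demucs", "--two-stems=vocals", "standardized.wav"], check=True)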

3. Speaker Diarization

Long-form recordings are partitioned into single-speaker utterances using the pyannote speaker-diarization-3.1 pipeline.
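
This pipeline is available through the pyannote.audio library. A minimal sketch (the access token and file names are placeholders):

from pyannote.audio import Pipeline

# Load the pretrained diarization pipeline (requires a Hugging Face token)
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="YOUR_HF_TOKEN",
)

# Assign a speaker label to each speech turn in the recording
diarization = pipeline("vocals.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")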

4. Segmentation (VAD)

Voice Activity Detection is used to further segment the audio into smaller, manageable chunks suitable for training.
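
The article does not name the VAD model; Silero VAD is one widely used option and serves for a sketch here:

import torch

# Load Silero VAD and its helper utilities from torch.hub
model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

# Find speech regions in a diarized utterance (Silero expects 16 kHz input here)
wav = read_audio("utterance.wav", sampling_rate=16000)
timestamps = get_speech_timestamps(wav, model, sampling_rate=16000)
print(timestamps)  # e.g. [{'start': 0, 'end': 48000}, ...]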

5. Automatic Speech Recognition (ASR)

ASR techniques transcribe the segmented speech data. The medium version of the Whisper model is employed, with batched inference for parallel processing.
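
A transcription sketch using faster-whisper, whose batched pipeline matches the parallel processing described above (the backend is an assumption; the article only specifies Whisper medium):

from faster_whisper import WhisperModel, BatchedInferencePipeline

# Whisper medium, as named in the article; faster-whisper is one possible backend
model = WhisperModel("medium")
batched = BatchedInferencePipeline(model=model)

# Batched inference transcribes many segments in parallel
segments, info = batched.transcribe("segment.wav", batch_size=16)
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")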

6. Filtering

Segments that fall below predetermined quality thresholds (e.g., DNSMOS score) or whose language-identification confidence is too low are discarded, yielding the refined dataset.
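
The filtering stage reduces to a predicate over per-segment scores. In this sketch, dnsmos_overall and language_confidence are hypothetical stand-ins for the pipeline's actual scoring models, and the thresholds are illustrative:

# dnsmos_overall() and language_confidence() are hypothetical helpers;
# the thresholds below are illustrative, not the pipeline's actual values.
MIN_DNSMOS = 2.50
MIN_LANG_CONF = 0.80

def keep(segment):
    return (dnsmos_overall(segment.audio) >= MIN_DNSMOS
            and language_confidence(segment.audio, segment.language) >= MIN_LANG_CONF)

refined = [seg for seg in segments if keep(seg)]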

Licensing and Access

  • Emilia Dataset: CC BY-NC 4.0 (Non-commercial use only)
  • Emilia-YODAS Dataset: CC BY 4.0
  • Emilia-Pipe Pipeline: Open-source

Under the CC BY-NC 4.0 license, the Emilia dataset may be used for non-commercial purposes only. Emilia does not own the copyright to the audio files; copyright remains with the original owners of the videos or audio.

Usage

Loading the Dataset

from datasets import load_dataset

# Load the full dataset; it is very large, so consider streaming instead
dataset = load_dataset("amphion/Emilia-Dataset")
print(dataset)

Loading Specific Languages

from datasets import load_dataset

# Load only the German (DE) shards, streaming to avoid downloading everything
path = "Emilia/DE/*.tar"
dataset = load_dataset(
    "amphion/Emilia-Dataset",
    data_files={"de": path},
    split="de",
    streaming=True,
)
