---
license: mit
task_categories:
  - text-to-speech
  - text-to-audio
language:
  - en
image:
  - https://ibb.co/ZzFkfWZ
tags:
  - code
  - music
pretty_name: Text-to-ASMR
size_categories:
  - 1K<n<10K
---

![Thumbnail](https://ibb.co/ZzFkfWZ)

# End-To-End Text-2-ASMR with Transformers

This repository contains pretrained text2asmr model files, audio files, and training and inference notebooks.

## Dataset Details

This unique dataset is tailored for training and deploying text-to-speech (TTS) systems specifically focused on ASMR (Autonomous Sensory Meridian Response) content. It includes a comprehensive collection of pretrained model files, audio files and training code suitable for TTS applications.

### Dataset Description

Inside this dataset, you will find the following zipped folders:

1. `wavs_original`: original wav files as converted from the source videos
2. `wavs`: the original wav files split into 1-minute chunks
3. `transcripts_original`: transcribed scripts of the original wav files
4. `transcripts`: transcribed scripts of the files in the `wavs` folder
5. `models`: text-to-spectrogram model trained with Glow-TTS
6. `ljspeech`: alignment files and respective checkpoint models (text to phoneme)
7. `transformer_tts_data.ljspeech`: trained checkpoint models and other files

And the following files:

1. `Glow-TTS.ipynb`: training and inference code for Glow-TTS models
2. `TransformerTTS.ipynb`: training and inference code for TransformerTTS models
3. `VITS_TTS.ipynb`: optional code for training VITS models; follows the same format as Glow-TTS
4. `metadata_original.csv`: LJSpeech-formatted transcriptions of the `wavs_original` folder, ready for TTS training
5. `metadata.csv`: LJSpeech-formatted transcriptions of the `wavs` folder, ready for TTS training
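
Both metadata files use the pipe-delimited LJSpeech layout (`file_id|raw text|normalized text`). The row below is an invented illustration of that layout, not an actual entry from the dataset:

```
chunk_0001|Hello, and welcome back to another tingle session.|hello, and welcome back to another tingle session.
```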

## Latest Update: End-To-End Text-2-ASMR with Diffusion

Based on the paper *E3 TTS: Easy End-to-End Diffusion-Based Text to Speech* (Yuan Gao, Nobuyuki Morioka, Yu Zhang, Nanxin Chen; Google).

A text-to-ASMR UNet diffusion model, differing slightly from the framework described in the paper, was trained on the same audio-transcript paired dataset with 1000 DDPM steps for 10 epochs.

Model metrics:

  1. General Loss: 0.000134
  2. MSE Loss: 0.000027
  3. RMSE Loss: 0.000217
  4. MAE Loss: 0.000018
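
For reference, here is a minimal sketch of how waveform reconstruction metrics like these might be computed; `pred` and `target` are hypothetical stand-in tensors, not code from the actual training notebook:

```python
import torch

def reconstruction_metrics(pred: torch.Tensor, target: torch.Tensor) -> dict:
    """MSE, RMSE, and MAE between a predicted and a reference waveform."""
    err = pred - target
    mse = err.pow(2).mean()
    return {
        "mse": mse.item(),
        "rmse": mse.sqrt().item(),
        "mae": err.abs().mean().item(),
    }
```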
- **Curated by:** Alosh Denny, Anish S
- **Language(s) (NLP):** English
- **License:** MIT

## Dataset Sources

YouTube channels: Rebeccas ASMR, Nanou ASMR, Gibi ASMR, Cherie Lorraine ASMR, etc.

## Uses

The dataset can be used to train text2spec2mel, text2wav, and/or other end-to-end text-to-speech models.
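
As a starting point, here is a minimal PyTorch sketch that pairs each wav chunk with its transcript, assuming the `wavs/` plus `metadata.csv` layout described above (the field order and file-naming scheme are assumptions, not verified against the archive):

```python
import csv
from pathlib import Path

import soundfile as sf
import torch
from torch.utils.data import Dataset

class AsmrTtsDataset(Dataset):
    """Pairs each metadata.csv row with its corresponding wav chunk."""

    def __init__(self, root: str):
        root = Path(root)
        self.wav_dir = root / "wavs"
        with open(root / "metadata.csv", newline="", encoding="utf-8") as f:
            # LJSpeech rows: file_id|raw text|normalized text.
            # QUOTE_NONE keeps quote characters inside transcripts intact.
            self.rows = list(csv.reader(f, delimiter="|", quoting=csv.QUOTE_NONE))

    def __len__(self) -> int:
        return len(self.rows)

    def __getitem__(self, idx: int):
        file_id, text = self.rows[idx][0], self.rows[idx][-1]
        audio, sample_rate = sf.read(self.wav_dir / f"{file_id}.wav")
        return text, torch.from_numpy(audio).float(), sample_rate
```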

### Direct Use

Pretrained models can be tested out with the TransformerTTS notebook and the Glow-TTS notebook.
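
For a quick smoke test outside the notebooks, here is a hedged sketch that assumes the Glow-TTS checkpoint is loadable with Coqui TTS (the common Glow-TTS implementation; this is not confirmed by the notebooks, and the paths below are placeholders rather than the actual archive contents):

```python
from TTS.api import TTS

# Placeholder paths; point these at the checkpoint and config
# actually shipped in the models folder.
tts = TTS(
    model_path="models/glow_tts_checkpoint.pth",
    config_path="models/config.json",
)
tts.tts_to_file(
    text="Hello, and welcome back to another tingle session.",
    file_path="asmr_sample.wav",
)
```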

## Dataset Card Authors

Alosh Denny, Anish S

## Dataset Card Contact

[email protected]