Infant Cry Detection Using Causal Temporal Representation
This model detects infant cries using a novel causal temporal representation framework. By integrating causal reasoning into the data-generating process (DGP), the model aims to enhance the interpretability and reliability of cry detection systems.
Features
- Causal Data-Generating Process: Incorporates formal causal assumptions that define the relationship between audio features and their annotations.
- Supervised Models: Includes pre-trained state-of-the-art models:
  - Bidirectional LSTM
  - Transformer
  - MobileNet V2
- Event-Based Metrics: Tailored for time-sensitive detection tasks (an illustrative computation follows this list):
  - Event-based F1-score
  - Intersection over Union (IoU)
- Interactive Example: Jupyter Notebook with step-by-step usage demonstrations.
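The sketch below illustrates how event-based metrics of this kind can be computed from lists of (onset, offset) intervals. The function names, the greedy matching scheme, and the IoU threshold of 0.5 are assumptions for demonstration; the repository's own evaluation code may use different matching rules.

```python
# Illustrative sketch only: the exact matching criterion in this repository may differ.
from typing import List, Tuple

Interval = Tuple[float, float]  # (onset_seconds, offset_seconds)

def interval_iou(a: Interval, b: Interval) -> float:
    """Intersection over Union of two time intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def event_f1(preds: List[Interval], refs: List[Interval], iou_thr: float = 0.5) -> float:
    """Event-based F1: a predicted event counts as a true positive if it
    overlaps an unmatched reference event with IoU >= iou_thr (assumed criterion)."""
    matched, tp = set(), 0
    for p in preds:
        for i, r in enumerate(refs):
            if i not in matched and interval_iou(p, r) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(preds) - tp
    fn = len(refs) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Example: one cry detected slightly late, one missed, one false alarm -> F1 = 0.5
print(event_f1(preds=[(0.2, 1.1), (5.0, 5.4)], refs=[(0.0, 1.0), (3.0, 3.8)]))
```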
How to Use
You can load the model directly from Hugging Face:
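A minimal sketch of downloading and loading a checkpoint with `huggingface_hub` is shown below; the repository id, checkpoint filename, and model class are placeholders and should be replaced with the actual values from this model card.

```python
# Minimal sketch, not verbatim from this repository: repo id, filename,
# and architecture class are assumed placeholders.
import torch
from huggingface_hub import hf_hub_download

# Download a checkpoint file from the Hub (replace with the actual repo id / filename).
checkpoint_path = hf_hub_download(
    repo_id="your-username/infant-cry-detection",  # placeholder repo id
    filename="bilstm_cry_detector.pt",             # placeholder checkpoint name
)

# Load the weights; the state-dict keys depend on the chosen architecture
# (Bidirectional LSTM, Transformer, or MobileNet V2).
state_dict = torch.load(checkpoint_path, map_location="cpu")
# model = YourCryDetector()          # instantiate the matching architecture
# model.load_state_dict(state_dict)  # then restore the pre-trained weights
# model.eval()
```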