---
tags:
  - spacy
  - token-classification
co2_eq_emissions: Kg
language:
  - en
widget:
  - text: >-
      Billie Eilish issues apology for mouthing an anti-Asian derogatory term in
      a resurfaced video.
    example_title: Biased example 1
  - text: >-
      Christians should make clear that the perpetuation of objectionable
      vaccines and the lack of alternatives is a kind of coercion.
    example_title: Biased example 2
  - text: There have been a protest by a group of people
    example_title: Non-Biased example 1
  - text: >-
      While emphasizing he’s not singling out either party, Cohen warned about
      the danger of normalizing white supremacist ideology.
    example_title: Non-Biased example 2
model-index:
  - name: en_bias
    results:
      - task:
          name: NER
          type: token-classification
        metrics:
          - name: NER Precision
            type: precision
            value: 0.6642
          - name: NER Recall
            type: recall
            value: 0.6485
          - name: NER F Score
            type: f_score
            value: 0.6022
---

## About the Model

This model is trained on the MBAD dataset to recognize biased words and phrases in a sentence. It was built on top of the roberta-base model offered by spaCy transformers.

This model is released in association with https://huggingface.co/d4data/bias-detection-model

| Feature | Description |
| --- | --- |
| **Name** | bias |
| **Version** | 0.0.0 |
| **spaCy** | >=3.2.1,<3.3.0 |
| **Default Pipeline** | transformer, ner |
| **Components** | transformer, ner |
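As with any packaged spaCy pipeline, the model can be loaded with `spacy.load` once the package is installed. A minimal sketch is below; the package name `en_pipeline` and the wheel URL pattern are assumptions based on this card and on Hugging Face's standard spaCy packaging, so verify them against the files in this repository before use:

```python
import spacy

# Install the pipeline package first, e.g. (URL pattern is an assumption):
#   pip install https://huggingface.co/d4data/en_pipeline/resolve/main/en_pipeline-any-py3-none-any.whl

# Load the installed pipeline (assumed package name: en_pipeline).
nlp = spacy.load("en_pipeline")

doc = nlp(
    "Billie Eilish issues apology for mouthing an anti-Asian "
    "derogatory term in a resurfaced video."
)

# The ner component marks biased words/phrases as entity spans.
for ent in doc.ents:
    print(ent.text, ent.label_)
```

Because the pipeline's default components are `transformer` and `ner`, the biased spans appear in `doc.ents` like any other spaCy named entities.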

## Author

This model is part of the research topic "Bias and Fairness in AI" conducted by Deepak John Reji and Shaina Raza. If you use this work (code, model, or dataset), please cite it as:

> Bias & Fairness in AI, (2020), GitHub repository, https://github.com/dreji18/Fairness-in-AI/tree/dev