Commit · 36fd2a2
Parent(s): 846bb1e

add flag2 and nfae images
README.md CHANGED
@@ -48,7 +48,7 @@ Click on the flag icon on any Model, Dataset, Space, or Discussion:
 Share why you flagged this item:
 <p align="center">
 <br>
-<img src="" alt="screenshot showing the text window where you describe why you flagged this item" />
+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/flag2.jpg" alt="screenshot showing the text window where you describe why you flagged this item" />
 </p>

 In prioritizing open science, we examine potential harm on a case-by-case basis. When users flag a system, developers can directly and transparently respond to concerns. Moderators are able to disengage from discussions should behavior become hateful and/or abusive (see [code of conduct](https://huggingface.co/code-of-conduct)).
@@ -65,7 +65,7 @@ Should a specific model be flagged as high risk by our community, we consider:
 Edit the model/data card → add “not_for_all_eyes” in the tags section → open the PR and wait for the authors to merge it.
 <p align="center">
 <br>
-<img src="" alt="screenshot showing where to add tags" />
+<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_3/nfae.jpg" alt="screenshot showing where to add tags" />
 </p>

 Open science requires safeguards, and one of our goals is to create an environment informed by tradeoffs with different values. Hosting and providing access to models in addition to cultivating community and discussion empowers diverse groups to assess social implications and guide what is good machine learning.
@@ -82,4 +82,26 @@ Here are some recent demos and tools from researchers in the Hugging Face community

 Thanks for reading! 🤗

-~ Irene, Nima, Giada, Yacine, and Meg, on behalf of the Ethics and Society regulars
+~ Irene, Nima, Giada, Yacine, and Meg, on behalf of the Ethics and Society regulars
+
+If you want to cite this blog post, please use the following:
+```
+@misc{hf_ethics_soc_blog_3,
+  author = {Irene Solaiman and
+            Giada Pistilli and
+            Nima Boscarino and
+            Yacine Jernite and
+            Margaret Mitchell and
+            Elizabeth Allendorf and
+            Carlos Muñoz Ferrandis and
+            Nathan Lambert and
+            Alexandra Sasha Luccioni
+  },
+  title = {Hugging Face Ethics and Society Newsletter 3: Ethical Openness at Hugging Face},
+  booktitle = {Hugging Face Blog},
+  year = {2023},
+  url = {https://doi.org/10.57967/hf/0487},
+  doi = {10.57967/hf/0487}
+}
+
+```
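The second hunk documents the manual flow for the `not_for_all_eyes` tag (edit the model/data card, add the tag, open a PR, wait for the authors to merge). For readers who prefer to script that step, here is a minimal sketch using the `huggingface_hub` `ModelCard` helper; the repo id `some-org/some-model` is a placeholder, and only the tag name is taken verbatim from the README.

```python
# Minimal sketch: add the "not_for_all_eyes" tag to a model card and open a PR.
# Assumes `huggingface_hub` is installed and you are logged in (e.g. `huggingface-cli login`).
# "some-org/some-model" is a placeholder repo id for illustration only.
from huggingface_hub import ModelCard

repo_id = "some-org/some-model"

card = ModelCard.load(repo_id)   # fetches the repo's README.md and parses its YAML metadata
tags = card.data.tags or []      # existing tags, if any

if "not_for_all_eyes" not in tags:
    card.data.tags = tags + ["not_for_all_eyes"]
    # create_pr=True opens a pull request instead of committing to main,
    # so the repo authors can review and merge the change.
    card.push_to_hub(
        repo_id,
        create_pr=True,
        commit_message="Add not_for_all_eyes tag",
    )
```

Opening the change as a pull request (`create_pr=True`) mirrors the README's instruction to wait for the repo authors to merge it rather than writing to the card directly.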