Commit 1876494 (1 parent: 7219869): remove incorrect images
README.md
CHANGED
@@ -42,13 +42,13 @@ We engage directly with contributors and have addressed pressing issues. To brin
 Click on the flag icon on any Model, Dataset, Space, or Discussion:
 <p align="center">
 <br>
-<img src="
+<img src="" alt="screenshot pointing to the flag icon to Report this model" />
 </p>
 
 Share why you flagged this item:
 <p align="center">
 <br>
-<img src="
+<img src="" alt="screenshot showing the text window where you describe why you flagged this item" />
 </p>
 
 In prioritizing open science, we examine potential harm on a case-by-case basis. When users flag a system, developers can directly and transparently respond to concerns. Moderators are able to disengage from discussions should behavior become hateful and/or abusive (see [code of conduct](https://huggingface.co/code-of-conduct)).
@@ -65,7 +65,7 @@ Should a specific model be flagged as high risk by our community, we consider:
 Edit the model/data card → add “not_for_all_eyes” in the tags section → open the PR and wait for the authors to merge it.
 <p align="center">
 <br>
-<img src="
+<img src="" alt="screenshot showing where to add tags" />
 </p>
 
 Open science requires safeguards, and one of our goals is to create an environment informed by tradeoffs with different values. Hosting and providing access to models in addition to cultivating community and discussion empowers diverse groups to assess social implications and guide what is good machine learning.
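The "add a tag to the card" step in the diff above can be sketched locally. This is a minimal illustration only, assuming the tag is stored as an item under a `tags:` list in the card's YAML front matter; the `add_tag` helper and the sample card text are hypothetical, not part of any Hub API:

```python
# Hypothetical sample model card with YAML front matter.
CARD = """---
license: mit
tags:
- text-generation
---
# My model
"""

def add_tag(card_text: str, tag: str) -> str:
    """Insert `tag` as a new list item directly after the `tags:` key.

    Minimal sketch: assumes a `tags:` key already exists in the front matter.
    """
    out = []
    for line in card_text.splitlines(keepends=True):
        out.append(line)
        if line.strip() == "tags:":
            out.append(f"- {tag}\n")
    return "".join(out)

updated = add_tag(CARD, "not_for_all_eyes")
print(updated)
```

In practice you would make this edit through the Hub UI or with the `huggingface_hub` library's model-card utilities, then open a PR and wait for the authors to merge it, as the README describes.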