slug
stringlengths
15
15
content
listlengths
1
129
rawContent
stringlengths
1
2k
author
dict
attachments
listlengths
0
49
mentions
listlengths
0
49
reactions
listlengths
0
12
publishedAt
stringlengths
24
24
updatedAt
stringlengths
24
24
commentators
listlengths
0
52
url
stringlengths
25
46
totalUniqueImpressions
int64
1
42.1k
โŒ€
numComments
int64
0
621
835295190782035
[ { "type": "text", "value": "Interested in learning about everything Image?", "raw": "Interested in learning about everything Image?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€‹With the rise of recent interest in Vision Language Models (VLMs), we decided to make a push to include an ImageField within Argilla! This means any open source developer can now work on better models for vision ML tasks too and we would like to show you how.", "raw": "โ€‹With the rise of recent interest in Vision Language Models (VLMs), we decided to make a push to include an ImageField within Argilla! This means any open source developer can now work on better models for vision ML tasks too and we would like to show you how.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€‹We would love to introduce this new feature to you, so we've prepared a set of notebooks to go over some common image scenarios.", "raw": "โ€‹We would love to introduce this new feature to you, so we've prepared a set of notebooks to go over some common image scenarios.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "finetune an CLIP retrieval model with sentence transformers", "raw": "finetune an CLIP retrieval model with sentence transformers", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "use ColPali+ Qwen VL for RAG and log the results to Argilla", "raw": "use ColPali+ Qwen VL for RAG and log the results to Argilla", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "image-generation preference: creating multi-modal preference datasets for free using Hugging Face inference endpoints.", "raw": "image-generation preference: creating multi-modal preference datasets for free using Hugging Face inference endpoints.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€‹See you on Thursday!", "raw": "โ€‹See you on Thursday!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://lu.ma/x7id1jqu", "href": "https://lu.ma/x7id1jqu", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Interested in learning about everything Image? โ€‹With the rise of recent interest in Vision Language Models (VLMs), we decided to make a push to include an ImageField within Argilla! This means any open source developer can now work on better models for vision ML tasks too and we would like to show you how. โ€‹We would love to introduce this new feature to you, so we've prepared a set of notebooks to go over some common image scenarios. finetune an CLIP retrieval model with sentence transformers use ColPali+ Qwen VL for RAG and log the results to Argilla image-generation preference: creating multi-modal preference datasets for free using Hugging Face inference endpoints. โ€‹See you on Thursday! https://lu.ma/x7id1jqu
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1677141720071-634ff41ff32062e9eb7b06a3.jpeg", "fullname": "David Berenstein", "name": "davidberenstein1957", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 167, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "osanseviero", "John6666", "bikashpatra", "nitishpandey04", "tuanlda78202" ], "count": 5 } ]
2024-09-09T08:49:02.000Z
2024-09-09T09:39:50.035Z
[]
/posts/davidberenstein1957/835295190782035
1,501
0
713170745863861
[ { "type": "text", "value": "Hi, just a brief follow-up on our Guided Reasoning (GuiR) system:", "raw": "Hi, just a brief follow-up on our Guided Reasoning (GuiR) system:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I've created a template space that facilitates testing:", "raw": "I've created a template space that facilitates testing:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "1. Duplicate space ", "raw": "1. Duplicate space ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/logikon/guir-chat", "href": null, "resource": { "type": "space", "id": "logikon/guir-chat", "discussionNum": null }, "url": "https://huggingface.co/spaces/logikon/guir-chat", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "2. Setup your own inference servers and provide details in config file", "raw": "2. Setup your own inference servers and provide details in config file", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "3. Add api keys as secrets", "raw": "3. Add api keys as secrets", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "4. Your personal GuiR playground is ready", "raw": "4. Your personal GuiR playground is ready", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Cheers, Gregor", "raw": "Cheers, Gregor", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Hi, just a brief follow-up on our Guided Reasoning (GuiR) system: I've created a template space that facilitates testing: 1. Duplicate space https://huggingface.co/spaces/logikon/guir-chat 2. Setup your own inference servers and provide details in config file 3. Add api keys as secrets 4. Your personal GuiR playground is ready Cheers, Gregor
{ "avatarUrl": "/avatars/78be882adf32b808686713e9b457797d.svg", "fullname": "Gregor Betz", "name": "ggbetz", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 4, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "osanseviero", "louisbrulenaudet" ], "count": 3 } ]
2024-09-09T06:39:12.000Z
2024-09-09T06:39:12.426Z
[]
/posts/ggbetz/713170745863861
1,434
0
599832018572583
[ { "type": "text", "value": "\"It's Sunday night, fancy a game?\"", "raw": "\"It's Sunday night, fancy a game?\"", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://kz919-can-you-beat-405b-in-chess.hf.space/", "href": "https://kz919-can-you-beat-405b-in-chess.hf.space/", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "built with the one and only SN fast API:", "raw": "built with the one and only SN fast API:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://sambanova.ai/fast-api?api_ref=907266", "href": "https://sambanova.ai/fast-api?api_ref=907266", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
"It's Sunday night, fancy a game?" https://kz919-can-you-beat-405b-in-chess.hf.space/ built with the one and only SN fast API: https://sambanova.ai/fast-api?api_ref=907266
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2FFTiirwS_L6IaLHmHwIo2g.png", "fullname": "Kaizhao Liang", "name": "kz919", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 34, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2FyuYz3mFmL1grbIyxOISPC.webp" } ]
[]
[ { "reaction": "๐Ÿง ", "users": [ "prithivMLmods", "KingNish", "John6666", "osanseviero", "victor", "kz919", "wath5", "Srulikbdd" ], "count": 8 }, { "reaction": "๐Ÿ”ฅ", "users": [ "kz919", "Srulikbdd" ], "count": 2 } ]
2024-09-09T06:01:34.000Z
2024-09-11T15:23:40.910Z
[ { "avatarUrl": "/avatars/591252948eb38a09b9907239ceaca520.svg", "fullname": "Mishl", "name": "mishl", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1671785182610-noauth.png", "fullname": "Malte0621", "name": "Malte0621", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/avatars/407dc6ff003f595a7a86c67c5d3e62d6.svg", "fullname": "Srulik ben David", "name": "Srulikbdd", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2FFTiirwS_L6IaLHmHwIo2g.png", "fullname": "Kaizhao Liang", "name": "kz919", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 34, "isFollowing": false } ]
/posts/kz919/599832018572583
2,446
7
440953921758845
[ { "type": "text", "value": "Hello all,", "raw": "Hello all,", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I'm excited to share my article introducing AISAK's new flagship model, AISAK-O (Artificially Intelligent Swiss Army Knife OPTIMUM).", "raw": "I'm excited to share my article introducing AISAK's new flagship model, AISAK-O (Artificially Intelligent Swiss Army Knife OPTIMUM).", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Read the full details here:", "raw": "Read the full details here:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/blog/mandelakori/aisak-o", "href": "https://huggingface.co/blog/mandelakori/aisak-o", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Key highlights of AISAK-O include:", "raw": "Key highlights of AISAK-O include:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- 8 billion parameters and a 32k token context length", "raw": "- 8 billion parameters and a 32k token context length", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Multimodal capabilities for processing both text and visual data", "raw": "- Multimodal capabilities for processing both text and visual data", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Impressive benchmark scores, surpassing GPT-4V in some areas", "raw": "- Impressive benchmark scores, surpassing GPT-4V in some areas", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Specialized in tasks like image captioning, visual reasoning, and cohesive ", "raw": "- Specialized in tasks like image captioning, visual reasoning, and cohesive ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " content generation", "raw": " content generation", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Efficient architecture competing with larger models", "raw": "- Efficient architecture competing with larger models", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We're also offering a unique beta testing opportunity with access to inference code.", "raw": "We're also offering a unique beta testing opportunity with access to inference code.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "For more information or partnership inquiries, please contact us at [email protected].", "raw": "For more information or partnership inquiries, please contact us at [email protected].", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I hope you find this advancement in multimodal AI as exciting as we do!", "raw": "I hope you find this advancement in multimodal AI as exciting as we do!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/aisak-ai/O", "href": null, "resource": { "type": "model", "id": "aisak-ai/O", "discussionNum": null }, "url": "https://huggingface.co/aisak-ai/O", "code": null, "user": null, "label": null, "lang": null } ]
Hello all, I'm excited to share my article introducing AISAK's new flagship model, AISAK-O (Artificially Intelligent Swiss Army Knife OPTIMUM). Read the full details here: https://huggingface.co/blog/mandelakori/aisak-o Key highlights of AISAK-O include: - 8 billion parameters and a 32k token context length - Multimodal capabilities for processing both text and visual data - Impressive benchmark scores, surpassing GPT-4V in some areas - Specialized in tasks like image captioning, visual reasoning, and cohesive content generation - Efficient architecture competing with larger models We're also offering a unique beta testing opportunity with access to inference code. For more information or partnership inquiries, please contact us at [email protected]. I hope you find this advancement in multimodal AI as exciting as we do! https://huggingface.co/aisak-ai/O
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6511868c5e3bcde19c5c6dd3%2FSkzqAEb7akK0Wwyv5Vavy.png", "fullname": "Mandela Logan", "name": "mandelakori", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-08T23:19:44.000Z
2024-09-09T13:09:35.504Z
[]
/posts/mandelakori/440953921758845
505
0
540940352351865
[ { "type": "text", "value": "An example of the application of LegalKit is the production of knowledge graphs, here is a demo Space ๐Ÿ”—", "raw": "An example of the application of LegalKit is the production of knowledge graphs, here is a demo Space ๐Ÿ”—", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "With the update of the French legal code data model uploaded to ๐Ÿค— and the introduction of a column dedicated to HTML text, it's now easy to extract links between different articles and produce complex graphs with just a few lines of Python.", "raw": "With the update of the French legal code data model uploaded to ๐Ÿค— and the introduction of a column dedicated to HTML text, it's now easy to extract links between different articles and produce complex graphs with just a few lines of Python.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This simplified demo highlights the ease of implementation and creative potential, and enables the generation of complete data sets, although requiring a powerful graphics card for display. The framework used for the moment is D3.js, but perhaps other solutions are possible. I'd be delighted to hear your suggestions, and look forward to hearing from the community.", "raw": "This simplified demo highlights the ease of implementation and creative potential, and enables the generation of complete data sets, although requiring a powerful graphics card for display. The framework used for the moment is D3.js, but perhaps other solutions are possible. I'd be delighted to hear your suggestions, and look forward to hearing from the community.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Link to the ๐Ÿค— Space: ", "raw": "Link to the ๐Ÿค— Space: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/louisbrulenaudet/legalkit-knowledge-graph", "href": null, "resource": { "type": "space", "id": "louisbrulenaudet/legalkit-knowledge-graph", "discussionNum": null }, "url": "https://huggingface.co/spaces/louisbrulenaudet/legalkit-knowledge-graph", "code": null, "user": null, "label": null, "lang": null } ]
An example of the application of LegalKit is the production of knowledge graphs, here is a demo Space ๐Ÿ”— With the update of the French legal code data model uploaded to ๐Ÿค— and the introduction of a column dedicated to HTML text, it's now easy to extract links between different articles and produce complex graphs with just a few lines of Python. This simplified demo highlights the ease of implementation and creative potential, and enables the generation of complete data sets, although requiring a powerful graphics card for display. The framework used for the moment is D3.js, but perhaps other solutions are possible. I'd be delighted to hear your suggestions, and look forward to hearing from the community. Link to the ๐Ÿค— Space: https://huggingface.co/spaces/louisbrulenaudet/legalkit-knowledge-graph
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6459fa0f5b3111fbe83286e1%2FUhCa7JNbtTjC6dgOjZtH0.jpeg", "fullname": "Louis Brulรฉ Naudet", "name": "louisbrulenaudet", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 174, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6459fa0f5b3111fbe83286e1%2FVTT6HuWPNFR6OAR9jGWFa.jpeg" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "victor", "LeroyDyer", "alielfilali01" ], "count": 4 }, { "reaction": "๐Ÿ”ฅ", "users": [ "alielfilali01", "samadpls" ], "count": 2 } ]
2024-09-08T20:39:40.000Z
2024-09-10T07:44:41.780Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65d883893a52cd9bcd8ab7cf%2FtRsCJlHNZo1D02kBTmfy9.jpeg", "fullname": "leroy Samuel Dyer", "name": "LeroyDyer", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 84, "isFollowing": false }, { "avatarUrl": "/avatars/2f7a1cfc68e6f5c0a7ddb323d2ffd252.svg", "fullname": "Mads", "name": "mhenrichsen", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 41, "isFollowing": false } ]
/posts/louisbrulenaudet/540940352351865
1,573
2
490115230567379
[ { "type": "text", "value": "๐ŸŒ Introducing Edupres.ru Presentations Dataset - ", "raw": "๐ŸŒ Introducing Edupres.ru Presentations Dataset - ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/nyuuzyou/edupres", "href": null, "resource": { "type": "dataset", "id": "nyuuzyou/edupres", "discussionNum": null }, "url": "https://huggingface.co/datasets/nyuuzyou/edupres", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Dataset highlights:", "raw": "Dataset highlights:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Metadata for 44,210 presentations from edupres.ru", "raw": "- Metadata for 44,210 presentations from edupres.ru", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- 21,941 presentations available in original format", "raw": "- 21,941 presentations available in original format", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Multilingual content: Primarily Russian, with some Ukrainian, Belarusian, and English", "raw": "- Multilingual content: Primarily Russian, with some Ukrainian, Belarusian, and English", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Each entry includes: URL, title, description, author, publication date, file size, and download link", "raw": "- Each entry includes: URL, title, description, author, publication date, file size, and download link", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Data reflects educational presentations accessible through the Edupres.ru platform", "raw": "- Data reflects educational presentations accessible through the Edupres.ru platform", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Licensed under Creative Commons Zero (CC0) for unrestricted use", "raw": "- Licensed under Creative Commons Zero (CC0) for unrestricted use", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This dataset offers a unique window into online educational resources, particularly in Russian-language contexts. It provides opportunities for analyzing presentation trends, topic distributions, and language patterns in educational materials. The dataset is particularly well-suited for tasks such as text classification and text retrieval in multilingual educational settings.", "raw": "This dataset offers a unique window into online educational resources, particularly in Russian-language contexts. It provides opportunities for analyzing presentation trends, topic distributions, and language patterns in educational materials. The dataset is particularly well-suited for tasks such as text classification and text retrieval in multilingual educational settings.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐ŸŒ Introducing Edupres.ru Presentations Dataset - https://huggingface.co/datasets/nyuuzyou/edupres Dataset highlights: - Metadata for 44,210 presentations from edupres.ru - 21,941 presentations available in original format - Multilingual content: Primarily Russian, with some Ukrainian, Belarusian, and English - Each entry includes: URL, title, description, author, publication date, file size, and download link - Data reflects educational presentations accessible through the Edupres.ru platform - Licensed under Creative Commons Zero (CC0) for unrestricted use This dataset offers a unique window into online educational resources, particularly in Russian-language contexts. It provides opportunities for analyzing presentation trends, topic distributions, and language patterns in educational materials. The dataset is particularly well-suited for tasks such as text classification and text retrieval in multilingual educational settings.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F643ac5d2e2b979ae6144d68c%2FZ7PCNopn4cQeAYnVJDoqG.png", "fullname": "nyuuzyou", "name": "nyuuzyou", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 57, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "louisbrulenaudet", "valbertofeitosa" ], "count": 2 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-08T19:50:34.000Z
2024-09-08T19:50:34.687Z
[]
/posts/nyuuzyou/490115230567379
1,350
0
785101403500240
[ { "type": "text", "value": "This is an absolutely mind-boggling experiment!", "raw": "This is an absolutely mind-boggling experiment!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@GuangyuRobert", "href": null, "resource": null, "url": null, "code": null, "user": "GuangyuRobert", "label": null, "lang": null }, { "type": "text", "value": " (Twitter Handle) from MIT has created Project Sid, which simulates over 1,000 autonomous AI agents collaborating in a Minecraft environment, operating for extended periods without human intervention. This simulation demonstrates unprecedented levels of agent interaction, decision-making, and societal development.", "raw": " (Twitter Handle) from MIT has created Project Sid, which simulates over 1,000 autonomous AI agents collaborating in a Minecraft environment, operating for extended periods without human intervention. This simulation demonstrates unprecedented levels of agent interaction, decision-making, and societal development.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Agents operate independently for hours or days, showcasing advanced decision-making algorithms and goal-oriented behavior.", "raw": "Agents operate independently for hours or days, showcasing advanced decision-making algorithms and goal-oriented behavior.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The simulation produced complex, emergent phenomena, including:", "raw": "The simulation produced complex, emergent phenomena, including:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Economic systems with currency (gems) and trading", "raw": "- Economic systems with currency (gems) and trading", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Cultural development and religious practices", "raw": "- Cultural development and religious practices", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Agents even understood bribing. Priests were moving the most gems to bribe people into following them!", "raw": "- Agents even understood bribing. Priests were moving the most gems to bribe people into following them!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Governmental structures and democratic processes", "raw": "- Governmental structures and democratic processes", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Project Sid addresses fundamental challenges in AI research:", "raw": "Project Sid addresses fundamental challenges in AI research:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Coherence: Maintaining consistent agent behavior over extended periods.", "raw": "- Coherence: Maintaining consistent agent behavior over extended periods.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Multi-agent Collaboration: Enabling effective communication and coordination among numerous AI entities.", "raw": "- Multi-agent Collaboration: Enabling effective communication and coordination among numerous AI entities.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Long-term Progression: Developing agents capable of learning and evolving over time.", "raw": "- Long-term Progression: Developing agents capable of learning and evolving over time.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "While Minecraft serves as the initial testbed, the underlying AI architecture is designed to be game-agnostic, suggesting potential applications in various digital environments and real-world simulations.", "raw": "While Minecraft serves as the initial testbed, the underlying AI architecture is designed to be game-agnostic, suggesting potential applications in various digital environments and real-world simulations.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Imagine a policy being debated by the government and how it might affect society; Sid can simulate its impact!", "raw": "Imagine a policy being debated by the government and how it might affect society; Sid can simulate its impact!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Even if this remains just a game experiment, the project successfully manages 1,000+ agents simultaneously, a feat that requires robust distributed computing and efficient agent architecture.", "raw": "Even if this remains just a game experiment, the project successfully manages 1,000+ agents simultaneously, a feat that requires robust distributed computing and efficient agent architecture.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
This is an absolutely mind-boggling experiment! @GuangyuRobert (Twitter Handle) from MIT has created Project Sid, which simulates over 1,000 autonomous AI agents collaborating in a Minecraft environment, operating for extended periods without human intervention. This simulation demonstrates unprecedented levels of agent interaction, decision-making, and societal development. Agents operate independently for hours or days, showcasing advanced decision-making algorithms and goal-oriented behavior. The simulation produced complex, emergent phenomena, including: - Economic systems with currency (gems) and trading - Cultural development and religious practices - Agents even understood bribing. Priests were moving the most gems to bribe people into following them! - Governmental structures and democratic processes Project Sid addresses fundamental challenges in AI research: - Coherence: Maintaining consistent agent behavior over extended periods. - Multi-agent Collaboration: Enabling effective communication and coordination among numerous AI entities. - Long-term Progression: Developing agents capable of learning and evolving over time. While Minecraft serves as the initial testbed, the underlying AI architecture is designed to be game-agnostic, suggesting potential applications in various digital environments and real-world simulations. Imagine a policy being debated by the government and how it might affect society; Sid can simulate its impact! Even if this remains just a game experiment, the project successfully manages 1,000+ agents simultaneously, a feat that requires robust distributed computing and efficient agent architecture.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FWXYLnjjJ4SROkoveIi7If.png", "fullname": "Kuldeep Singh Sidhu", "name": "singhsidhukuldeep", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 219, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2F7P-md9P4rROJ0q-DnHumQ.mp4" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "TroglodyteDerivations", "Hev832", "jlzhou", "Tanvir1337", "sadra-barikbin", "Blane187", "ClearPeakeGroup", "xtre3m" ], "count": 8 }, { "reaction": "๐Ÿ‘€", "users": [ "createtheimaginable", "John6666", "jlzhou", "flflow" ], "count": 4 }, { "reaction": "๐Ÿคฏ", "users": [ "facehuggervortex", "kmsky" ], "count": 2 }, { "reaction": "๐Ÿš€", "users": [ "pduf" ], "count": 1 } ]
2024-09-08T16:49:26.000Z
2024-09-08T16:49:26.350Z
[]
/posts/singhsidhukuldeep/785101403500240
3,459
0
957394858411398
[ { "type": "text", "value": "SemanticFinder now supports WebGPU thanks to ", "raw": "SemanticFinder now supports WebGPU thanks to ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@Xenova", "href": null, "resource": null, "url": null, "code": null, "user": "Xenova", "label": null, "lang": null }, { "type": "text", "value": "'s efforts with transformers.js v3!", "raw": "'s efforts with transformers.js v3!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Expect massive performance gains. Inferenced a whole book with 46k chunks in <5min. If your device doesn't support #WebGPU use the classic Wasm-based version:", "raw": "Expect massive performance gains. Inferenced a whole book with 46k chunks in <5min. If your device doesn't support #WebGPU use the classic Wasm-based version:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- WebGPU: ", "raw": "- WebGPU: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://do-me.github.io/SemanticFinder/webgpu/", "href": "https://do-me.github.io/SemanticFinder/webgpu/", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Wasm: ", "raw": "- Wasm: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://do-me.github.io/SemanticFinder/", "href": "https://do-me.github.io/SemanticFinder/", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "WebGPU harnesses the full power of your hardware, no longer being restricted to just the CPU. The speedup is significant (4-60x) for all kinds of devices: consumer-grade laptops, heavy Nvidia GPU setups or Apple Silicon. Measure the difference for your device here: ", "raw": "WebGPU harnesses the full power of your hardware, no longer being restricted to just the CPU. The speedup is significant (4-60x) for all kinds of devices: consumer-grade laptops, heavy Nvidia GPU setups or Apple Silicon. Measure the difference for your device here: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/Xenova/webgpu-embedding-benchmark", "href": null, "resource": { "type": "space", "id": "Xenova/webgpu-embedding-benchmark", "discussionNum": null }, "url": "https://huggingface.co/spaces/Xenova/webgpu-embedding-benchmark", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Chrome currently works out of the box, Firefox requires some tweaking.", "raw": "Chrome currently works out of the box, Firefox requires some tweaking.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "WebGPU + transformers.js allows to build amazing applications and make them accessible to everyone. E.g. SemanticFinder could become a simple GUI for populating your (vector) DB of choice. See the pre-indexed community texts here: ", "raw": "WebGPU + transformers.js allows to build amazing applications and make them accessible to everyone. E.g. SemanticFinder could become a simple GUI for populating your (vector) DB of choice. See the pre-indexed community texts here: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/do-me/SemanticFinder", "href": null, "resource": { "type": "dataset", "id": "do-me/SemanticFinder", "discussionNum": null }, "url": "https://huggingface.co/datasets/do-me/SemanticFinder", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Happy to hear your ideas!", "raw": "Happy to hear your ideas!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
SemanticFinder now supports WebGPU thanks to @Xenova's efforts with transformers.js v3! Expect massive performance gains. Inferenced a whole book with 46k chunks in <5min. If your device doesn't support #WebGPU use the classic Wasm-based version: - WebGPU: https://do-me.github.io/SemanticFinder/webgpu/ - Wasm: https://do-me.github.io/SemanticFinder/ WebGPU harnesses the full power of your hardware, no longer being restricted to just the CPU. The speedup is significant (4-60x) for all kinds of devices: consumer-grade laptops, heavy Nvidia GPU setups or Apple Silicon. Measure the difference for your device here: https://huggingface.co/spaces/Xenova/webgpu-embedding-benchmark Chrome currently works out of the box, Firefox requires some tweaking. WebGPU + transformers.js allows to build amazing applications and make them accessible to everyone. E.g. SemanticFinder could become a simple GUI for populating your (vector) DB of choice. See the pre-indexed community texts here: https://huggingface.co/datasets/do-me/SemanticFinder Happy to hear your ideas!
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2Fnoauth%2FIiercF_qxHWize2kitl9X.jpeg", "fullname": "Dominik Weckmรผller", "name": "do-me", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 38, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F64c4da8719565937fb268b32%2FcobK-NZqjbDcfHKWYCDlw.png" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F61b253b7ac5ecaae3d1efe0c%2FhwiQ0uvz3t-L5a-NtBIO6.png", "fullname": "Joshua", "name": "Xenova", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 3792 } ]
[ { "reaction": "๐Ÿš€", "users": [ "TroglodyteDerivations", "osanseviero", "Felladrin", "maywell", "Xenova", "ngxson", "ai-everyday", "louisbrulenaudet", "adorkin" ], "count": 9 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "Presidentlin", "Xenova", "ngxson" ], "count": 4 } ]
2024-09-08T10:49:39.000Z
2024-09-09T10:21:18.377Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1674191139776-noauth.png", "fullname": "Xuan Son NGUYEN", "name": "ngxson", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 49, "isFollowing": false } ]
/posts/do-me/957394858411398
3,253
1
814819566246318
[ { "type": "text", "value": "๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง ", "raw": "๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check out this app where you convert:", "raw": "Check out this app where you convert:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Pytorch/tensorflow summary -> required VRAM", "raw": "Pytorch/tensorflow summary -> required VRAM", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "or", "raw": "or", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Parameter count -> required VRAM", "raw": "Parameter count -> required VRAM", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Use it in: ", "raw": "Use it in: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "http://howmuchvram.com", "href": "http://howmuchvram.com", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "And everything is open source! Ask for new functionalities or contribute in:", "raw": "And everything is open source! Ask for new functionalities or contribute in:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/AlexBodner/How_Much_VRAM", "href": "https://github.com/AlexBodner/How_Much_VRAM", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful!", "raw": "If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "More discussion in: ", "raw": "More discussion in: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/AlexBodner_/status/1832054850294812679", "href": "https://x.com/AlexBodner_/status/1832054850294812679", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง  Check out this app where you convert: Pytorch/tensorflow summary -> required VRAM or Parameter count -> required VRAM Use it in: http://howmuchvram.com And everything is open source! Ask for new functionalities or contribute in: https://github.com/AlexBodner/How_Much_VRAM If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful! More discussion in: https://x.com/AlexBodner_/status/1832054850294812679
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F658880d499ed106ac888dd7a%2FwMv9-ZsJUw4QQnld_cci7.jpeg", "fullname": "Alexander Dylan Bodner", "name": "AlexBodner", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 28, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F658880d499ed106ac888dd7a%2FiBS6IZj-9Hezt-ihbqEFQ.mp4" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "konradhugging", "Jawaher786", "osanseviero", "wyrd-code", "genaihuman" ], "count": 5 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-07T19:11:08.000Z
2024-09-07T21:08:04.285Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6316fb937b0ee0136e5f1220%2FpoHBoJ7QAF_s2CCaosdvQ.jpeg", "fullname": "Firstname Lastname", "name": "takeraparterer", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 29, "isFollowing": false } ]
/posts/AlexBodner/814819566246318
1,845
1
989215269740443
[ { "type": "text", "value": "Last Week in Medical AI: Top Research ", "raw": "Last Week in Medical AI: Top Research ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Papers/Models", "raw": "Papers/Models", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ…(September 1 - September 7, 2024) ", "raw": "๐Ÿ…(September 1 - September 7, 2024) ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Medical LLM & Other Models :", "raw": "Medical LLM & Other Models :", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- CancerLLM: Large Language Model in Cancer Domain", "raw": "- CancerLLM: Large Language Model in Cancer Domain", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- MedUnA: Vision-Language Models for Medical Image", "raw": "- MedUnA: Vision-Language Models for Medical Image", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Foundation Model for Robotic Endoscopic Surgery ", "raw": "- Foundation Model for Robotic Endoscopic Surgery ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Med-MoE: MoE for Medical Vision-Language Models ", "raw": "- Med-MoE: MoE for Medical Vision-Language Models ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- CanvOI: Foundation Model for Oncology", "raw": "- CanvOI: Foundation Model for Oncology", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- UniUSNet: Ultrasound Disease Prediction", "raw": "- UniUSNet: Ultrasound Disease Prediction", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- DHIN: Decentralized Health Intelligence Network", "raw": "- DHIN: Decentralized Health Intelligence Network", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Medical Benchmarks and Evaluations:", "raw": "Medical Benchmarks and Evaluations:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- TrialBench: Clinical Trial Datasets & Benchmark", "raw": "- TrialBench: Clinical Trial Datasets & Benchmark", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- LLMs for Medical Q&A Evaluation", "raw": "- LLMs for Medical Q&A Evaluation", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- MedFuzz: Exploring Robustness Medical LLMs", "raw": "- MedFuzz: Exploring Robustness Medical LLMs", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- MedS-Bench: Evaluating LLMs in Clinical Tasks", "raw": "- MedS-Bench: Evaluating LLMs in Clinical Tasks", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- DiversityMedQA: Assessing LLM Bias in Diagnosis", "raw": "- DiversityMedQA: Assessing LLM Bias in Diagnosis", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- LLM Performance in Gastroenterology", "raw": "- LLM Performance in Gastroenterology", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "LLM Digital Twins:", "raw": "LLM Digital Twins:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Digital Twins for Rare Gynecological Tumors", "raw": "- Digital Twins for Rare Gynecological Tumors", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- DT-GPT: Digital Twins for Patient Health Forecasting", "raw": "- DT-GPT: Digital Twins for Patient Health Forecasting", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Medical LLM Applications:", "raw": "Medical LLM Applications:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- HIPPO: Explainable AI for Pathology ", "raw": "- HIPPO: Explainable AI for Pathology ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- LLMs vs Humans in CBT Therapy ", "raw": "- LLMs vs Humans in CBT Therapy ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ASD-Chat: LLMs for Autistic Children", "raw": "- ASD-Chat: LLMs for Autistic Children", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- LLMs for Mental Health", "raw": "- LLMs for Mental Health", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- LLMs for Postoperative Risk Prediction", "raw": "- LLMs for Postoperative Risk Prediction", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Frameworks and Methodologies: ", "raw": "Frameworks and Methodologies: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Rx Strategist: LLM-based Prescription Verification ", "raw": "- Rx Strategist: LLM-based Prescription Verification ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Medical Confidence Elicitation", "raw": "- Medical Confidence Elicitation", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Guardrails for Medical LLMs", "raw": "- Guardrails for Medical LLMs", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check the full thread : ", "raw": "Check the full thread : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/OpenlifesciAI/status/1832476252260712788", "href": "https://x.com/OpenlifesciAI/status/1832476252260712788", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Thank you for your continued support and love for this series! Stay up-to-date with weekly updates on Medical LLMs, datasets, and top research papers by following ", "raw": "Thank you for your continued support and love for this series! Stay up-to-date with weekly updates on Medical LLMs, datasets, and top research papers by following ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@aaditya", "href": null, "resource": null, "url": null, "code": null, "user": "aaditya", "label": null, "lang": null }, { "type": "text", "value": " ๐Ÿค—", "raw": " ๐Ÿค—", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Last Week in Medical AI: Top Research Papers/Models ๐Ÿ…(September 1 - September 7, 2024) Medical LLM & Other Models : - CancerLLM: Large Language Model in Cancer Domain - MedUnA: Vision-Language Models for Medical Image - Foundation Model for Robotic Endoscopic Surgery - Med-MoE: MoE for Medical Vision-Language Models - CanvOI: Foundation Model for Oncology - UniUSNet: Ultrasound Disease Prediction - DHIN: Decentralized Health Intelligence Network Medical Benchmarks and Evaluations: - TrialBench: Clinical Trial Datasets & Benchmark - LLMs for Medical Q&A Evaluation - MedFuzz: Exploring Robustness Medical LLMs - MedS-Bench: Evaluating LLMs in Clinical Tasks - DiversityMedQA: Assessing LLM Bias in Diagnosis - LLM Performance in Gastroenterology LLM Digital Twins: - Digital Twins for Rare Gynecological Tumors - DT-GPT: Digital Twins for Patient Health Forecasting Medical LLM Applications: - HIPPO: Explainable AI for Pathology - LLMs vs Humans in CBT Therapy - ASD-Chat: LLMs for Autistic Children - LLMs for Mental Health - LLMs for Postoperative Risk Prediction Frameworks and Methodologies: - Rx Strategist: LLM-based Prescription Verification - Medical Confidence Elicitation - Guardrails for Medical LLMs Check the full thread : https://x.com/OpenlifesciAI/status/1832476252260712788 Thank you for your continued support and love for this series! Stay up-to-date with weekly updates on Medical LLMs, datasets, and top research papers by following @aaditya ๐Ÿค—
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5f3fe13d79c1ba4c353d0c19%2FXswyGe3OtOdZ6g7rnrgfc.png", "fullname": "Aaditya Ura", "name": "aaditya", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 224, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F5f3fe13d79c1ba4c353d0c19%2FiA0ctoCtvjMjzGqCSFtrn.jpeg" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5f3fe13d79c1ba4c353d0c19%2FXswyGe3OtOdZ6g7rnrgfc.png", "fullname": "Aaditya Ura", "name": "aaditya", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 224 } ]
[ { "reaction": "โค๏ธ", "users": [ "aaditya", "BayesTensor", "models4world", "f0ster", "hapisnake", "Joseph717171", "osanseviero", "louisbrulenaudet", "Svngoku" ], "count": 9 }, { "reaction": "๐Ÿš€", "users": [ "aaditya", "BayesTensor", "models4world", "John6666", "Joseph717171", "Jawaher786", "osanseviero" ], "count": 7 }, { "reaction": "๐Ÿค—", "users": [ "aaditya", "BayesTensor", "Joseph717171", "Svngoku" ], "count": 4 }, { "reaction": "๐Ÿ”ฅ", "users": [ "aaditya", "BayesTensor", "Joseph717171", "genaihuman" ], "count": 4 }, { "reaction": "๐Ÿ‘", "users": [ "aaditya", "Joseph717171", "whitebill", "Spurthi007" ], "count": 4 } ]
2024-09-07T17:57:00.000Z
2024-09-08T11:14:47.307Z
[]
/posts/aaditya/989215269740443
3,483
0
615382012011851
[ { "type": "text", "value": "๐ŸŒ Yet another dataset - ", "raw": "๐ŸŒ Yet another dataset - ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/nyuuzyou/rule34world", "href": null, "resource": { "type": "dataset", "id": "nyuuzyou/rule34world", "discussionNum": null }, "url": "https://huggingface.co/datasets/nyuuzyou/rule34world", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Dataset highlights:", "raw": "Dataset highlights:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Metadata for 580,977 image files from rule34.world", "raw": "- Metadata for 580,977 image files from rule34.world", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Monolingual content: English tags and metadata", "raw": "- Monolingual content: English tags and metadata", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Each entry includes: URL, image URL, filepath, tags, and like count", "raw": "- Each entry includes: URL, image URL, filepath, tags, and like count", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Data reflects images available on the Rule34.world platform up to August/September 2024", "raw": "- Data reflects images available on the Rule34.world platform up to August/September 2024", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Licensed under Creative Commons Zero (CC0) for unrestricted use", "raw": "- Licensed under Creative Commons Zero (CC0) for unrestricted use", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This dataset offers a unique window into online anime and art communities, particularly those focused on adult content. It provides opportunities for analyzing tagging trends, image popularity, and content patterns in user-generated art platforms.", "raw": "This dataset offers a unique window into online anime and art communities, particularly those focused on adult content. It provides opportunities for analyzing tagging trends, image popularity, and content patterns in user-generated art platforms.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This dataset contains high-quality images and tags, making it a great source of data for training LoRA models.", "raw": "This dataset contains high-quality images and tags, making it a great source of data for training LoRA models.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐ŸŒ Yet another dataset - https://huggingface.co/datasets/nyuuzyou/rule34world Dataset highlights: - Metadata for 580,977 image files from rule34.world - Monolingual content: English tags and metadata - Each entry includes: URL, image URL, filepath, tags, and like count - Data reflects images available on the Rule34.world platform up to August/September 2024 - Licensed under Creative Commons Zero (CC0) for unrestricted use This dataset offers a unique window into online anime and art communities, particularly those focused on adult content. It provides opportunities for analyzing tagging trends, image popularity, and content patterns in user-generated art platforms. This dataset contains high-quality images and tags, making it a great source of data for training LoRA models.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F643ac5d2e2b979ae6144d68c%2FZ7PCNopn4cQeAYnVJDoqG.png", "fullname": "nyuuzyou", "name": "nyuuzyou", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 57, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "mmx31" ], "count": 2 }, { "reaction": "โค๏ธ", "users": [ "kristaller486", "Shinku" ], "count": 2 } ]
2024-09-07T14:58:37.000Z
2024-09-07T14:58:37.792Z
[]
/posts/nyuuzyou/615382012011851
1,395
0
927811517468266
[ { "type": "text", "value": "How can I make my RAG application generate real-time responses? Up until now, I have been using Groq for fast LLM generation and the Gradio Live function. I am looking for a better solution that can help me build a real-time application without any delay. ", "raw": "How can I make my RAG application generate real-time responses? Up until now, I have been using Groq for fast LLM generation and the Gradio Live function. I am looking for a better solution that can help me build a real-time application without any delay. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@abidlabs", "href": null, "resource": null, "url": null, "code": null, "user": "abidlabs", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/kingabzpro/Real-Time-RAG", "href": null, "resource": { "type": "space", "id": "kingabzpro/Real-Time-RAG", "discussionNum": null }, "url": "https://huggingface.co/spaces/kingabzpro/Real-Time-RAG", "code": null, "user": null, "label": null, "lang": null } ]
How can I make my RAG application generate real-time responses? Up until now, I have been using Groq for fast LLM generation and the Gradio Live function. I am looking for a better solution that can help me build a real-time application without any delay. @abidlabs https://huggingface.co/spaces/kingabzpro/Real-Time-RAG
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F603945d6db430f160dced222%2FRf3ChIRWR8eBi7sEVgl4s.png", "fullname": "Abid Ali Awan", "name": "kingabzpro", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 29, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F603945d6db430f160dced222%2F_zAZbK81qxbj7bIufhCqm.gif" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1621947938344-noauth.png", "fullname": "Abubakar Abid", "name": "abidlabs", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 487 } ]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "bitdeep", "pryanshusharma", "Jawaher786", "AtAndDev", "DataSoul", "privategeek24", "kingabzpro" ], "count": 8 } ]
2024-09-07T08:57:26.000Z
2024-09-09T15:34:38.343Z
[ { "avatarUrl": "/avatars/cec7d06fd895a347b742baea8a90d224.svg", "fullname": "Donald", "name": "SVHawk13", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F603945d6db430f160dced222%2FRf3ChIRWR8eBi7sEVgl4s.png", "fullname": "Abid Ali Awan", "name": "kingabzpro", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 29, "isFollowing": false } ]
/posts/kingabzpro/927811517468266
1,838
2
456542013174124
[ { "type": "text", "value": "Google's Chain-of-Thought (CoT) is one of the most effective ways to improve LLMs' reasoning.", "raw": "Google's Chain-of-Thought (CoT) is one of the most effective ways to improve LLMs' reasoning.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Researchers have now developed a novel approach called Strategic Chain-of-Thought (SCoT) to enhance the reasoning capabilities of large language models even further.", "raw": "Researchers have now developed a novel approach called Strategic Chain-of-Thought (SCoT) to enhance the reasoning capabilities of large language models even further.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿง  SCoT uses a two-stage process within a single prompt:", "raw": "๐Ÿง  SCoT uses a two-stage process within a single prompt:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Strategy Elicitation: The model first identifies and determines an effective problem-solving strategy for the given task. This becomes the strategic knowledge that guides the reasoning process.", "raw": "- Strategy Elicitation: The model first identifies and determines an effective problem-solving strategy for the given task. This becomes the strategic knowledge that guides the reasoning process.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Strategy Application: The model then applies the identified strategic knowledge to solve the problem and generate the final answer.", "raw": "- Strategy Application: The model then applies the identified strategic knowledge to solve the problem and generate the final answer.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Essentially, SCoT integrates strategic knowledge to guide reasoning without relying on external knowledge sources or multiple queries.", "raw": "Essentially, SCoT integrates strategic knowledge to guide reasoning without relying on external knowledge sources or multiple queries.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "According to the research, SCoT showed significant improvements over standard CoT across various datasets, including a 21.05% increase on the GSM8K math dataset and a 24.13% increase on the Tracking_Objects spatial reasoning task.", "raw": "According to the research, SCoT showed significant improvements over standard CoT across various datasets, including a 21.05% increase on the GSM8K math dataset and a 24.13% increase on the Tracking_Objects spatial reasoning task.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Changes in the Prompt Structure:", "raw": "Changes in the Prompt Structure:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The SCoT prompt typically consists of five components:", "raw": "The SCoT prompt typically consists of five components:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Role: Defines the expert role the model should assume.", "raw": "- Role: Defines the expert role the model should assume.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Workflow: Outlines the steps for strategy identification and application.", "raw": "- Workflow: Outlines the steps for strategy identification and application.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Rules: Specifies guidelines for generating answers.", "raw": "- Rules: Specifies guidelines for generating answers.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Initialization: Sets up the task.", "raw": "- Initialization: Sets up the task.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Task Input: Provides the specific problem to solve.", "raw": "- Task Input: Provides the specific problem to solve.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Strategy Generation:", "raw": "Strategy Generation:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The model is prompted to generate strategic knowledge relevant to the problem domain. For example, in mathematics, it might favor elegant solutions like using arithmetic series formulas over brute-force calculations.", "raw": "The model is prompted to generate strategic knowledge relevant to the problem domain. For example, in mathematics, it might favor elegant solutions like using arithmetic series formulas over brute-force calculations.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Guided Reasoning:", "raw": "Guided Reasoning:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Using the elicited strategy, the model then generates a chain-of-thought reasoning path. This approach aims to produce more stable and higher-quality outputs compared to standard chain-of-thought methods.", "raw": "Using the elicited strategy, the model then generates a chain-of-thought reasoning path. This approach aims to produce more stable and higher-quality outputs compared to standard chain-of-thought methods.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Read the full paper: ", "raw": "Read the full paper: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://arxiv.org/abs/2409.03271", "href": "https://arxiv.org/abs/2409.03271", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Google's Chain-of-Thought (CoT) is one of the most effective ways to improve LLMs' reasoning. Researchers have now developed a novel approach called Strategic Chain-of-Thought (SCoT) to enhance the reasoning capabilities of large language models even further. ๐Ÿง  SCoT uses a two-stage process within a single prompt: - Strategy Elicitation: The model first identifies and determines an effective problem-solving strategy for the given task. This becomes the strategic knowledge that guides the reasoning process. - Strategy Application: The model then applies the identified strategic knowledge to solve the problem and generate the final answer. Essentially, SCoT integrates strategic knowledge to guide reasoning without relying on external knowledge sources or multiple queries. According to the research, SCoT showed significant improvements over standard CoT across various datasets, including a 21.05% increase on the GSM8K math dataset and a 24.13% increase on the Tracking_Objects spatial reasoning task. Changes in the Prompt Structure: The SCoT prompt typically consists of five components: - Role: Defines the expert role the model should assume. - Workflow: Outlines the steps for strategy identification and application. - Rules: Specifies guidelines for generating answers. - Initialization: Sets up the task. - Task Input: Provides the specific problem to solve. Strategy Generation: The model is prompted to generate strategic knowledge relevant to the problem domain. For example, in mathematics, it might favor elegant solutions like using arithmetic series formulas over brute-force calculations. Guided Reasoning: Using the elicited strategy, the model then generates a chain-of-thought reasoning path. This approach aims to produce more stable and higher-quality outputs compared to standard chain-of-thought methods. Read the full paper: https://arxiv.org/abs/2409.03271
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FWXYLnjjJ4SROkoveIi7If.png", "fullname": "Kuldeep Singh Sidhu", "name": "singhsidhukuldeep", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 219, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2F-sAeV04eQKJUXbJXGLrJ0.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "bitdeep", "den0620", "Jawaher786", "createtheimaginable", "ChuGyouk", "Norod78" ], "count": 7 }, { "reaction": "๐Ÿ”ฅ", "users": [ "createtheimaginable", "DIvAndrey" ], "count": 2 } ]
2024-09-07T08:05:35.000Z
2024-09-08T17:53:33.344Z
[ { "avatarUrl": "/avatars/1f7026c98fa415c088c65ec8a65c9b60.svg", "fullname": "Adrian Murat Ozdemir", "name": "muratowski", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3, "isFollowing": false } ]
/posts/singhsidhukuldeep/456542013174124
1,853
1
326367703139287
[ { "type": "text", "value": "i just made the best 0.5b model to date (again)", "raw": "i just made the best 0.5b model to date (again)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "its name is arco and is ready to fight any 0.5b model at arc challenge", "raw": "its name is arco and is ready to fight any 0.5b model at arc challenge", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/appvoid/arco", "href": null, "resource": { "type": "model", "id": "appvoid/arco", "discussionNum": null }, "url": "https://huggingface.co/appvoid/arco", "code": null, "user": null, "label": null, "lang": null } ]
i just made the best 0.5b model to date (again) its name is arco and is ready to fight any 0.5b model at arc challenge https://huggingface.co/appvoid/arco
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62a813dedbb9e28866a91b27%2Fzs-RWFuXs17IfPUhxQaei.jpeg", "fullname": "appvoid", "name": "appvoid", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 35, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F62a813dedbb9e28866a91b27%2F7QIK7iyY-wXpprlHwqbqv.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "Tonic", "AtAndDev", "louisbrulenaudet" ], "count": 4 }, { "reaction": "๐Ÿ”ฅ", "users": [ "nicolollo", "AtAndDev", "TobDeBer", "cnmoro" ], "count": 4 } ]
2024-09-06T23:59:50.000Z
2024-09-09T11:26:03.415Z
[]
/posts/appvoid/326367703139287
1,281
6
460338189482262
[ { "type": "text", "value": "Reposting from twitter:", "raw": "Reposting from twitter:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Just so you all know, I'll be on vacation for the following two weeks and away from home! I'm hoping to get on at least once a day to load up some quants, but I won't be as bleeding edge and on the ball :) feel free to shoot me a message if you see one I should make!", "raw": "Just so you all know, I'll be on vacation for the following two weeks and away from home! I'm hoping to get on at least once a day to load up some quants, but I won't be as bleeding edge and on the ball :) feel free to shoot me a message if you see one I should make!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "In the meantime if you need something bleeding edge make sure to check out ", "raw": "In the meantime if you need something bleeding edge make sure to check out ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@MaziyarPanahi", "href": null, "resource": null, "url": null, "code": null, "user": "MaziyarPanahi", "label": null, "lang": null }, { "type": "text", "value": " or ", "raw": " or ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@bullerwins", "href": null, "resource": null, "url": null, "code": null, "user": "bullerwins", "label": null, "lang": null }, { "type": "text", "value": " who both put out great work!", "raw": " who both put out great work!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Reposting from twitter: Just so you all know, I'll be on vacation for the following two weeks and away from home! I'm hoping to get on at least once a day to load up some quants, but I won't be as bleeding edge and on the ball :) feel free to shoot me a message if you see one I should make! In the meantime if you need something bleeding edge make sure to check out @MaziyarPanahi or @bullerwins who both put out great work!
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6435718aaaef013d1aec3b8b%2FXKf-8MA47tjVAM6SCX0MP.jpeg", "fullname": "Bartowski", "name": "bartowski", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 2816, "isFollowing": false }
[]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65cccccefb8ab7fcc2c6424c%2F0dlk5hmzNhTWr8j9E1DXP.jpeg", "fullname": "Rodri Mora", "name": "bullerwins", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 53 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5fd5e18a90b6dc4633f6d292%2FgZXHW5dd9R86AV9LMZ--y.png", "fullname": "Maziyar Panahi", "name": "MaziyarPanahi", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 1541 } ]
[ { "reaction": "โค๏ธ", "users": [ "Joseph717171", "syrupsweety", "not-lain", "bullerwins", "MaziyarPanahi", "MarinaraSpaghetti", "osanseviero", "Presidentlin", "victor", "JoeySalmons", "celsowm", "hudzax", "win10" ], "count": 13 }, { "reaction": "๐Ÿ˜Ž", "users": [ "Joseph717171", "John6666", "AIGUYCONTENT", "MaziyarPanahi", "osanseviero", "neoopus" ], "count": 6 }, { "reaction": "๐Ÿ”ฅ", "users": [ "Joseph717171", "MaziyarPanahi" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "Kynesyn" ], "count": 1 } ]
2024-09-06T23:29:41.000Z
2024-10-16T21:11:46.753Z
[ { "avatarUrl": "/avatars/ea4398745974d781ae9dc0e95b12cabe.svg", "fullname": "Joseph", "name": "Joseph717171", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 22, "isFollowing": false }, { "avatarUrl": "/avatars/99a24b1d41e468fed0eca43545090284.svg", "fullname": "Walter Lima", "name": "waltervix", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false }, { "avatarUrl": "/avatars/ae2b8b99b8c9d2b8a2db454806e1f5d9.svg", "fullname": "Tim Kyn", "name": "Kynesyn", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/avatars/df614f21f59bc6e4d1f934169e4aec99.svg", "fullname": "Andre ", "name": "Gigahardglob", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/bartowski/460338189482262
31,613
4
445706346542195
[ { "type": "text", "value": "FLUX Prompt Generator Updates", "raw": "FLUX Prompt Generator Updates", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/gokaygokay/FLUX-Prompt-Generator", "href": null, "resource": { "type": "space", "id": "gokaygokay/FLUX-Prompt-Generator", "discussionNum": null }, "url": "https://huggingface.co/spaces/gokaygokay/FLUX-Prompt-Generator", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- There are now hundreds of new selections across diverse categories, each offering a lot of choices:", "raw": "- There are now hundreds of new selections across diverse categories, each offering a lot of choices:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Architecture, Art, Artist, Brands, Character, Cinematic, Fashion, Feelings, Geography, Human, Interaction, Keywords, Objects, People, Photography, Plots, Poses, Scene, Science, Stuff, Time, Typography, Vehicle, Video Game", "raw": "Architecture, Art, Artist, Brands, Character, Cinematic, Fashion, Feelings, Geography, Human, Interaction, Keywords, Objects, People, Photography, Plots, Poses, Scene, Science, Stuff, Time, Typography, Vehicle, Video Game", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- In addition to Hugging Face, I've integrated new LLM providers: Groq, OpenAI, and Claude.", "raw": "- In addition to Hugging Face, I've integrated new LLM providers: Groq, OpenAI, and Claude.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Upgraded Vision Language Models (VLMs): We now feature Qwen2-VL, JoyCaption and Florence-2-large.", "raw": "- Upgraded Vision Language Models (VLMs): We now feature Qwen2-VL, JoyCaption and Florence-2-large.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- New specialized system prompts for various styles and themes, including Happy, Simple, Poster, Only Objects, No Figure, Landscape, Fantasy.", "raw": "- New specialized system prompts for various styles and themes, including Happy, Simple, Poster, Only Objects, No Figure, Landscape, Fantasy.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
FLUX Prompt Generator Updates - https://huggingface.co/spaces/gokaygokay/FLUX-Prompt-Generator - There are now hundreds of new selections across diverse categories, each offering a lot of choices: Architecture, Art, Artist, Brands, Character, Cinematic, Fashion, Feelings, Geography, Human, Interaction, Keywords, Objects, People, Photography, Plots, Poses, Scene, Science, Stuff, Time, Typography, Vehicle, Video Game - In addition to Hugging Face, I've integrated new LLM providers: Groq, OpenAI, and Claude. - Upgraded Vision Language Models (VLMs): We now feature Qwen2-VL, JoyCaption and Florence-2-large. - New specialized system prompts for various styles and themes, including Happy, Simple, Poster, Only Objects, No Figure, Landscape, Fantasy.
{ "avatarUrl": "/avatars/b9a6d8e11ec7a62ca2b819e0b6c37222.svg", "fullname": "gokay aydogan", "name": "gokaygokay", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 1130, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F630899601dd1e3075d975785%2Fu_IZ43q0247UaH2_LK07W.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F630899601dd1e3075d975785%2F6MVx_ctCbmMXRdF2Dfmx6.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F630899601dd1e3075d975785%2F8V-yOsc-8v9MDOIDEo0IA.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F630899601dd1e3075d975785%2F1XKyGghgMJ2y3y2s_SRT1.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F630899601dd1e3075d975785%2FvdKrZg5_vWetRUnU0iQEg.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F630899601dd1e3075d975785%2FlqNCplC-A4mIXZlMIFP8A.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F630899601dd1e3075d975785%2FmndIHcOBYswRlUv4gUCtg.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F630899601dd1e3075d975785%2FTl-jreh1SGZeCJf6Csb46.png" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "John6666", "YaTharThShaRma999", "KingNish", "ucsahin", "Chief-Inspector", "EmilyChan", "victor", "Felladrin", "Nyxie7", "gokaygokay" ], "count": 10 }, { "reaction": "๐Ÿค—", "users": [ "zohebk" ], "count": 1 } ]
2024-09-06T22:10:36.000Z
2024-10-16T22:23:51.435Z
[ { "avatarUrl": "/avatars/e61f8d637223b476bcafe96945b552e1.svg", "fullname": "hashed albaham", "name": "Hashed000", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/gokaygokay/445706346542195
6,436
1
861996108790591
[ { "type": "text", "value": "Yesterday ย ", "raw": "Yesterday ย ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@mattshumer", "href": null, "resource": null, "url": null, "code": null, "user": "mattshumer", "label": null, "lang": null }, { "type": "text", "value": " released ", "raw": " released ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B", "href": null, "resource": { "type": "model", "id": "mattshumer/Reflection-Llama-3.1-70B", "discussionNum": null }, "url": "https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": ", an impressive model that achieved incredible results in benchmarks like MMLU. The model was fine-tuned using Reflection-Tuning and the dataset used wasn't released, but I created a small recipe with distilabel that allows generating a dataset with a similar output format:", "raw": ", an impressive model that achieved incredible results in benchmarks like MMLU. The model was fine-tuned using Reflection-Tuning and the dataset used wasn't released, but I created a small recipe with distilabel that allows generating a dataset with a similar output format:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "1. We use MagPie ๐Ÿฆ in combination with ", "raw": "1. We use MagPie ๐Ÿฆ in combination with ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct", "href": "https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " to generate reasoning instructions.", "raw": " to generate reasoning instructions.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "2. We generate a response again using ", "raw": "2. We generate a response again using ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct", "href": "https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": ", but we steer the LLM to generate an specific output format using a custom system prompt. In the system prompt, we instruct the LLM that it will have first to think ๐Ÿ’ญ and have reflections that will help resolving ambiguities. After that, we instruct the LLM to generate an output based on the previous thinking ", "raw": ", but we steer the LLM to generate an specific output format using a custom system prompt. In the system prompt, we instruct the LLM that it will have first to think ๐Ÿ’ญ and have reflections that will help resolving ambiguities. After that, we instruct the LLM to generate an output based on the previous thinking ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "In this dataset ", "raw": "In this dataset ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/gabrielmbmb/distilabel-reflection-tuning", "href": null, "resource": { "type": "dataset", "id": "gabrielmbmb/distilabel-reflection-tuning", "discussionNum": null }, "url": "https://huggingface.co/datasets/gabrielmbmb/distilabel-reflection-tuning", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " you can found 5 rows that I generated with this recipe. You can also found the code of the pipeline in the file called ", "raw": " you can found 5 rows that I generated with this recipe. You can also found the code of the pipeline in the file called ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`reflection.py`", "href": null, "resource": null, "url": null, "code": "reflection.py", "user": null, "label": null, "lang": null }, { "type": "text", "value": ".", "raw": ".", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Yesterday ย @mattshumer released https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B, an impressive model that achieved incredible results in benchmarks like MMLU. The model was fine-tuned using Reflection-Tuning and the dataset used wasn't released, but I created a small recipe with distilabel that allows generating a dataset with a similar output format: 1. We use MagPie ๐Ÿฆ in combination with https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct to generate reasoning instructions. 2. We generate a response again using https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct, but we steer the LLM to generate an specific output format using a custom system prompt. In the system prompt, we instruct the LLM that it will have first to think ๐Ÿ’ญ and have reflections that will help resolving ambiguities. After that, we instruct the LLM to generate an output based on the previous thinking In this dataset https://huggingface.co/datasets/gabrielmbmb/distilabel-reflection-tuning you can found 5 rows that I generated with this recipe. You can also found the code of the pipeline in the file called `reflection.py`.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F60f2fc91b92afccb7c34b8ed%2FwhF6nGtyTAhbtiWJJnL9e.png", "fullname": "Gabriel Martรญn Blรกzquez", "name": "gabrielmbmb", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 90, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F60f2fc91b92afccb7c34b8ed%2FUz2Yc6O5J-PL7JZsin3cs.png" } ]
[ { "avatarUrl": "/avatars/821175d73c2ae3ceb28d445963c95722.svg", "fullname": "Matt Shumer", "name": "mattshumer", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 345 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "clem", "prithivMLmods", "KonradSzafer", "Svngoku", "John6666", "den0620", "osanseviero", "gabrielmbmb", "louisbrulenaudet" ], "count": 9 }, { "reaction": "โค๏ธ", "users": [ "clem", "osanseviero" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "dashfunnydashdash" ], "count": 1 } ]
2024-09-06T16:42:53.000Z
2024-09-06T16:42:53.578Z
[]
/posts/gabrielmbmb/861996108790591
1,792
0
113923089053942
[ { "type": "text", "value": "4 million chess puzzles", "raw": "4 million chess puzzles", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
4 million chess puzzles
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1646492542174-5e70f6048ce3c604d78fe133.jpeg", "fullname": "Christopher Akiki", "name": "christopher", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 68, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F5e70f6048ce3c604d78fe133%2FBA0YvU282s9WEY5zEeMMp.png" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "hemanuelly", "hunken", "Sri-Vigneshwar-DJ", "Tonioesparza", "den0620", "Akash3104" ], "count": 6 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "hemanuelly", "Akash3104" ], "count": 3 } ]
2024-09-06T14:05:58.000Z
2024-09-06T14:05:58.827Z
[]
/posts/christopher/113923089053942
1,271
0
131870164983456
[ { "type": "text", "value": "\"LLM inference at scale with TGI\". Cool blogpost: ", "raw": "\"LLM inference at scale with TGI\". Cool blogpost: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.adyen.com/knowledge-hub/llm-inference-at-scale-with-tgi", "href": "https://www.adyen.com/knowledge-hub/llm-inference-at-scale-with-tgi", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Well done ", "raw": "Well done ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@martinigoyanes", "href": null, "resource": null, "url": null, "code": null, "user": "martinigoyanes", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@rafa-hernandez", "href": null, "resource": null, "url": null, "code": null, "user": "rafa-hernandez", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@Vidusharma", "href": null, "resource": null, "url": null, "code": null, "user": "Vidusharma", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@frisokingma", "href": null, "resource": null, "url": null, "code": null, "user": "frisokingma", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@hannahwright", "href": null, "resource": null, "url": null, "code": null, "user": "hannahwright", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@jeanmarcs", "href": null, "resource": null, "url": null, "code": null, "user": "jeanmarcs", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@antonioramos", "href": null, "resource": null, "url": null, "code": null, "user": "antonioramos", "label": null, "lang": null }, { "type": "text", "value": " & the whole ", "raw": " & the whole ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/adyen", "href": "https://huggingface.co/adyen", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " team. Could be useful to cross-post here: ", "raw": " team. Could be useful to cross-post here: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/blog/community", "href": "https://huggingface.co/blog/community", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
"LLM inference at scale with TGI". Cool blogpost: https://www.adyen.com/knowledge-hub/llm-inference-at-scale-with-tgi Well done @martinigoyanes @rafa-hernandez @Vidusharma @frisokingma @hannahwright @jeanmarcs @antonioramos & the whole https://huggingface.co/adyen team. Could be useful to cross-post here: https://huggingface.co/blog/community
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1583857146757-5e67bdd61009063689407479.jpeg", "fullname": "Clem ๐Ÿค—", "name": "clem", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 1763, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F5e67bdd61009063689407479%2F85OdIpyc0cSmcqBLEhaJN.png" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F650af18e6554462d261e17d3%2FxesBAU_i3KI3nZMe58Vxe.jpeg", "fullname": "Antonio Ramos", "name": "antonioramos", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1 }, { "avatarUrl": "/avatars/a783c959e600b04bf2de8037d074ec70.svg", "fullname": "Friso Kingma", "name": "frisokingma", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2 }, { "avatarUrl": "/avatars/75be3faf1def47be6b3f526752de8206.svg", "fullname": "Hannah Wright", "name": "hannahwright", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2 }, { "avatarUrl": "/avatars/10be1afd9299f52d4d08b952c0c22e5b.svg", "fullname": "Jean-Marc Saad", "name": "jeanmarcs", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65de001d6a6643b02251fd2a%2F8YaiGgRzkOG6WAsY-ny-t.jpeg", "fullname": "Martin Iglesias Goyanes", "name": "martinigoyanes", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3 }, { "avatarUrl": "/avatars/f8ab4c515e720b8601d83b80376d66df.svg", "fullname": "Rafael Hernandez Murcia", "name": "rafa-hernandez", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65ca09a097971388a5371284%2FxBvVTfFOE5n46phf0EBx6.png", "fullname": "Viddy", "name": "Vidusharma", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "vilarin", "KingNish", "victor", "prithivMLmods", "nbroad" ], "count": 5 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "nbroad", "den0620" ], "count": 3 } ]
2024-09-06T13:59:33.000Z
2024-09-06T15:51:14.656Z
[ { "avatarUrl": "/avatars/f8ab4c515e720b8601d83b80376d66df.svg", "fullname": "Rafael Hernandez Murcia", "name": "rafa-hernandez", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65de001d6a6643b02251fd2a%2F8YaiGgRzkOG6WAsY-ny-t.jpeg", "fullname": "Martin Iglesias Goyanes", "name": "martinigoyanes", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3, "isFollowing": false } ]
/posts/clem/131870164983456
1,755
2
696626368581978
[ { "type": "text", "value": "๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง ", "raw": "๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check out this app where you convert:", "raw": "Check out this app where you convert:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Pytorch/tensorflow summary -> required VRAM", "raw": "Pytorch/tensorflow summary -> required VRAM", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "or", "raw": "or", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Parameter count -> required VRAM", "raw": "Parameter count -> required VRAM", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Use it in: ", "raw": "Use it in: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "http://howmuchvram.com", "href": "http://howmuchvram.com", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "And everything is open source! Ask for new functionalities or contribute in:", "raw": "And everything is open source! Ask for new functionalities or contribute in:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/AlexBodner/How_Much_VRAM", "href": "https://github.com/AlexBodner/How_Much_VRAM", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful!", "raw": "If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "More discussion in: ", "raw": "More discussion in: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/AlexBodner_/status/1832054850294812679", "href": "https://x.com/AlexBodner_/status/1832054850294812679", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง  Check out this app where you convert: Pytorch/tensorflow summary -> required VRAM or Parameter count -> required VRAM Use it in: http://howmuchvram.com And everything is open source! Ask for new functionalities or contribute in: https://github.com/AlexBodner/How_Much_VRAM If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful! More discussion in: https://x.com/AlexBodner_/status/1832054850294812679
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F658880d499ed106ac888dd7a%2FwMv9-ZsJUw4QQnld_cci7.jpeg", "fullname": "Alexander Dylan Bodner", "name": "AlexBodner", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 28, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F658880d499ed106ac888dd7a%2F3pE5_tB4Q4LBtj8AZklJQ.mp4" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-06T13:55:12.000Z
2024-09-06T13:55:12.397Z
[]
/posts/AlexBodner/696626368581978
360
0
607838594248861
[ { "type": "text", "value": " I've been working on a Space to make it super easy to create notebooks and help users quickly understand and manipulate their data!", "raw": " I've been working on a Space to make it super easy to create notebooks and help users quickly understand and manipulate their data!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "With just a few clicks automatically generate notebooks for:", "raw": "With just a few clicks automatically generate notebooks for:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ“Š Exploratory Data Analysis", "raw": "๐Ÿ“Š Exploratory Data Analysis", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿง  Text Embeddings", "raw": "๐Ÿง  Text Embeddings", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค– Retrieval-Augmented Generation (RAG) ", "raw": "๐Ÿค– Retrieval-Augmented Generation (RAG) ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โœจ Automatic training is coming soon!", "raw": "โœจ Automatic training is coming soon!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check it out here ", "raw": "Check it out here ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/asoria/auto-notebook-creator", "href": null, "resource": { "type": "space", "id": "asoria/auto-notebook-creator", "discussionNum": null }, "url": "https://huggingface.co/spaces/asoria/auto-notebook-creator", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Appreciate any feedback to improve this tool ๐Ÿค—", "raw": "Appreciate any feedback to improve this tool ๐Ÿค—", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
I've been working on a Space to make it super easy to create notebooks and help users quickly understand and manipulate their data! With just a few clicks automatically generate notebooks for: ๐Ÿ“Š Exploratory Data Analysis ๐Ÿง  Text Embeddings ๐Ÿค– Retrieval-Augmented Generation (RAG) โœจ Automatic training is coming soon! Check it out here https://huggingface.co/spaces/asoria/auto-notebook-creator Appreciate any feedback to improve this tool ๐Ÿค—
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1674055965173-noauth.jpeg", "fullname": "Andrea Soria", "name": "asoria", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 61, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "Saugatkafley", "jmamedov", "AtAndDev" ], "count": 4 }, { "reaction": "๐Ÿคฏ", "users": [ "davanstrien" ], "count": 1 } ]
2024-09-06T13:28:59.000Z
2024-09-06T13:28:59.576Z
[]
/posts/asoria/607838594248861
816
0
850395082965136
[ { "type": "text", "value": "๐ŸŒŸ Argilla v2.1.0 goes multi-modal: Image Field, Dark Mode, Enhanched Hugging Face Hub imports and more!", "raw": "๐ŸŒŸ Argilla v2.1.0 goes multi-modal: Image Field, Dark Mode, Enhanched Hugging Face Hub imports and more!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ–ผ Image Field: Seamlessly work with multimodal datasets", "raw": "๐Ÿ–ผ Image Field: Seamlessly work with multimodal datasets", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸŒ“ Dark Mode: Reduce eye strain with our sleek new look", "raw": "๐ŸŒ“ Dark Mode: Reduce eye strain with our sleek new look", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค— Enhanced Hugging Face Hub import with the SDK", "raw": "๐Ÿค— Enhanced Hugging Face Hub import with the SDK", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ‡ช๐Ÿ‡ธ Spanish UI: Breaking language barriers", "raw": "๐Ÿ‡ช๐Ÿ‡ธ Spanish UI: Breaking language barriers", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Plus more improvements to supercharge your model curation workflow!", "raw": "Plus more improvements to supercharge your model curation workflow!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check out the full announcement for details and code examples: ", "raw": "Check out the full announcement for details and code examples: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/argilla-io/argilla/compare/v2.0.1...v2.1.0", "href": "https://github.com/argilla-io/argilla/compare/v2.0.1...v2.1.0", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐ŸŒŸ Argilla v2.1.0 goes multi-modal: Image Field, Dark Mode, Enhanched Hugging Face Hub imports and more! ๐Ÿ–ผ Image Field: Seamlessly work with multimodal datasets ๐ŸŒ“ Dark Mode: Reduce eye strain with our sleek new look ๐Ÿค— Enhanced Hugging Face Hub import with the SDK ๐Ÿ‡ช๐Ÿ‡ธ Spanish UI: Breaking language barriers Plus more improvements to supercharge your model curation workflow! Check out the full announcement for details and code examples: https://github.com/argilla-io/argilla/compare/v2.0.1...v2.1.0
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1677141720071-634ff41ff32062e9eb7b06a3.jpeg", "fullname": "David Berenstein", "name": "davidberenstein1957", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 167, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ˜Ž", "users": [ "davanstrien", "gabrielmbmb", "dvilasuero", "Ameeeee", "louisbrulenaudet", "clem", "John6666", "KingNish", "AtAndDev" ], "count": 9 }, { "reaction": "๐Ÿš€", "users": [ "gabrielmbmb", "dvilasuero", "Ameeeee", "clem", "AtAndDev" ], "count": 5 }, { "reaction": "๐Ÿ”ฅ", "users": [ "Ameeeee", "clem", "KingNish", "AtAndDev", "gabrielmbmb" ], "count": 5 } ]
2024-09-06T12:21:30.000Z
2024-09-06T12:21:30.539Z
[]
/posts/davidberenstein1957/850395082965136
1,821
0
206518965814889
[ { "type": "text", "value": "Wanted to train a FLUX model using out-of-copyright images, so I curated concept art images from NASA. ", "raw": "Wanted to train a FLUX model using out-of-copyright images, so I curated concept art images from NASA. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Model: ", "raw": "Model: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/davanstrien/nasa_concept_art", "href": "https://huggingface.co/davanstrien/nasa_concept_art", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Dataset: ", "raw": "Dataset: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/davanstrien/nasa_concept_art", "href": null, "resource": { "type": "dataset", "id": "davanstrien/nasa_concept_art", "discussionNum": null }, "url": "https://huggingface.co/datasets/davanstrien/nasa_concept_art", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "So far, training was done without captions, but I'm experimenting with using VLLMs to generate captions to see if that improves the model.", "raw": "So far, training was done without captions, but I'm experimenting with using VLLMs to generate captions to see if that improves the model.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Wanted to train a FLUX model using out-of-copyright images, so I curated concept art images from NASA. Model: https://huggingface.co/davanstrien/nasa_concept_art Dataset: https://huggingface.co/datasets/davanstrien/nasa_concept_art So far, training was done without captions, but I'm experimenting with using VLLMs to generate captions to see if that improves the model.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1627505688463-60107b385ac3e86b3ea4fc34.jpeg", "fullname": "Daniel van Strien", "name": "davanstrien", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 410, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 }, { "reaction": "โค๏ธ", "users": [ "louisbrulenaudet" ], "count": 1 } ]
2024-09-06T11:41:40.000Z
2024-09-06T11:41:40.962Z
[]
/posts/davanstrien/206518965814889
435
0
374226305257230
[ { "type": "text", "value": "๐Ÿคฏ ๐—” ๐—ป๐—ฒ๐˜„ ๐Ÿณ๐Ÿฌ๐—• ๐—ผ๐—ฝ๐—ฒ๐—ป-๐˜„๐—ฒ๐—ถ๐—ด๐—ต๐˜๐˜€ ๐—Ÿ๐—Ÿ๐—  ๐—ฏ๐—ฒ๐—ฎ๐˜๐˜€ ๐—–๐—น๐—ฎ๐˜‚๐—ฑ๐—ฒ-๐Ÿฏ.๐Ÿฑ-๐—ฆ๐—ผ๐—ป๐—ป๐—ฒ๐˜ ๐—ฎ๐—ป๐—ฑ ๐—š๐—ฃ๐—ง-๐Ÿฐ๐—ผ!", "raw": "๐Ÿคฏ ๐—” ๐—ป๐—ฒ๐˜„ ๐Ÿณ๐Ÿฌ๐—• ๐—ผ๐—ฝ๐—ฒ๐—ป-๐˜„๐—ฒ๐—ถ๐—ด๐—ต๐˜๐˜€ ๐—Ÿ๐—Ÿ๐—  ๐—ฏ๐—ฒ๐—ฎ๐˜๐˜€ ๐—–๐—น๐—ฎ๐˜‚๐—ฑ๐—ฒ-๐Ÿฏ.๐Ÿฑ-๐—ฆ๐—ผ๐—ป๐—ป๐—ฒ๐˜ ๐—ฎ๐—ป๐—ฑ ๐—š๐—ฃ๐—ง-๐Ÿฐ๐—ผ!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@mattshumer", "href": null, "resource": null, "url": null, "code": null, "user": "mattshumer", "label": null, "lang": null }, { "type": "text", "value": ", CEO from Hyperwrite AI, had an idea he wanted to try out: why not fine-tune LLMs to always output their thoughts in specific parts, delineated by <thinking> tags?", "raw": ", CEO from Hyperwrite AI, had an idea he wanted to try out: why not fine-tune LLMs to always output their thoughts in specific parts, delineated by <thinking> tags?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Even better: inside of that, you could nest other sections, to reflect critically on previous output. Letโ€™s name this part <reflection>. Planning is also put in a separate step.", "raw": "Even better: inside of that, you could nest other sections, to reflect critically on previous output. Letโ€™s name this part <reflection>. Planning is also put in a separate step.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "He named the method โ€œReflection tuningโ€ and set out to fine-tune a Llama-3.1-70B with it.", "raw": "He named the method โ€œReflection tuningโ€ and set out to fine-tune a Llama-3.1-70B with it.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Well it turns out, it works mind-boggingly well!", "raw": "Well it turns out, it works mind-boggingly well!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿคฏ Reflection-70B beats GPT-4o, Sonnet-3.5, and even the much bigger Llama-3.1-405B!", "raw": "๐Ÿคฏ Reflection-70B beats GPT-4o, Sonnet-3.5, and even the much bigger Llama-3.1-405B!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐—ง๐—Ÿ;๐——๐—ฅ", "raw": "๐—ง๐—Ÿ;๐——๐—ฅ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸฅŠ This new 70B open-weights model beats GPT-4o, Claude Sonnet, et al.", "raw": "๐ŸฅŠ This new 70B open-weights model beats GPT-4o, Claude Sonnet, et al.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โฐ 405B in training, coming soon", "raw": "โฐ 405B in training, coming soon", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ“š Report coming next week", "raw": "๐Ÿ“š Report coming next week", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โš™๏ธ Uses GlaiveAI synthetic data", "raw": "โš™๏ธ Uses GlaiveAI synthetic data", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค— Available on HF!", "raw": "๐Ÿค— Available on HF!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Iโ€™m starting an Inference Endpoint right now for this model to give it a spin!", "raw": "Iโ€™m starting an Inference Endpoint right now for this model to give it a spin!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check it out ๐Ÿ‘‰ ", "raw": "Check it out ๐Ÿ‘‰ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B", "href": null, "resource": { "type": "model", "id": "mattshumer/Reflection-Llama-3.1-70B", "discussionNum": null }, "url": "https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B", "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿคฏ ๐—” ๐—ป๐—ฒ๐˜„ ๐Ÿณ๐Ÿฌ๐—• ๐—ผ๐—ฝ๐—ฒ๐—ป-๐˜„๐—ฒ๐—ถ๐—ด๐—ต๐˜๐˜€ ๐—Ÿ๐—Ÿ๐—  ๐—ฏ๐—ฒ๐—ฎ๐˜๐˜€ ๐—–๐—น๐—ฎ๐˜‚๐—ฑ๐—ฒ-๐Ÿฏ.๐Ÿฑ-๐—ฆ๐—ผ๐—ป๐—ป๐—ฒ๐˜ ๐—ฎ๐—ป๐—ฑ ๐—š๐—ฃ๐—ง-๐Ÿฐ๐—ผ! @mattshumer, CEO from Hyperwrite AI, had an idea he wanted to try out: why not fine-tune LLMs to always output their thoughts in specific parts, delineated by <thinking> tags? Even better: inside of that, you could nest other sections, to reflect critically on previous output. Letโ€™s name this part <reflection>. Planning is also put in a separate step. He named the method โ€œReflection tuningโ€ and set out to fine-tune a Llama-3.1-70B with it. Well it turns out, it works mind-boggingly well! ๐Ÿคฏ Reflection-70B beats GPT-4o, Sonnet-3.5, and even the much bigger Llama-3.1-405B! ๐—ง๐—Ÿ;๐——๐—ฅ ๐ŸฅŠ This new 70B open-weights model beats GPT-4o, Claude Sonnet, et al. โฐ 405B in training, coming soon ๐Ÿ“š Report coming next week โš™๏ธ Uses GlaiveAI synthetic data ๐Ÿค— Available on HF! Iโ€™m starting an Inference Endpoint right now for this model to give it a spin! Check it out ๐Ÿ‘‰ https://huggingface.co/mattshumer/Reflection-Llama-3.1-70B
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2F7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[]
[ { "avatarUrl": "/avatars/821175d73c2ae3ceb28d445963c95722.svg", "fullname": "Matt Shumer", "name": "mattshumer", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 345 } ]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "osanseviero", "DataSoul", "xi0v", "Joseph717171" ], "count": 5 }, { "reaction": "๐Ÿ‘", "users": [ "TahirC", "Yuuru", "iandeanschaefer", "trollek", "Joseph717171" ], "count": 5 }, { "reaction": "๐Ÿค—", "users": [ "louisbrulenaudet", "YaTharThShaRma999", "Joseph717171" ], "count": 3 } ]
2024-09-06T07:40:00.000Z
2024-09-08T19:15:12.418Z
[ { "avatarUrl": "/avatars/1aea33e7602a81f6b6ed98412dda9b41.svg", "fullname": "GR", "name": "gr0010", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63596f9f0cd44992263f2105%2F4CCZECojd7tkbOxMryiww.png", "fullname": "Trolle Karlsson", "name": "trollek", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 18, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F64175bc2b03817ada642291f%2FV3mhc8Y0saSgXbp--2HcE.png", "fullname": "Kh", "name": "raidhon", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 3, "isFollowing": false } ]
/posts/m-ric/374226305257230
1,912
3
626859081137343
[ { "type": "text", "value": "You can create charts, leaderboards, and filters on top of any Hugging Face dataset in less than a minute", "raw": "You can create charts, leaderboards, and filters on top of any Hugging Face dataset in less than a minute", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€ข ASCII Bar Charts ๐Ÿ“Š", "raw": "โ€ข ASCII Bar Charts ๐Ÿ“Š", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€ข Powered by DuckDB WASM โšก", "raw": "โ€ข Powered by DuckDB WASM โšก", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€ข Download results to Parquet ๐Ÿ’ฝ", "raw": "โ€ข Download results to Parquet ๐Ÿ’ฝ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€ข Embed and Share results with friends ๐Ÿ“ฌ", "raw": "โ€ข Embed and Share results with friends ๐Ÿ“ฌ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Do you have any interesting queries?", "raw": "Do you have any interesting queries?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
You can create charts, leaderboards, and filters on top of any Hugging Face dataset in less than a minute โ€ข ASCII Bar Charts ๐Ÿ“Š โ€ข Powered by DuckDB WASM โšก โ€ข Download results to Parquet ๐Ÿ’ฝ โ€ข Embed and Share results with friends ๐Ÿ“ฌ Do you have any interesting queries?
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F648a374f00f7a3374ee64b99%2FYPwSOrronoozwHbJchPn3.jpeg", "fullname": "Caleb Fahlgren", "name": "cfahlgren1", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 123, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F648a374f00f7a3374ee64b99%2FRXUlv9VG9Fmmenw16Ha03.mp4" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "Joseph717171" ], "count": 2 }, { "reaction": "๐Ÿ”ฅ", "users": [ "BrigitteTousi", "Joseph717171" ], "count": 2 } ]
2024-11-19T21:37:22.000Z
2024-11-19T21:37:22.654Z
[]
/posts/cfahlgren1/626859081137343
789
0
322149189838310
[ { "type": "text", "value": "What rank are you on Hugging Face Top Yappers? ๐Ÿ—ฃ๏ธ", "raw": "What rank are you on Hugging Face Top Yappers? ๐Ÿ—ฃ๏ธ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Find your rank here with this link: ", "raw": "Find your rank here with this link: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/cfahlgren1/hub-stats/embed/sql-console/d453ehm", "href": null, "resource": { "type": "dataset", "id": "cfahlgren1/hub-stats", "discussionNum": null }, "url": "https://huggingface.co/datasets/cfahlgren1/hub-stats/embed/sql-console/d453ehm", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The Top 3:", "raw": "The Top 3:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@fdaudens", "href": null, "resource": null, "url": null, "code": null, "user": "fdaudens", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@singhsidhukuldeep", "href": null, "resource": null, "url": null, "code": null, "user": "singhsidhukuldeep", "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@akhaliq", "href": null, "resource": null, "url": null, "code": null, "user": "akhaliq", "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I am at #71 and need to get my numbers up! ๐Ÿ“ˆ", "raw": "I am at #71 and need to get my numbers up! ๐Ÿ“ˆ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
What rank are you on Hugging Face Top Yappers? ๐Ÿ—ฃ๏ธ Find your rank here with this link: https://huggingface.co/datasets/cfahlgren1/hub-stats/embed/sql-console/d453ehm The Top 3: - @fdaudens - @singhsidhukuldeep - @akhaliq I am at #71 and need to get my numbers up! ๐Ÿ“ˆ
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F648a374f00f7a3374ee64b99%2FYPwSOrronoozwHbJchPn3.jpeg", "fullname": "Caleb Fahlgren", "name": "cfahlgren1", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 123, "isFollowing": false }
[]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1674929746905-60f1abe7544c2adfd699860c.jpeg", "fullname": "AK", "name": "akhaliq", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5205 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F647f36a8454af0237bd49574%2FjshkqBUTY-GZL8As8y6Aq.jpeg", "fullname": "Florent Daudens", "name": "fdaudens", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 384 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FWXYLnjjJ4SROkoveIi7If.png", "fullname": "Kuldeep Singh Sidhu", "name": "singhsidhukuldeep", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 219 } ]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "BrigitteTousi" ], "count": 2 } ]
2024-11-19T20:47:25.000Z
2024-11-19T21:52:13.414Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F647f36a8454af0237bd49574%2FjshkqBUTY-GZL8As8y6Aq.jpeg", "fullname": "Florent Daudens", "name": "fdaudens", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 384, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F648a374f00f7a3374ee64b99%2FYPwSOrronoozwHbJchPn3.jpeg", "fullname": "Caleb Fahlgren", "name": "cfahlgren1", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 123, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6527e89a8808d80ccff88b7a%2FCuGNmF1Et8KMQ0mCd1NEJ.jpeg", "fullname": "Lain", "name": "not-lain", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 941, "isFollowing": false } ]
/posts/cfahlgren1/322149189838310
725
4
425816994661570
[ { "type": "text", "value": "Nine years ago, I uploaded the first 8K resolution video to YouTube and I've been stockpiling 8K footage ever since: ", "raw": "Nine years ago, I uploaded the first 8K resolution video to YouTube and I've been stockpiling 8K footage ever since: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.youtube.com/watch?v=sLprVF6d7Ug&t", "href": "https://www.youtube.com/watch?v=sLprVF6d7Ug&t", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Should ", "raw": "Should ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@Overlaiapp", "href": null, "resource": null, "url": null, "code": null, "user": "Overlaiapp", "label": null, "lang": null }, { "type": "text", "value": " release the first open-source 8K video dataset?", "raw": " release the first open-source 8K video dataset?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Could anyone even fine tune a model with this?๐Ÿ˜…", "raw": "Could anyone even fine tune a model with this?๐Ÿ˜…", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Nine years ago, I uploaded the first 8K resolution video to YouTube and I've been stockpiling 8K footage ever since: https://www.youtube.com/watch?v=sLprVF6d7Ug&t Should @Overlaiapp release the first open-source 8K video dataset? Could anyone even fine tune a model with this?๐Ÿ˜…
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2Fnoauth%2FlJZriu6mJCgWkyYpbd4Pe.png", "fullname": "Luke Neumann", "name": "LukeNeumann", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 13, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F673637efb403886c210a588d%2Fdx7X1pJmsrYX8xlpotuA2.jpeg" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2Fnoauth%2FslG0zD_zwpPl4JsvBGWJ-.jpeg", "fullname": "Overlai.ai", "name": "Overlaiapp", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3 } ]
[ { "reaction": "๐Ÿคฏ", "users": [ "victor", "cfahlgren1", "Nymbo", "John6666", "alielfilali01", "ArthurZ" ], "count": 6 }, { "reaction": "๐Ÿ‘€", "users": [ "Innovatix" ], "count": 1 } ]
2024-11-19T18:38:59.000Z
2024-11-20T13:52:30.210Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5f17f0a0925b9863e28ad517%2FX7QKoiXbUtEZSG9jyvfk3.jpeg", "fullname": "Victor Mustar", "name": "victor", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 2607, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F648a374f00f7a3374ee64b99%2FYPwSOrronoozwHbJchPn3.jpeg", "fullname": "Caleb Fahlgren", "name": "cfahlgren1", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 123, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2Fnoauth%2FlJZriu6mJCgWkyYpbd4Pe.png", "fullname": "Luke Neumann", "name": "LukeNeumann", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 13, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65bb837dbfb878f46c77de4c%2FUVtVbF_3rdt0DC8xTkpL1.jpeg", "fullname": "Prithiv Sakthi", "name": "prithivMLmods", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 393, "isFollowing": false } ]
/posts/LukeNeumann/425816994661570
1,159
6
135954244446009
[ { "type": "inline_code", "value": null, "raw": "`huggingface.co/DIBT `", "href": null, "resource": null, "url": null, "code": "huggingface.co/DIBT ", "user": null, "label": null, "lang": null }, { "type": "text", "value": " is dead! ", "raw": " is dead! ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Long live ", "raw": "Long live ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/data-is-better-together", "href": "https://huggingface.co/data-is-better-together", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "! ", "raw": "! ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We're working on some very cool projects so we're doing a bit of tidying of the Data is Better Together Hub org ๐Ÿค“", "raw": "We're working on some very cool projects so we're doing a bit of tidying of the Data is Better Together Hub org ๐Ÿค“", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
`huggingface.co/DIBT ` is dead! Long live https://huggingface.co/data-is-better-together! We're working on some very cool projects so we're doing a bit of tidying of the Data is Better Together Hub org ๐Ÿค“
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1627505688463-60107b385ac3e86b3ea4fc34.jpeg", "fullname": "Daniel van Strien", "name": "davanstrien", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 410, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "cfahlgren1", "not-lain", "anakin87", "mmhamdy", "John6666", "BrigitteTousi", "Niansuh" ], "count": 7 }, { "reaction": "๐Ÿš€", "users": [ "ZennyKenny" ], "count": 1 } ]
2024-11-19T17:03:17.000Z
2024-11-19T17:03:25.450Z
[]
/posts/davanstrien/135954244446009
1,235
0
426240533595170
[ { "type": "text", "value": "Build a collection for the trending demos recently released by the Chinese community ๐Ÿš€ From Qwen2.5 Turbo to FishAgent, see what these models can really do ๐Ÿ”ฅ", "raw": "Build a collection for the trending demos recently released by the Chinese community ๐Ÿš€ From Qwen2.5 Turbo to FishAgent, see what these models can really do ๐Ÿ”ฅ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/collections/zh-ai-community/trending-demo-673b6ca2416a3b3c9d3bf8f1", "href": null, "resource": { "type": "collection", "id": "zh-ai-community/trending-demo-673b6ca2416a3b3c9d3bf8f1", "discussionNum": null }, "url": "https://huggingface.co/collections/zh-ai-community/trending-demo-673b6ca2416a3b3c9d3bf8f1", "code": null, "user": null, "label": null, "lang": null } ]
Build a collection for the trending demos recently released by the Chinese community ๐Ÿš€ From Qwen2.5 Turbo to FishAgent, see what these models can really do ๐Ÿ”ฅ https://huggingface.co/collections/zh-ai-community/trending-demo-673b6ca2416a3b3c9d3bf8f1
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63a369d98c0c89dcae3b8329%2F6OUJ7Hc9T1jXynYH3FGaf.png", "fullname": "Adina Yakefu", "name": "AdinaY", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 240, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63a369d98c0c89dcae3b8329%2FaH-rhktB5eePdONHeWCUe.jpeg" } ]
[]
[ { "reaction": "๐Ÿ˜Ž", "users": [ "Aurelien-Morgan", "YaTharThShaRma999", "John6666", "davanstrien" ], "count": 4 }, { "reaction": "๐Ÿ‘", "users": [ "ijohn07", "ArthurZ" ], "count": 2 } ]
2024-11-19T16:43:41.000Z
2024-11-19T16:44:58.886Z
[]
/posts/AdinaY/426240533595170
977
0
896203713233535
[ { "type": "text", "value": "๐Ÿค– Controlling Computers with Small Models ๐Ÿค–", "raw": "๐Ÿค– Controlling Computers with Small Models ๐Ÿค–", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We just released PTA-1, a fine-tuned Florence-2 for localization of GUI text and elements. It runs with ~150ms inference time on a RTX 4080. This means you can now start building fast on-device computer use agents!", "raw": "We just released PTA-1, a fine-tuned Florence-2 for localization of GUI text and elements. It runs with ~150ms inference time on a RTX 4080. This means you can now start building fast on-device computer use agents!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Model: ", "raw": "Model: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/AskUI/PTA-1", "href": null, "resource": { "type": "model", "id": "AskUI/PTA-1", "discussionNum": null }, "url": "https://huggingface.co/AskUI/PTA-1", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Demo: ", "raw": "Demo: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/AskUI/PTA-1", "href": null, "resource": { "type": "space", "id": "AskUI/PTA-1", "discussionNum": null }, "url": "https://huggingface.co/spaces/AskUI/PTA-1", "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿค– Controlling Computers with Small Models ๐Ÿค– We just released PTA-1, a fine-tuned Florence-2 for localization of GUI text and elements. It runs with ~150ms inference time on a RTX 4080. This means you can now start building fast on-device computer use agents! Model: https://huggingface.co/AskUI/PTA-1 Demo: https://huggingface.co/spaces/AskUI/PTA-1
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6313a26b2c7ffdd9f50187ed%2FMTBOHg2bMcuOMWFLCZ86L.png", "fullname": "Maxi", "name": "maxiw", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 48, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6313a26b2c7ffdd9f50187ed%2F7jSz2WDh8WynY-JqMsXdJ.png" } ]
[]
[ { "reaction": "๐Ÿš€", "users": [ "menesjo", "programmnix-askui", "YaTharThShaRma999", "John6666", "danyalxahidaskui", "AndiAskUI" ], "count": 6 }, { "reaction": "๐Ÿ”ฅ", "users": [ "danyalxahidaskui", "AndiAskUI" ], "count": 2 } ]
2024-11-19T16:32:40.000Z
2024-11-19T16:45:08.774Z
[ { "avatarUrl": "/avatars/70811e7d6e14859dd171034f10c03ebb.svg", "fullname": "Jonas Menesklou", "name": "menesjo", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false } ]
/posts/maxiw/896203713233535
986
1
599494003874806
[ { "type": "text", "value": "๐Ÿšจ How green is your model? ๐ŸŒฑ Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!", "raw": "๐Ÿšจ How green is your model? ๐ŸŒฑ Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ‘‰ ", "raw": "๐Ÿ‘‰ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/open-llm-leaderboard/comparator", "href": null, "resource": { "type": "space", "id": "open-llm-leaderboard/comparator", "discussionNum": null }, "url": "https://huggingface.co/spaces/open-llm-leaderboard/comparator", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Now, you can not only compare models by performance, but also by their environmental footprint!", "raw": "Now, you can not only compare models by performance, but also by their environmental footprint!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸŒ The Comparator calculates COโ‚‚ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... ๐Ÿ› ๏ธ", "raw": "๐ŸŒ The Comparator calculates COโ‚‚ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... ๐Ÿ› ๏ธ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Make informed decisions about your model's impact on the planet and join the movement towards greener AI!", "raw": "Make informed decisions about your model's impact on the planet and join the movement towards greener AI!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿšจ How green is your model? ๐ŸŒฑ Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research! ๐Ÿ‘‰ https://huggingface.co/spaces/open-llm-leaderboard/comparator Now, you can not only compare models by performance, but also by their environmental footprint! ๐ŸŒ The Comparator calculates COโ‚‚ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... ๐Ÿ› ๏ธ Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1606406298765-noauth.jpeg", "fullname": "Albert Villanova del Moral", "name": "albertvillanova", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 196, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F5fbfd09ee366524fe8e97cd3%2FUOFd0edkY9QSS4caFrhhc.png" } ]
[]
[ { "reaction": "๐Ÿค—", "users": [ "emansand", "John6666", "Leiyre", "BrigitteTousi" ], "count": 4 }, { "reaction": "โค๏ธ", "users": [ "prithivMLmods" ], "count": 1 } ]
2024-11-19T14:37:10.000Z
2024-11-19T14:37:10.522Z
[]
/posts/albertvillanova/599494003874806
1,087
0
955268696719189
[ { "type": "text", "value": "Very exciting new ", "raw": "Very exciting new ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411", "href": null, "resource": { "type": "model", "id": "mistralai/Pixtral-Large-Instruct-2411", "discussionNum": null }, "url": "https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " model from Mistral-AI", "raw": " model from Mistral-AI", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Impressive performances, huge congrats ", "raw": "Impressive performances, huge congrats ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@patrickvonplaten", "href": null, "resource": null, "url": null, "code": null, "user": "patrickvonplaten", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@sgvaze", "href": null, "resource": null, "url": null, "code": null, "user": "sgvaze", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@pandora-s", "href": null, "resource": null, "url": null, "code": null, "user": "pandora-s", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@devendrachaplot", "href": null, "resource": null, "url": null, "code": null, "user": "devendrachaplot", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@sophiamyang", "href": null, "resource": null, "url": null, "code": null, "user": "sophiamyang", "label": null, "lang": null }, { "type": "text", "value": " and team!", "raw": " and team!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Very nice to have SOTA Multilingual OCR and Chart understanding in an open-weights model", "raw": "Very nice to have SOTA Multilingual OCR and Chart understanding in an open-weights model", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Very exciting new https://huggingface.co/mistralai/Pixtral-Large-Instruct-2411 model from Mistral-AI Impressive performances, huge congrats @patrickvonplaten @sgvaze @pandora-s @devendrachaplot @sophiamyang and team! Very nice to have SOTA Multilingual OCR and Chart understanding in an open-weights model
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1583857746553-5df7e9e5da6d0311fd3d53f9.jpeg", "fullname": "Thomas Wolf", "name": "thomwolf", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 704, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F5df7e9e5da6d0311fd3d53f9%2FaU0FV1_4j8LHkcTiqaEd8.png" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65143e1c4f08b815c8db57a0%2FJqkwKiJmLFRkH0NK3L8XH.jpeg", "fullname": "Devendra Singh Chaplot", "name": "devendrachaplot", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 62 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F64161701107962562e9b1006%2FO_GB06ni2O4HuvIkp2hpp.png", "fullname": "pandora", "name": "pandora-s", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 27 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1584435275418-5dfcb1aada6d0311fd3d5448.jpeg", "fullname": "Patrick von Platen", "name": "patrickvonplaten", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 546 }, { "avatarUrl": "/avatars/abebd42399decafbccc8579faa34e7d3.svg", "fullname": "Sagar Vaze", "name": "sgvaze", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6320c05a145cfa4c04cb4359%2FjLYLrlc_LZQMi3yCrlfCi.jpeg", "fullname": "Sophia Yang", "name": "sophiamyang", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 102 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "mathiasn1", "John6666", "fblgit", "BrigitteTousi", "jly-dev", "ariG23498", "veeraleto" ], "count": 7 }, { "reaction": "๐Ÿคฏ", "users": [ "PLB", "LukeNeumann" ], "count": 2 } ]
2024-11-19T13:21:29.000Z
2024-11-19T13:21:29.333Z
[]
/posts/thomwolf/955268696719189
1,242
0
320515127274608
[ { "type": "mention", "value": null, "raw": "@Jesse-marqo", "href": null, "resource": null, "url": null, "code": null, "user": "Jesse-marqo", "label": null, "lang": null }, { "type": "text", "value": " and the Marqo team are killing it on the Hub: top embedding models and datasets!", "raw": " and the Marqo team are killing it on the Hub: top embedding models and datasets!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Here's how to start using their new evaluation dataset for curation and labelling:", "raw": "Here's how to start using their new evaluation dataset for curation and labelling:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "1. Deploy Argilla on Spaces: ", "raw": "1. Deploy Argilla on Spaces: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/new-space?template=argilla%2Fargilla-template-space", "href": "https://huggingface.co/new-space?template=argilla%2Fargilla-template-space", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "2. Load ", "raw": "2. Load ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/Marqo/amazon-products-eval", "href": null, "resource": { "type": "dataset", "id": "Marqo/amazon-products-eval", "discussionNum": null }, "url": "https://huggingface.co/datasets/Marqo/amazon-products-eval", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " with the UI wizard.", "raw": " with the UI wizard.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "3. Start curating!", "raw": "3. Start curating!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
@Jesse-marqo and the Marqo team are killing it on the Hub: top embedding models and datasets! Here's how to start using their new evaluation dataset for curation and labelling: 1. Deploy Argilla on Spaces: https://huggingface.co/new-space?template=argilla%2Fargilla-template-space 2. Load https://huggingface.co/datasets/Marqo/amazon-products-eval with the UI wizard. 3. Start curating!
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F60420dccc15e823a685f2b03%2FDn7QTyy9SZ7jKN6xpufVD.png", "fullname": "Daniel Vila", "name": "dvilasuero", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 231, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F60420dccc15e823a685f2b03%2FnDJuExE8KGgYHTgEKboN0.mp4" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6336585c989530756363e1da%2FrG0k2xAYfMEsv6c0DG6Jm.png", "fullname": "Jesse Clark", "name": "Jesse-marqo", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 4 } ]
[ { "reaction": "๐Ÿค—", "users": [ "davidberenstein1957", "John6666", "Leiyre", "cfahlgren1", "Jesse-marqo", "BrigitteTousi" ], "count": 6 }, { "reaction": "๐Ÿš€", "users": [ "davidberenstein1957", "rwightman", "cfahlgren1", "Jesse-marqo", "BrigitteTousi" ], "count": 5 }, { "reaction": "๐Ÿ”ฅ", "users": [ "davidberenstein1957", "Jesse-marqo" ], "count": 2 }, { "reaction": "๐Ÿคฏ", "users": [ "davidberenstein1957" ], "count": 1 } ]
2024-11-19T12:33:25.000Z
2024-11-19T12:33:25.591Z
[]
/posts/dvilasuero/320515127274608
1,004
0
185206507786588
[ { "type": "text", "value": "Sharing what we have built over the course of the weekend at the ", "raw": "Sharing what we have built over the course of the weekend at the ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@llamameta", "href": null, "resource": null, "url": null, "code": null, "user": "llamameta", "label": null, "lang": null }, { "type": "text", "value": " hackathon, by Cerebral Valley in London ๐Ÿ‡ฌ๐Ÿ‡ง ๐Ÿ‘‡", "raw": " hackathon, by Cerebral Valley in London ๐Ÿ‡ฌ๐Ÿ‡ง ๐Ÿ‘‡", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@gabrycina", "href": null, "resource": null, "url": null, "code": null, "user": "gabrycina", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@calebgcc", "href": null, "resource": null, "url": null, "code": null, "user": "calebgcc", "label": null, "lang": null }, { "type": "text", "value": " and I competed with 200+ participants and 50+ teams for a 24-hrs sprint centered around hacking for impact! We focused on applications of robotics to those in need of assisted living, moving our focus to enable greater autonomy and accessibility of robotics in everyday life.", "raw": " and I competed with 200+ participants and 50+ teams for a 24-hrs sprint centered around hacking for impact! We focused on applications of robotics to those in need of assisted living, moving our focus to enable greater autonomy and accessibility of robotics in everyday life.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "complete list of assets ๐Ÿ‘‡", "raw": "complete list of assets ๐Ÿ‘‡", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค— trained robotics policies", "raw": "๐Ÿค— trained robotics policies", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "v1:", "raw": "v1:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/fracapuano/moss-pills", "href": null, "resource": { "type": "model", "id": "fracapuano/moss-pills", "discussionNum": null }, "url": "https://huggingface.co/fracapuano/moss-pills", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/fracapuano/moss-cup", "href": null, "resource": { "type": "model", "id": "fracapuano/moss-cup", "discussionNum": null }, "url": "https://huggingface.co/fracapuano/moss-cup", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "v2:", "raw": "v2:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/fracapuano/meta-grasp", "href": null, "resource": { "type": "model", "id": "fracapuano/meta-grasp", "discussionNum": null }, "url": "https://huggingface.co/fracapuano/meta-grasp", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค— datasets", "raw": "๐Ÿค— datasets", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "v1:", "raw": "v1:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/fracapuano/pills", "href": null, "resource": { "type": "dataset", "id": "fracapuano/pills", "discussionNum": null }, "url": "https://huggingface.co/datasets/fracapuano/pills", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/fracapuano/cup", "href": null, "resource": { "type": "dataset", "id": "fracapuano/cup", "discussionNum": null }, "url": "https://huggingface.co/datasets/fracapuano/cup", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "v2: ", "raw": "v2: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/fracapuano/cupim", "href": null, "resource": { "type": "dataset", "id": "fracapuano/cupim", "discussionNum": null }, "url": "https://huggingface.co/datasets/fracapuano/cupim", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "You can find a live demo of our submission at: ", "raw": "You can find a live demo of our submission at: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/_fracapuano/status/1858102728691458554", "href": "https://x.com/_fracapuano/status/1858102728691458554", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "If you want to know more about how we collected 100GB+ of data, trained multiple RL-policies using ", "raw": "If you want to know more about how we collected 100GB+ of data, trained multiple RL-policies using ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@lerobot", "href": null, "resource": null, "url": null, "code": null, "user": "lerobot", "label": null, "lang": null }, { "type": "text", "value": " and used Llama-3.2 models to handle user interactions and switch between tasks, go ahead and have a look! Also, don't be a stranger, and reach out ๐Ÿฆพ", "raw": " and used Llama-3.2 models to handle user interactions and switch between tasks, go ahead and have a look! Also, don't be a stranger, and reach out ๐Ÿฆพ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Our project is fully open-source, for the community (and ourselves, ๐Ÿ‘จโ€๐Ÿณ) to build! A huge thank you to ", "raw": "Our project is fully open-source, for the community (and ourselves, ๐Ÿ‘จโ€๐Ÿณ) to build! A huge thank you to ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@cadene", "href": null, "resource": null, "url": null, "code": null, "user": "cadene", "label": null, "lang": null }, { "type": "text", "value": " for the help (and the robot ๐Ÿคญ) - truly feeling these hugs-vibes ๐Ÿค— , and to ", "raw": " for the help (and the robot ๐Ÿคญ) - truly feeling these hugs-vibes ๐Ÿค— , and to ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@thomwolf", "href": null, "resource": null, "url": null, "code": null, "user": "thomwolf", "label": null, "lang": null }, { "type": "text", "value": " and ", "raw": " and ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@clem", "href": null, "resource": null, "url": null, "code": null, "user": "clem", "label": null, "lang": null }, { "type": "text", "value": " for sharing our work across", "raw": " for sharing our work across", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Little extra:", "raw": "Little extra:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โžก๏ธ Our ๐Ÿง EEG waves๐Ÿง -based control of the ๐Ÿฆพrobotic arm๐Ÿฆพ", "raw": "โžก๏ธ Our ๐Ÿง EEG waves๐Ÿง -based control of the ๐Ÿฆพrobotic arm๐Ÿฆพ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Sharing what we have built over the course of the weekend at the @llamameta hackathon, by Cerebral Valley in London ๐Ÿ‡ฌ๐Ÿ‡ง ๐Ÿ‘‡ @gabrycina @calebgcc and I competed with 200+ participants and 50+ teams for a 24-hrs sprint centered around hacking for impact! We focused on applications of robotics to those in need of assisted living, moving our focus to enable greater autonomy and accessibility of robotics in everyday life. complete list of assets ๐Ÿ‘‡ ๐Ÿค— trained robotics policies v1: - https://huggingface.co/fracapuano/moss-pills - https://huggingface.co/fracapuano/moss-cup v2: - https://huggingface.co/fracapuano/meta-grasp ๐Ÿค— datasets v1: - https://huggingface.co/datasets/fracapuano/pills - https://huggingface.co/datasets/fracapuano/cup v2: - https://huggingface.co/datasets/fracapuano/cupim You can find a live demo of our submission at: https://x.com/_fracapuano/status/1858102728691458554 If you want to know more about how we collected 100GB+ of data, trained multiple RL-policies using @lerobot and used Llama-3.2 models to handle user interactions and switch between tasks, go ahead and have a look! Also, don't be a stranger, and reach out ๐Ÿฆพ Our project is fully open-source, for the community (and ourselves, ๐Ÿ‘จโ€๐Ÿณ) to build! A huge thank you to @cadene for the help (and the robot ๐Ÿคญ) - truly feeling these hugs-vibes ๐Ÿค— , and to @thomwolf and @clem for sharing our work across Little extra: โžก๏ธ Our ๐Ÿง EEG waves๐Ÿง -based control of the ๐Ÿฆพrobotic arm๐Ÿฆพ
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63d67eac6f49aa8230601996%2FdjvtWdy718whUgh7tu1Ko.jpeg", "fullname": "Francesco Capuano", "name": "fracapuano", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 4, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63d67eac6f49aa8230601996%2F5QS21X2uqguXar7OM3PcJ.qt" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62f857fbb9fda55613ce80d9%2Fd7bRniKLmOt-iFN07k1Su.png", "fullname": "Remi Cadene", "name": "cadene", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 694 }, { "avatarUrl": "/avatars/509a47dae81d1b2cdd3a2f8fb59b30b4.svg", "fullname": "Caleb Gucciardi", "name": "calebgcc", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1583857146757-5e67bdd61009063689407479.jpeg", "fullname": "Clem ๐Ÿค—", "name": "clem", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 1763 }, { "avatarUrl": "/avatars/016de1cae8b49f6f1f0a47553c92ea29.svg", "fullname": "Gabriele Cinร ", "name": "gabrycina", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null }, { "avatarUrl": "/avatars/3e39076440bfda66071268fe2a57d9ec.svg", "fullname": "metallama", "name": "llamameta", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 78 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1583857746553-5df7e9e5da6d0311fd3d53f9.jpeg", "fullname": "Thomas Wolf", "name": "thomwolf", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 704 } ]
[ { "reaction": "โค๏ธ", "users": [ "clem", "cfahlgren1", "Smorty100", "fdaudens", "BrigitteTousi" ], "count": 5 }, { "reaction": "๐Ÿ‘", "users": [ "John6666", "Smorty100", "OmbelineM" ], "count": 3 } ]
2024-11-19T11:42:01.000Z
2024-11-19T11:42:01.572Z
[]
/posts/fracapuano/185206507786588
970
0
873815932669665
[ { "type": "text", "value": "I'm building an AI for healthcare support for professionals, any advice? I could create a new app here but it needs a lot of trainer (Im newbie in this kind of stuff) ", "raw": "I'm building an AI for healthcare support for professionals, any advice? I could create a new app here but it needs a lot of trainer (Im newbie in this kind of stuff) ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Thank you, guys!!!! ", "raw": "Thank you, guys!!!! ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
I'm building an AI for healthcare support for professionals, any advice? I could create a new app here but it needs a lot of trainer (Im newbie in this kind of stuff) Thank you, guys!!!!
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F66d41e5566bac2ac7d9460aa%2FkMulJdATzvrPNMs7pPVRX.png", "fullname": "Lozt B", "name": "messmercod", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F66d41e5566bac2ac7d9460aa%2FbEXFSv9XsukdJHgpTMhuB.webp" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "hemanuelly" ], "count": 2 }, { "reaction": "โค๏ธ", "users": [ "hemanuelly" ], "count": 1 } ]
2024-09-06T05:37:36.000Z
2024-09-06T05:56:44.980Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F643ac5d2e2b979ae6144d68c%2FZ7PCNopn4cQeAYnVJDoqG.png", "fullname": "nyuuzyou", "name": "nyuuzyou", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 57, "isFollowing": false } ]
/posts/messmercod/873815932669665
752
1
957744935743333
[ { "type": "text", "value": "Good lord... Spent almost a day debugging this and it turns out it was an issue of gradio update incompatible with the new fastapi.", "raw": "Good lord... Spent almost a day debugging this and it turns out it was an issue of gradio update incompatible with the new fastapi.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "/static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fhuggingface-space-failed-after-working-initially%2F105514%2F8", "href": "/static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fhuggingface-space-failed-after-working-initially%2F105514%2F8", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Finally got it back online! Come chat with your favorite anime characters here:", "raw": "Finally got it back online! Come chat with your favorite anime characters here:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/kz919/Persona-AI", "href": null, "resource": { "type": "space", "id": "kz919/Persona-AI", "discussionNum": null }, "url": "https://huggingface.co/spaces/kz919/Persona-AI", "code": null, "user": null, "label": null, "lang": null } ]
Good lord... Spent almost a day debugging this and it turns out it was an issue of gradio update incompatible with the new fastapi. /static-proxy?url=https%3A%2F%2Fdiscuss.huggingface.co%2Ft%2Fhuggingface-space-failed-after-working-initially%2F105514%2F8 Finally got it back online! Come chat with your favorite anime characters here: https://huggingface.co/spaces/kz919/Persona-AI
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2FFTiirwS_L6IaLHmHwIo2g.png", "fullname": "Kaizhao Liang", "name": "kz919", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 34, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "kz919", "Tonic" ], "count": 3 } ]
2024-09-06T04:48:47.000Z
2024-09-06T04:48:47.381Z
[]
/posts/kz919/957744935743333
636
0
891741019531656
[ { "type": "text", "value": "Good folks at Epoch AI have just released their most comprehensive database yet, tracking over 800 state-of-the-art and historically notable AI models. This incredible resource provides key insights into the factors driving machine learning progress.", "raw": "Good folks at Epoch AI have just released their most comprehensive database yet, tracking over 800 state-of-the-art and historically notable AI models. This incredible resource provides key insights into the factors driving machine learning progress.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Since 2010, the training compute used to create AI models has been growing at a staggering rate of 4.1x per year. That means the computational power behind these models is doubling roughly every six months! And it's not just the compute that's increasing - the costs are too. Training compute costs for the largest models are doubling every nine months, with the most advanced models now costing hundreds of millions of dollars.", "raw": "Since 2010, the training compute used to create AI models has been growing at a staggering rate of 4.1x per year. That means the computational power behind these models is doubling roughly every six months! And it's not just the compute that's increasing - the costs are too. Training compute costs for the largest models are doubling every nine months, with the most advanced models now costing hundreds of millions of dollars.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Interestingly, training compute has scaled up faster for language models compared to vision. While the largest vision and language models had similar compute requirements before 2020, language models have since rapidly outpaced vision models, driven by the success of transformer architectures. The size of datasets used to train language models is also doubling approximately every eight months.", "raw": "Interestingly, training compute has scaled up faster for language models compared to vision. While the largest vision and language models had similar compute requirements before 2020, language models have since rapidly outpaced vision models, driven by the success of transformer architectures. The size of datasets used to train language models is also doubling approximately every eight months.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Another fascinating trend is that the length of time spent training notable models is growing by about 1.2x per year. While longer training times could ease hardware constraints, there is a tradeoff to consider. For very long runs, waiting for algorithmic and hardware improvements might be more beneficial than simply extending training.", "raw": "Another fascinating trend is that the length of time spent training notable models is growing by about 1.2x per year. While longer training times could ease hardware constraints, there is a tradeoff to consider. For very long runs, waiting for algorithmic and hardware improvements might be more beneficial than simply extending training.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "If this continues, by 2028, we will reach cluster prices in the 100 billion dollars, using 10GW of power!", "raw": "If this continues, by 2028, we will reach cluster prices in the 100 billion dollars, using 10GW of power!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Link: ", "raw": "Link: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://epochai.org/data/notable-ai-models", "href": "https://epochai.org/data/notable-ai-models", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Good folks at Epoch AI have just released their most comprehensive database yet, tracking over 800 state-of-the-art and historically notable AI models. This incredible resource provides key insights into the factors driving machine learning progress. Since 2010, the training compute used to create AI models has been growing at a staggering rate of 4.1x per year. That means the computational power behind these models is doubling roughly every six months! And it's not just the compute that's increasing - the costs are too. Training compute costs for the largest models are doubling every nine months, with the most advanced models now costing hundreds of millions of dollars. Interestingly, training compute has scaled up faster for language models compared to vision. While the largest vision and language models had similar compute requirements before 2020, language models have since rapidly outpaced vision models, driven by the success of transformer architectures. The size of datasets used to train language models is also doubling approximately every eight months. Another fascinating trend is that the length of time spent training notable models is growing by about 1.2x per year. While longer training times could ease hardware constraints, there is a tradeoff to consider. For very long runs, waiting for algorithmic and hardware improvements might be more beneficial than simply extending training. If this continues, by 2028, we will reach cluster prices in the 100 billion dollars, using 10GW of power! Link: https://epochai.org/data/notable-ai-models
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FWXYLnjjJ4SROkoveIi7If.png", "fullname": "Kuldeep Singh Sidhu", "name": "singhsidhukuldeep", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 219, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FRAemxwnzjGBe3kgNRfhrr.jpeg" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "louisbrulenaudet" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "JRZ" ], "count": 1 } ]
2024-09-06T03:40:47.000Z
2024-09-06T03:40:47.733Z
[]
/posts/singhsidhukuldeep/891741019531656
772
0
928757596721302
[ { "type": "text", "value": "Decided to try to check how many weights in a 70b F32 model would be squashed when converted to F16 (spoiler, it's shockingly few)", "raw": "Decided to try to check how many weights in a 70b F32 model would be squashed when converted to F16 (spoiler, it's shockingly few)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The reason for this comparison is that it should represent the same percentage of squishing as bf16 to fp16", "raw": "The reason for this comparison is that it should represent the same percentage of squishing as bf16 to fp16", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Had claude make me a script, using the new Reflection-70B, and these are the results:", "raw": "Had claude make me a script, using the new Reflection-70B, and these are the results:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Total weights: 70553706496", "raw": "Total weights: 70553706496", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Fully representable: 70530215524", "raw": "Fully representable: 70530215524", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Squashed: 23490972", "raw": "Squashed: 23490972", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Percentage squashed: 0.03%", "raw": "Percentage squashed: 0.03%", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "0.03%!!!!", "raw": "0.03%!!!!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "A couple things to note, this uses a roundtrip of F32 -> F16 -> F32 and then torch.isclose to account for rounding errors that come up by the very nature of extremely accurate numbers, but it uses VERY small tolerances (rtol=1e-5, atol=1e-8)", "raw": "A couple things to note, this uses a roundtrip of F32 -> F16 -> F32 and then torch.isclose to account for rounding errors that come up by the very nature of extremely accurate numbers, but it uses VERY small tolerances (rtol=1e-5, atol=1e-8)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This is also examining EVERY weight that was stored at F32, and for most layers I was somewhere between 0% and 0.03% of weights being squashed, no major outliers.", "raw": "This is also examining EVERY weight that was stored at F32, and for most layers I was somewhere between 0% and 0.03% of weights being squashed, no major outliers.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Overall, I feel even safer converting to F16 for llama.cpp, the extremely small number of weights that fall outside the range are likely so small that they don't actually play a role in the final output of the model at inference anyways.", "raw": "Overall, I feel even safer converting to F16 for llama.cpp, the extremely small number of weights that fall outside the range are likely so small that they don't actually play a role in the final output of the model at inference anyways.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Decided to try to check how many weights in a 70b F32 model would be squashed when converted to F16 (spoiler, it's shockingly few) The reason for this comparison is that it should represent the same percentage of squishing as bf16 to fp16 Had claude make me a script, using the new Reflection-70B, and these are the results: Total weights: 70553706496 Fully representable: 70530215524 Squashed: 23490972 Percentage squashed: 0.03% 0.03%!!!! A couple things to note, this uses a roundtrip of F32 -> F16 -> F32 and then torch.isclose to account for rounding errors that come up by the very nature of extremely accurate numbers, but it uses VERY small tolerances (rtol=1e-5, atol=1e-8) This is also examining EVERY weight that was stored at F32, and for most layers I was somewhere between 0% and 0.03% of weights being squashed, no major outliers. Overall, I feel even safer converting to F16 for llama.cpp, the extremely small number of weights that fall outside the range are likely so small that they don't actually play a role in the final output of the model at inference anyways.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6435718aaaef013d1aec3b8b%2FXKf-8MA47tjVAM6SCX0MP.jpeg", "fullname": "Bartowski", "name": "bartowski", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 2816, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "prithivMLmods", "Sri-Vigneshwar-DJ", "Joseph717171", "not-lain", "John6666", "morph3v5" ], "count": 6 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "Joseph717171" ], "count": 2 }, { "reaction": "๐Ÿคฏ", "users": [ "louisbrulenaudet", "Joseph717171" ], "count": 2 }, { "reaction": "โค๏ธ", "users": [ "Joseph717171", "KhaldiAbderrhmane" ], "count": 2 }, { "reaction": "๐Ÿค—", "users": [ "Joseph717171" ], "count": 1 }, { "reaction": "๐Ÿง ", "users": [ "Joseph717171" ], "count": 1 }, { "reaction": "๐Ÿš€", "users": [ "Joseph717171" ], "count": 1 }, { "reaction": "๐Ÿ‘", "users": [ "MoonRide" ], "count": 1 } ]
2024-09-05T21:49:55.000Z
2024-09-29T17:42:38.985Z
[ { "avatarUrl": "/avatars/da52e5fce67042332fa1e9f5fd3e5635.svg", "fullname": "Luke Chadwick", "name": "vertis", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6435718aaaef013d1aec3b8b%2FXKf-8MA47tjVAM6SCX0MP.jpeg", "fullname": "Bartowski", "name": "bartowski", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 2816, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6640bbd0220cfa8cbfdce080%2FwiAHUu5ewawyipNs0YFBR.png", "fullname": "John Smith", "name": "John6666", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 398, "isFollowing": false }, { "avatarUrl": "/avatars/ea4398745974d781ae9dc0e95b12cabe.svg", "fullname": "Joseph", "name": "Joseph717171", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 22, "isFollowing": false }, { "avatarUrl": "/avatars/99351620d65d263418e6d0d4e170f055.svg", "fullname": "Abrosimov", "name": "ajiriro", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F66d1efa935c36f266f507cff%2Fa2-fPLeGwAp5fqCKdqfzp.jpeg", "fullname": "Harmendo", "name": "Hampetiudo", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/avatars/3b03217c22442b7bfed9beac2bf50d17.svg", "fullname": "Alex Daminger", "name": "Handgun1773", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false }, { "avatarUrl": "/avatars/98d7cbc7bf4cbf4f2810cbc0a1a34d64.svg", "fullname": "Iwan Kawrakow", "name": "ikawrakow", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 116, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2Fnoauth%2F4Az8a8F60rNOD3L3ThsCe.png", "fullname": "Compilade", "name": "compilade", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3, "isFollowing": false } ]
/posts/bartowski/928757596721302
16,115
20
938110381581989
[ { "type": "mention", "value": null, "raw": "@ehartford", "href": null, "resource": null, "url": null, "code": null, "user": "ehartford", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/CohereForAI/c4ai-command-r-plus", "href": null, "resource": { "type": "model", "id": "CohereForAI/c4ai-command-r-plus", "discussionNum": null }, "url": "https://huggingface.co/CohereForAI/c4ai-command-r-plus", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " dolphin when?", "raw": " dolphin when?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
@ehartford https://huggingface.co/CohereForAI/c4ai-command-r-plus dolphin when?
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F659f000b83abded48e190901%2FBnXL_XYbVX6PHngfQLECW.png", "fullname": "Noa Roggendorff", "name": "nroggendorff", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 141, "isFollowing": false }
[]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63111b2d88942700629f5771%2Fu2a9y-yx6TG0N31OhMSHI.png", "fullname": "Eric Hartford", "name": "ehartford", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3287 } ]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "mahiatlinux", "danielus", "Locutusque", "den0620", "AtAndDev" ], "count": 6 } ]
2024-09-05T21:01:30.000Z
2024-09-05T21:01:30.000Z
[]
/posts/nroggendorff/938110381581989
1,037
0
697276772763075
[ { "type": "text", "value": "The ", "raw": "The ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`timm`", "href": null, "resource": null, "url": null, "code": "timm", "user": null, "label": null, "lang": null }, { "type": "text", "value": " leaderboard ", "raw": " leaderboard ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/timm/leaderboard", "href": null, "resource": { "type": "space", "id": "timm/leaderboard", "discussionNum": null }, "url": "https://huggingface.co/spaces/timm/leaderboard", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " has been updated with the ability to select different hardware benchmark sets: RTX4090, RTX3090, two different CPUs along with some NCHW / NHWC layout and torch.compile (dynamo) variations. ", "raw": " has been updated with the ability to select different hardware benchmark sets: RTX4090, RTX3090, two different CPUs along with some NCHW / NHWC layout and torch.compile (dynamo) variations. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Also worth pointing out, there are three rather newish 'test' models that you'll see at the top of any samples/sec comparison:", "raw": "Also worth pointing out, there are three rather newish 'test' models that you'll see at the top of any samples/sec comparison:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "* test_vit (", "raw": "* test_vit (", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/timm/test_vit.r160_in1k", "href": null, "resource": { "type": "model", "id": "timm/test_vit.r160_in1k", "discussionNum": null }, "url": "https://huggingface.co/timm/test_vit.r160_in1k", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": ")", "raw": ")", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "* test_efficientnet (", "raw": "* test_efficientnet (", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/timm/test_efficientnet.r160_in1k", "href": null, "resource": { "type": "model", "id": "timm/test_efficientnet.r160_in1k", "discussionNum": null }, "url": "https://huggingface.co/timm/test_efficientnet.r160_in1k", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": ")", "raw": ")", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "* test_byobnet (", "raw": "* test_byobnet (", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/timm/test_byobnet.r160_in1k", "href": null, "resource": { "type": "model", "id": "timm/test_byobnet.r160_in1k", "discussionNum": null }, "url": "https://huggingface.co/timm/test_byobnet.r160_in1k", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": ", a mix of resnet, darknet, effnet/regnet like blocks)", "raw": ", a mix of resnet, darknet, effnet/regnet like blocks)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "They are < 0.5M params, insanely fast and originally intended for unit testing w/ real weights. They have awful ImageNet top-1, it's rare to have anyone bother to train a model this small on ImageNet (the classifier is roughly 30-70% of the param count!). However, they are FAST on very limited hadware and you can fine-tune them well on small data. Could be the model you're looking for?", "raw": "They are < 0.5M params, insanely fast and originally intended for unit testing w/ real weights. They have awful ImageNet top-1, it's rare to have anyone bother to train a model this small on ImageNet (the classifier is roughly 30-70% of the param count!). However, they are FAST on very limited hadware and you can fine-tune them well on small data. Could be the model you're looking for?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
The `timm` leaderboard https://huggingface.co/spaces/timm/leaderboard has been updated with the ability to select different hardware benchmark sets: RTX4090, RTX3090, two different CPUs along with some NCHW / NHWC layout and torch.compile (dynamo) variations. Also worth pointing out, there are three rather newish 'test' models that you'll see at the top of any samples/sec comparison: * test_vit (https://huggingface.co/timm/test_vit.r160_in1k) * test_efficientnet (https://huggingface.co/timm/test_efficientnet.r160_in1k) * test_byobnet (https://huggingface.co/timm/test_byobnet.r160_in1k, a mix of resnet, darknet, effnet/regnet like blocks) They are < 0.5M params, insanely fast and originally intended for unit testing w/ real weights. They have awful ImageNet top-1, it's rare to have anyone bother to train a model this small on ImageNet (the classifier is roughly 30-70% of the param count!). However, they are FAST on very limited hadware and you can fine-tune them well on small data. Could be the model you're looking for?
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1667002643224-604a5184dca2c7ac7508b849.jpeg", "fullname": "Ross Wightman", "name": "rwightman", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 221, "isFollowing": false }
[]
[]
[ { "reaction": "โค๏ธ", "users": [ "clem", "MohamedRashad", "John6666", "bryant1410" ], "count": 4 }, { "reaction": "๐Ÿ”ฅ", "users": [ "de-Rodrigo" ], "count": 1 }, { "reaction": "๐Ÿ‘", "users": [ "maxiw" ], "count": 1 } ]
2024-09-05T18:49:22.000Z
2024-09-05T18:57:05.588Z
[]
/posts/rwightman/697276772763075
1,278
0
247019069617685
[ { "type": "text", "value": "I have put together a notebook on Multimodal RAG, where we do not process the documents with hefty pipelines but natively use:", "raw": "I have put together a notebook on Multimodal RAG, where we do not process the documents with hefty pipelines but natively use:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/vidore/colpali", "href": null, "resource": { "type": "model", "id": "vidore/colpali", "discussionNum": null }, "url": "https://huggingface.co/vidore/colpali", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " for retrieval ๐Ÿ“– it doesn't need indexing with image-text pairs but just images!", "raw": " for retrieval ๐Ÿ“– it doesn't need indexing with image-text pairs but just images!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct", "href": null, "resource": { "type": "model", "id": "Qwen/Qwen2-VL-2B-Instruct", "discussionNum": null }, "url": "https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " for generation ๐Ÿ’ฌ directly feed images as is to a vision language model with no processing to text! ", "raw": " for generation ๐Ÿ’ฌ directly feed images as is to a vision language model with no processing to text! ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I used ColPali implementation of the new ๐Ÿญ Byaldi library by ", "raw": "I used ColPali implementation of the new ๐Ÿญ Byaldi library by ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@bclavie", "href": null, "resource": null, "url": null, "code": null, "user": "bclavie", "label": null, "lang": null }, { "type": "text", "value": " ๐Ÿค—", "raw": " ๐Ÿค—", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/answerdotai/byaldi", "href": "https://github.com/answerdotai/byaldi", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Link to notebook: ", "raw": "Link to notebook: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/merveenoyan/smol-vision/blob/main/ColPali_%2B_Qwen2_VL.ipynb", "href": "https://github.com/merveenoyan/smol-vision/blob/main/ColPali_%2B_Qwen2_VL.ipynb", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
I have put together a notebook on Multimodal RAG, where we do not process the documents with hefty pipelines but natively use: - https://huggingface.co/vidore/colpali for retrieval ๐Ÿ“– it doesn't need indexing with image-text pairs but just images! - https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct for generation ๐Ÿ’ฌ directly feed images as is to a vision language model with no processing to text! I used ColPali implementation of the new ๐Ÿญ Byaldi library by @bclavie ๐Ÿค— https://github.com/answerdotai/byaldi Link to notebook: https://github.com/merveenoyan/smol-vision/blob/main/ColPali_%2B_Qwen2_VL.ipynb
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1648113222875-6141a88b3a0ec78603c9e784.png", "fullname": "Merve Noyan", "name": "merve", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5589, "isFollowing": false }
[]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5ff60d4352c26e9bc240badd%2FHzoknJibrSasc1ZzU71XA.png", "fullname": "Benjamin Claviรฉ", "name": "bclavie", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 28 } ]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "rwightman", "umair894", "clem", "louisbrulenaudet", "s3nh", "John6666", "BasitMustafa", "Johnyquest7", "phanhoang", "resbyte", "denizaybey", "allandclive", "fdaudens", "vilarin", "ak0601", "rreed-pha", "xi0v", "Rajaram1996", "Rayvee", "oceansweep", "parjun", "byteprobe", "Filippo" ], "count": 23 }, { "reaction": "๐Ÿ‘", "users": [ "hitchhiker3010", "Csplk", "sasikiran", "fsommers", "rogermt", "navin7", "sambarnett96", "oceansweep", "ysdede", "shreyamondal" ], "count": 10 }, { "reaction": "โค๏ธ", "users": [ "rreed-pha", "oceansweep", "Yassmen", "madstuntman11" ], "count": 4 } ]
2024-09-05T17:10:03.000Z
2024-09-05T17:10:03.412Z
[]
/posts/merve/247019069617685
5,508
0
875308955939620
[ { "type": "text", "value": "How do i access llama 3.1 70b in my space ?", "raw": "How do i access llama 3.1 70b in my space ?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "this doesn't seem to work, can someone help me with a working code ", "raw": "this doesn't seem to work, can someone help me with a working code ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "from transformers import AutoConfig", "raw": "from transformers import AutoConfig", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "config = AutoConfig.from_pretrained(\"meta-llama/Meta-Llama-3.1-70B\", revision=\"main\")", "raw": "config = AutoConfig.from_pretrained(\"meta-llama/Meta-Llama-3.1-70B\", revision=\"main\")", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "config.rope_scaling = {\"type\": \"llama3\", \"factor\": 8.0}", "raw": "config.rope_scaling = {\"type\": \"llama3\", \"factor\": 8.0}", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "model = AutoModelForCausalLM.from_pretrained(\"meta-llama/Meta-Llama-3.1-70B\", config=config, use_auth_token=True)", "raw": "model = AutoModelForCausalLM.from_pretrained(\"meta-llama/Meta-Llama-3.1-70B\", config=config, use_auth_token=True)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
How do i access llama 3.1 70b in my space ? this doesn't seem to work, can someone help me with a working code from transformers import AutoConfig config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3.1-70B", revision="main") config.rope_scaling = {"type": "llama3", "factor": 8.0} model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-70B", config=config, use_auth_token=True)
{ "avatarUrl": "/avatars/fcf9eac61e0ec82ba5503bf07c867247.svg", "fullname": "Rangaiah", "name": "Shamurangaiah", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-05T16:50:57.000Z
2024-09-06T13:20:27.221Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6640bbd0220cfa8cbfdce080%2FwiAHUu5ewawyipNs0YFBR.png", "fullname": "John Smith", "name": "John6666", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 398, "isFollowing": false }, { "avatarUrl": "/avatars/fcf9eac61e0ec82ba5503bf07c867247.svg", "fullname": "Rangaiah", "name": "Shamurangaiah", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/Shamurangaiah/875308955939620
360
11
834561196751118
[ { "type": "text", "value": "๐Ÿš€ย ๐—ช๐—ต๐—ฒ๐—ฟ๐—ฒ ๐˜€๐—ฐ๐—ฎ๐—น๐—ถ๐—ป๐—ด ๐—น๐—ฎ๐˜„๐˜€ ๐—ฎ๐—ฟ๐—ฒ ๐˜๐—ฎ๐—ธ๐—ถ๐—ป๐—ด ๐˜‚๐˜€ : ๐—ฏ๐˜† ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿด, ๐—”๐—œ ๐—–๐—น๐˜‚๐˜€๐˜๐—ฒ๐—ฟ๐˜€ ๐˜„๐—ถ๐—น๐—น ๐—ฟ๐—ฒ๐—ฎ๐—ฐ๐—ต ๐˜๐—ต๐—ฒ ๐—ฝ๐—ผ๐˜„๐—ฒ๐—ฟ ๐—ฐ๐—ผ๐—ป๐˜€๐˜‚๐—บ๐—ฝ๐˜๐—ถ๐—ผ๐—ป ๐—ผ๐—ณ ๐—ฒ๐—ป๐˜๐—ถ๐—ฟ๐—ฒ ๐—ฐ๐—ผ๐˜‚๐—ป๐˜๐—ฟ๐—ถ๐—ฒ๐˜€", "raw": "๐Ÿš€ย ๐—ช๐—ต๐—ฒ๐—ฟ๐—ฒ ๐˜€๐—ฐ๐—ฎ๐—น๐—ถ๐—ป๐—ด ๐—น๐—ฎ๐˜„๐˜€ ๐—ฎ๐—ฟ๐—ฒ ๐˜๐—ฎ๐—ธ๐—ถ๐—ป๐—ด ๐˜‚๐˜€ : ๐—ฏ๐˜† ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿด, ๐—”๐—œ ๐—–๐—น๐˜‚๐˜€๐˜๐—ฒ๐—ฟ๐˜€ ๐˜„๐—ถ๐—น๐—น ๐—ฟ๐—ฒ๐—ฎ๐—ฐ๐—ต ๐˜๐—ต๐—ฒ ๐—ฝ๐—ผ๐˜„๐—ฒ๐—ฟ ๐—ฐ๐—ผ๐—ป๐˜€๐˜‚๐—บ๐—ฝ๐˜๐—ถ๐—ผ๐—ป ๐—ผ๐—ณ ๐—ฒ๐—ป๐˜๐—ถ๐—ฟ๐—ฒ ๐—ฐ๐—ผ๐˜‚๐—ป๐˜๐—ฟ๐—ถ๐—ฒ๐˜€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Reminder : โ€œScaling lawsโ€ are empirical laws saying that if you keep multiplying your compute by x10, your models will mechanically keep getting better and better.", "raw": "Reminder : โ€œScaling lawsโ€ are empirical laws saying that if you keep multiplying your compute by x10, your models will mechanically keep getting better and better.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "To give you an idea, GPT-3 can barely write sentences, and GPT-4, which only used x15 its amount of compute, already sounds much smarter than some of my friends (although it's not really - or at least I haven't tested them side-by side). So you can imagine how far a x100 over GPT-4 can take us.", "raw": "To give you an idea, GPT-3 can barely write sentences, and GPT-4, which only used x15 its amount of compute, already sounds much smarter than some of my friends (although it's not really - or at least I haven't tested them side-by side). So you can imagine how far a x100 over GPT-4 can take us.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸŽ๏ธย As a result, tech titans are racing to build the biggest models, and for this they need gigantic training clusters.", "raw": "๐ŸŽ๏ธย As a result, tech titans are racing to build the biggest models, and for this they need gigantic training clusters.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The picture below shows the growth of training compute: it is increasing at a steady exponential rate of a x10 every 2 years. So letโ€™s take this progress a bit further:", "raw": "The picture below shows the growth of training compute: it is increasing at a steady exponential rate of a x10 every 2 years. So letโ€™s take this progress a bit further:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- 2022: starting training for GPT-4 : 10^26 FLOPs, cost of $100M", "raw": "- 2022: starting training for GPT-4 : 10^26 FLOPs, cost of $100M", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- 2024: today, companies start training on much larger clusters like the โ€œsuper AI clusterโ€ of Elon Muskโ€™s xAI, 10^27 FLOPS, $1B", "raw": "- 2024: today, companies start training on much larger clusters like the โ€œsuper AI clusterโ€ of Elon Muskโ€™s xAI, 10^27 FLOPS, $1B", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- 2026 : by then clusters will require 1GW, i.e. around the full power generated by a nuclear reactor", "raw": "- 2026 : by then clusters will require 1GW, i.e. around the full power generated by a nuclear reactor", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- 2028: we reach cluster prices in the 100 billion dollars, using 10GW, more than the most powerful power stations currently in use in the US. This last size seems crazy, but Microsoft and OpenAI already are planning one.", "raw": "- 2028: we reach cluster prices in the 100 billion dollars, using 10GW, more than the most powerful power stations currently in use in the US. This last size seems crazy, but Microsoft and OpenAI already are planning one.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Will AI clusters effectively reach these crazy sizes where the consume as much as entire countries? ", "raw": "Will AI clusters effectively reach these crazy sizes where the consume as much as entire countries? ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โžก๏ธย Three key ingredients of training might be a roadblock to scaling up :", "raw": "โžก๏ธย Three key ingredients of training might be a roadblock to scaling up :", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ’ธย Money: but itโ€™s very unlikely, given the potential market size for AGI, that investors lose interest.", "raw": "๐Ÿ’ธย Money: but itโ€™s very unlikely, given the potential market size for AGI, that investors lose interest.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โšก๏ธ Energy supply at a specific location", "raw": "โšก๏ธ Energy supply at a specific location", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ“šย Training data: weโ€™re already using 15 trillion tokens for Llama-3.1 when Internet has something like 60 trillion.", "raw": "๐Ÿ“šย Training data: weโ€™re already using 15 trillion tokens for Llama-3.1 when Internet has something like 60 trillion.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค”ย Iโ€™d be curious to hear your thoughts: do you think weโ€™ll race all the way there?", "raw": "๐Ÿค”ย Iโ€™d be curious to hear your thoughts: do you think weโ€™ll race all the way there?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿš€ย ๐—ช๐—ต๐—ฒ๐—ฟ๐—ฒ ๐˜€๐—ฐ๐—ฎ๐—น๐—ถ๐—ป๐—ด ๐—น๐—ฎ๐˜„๐˜€ ๐—ฎ๐—ฟ๐—ฒ ๐˜๐—ฎ๐—ธ๐—ถ๐—ป๐—ด ๐˜‚๐˜€ : ๐—ฏ๐˜† ๐Ÿฎ๐Ÿฌ๐Ÿฎ๐Ÿด, ๐—”๐—œ ๐—–๐—น๐˜‚๐˜€๐˜๐—ฒ๐—ฟ๐˜€ ๐˜„๐—ถ๐—น๐—น ๐—ฟ๐—ฒ๐—ฎ๐—ฐ๐—ต ๐˜๐—ต๐—ฒ ๐—ฝ๐—ผ๐˜„๐—ฒ๐—ฟ ๐—ฐ๐—ผ๐—ป๐˜€๐˜‚๐—บ๐—ฝ๐˜๐—ถ๐—ผ๐—ป ๐—ผ๐—ณ ๐—ฒ๐—ป๐˜๐—ถ๐—ฟ๐—ฒ ๐—ฐ๐—ผ๐˜‚๐—ป๐˜๐—ฟ๐—ถ๐—ฒ๐˜€ Reminder : โ€œScaling lawsโ€ are empirical laws saying that if you keep multiplying your compute by x10, your models will mechanically keep getting better and better. To give you an idea, GPT-3 can barely write sentences, and GPT-4, which only used x15 its amount of compute, already sounds much smarter than some of my friends (although it's not really - or at least I haven't tested them side-by side). So you can imagine how far a x100 over GPT-4 can take us. ๐ŸŽ๏ธย As a result, tech titans are racing to build the biggest models, and for this they need gigantic training clusters. The picture below shows the growth of training compute: it is increasing at a steady exponential rate of a x10 every 2 years. So letโ€™s take this progress a bit further: - 2022: starting training for GPT-4 : 10^26 FLOPs, cost of $100M - 2024: today, companies start training on much larger clusters like the โ€œsuper AI clusterโ€ of Elon Muskโ€™s xAI, 10^27 FLOPS, $1B - 2026 : by then clusters will require 1GW, i.e. around the full power generated by a nuclear reactor - 2028: we reach cluster prices in the 100 billion dollars, using 10GW, more than the most powerful power stations currently in use in the US. This last size seems crazy, but Microsoft and OpenAI already are planning one. Will AI clusters effectively reach these crazy sizes where the consume as much as entire countries? โžก๏ธย Three key ingredients of training might be a roadblock to scaling up : ๐Ÿ’ธย Money: but itโ€™s very unlikely, given the potential market size for AGI, that investors lose interest. โšก๏ธ Energy supply at a specific location ๐Ÿ“šย Training data: weโ€™re already using 15 trillion tokens for Llama-3.1 when Internet has something like 60 trillion. ๐Ÿค”ย Iโ€™d be curious to hear your thoughts: do you think weโ€™ll race all the way there?
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2F7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2FSeVb6BylGnaZ-BAubraaw.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "Kaoeiri" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "lamhieu", "Kaoeiri" ], "count": 2 } ]
2024-09-05T14:18:20.000Z
2024-09-06T14:05:45.366Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F600ae38cc92b79f54efd4556%2FcSqRIslYl5L3I4WK3a31f.png", "fullname": "Hieu Lam", "name": "lamhieu", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 74, "isFollowing": false }, { "avatarUrl": "/avatars/ac25f29292cca71ab6d509ea781e7943.svg", "fullname": "Shareef Taylor", "name": "MANOFAi94", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F662162fd296b3d40f15367a4%2FjM74dtHuAGI6UlLGT7A9s.jpeg", "fullname": "Stephen Genusa", "name": "StephenGenusa", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false } ]
/posts/m-ric/834561196751118
842
3
935467526386612
[ { "type": "text", "value": "๐ŸŒ Introducing PPT Online Dataset - ", "raw": "๐ŸŒ Introducing PPT Online Dataset - ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/nyuuzyou/pptonline", "href": null, "resource": { "type": "dataset", "id": "nyuuzyou/pptonline", "discussionNum": null }, "url": "https://huggingface.co/datasets/nyuuzyou/pptonline", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Dataset highlights:", "raw": "Dataset highlights:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Metadata for 1,418,349 PowerPoint (.ppt) files from ppt-online.org", "raw": "- Metadata for 1,418,349 PowerPoint (.ppt) files from ppt-online.org", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Multilingual content: Russian, Ukrainian, Belarusian, Kazakh, English, and others", "raw": "- Multilingual content: Russian, Ukrainian, Belarusian, Kazakh, English, and others", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Each entry includes: Unique ID, title, category, download link, file size, and content snippet", "raw": "- Each entry includes: Unique ID, title, category, download link, file size, and content snippet", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Data reflects presentations accessible through the PPT Online platform", "raw": "- Data reflects presentations accessible through the PPT Online platform", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Licensed under Creative Commons Zero (CC0) for unrestricted use", "raw": "- Licensed under Creative Commons Zero (CC0) for unrestricted use", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This dataset offers a unique window into online educational resources, particularly in Eastern European and Central Asian contexts. It provides opportunities for analyzing presentation trends, topic distributions, and language patterns in educational materials.", "raw": "This dataset offers a unique window into online educational resources, particularly in Eastern European and Central Asian contexts. It provides opportunities for analyzing presentation trends, topic distributions, and language patterns in educational materials.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐ŸŒ Introducing PPT Online Dataset - https://huggingface.co/datasets/nyuuzyou/pptonline Dataset highlights: - Metadata for 1,418,349 PowerPoint (.ppt) files from ppt-online.org - Multilingual content: Russian, Ukrainian, Belarusian, Kazakh, English, and others - Each entry includes: Unique ID, title, category, download link, file size, and content snippet - Data reflects presentations accessible through the PPT Online platform - Licensed under Creative Commons Zero (CC0) for unrestricted use This dataset offers a unique window into online educational resources, particularly in Eastern European and Central Asian contexts. It provides opportunities for analyzing presentation trends, topic distributions, and language patterns in educational materials.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F643ac5d2e2b979ae6144d68c%2FZ7PCNopn4cQeAYnVJDoqG.png", "fullname": "nyuuzyou", "name": "nyuuzyou", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 57, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "benjamin-paine", "RazNT" ], "count": 3 } ]
2024-09-05T12:15:28.000Z
2024-09-05T12:15:28.806Z
[]
/posts/nyuuzyou/935467526386612
797
0
148486966241479
[ { "type": "text", "value": "Hey everyone ๐Ÿค—!", "raw": "Hey everyone ๐Ÿค—!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We (finegrain) have created some custom ComfyUI nodes to use our refiners micro-framework inside comfy! ๐ŸŽ‰", "raw": "We (finegrain) have created some custom ComfyUI nodes to use our refiners micro-framework inside comfy! ๐ŸŽ‰", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We only support our new Box Segmenter at the moment, but we're thinking of adding more nodes since there seems to be a demand for it. We leverage the new (beta) Comfy Registry to host our nodes. They are available at: ", "raw": "We only support our new Box Segmenter at the moment, but we're thinking of adding more nodes since there seems to be a demand for it. We leverage the new (beta) Comfy Registry to host our nodes. They are available at: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://registry.comfy.org/publishers/finegrain/nodes/comfyui-refiners", "href": "https://registry.comfy.org/publishers/finegrain/nodes/comfyui-refiners", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": ". You can install them by running:", "raw": ". You can install them by running:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "code_fence", "value": null, "raw": "```\ncomfy node registry-install comfyui-refiners\n```", "href": null, "resource": null, "url": null, "code": "comfy node registry-install comfyui-refiners", "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Or by unzipping the archive you can download by clicking \"Download Latest\" into your ", "raw": "Or by unzipping the archive you can download by clicking \"Download Latest\" into your ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`custom_nodes`", "href": null, "resource": null, "url": null, "code": "custom_nodes", "user": null, "label": null, "lang": null }, { "type": "text", "value": " comfy folder.", "raw": " comfy folder.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We are eager to hear your feedbacks and suggestions for new nodes and how you'll use them! ๐Ÿ™", "raw": "We are eager to hear your feedbacks and suggestions for new nodes and how you'll use them! ๐Ÿ™", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Hey everyone ๐Ÿค—! We (finegrain) have created some custom ComfyUI nodes to use our refiners micro-framework inside comfy! ๐ŸŽ‰ We only support our new Box Segmenter at the moment, but we're thinking of adding more nodes since there seems to be a demand for it. We leverage the new (beta) Comfy Registry to host our nodes. They are available at: https://registry.comfy.org/publishers/finegrain/nodes/comfyui-refiners. You can install them by running: ``` comfy node registry-install comfyui-refiners ``` Or by unzipping the archive you can download by clicking "Download Latest" into your `custom_nodes` comfy folder. We are eager to hear your feedbacks and suggestions for new nodes and how you'll use them! ๐Ÿ™
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1669043420538-6364f1784f773b7e4cede70c.jpeg", "fullname": "Laureฮทt Fainsin", "name": "1aurent", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 80, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "limiteinductive", "John6666", "lunarflu", "djuna", "catwell" ], "count": 5 } ]
2024-09-05T11:48:01.000Z
2024-09-07T10:40:15.861Z
[]
/posts/1aurent/148486966241479
1,071
0
436311113936516
[ { "type": "text", "value": "๐Ÿ™‹๐Ÿปโ€โ™‚๏ธHey there folks,", "raw": "๐Ÿ™‹๐Ÿปโ€โ™‚๏ธHey there folks,", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Did you see the new coding model from ", "raw": "Did you see the new coding model from ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@01-ai", "href": null, "resource": null, "url": null, "code": null, "user": "01-ai", "label": null, "lang": null }, { "type": "text", "value": " ? ", "raw": " ? ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "collection : ", "raw": "collection : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/collections/01-ai/yi-coder-66bdb00f5bdd611f9a008f30", "href": null, "resource": { "type": "collection", "id": "01-ai/yi-coder-66bdb00f5bdd611f9a008f30", "discussionNum": null }, "url": "https://huggingface.co/collections/01-ai/yi-coder-66bdb00f5bdd611f9a008f30", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "demo : ", "raw": "demo : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/Tonic/Yi-Coder-9B", "href": null, "resource": { "type": "space", "id": "Tonic/Yi-Coder-9B", "discussionNum": null }, "url": "https://huggingface.co/spaces/Tonic/Yi-Coder-9B", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "achieves SOTA on benchmarks , 125K context window , 55 languages including Docker, Js and many more ๐Ÿš€", "raw": "achieves SOTA on benchmarks , 125K context window , 55 languages including Docker, Js and many more ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿ™‹๐Ÿปโ€โ™‚๏ธHey there folks, Did you see the new coding model from @01-ai ? collection : https://huggingface.co/collections/01-ai/yi-coder-66bdb00f5bdd611f9a008f30 demo : https://huggingface.co/spaces/Tonic/Yi-Coder-9B achieves SOTA on benchmarks , 125K context window , 55 languages including Docker, Js and many more ๐Ÿš€
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62a3bb1cd0d8c2c2169f0b88%2FeT2TS0IlQbZtz-F_zHLz9.jpeg", "fullname": "Joseph [open/acc] Pollack", "name": "Tonic", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 313, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿš€", "users": [ "web3builder", "John6666", "louisbrulenaudet", "djuna", "KingNish" ], "count": 5 } ]
2024-09-05T09:56:55.000Z
2024-09-05T13:32:54.611Z
[ { "avatarUrl": "/avatars/1280748c5a2e24a8f00618b544c9749a.svg", "fullname": "leuneli", "name": "leuneli", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/Tonic/436311113936516
1,089
1
995511131459162
[ { "type": "text", "value": "If you have documents that do not only have text and you're doing retrieval or RAG (using OCR and LLMs), give it up and give ColPali and vision language models a try ๐Ÿค—", "raw": "If you have documents that do not only have text and you're doing retrieval or RAG (using OCR and LLMs), give it up and give ColPali and vision language models a try ๐Ÿค—", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Why? Documents consist of multiple modalities: layout, table, text, chart, images. Document processing pipelines often consist of multiple models and they're immensely brittle and slow. ๐Ÿฅฒ", "raw": "Why? Documents consist of multiple modalities: layout, table, text, chart, images. Document processing pipelines often consist of multiple models and they're immensely brittle and slow. ๐Ÿฅฒ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "How? ColPali is a ColBERT-like document retrieval model built on PaliGemma, it operates over image patches directly, and indexing takes far less time with more accuracy. You can use it for retrieval, and if you want to do retrieval augmented generation, find the closest document, and do not process it, give it directly to a VLM like Qwen2-VL (as image input) and give your text query. ๐Ÿค", "raw": "How? ColPali is a ColBERT-like document retrieval model built on PaliGemma, it operates over image patches directly, and indexing takes far less time with more accuracy. You can use it for retrieval, and if you want to do retrieval augmented generation, find the closest document, and do not process it, give it directly to a VLM like Qwen2-VL (as image input) and give your text query. ๐Ÿค", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This is much faster + you do not lose out on any information + much easier to maintain too! ๐Ÿฅณ", "raw": "This is much faster + you do not lose out on any information + much easier to maintain too! ๐Ÿฅณ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Multimodal RAG ", "raw": "Multimodal RAG ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/collections/merve/multimodal-rag-66d97602e781122aae0a5139", "href": null, "resource": { "type": "collection", "id": "merve/multimodal-rag-66d97602e781122aae0a5139", "discussionNum": null }, "url": "https://huggingface.co/collections/merve/multimodal-rag-66d97602e781122aae0a5139", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ๐Ÿ’ฌ", "raw": " ๐Ÿ’ฌ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Document AI (made it way before, for folks who want structured input/output and can fine-tune a model) ", "raw": "Document AI (made it way before, for folks who want structured input/output and can fine-tune a model) ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/collections/merve/awesome-document-ai-65ef1cdc2e97ef9cc85c898e", "href": null, "resource": { "type": "collection", "id": "merve/awesome-document-ai-65ef1cdc2e97ef9cc85c898e", "discussionNum": null }, "url": "https://huggingface.co/collections/merve/awesome-document-ai-65ef1cdc2e97ef9cc85c898e", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ๐Ÿ“–", "raw": " ๐Ÿ“–", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
If you have documents that do not only have text and you're doing retrieval or RAG (using OCR and LLMs), give it up and give ColPali and vision language models a try ๐Ÿค— Why? Documents consist of multiple modalities: layout, table, text, chart, images. Document processing pipelines often consist of multiple models and they're immensely brittle and slow. ๐Ÿฅฒ How? ColPali is a ColBERT-like document retrieval model built on PaliGemma, it operates over image patches directly, and indexing takes far less time with more accuracy. You can use it for retrieval, and if you want to do retrieval augmented generation, find the closest document, and do not process it, give it directly to a VLM like Qwen2-VL (as image input) and give your text query. ๐Ÿค This is much faster + you do not lose out on any information + much easier to maintain too! ๐Ÿฅณ Multimodal RAG https://huggingface.co/collections/merve/multimodal-rag-66d97602e781122aae0a5139 ๐Ÿ’ฌ Document AI (made it way before, for folks who want structured input/output and can fine-tune a model) https://huggingface.co/collections/merve/awesome-document-ai-65ef1cdc2e97ef9cc85c898e ๐Ÿ“–
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1648113222875-6141a88b3a0ec78603c9e784.png", "fullname": "Merve Noyan", "name": "merve", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 5589, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6141a88b3a0ec78603c9e784%2FgRQUP4l8E5DzT2N-PeNYx.jpeg" } ]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "adorkin", "web3builder", "John6666", "Percifal", "jrmasiero", "rwightman", "seek007", "abishekcodes", "zliu", "AI4Industry", "louisbrulenaudet", "byteprobe", "muhtasham", "rumbleFTW" ], "count": 14 }, { "reaction": "๐Ÿ”ฅ", "users": [ "umair894", "abishekcodes", "fsommers", "jithinrocs", "rumbleFTW" ], "count": 5 }, { "reaction": "โค๏ธ", "users": [ "Csplk", "rumbleFTW", "madstuntman11" ], "count": 3 } ]
2024-09-05T09:17:38.000Z
2024-09-21T20:09:39.856Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6444b3135af87c73bbbd7447%2F-WLquJY3E1KZSJbnYUkwD.jpeg", "fullname": "Frank Sommers", "name": "fsommers", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 7, "isFollowing": false } ]
/posts/merve/995511131459162
3,808
2
866472607836541
[ { "type": "text", "value": "๐Ÿ”ฅ Dataset Viber 0.3 launches with Synthesizer to synthesise data with a human in the loop, for free, using open source models with Argilla's distilabel but within a quick-and-easy Gradio Interface.", "raw": "๐Ÿ”ฅ Dataset Viber 0.3 launches with Synthesizer to synthesise data with a human in the loop, for free, using open source models with Argilla's distilabel but within a quick-and-easy Gradio Interface.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Why? Not trying to be all fancy and formal just to iterate on your data and to get familiar with your prompts and the produced data. Under the hood, it relies on Hugging Face Inference endpoints and the latest LLMs and VLMs like Meta Llama 3.1 and BlackForest Labs Flux models.", "raw": "Why? Not trying to be all fancy and formal just to iterate on your data and to get familiar with your prompts and the produced data. Under the hood, it relies on Hugging Face Inference endpoints and the latest LLMs and VLMs like Meta Llama 3.1 and BlackForest Labs Flux models.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "An addition to the other Interfaces that are already support.", "raw": "An addition to the other Interfaces that are already support.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- CollectorInterface: Lazily collect data of model interactions without human annotation.", "raw": "- CollectorInterface: Lazily collect data of model interactions without human annotation.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- AnnotatorInterface: Walk through your data and annotate it with models in the loop.", "raw": "- AnnotatorInterface: Walk through your data and annotate it with models in the loop.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Synthesizer: Synthesize data with distilabel in the loop.", "raw": "- Synthesizer: Synthesize data with distilabel in the loop.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- BulkInterface: Explore your data distribution and annotate in bulk.", "raw": "- BulkInterface: Explore your data distribution and annotate in bulk.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โญ๏ธ Give some good vibes: ", "raw": "โญ๏ธ Give some good vibes: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/davidberenstein1957/dataset-viber", "href": "https://github.com/davidberenstein1957/dataset-viber", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿ”ฅ Dataset Viber 0.3 launches with Synthesizer to synthesise data with a human in the loop, for free, using open source models with Argilla's distilabel but within a quick-and-easy Gradio Interface. Why? Not trying to be all fancy and formal just to iterate on your data and to get familiar with your prompts and the produced data. Under the hood, it relies on Hugging Face Inference endpoints and the latest LLMs and VLMs like Meta Llama 3.1 and BlackForest Labs Flux models. An addition to the other Interfaces that are already support. - CollectorInterface: Lazily collect data of model interactions without human annotation. - AnnotatorInterface: Walk through your data and annotate it with models in the loop. - Synthesizer: Synthesize data with distilabel in the loop. - BulkInterface: Explore your data distribution and annotate in bulk. โญ๏ธ Give some good vibes: https://github.com/davidberenstein1957/dataset-viber
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1677141720071-634ff41ff32062e9eb7b06a3.jpeg", "fullname": "David Berenstein", "name": "davidberenstein1957", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 167, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F634ff41ff32062e9eb7b06a3%2FhXo1fjJ_P7vCKo2brM5HW.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-05T08:48:03.000Z
2024-09-05T08:48:03.787Z
[]
/posts/davidberenstein1957/866472607836541
293
0
579064956863993
[ { "type": "text", "value": "Datapluck: Portability Tool for Huggingface Datasets", "raw": "Datapluck: Portability Tool for Huggingface Datasets", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "\"I found myself recently whipping up notebooks just to pull huggingface datasets locally, annotate or operate changes and update them again. This happened often enough that I made a cli tool out of it, which I've been using successfully for the last few months.", "raw": "\"I found myself recently whipping up notebooks just to pull huggingface datasets locally, annotate or operate changes and update them again. This happened often enough that I made a cli tool out of it, which I've been using successfully for the last few months.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "While huggingface uses open formats, I found the official toolchain relatively low-level and not adapted to quick operations such as what I am doing.\"", "raw": "While huggingface uses open formats, I found the official toolchain relatively low-level and not adapted to quick operations such as what I am doing.\"", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "~ ", "raw": "~ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@omarkamali", "href": null, "resource": null, "url": null, "code": null, "user": "omarkamali", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Link : ", "raw": "Link : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://omarkama.li/blog/datapluck", "href": "https://omarkama.li/blog/datapluck", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Datapluck: Portability Tool for Huggingface Datasets "I found myself recently whipping up notebooks just to pull huggingface datasets locally, annotate or operate changes and update them again. This happened often enough that I made a cli tool out of it, which I've been using successfully for the last few months. While huggingface uses open formats, I found the official toolchain relatively low-level and not adapted to quick operations such as what I am doing." ~ @omarkamali Link : https://omarkama.li/blog/datapluck
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F626237d9bbcbd1c34f1bb231%2FEJrOjvAL-68qMCYdnvOrq.png", "fullname": "Ali El Filali", "name": "alielfilali01", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 186, "isFollowing": false }
[]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F665cc58d164b78e36b655f25%2FyiyOVgR3YKe_qNa5xEmu-.jpeg", "fullname": "Omar Kamali", "name": "omarkamali", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1 } ]
[ { "reaction": "โค๏ธ", "users": [ "omarkamali", "abdeljalilELmajjodi", "louisbrulenaudet" ], "count": 3 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "async0x42" ], "count": 2 } ]
2024-09-05T04:13:36.000Z
2024-09-05T12:17:30.997Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F665cc58d164b78e36b655f25%2FyiyOVgR3YKe_qNa5xEmu-.jpeg", "fullname": "Omar Kamali", "name": "omarkamali", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false } ]
/posts/alielfilali01/579064956863993
1,086
1
290847981802358
[ { "type": "text", "value": "Just wrapped up a deep dive into the latest lecture on building LLMs, such as ChatGPT, from ", "raw": "Just wrapped up a deep dive into the latest lecture on building LLMs, such as ChatGPT, from ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@Stanford", "href": null, "resource": null, "url": null, "code": null, "user": "Stanford", "label": null, "lang": null }, { "type": "text", "value": " CS229 course. Here are my top takeaways:", "raw": " CS229 course. Here are my top takeaways:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ” Understanding the Components: LLMs like ChatGPT, Claude, and others are more than just neural networks; they are a complex blend of architecture, training loss, data evaluation, and systems. Knowing how these components work together is key to improving and scaling these models.", "raw": "๐Ÿ” Understanding the Components: LLMs like ChatGPT, Claude, and others are more than just neural networks; they are a complex blend of architecture, training loss, data evaluation, and systems. Knowing how these components work together is key to improving and scaling these models.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ“Š Scaling Matters: Performance improves predictably with more data, bigger models, and greater computational power. However, balancing these factors is crucial to avoid overfitting and resource waste.", "raw": "๐Ÿ“Š Scaling Matters: Performance improves predictably with more data, bigger models, and greater computational power. However, balancing these factors is crucial to avoid overfitting and resource waste.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ“ˆ Data is King: LLMs are trained on trillions of tokens scraped from the internet, but the quality of this data matters immensely. Rigorous filtering and deduplication processes are essential to maintaining data integrity.", "raw": "๐Ÿ“ˆ Data is King: LLMs are trained on trillions of tokens scraped from the internet, but the quality of this data matters immensely. Rigorous filtering and deduplication processes are essential to maintaining data integrity.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ—๏ธ Pre-Training vs. Post-Training: While pre-training equips the model with general knowledge, post-training (like RLHF) fine-tunes it to follow human-like responses, reducing toxic outputs and improving alignment with human values.", "raw": "๐Ÿ—๏ธ Pre-Training vs. Post-Training: While pre-training equips the model with general knowledge, post-training (like RLHF) fine-tunes it to follow human-like responses, reducing toxic outputs and improving alignment with human values.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸŒ Reinforcement Learning from Human Feedback (RLHF): This technique allows LLMs to maximize outputs that align with human preferences, making models more reliable and accurate.", "raw": "๐ŸŒ Reinforcement Learning from Human Feedback (RLHF): This technique allows LLMs to maximize outputs that align with human preferences, making models more reliable and accurate.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ’ก Why It Matters: Understanding these processes not only helps us appreciate the complexity behind our everyday AI tools but also highlights the challenges and opportunities in the ever-evolving field of AI.", "raw": "๐Ÿ’ก Why It Matters: Understanding these processes not only helps us appreciate the complexity behind our everyday AI tools but also highlights the challenges and opportunities in the ever-evolving field of AI.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Whether youโ€™re in tech, data science, or just AI-curious, staying updated on these advancements is crucial. LLMs are not just transforming industries; theyโ€™re redefining the future of human-computer interaction!", "raw": "Whether youโ€™re in tech, data science, or just AI-curious, staying updated on these advancements is crucial. LLMs are not just transforming industries; theyโ€™re redefining the future of human-computer interaction!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I just realized this was almost 2 hours long...", "raw": "I just realized this was almost 2 hours long...", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Link: ", "raw": "Link: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.youtube.com/watch?v=9vM4p9NN0Ts", "href": "https://www.youtube.com/watch?v=9vM4p9NN0Ts", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Just wrapped up a deep dive into the latest lecture on building LLMs, such as ChatGPT, from @Stanford CS229 course. Here are my top takeaways: ๐Ÿ” Understanding the Components: LLMs like ChatGPT, Claude, and others are more than just neural networks; they are a complex blend of architecture, training loss, data evaluation, and systems. Knowing how these components work together is key to improving and scaling these models. ๐Ÿ“Š Scaling Matters: Performance improves predictably with more data, bigger models, and greater computational power. However, balancing these factors is crucial to avoid overfitting and resource waste. ๐Ÿ“ˆ Data is King: LLMs are trained on trillions of tokens scraped from the internet, but the quality of this data matters immensely. Rigorous filtering and deduplication processes are essential to maintaining data integrity. ๐Ÿ—๏ธ Pre-Training vs. Post-Training: While pre-training equips the model with general knowledge, post-training (like RLHF) fine-tunes it to follow human-like responses, reducing toxic outputs and improving alignment with human values. ๐ŸŒ Reinforcement Learning from Human Feedback (RLHF): This technique allows LLMs to maximize outputs that align with human preferences, making models more reliable and accurate. ๐Ÿ’ก Why It Matters: Understanding these processes not only helps us appreciate the complexity behind our everyday AI tools but also highlights the challenges and opportunities in the ever-evolving field of AI. Whether youโ€™re in tech, data science, or just AI-curious, staying updated on these advancements is crucial. LLMs are not just transforming industries; theyโ€™re redefining the future of human-computer interaction! I just realized this was almost 2 hours long... Link: https://www.youtube.com/watch?v=9vM4p9NN0Ts
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FWXYLnjjJ4SROkoveIi7If.png", "fullname": "Kuldeep Singh Sidhu", "name": "singhsidhukuldeep", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 219, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FrLGOeupSDU6QEGWMEkQuB.png" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "dongnt", "alielfilali01", "Joseph717171", "dsmonk", "louisbrulenaudet" ], "count": 5 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "Joseph717171" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "lamhieu" ], "count": 1 } ]
2024-09-04T21:37:25.000Z
2024-09-06T10:00:29.344Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6569216f9c96f1a47bf45788%2FmCLqmAs4dOjKdxNQVAp1w.png", "fullname": "Sica Rius", "name": "SicariusSicariiStuff", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 135, "isFollowing": false }, { "avatarUrl": "/avatars/ea4398745974d781ae9dc0e95b12cabe.svg", "fullname": "Joseph", "name": "Joseph717171", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 22, "isFollowing": false }, { "avatarUrl": "/avatars/4d77428c302dc8866e0073c3ce667323.svg", "fullname": "vhjghvy uyfyfuyfy", "name": "WbjuSrceu", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/singhsidhukuldeep/290847981802358
1,630
3
993063646272657
[ { "type": "text", "value": "I just bought HF Pro but i don't know how many request per month i can get, if i request 1 time every 5s, around 2k token, is the pro account enough?, thanks for reading", "raw": "I just bought HF Pro but i don't know how many request per month i can get, if i request 1 time every 5s, around 2k token, is the pro account enough?, thanks for reading", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
I just bought HF Pro but i don't know how many request per month i can get, if i request 1 time every 5s, around 2k token, is the pro account enough?, thanks for reading
{ "avatarUrl": "/avatars/bbaffa3a6cfe0fc224d02d4dc8454886.svg", "fullname": "Cao Trong Thang", "name": "fptisthebest", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "fptisthebest", "davidberenstein1957", "Tonic" ], "count": 4 } ]
2024-09-04T21:19:51.000Z
2024-09-05T02:40:17.234Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6640bbd0220cfa8cbfdce080%2FwiAHUu5ewawyipNs0YFBR.png", "fullname": "John Smith", "name": "John6666", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 398, "isFollowing": false } ]
/posts/fptisthebest/993063646272657
851
1
254051507992365
[ { "type": "text", "value": "My tool calling playgrounds repo has been updated again to include the use of flux1-schnell or dev image generation. This functionality is similar to using Dall-E 3 via the ", "raw": "My tool calling playgrounds repo has been updated again to include the use of flux1-schnell or dev image generation. This functionality is similar to using Dall-E 3 via the ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`@`", "href": null, "resource": null, "url": null, "code": "@", "user": null, "label": null, "lang": null }, { "type": "text", "value": " decorator in ChatGPT. Once the function is selected, the model will either extract or improve your prompt (depending on how you ask).", "raw": " decorator in ChatGPT. Once the function is selected, the model will either extract or improve your prompt (depending on how you ask).", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I have also included 2 notebooks that cover different ways to access Flux for your specific use case. The first method covers how to access flux via LitServe from Lightning AI. LitServe is a bare-bones inference engine with a focus on modularity rather than raw performance. LitServe supports text generation models as well as image generation, which is great for some use cases, but does not provide the caching mechanisms from a dedicated image generation solution. ", "raw": "I have also included 2 notebooks that cover different ways to access Flux for your specific use case. The first method covers how to access flux via LitServe from Lightning AI. LitServe is a bare-bones inference engine with a focus on modularity rather than raw performance. LitServe supports text generation models as well as image generation, which is great for some use cases, but does not provide the caching mechanisms from a dedicated image generation solution. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Since dedicated caching mechanisms are so crucial to performance, I also included an example for how to integrate SwarmUI/ComfyUI to utilize a more dedicated infrastructure that may already be running as part of your tech stack. Resulting in a Llama-3.1 capable of utilizing specific ComfyUI JSON configs, and many different settings. ", "raw": "Since dedicated caching mechanisms are so crucial to performance, I also included an example for how to integrate SwarmUI/ComfyUI to utilize a more dedicated infrastructure that may already be running as part of your tech stack. Resulting in a Llama-3.1 capable of utilizing specific ComfyUI JSON configs, and many different settings. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Lastly, I tested the response times for each over a small batch request to simulate a speed test.", "raw": "Lastly, I tested the response times for each over a small batch request to simulate a speed test.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "It becomes clear quickly how efficient caching mechanisms can greatly reduce the generation time, even in a scenario where another model is called. An average 4.5 second response time is not bad at all when you consider that an 8B model is calling a 12B parameter model for a secondary generation.", "raw": "It becomes clear quickly how efficient caching mechanisms can greatly reduce the generation time, even in a scenario where another model is called. An average 4.5 second response time is not bad at all when you consider that an 8B model is calling a 12B parameter model for a secondary generation.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Repo: ", "raw": "Repo: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/tdolan21/tool-calling-playground", "href": "https://github.com/tdolan21/tool-calling-playground", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "LitServe: ", "raw": "LitServe: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/Lightning-AI/LitServe", "href": "https://github.com/Lightning-AI/LitServe", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "SwarmUI: ", "raw": "SwarmUI: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/mcmonkeyprojects/SwarmUI", "href": "https://github.com/mcmonkeyprojects/SwarmUI", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
My tool calling playgrounds repo has been updated again to include the use of flux1-schnell or dev image generation. This functionality is similar to using Dall-E 3 via the `@` decorator in ChatGPT. Once the function is selected, the model will either extract or improve your prompt (depending on how you ask). I have also included 2 notebooks that cover different ways to access Flux for your specific use case. The first method covers how to access flux via LitServe from Lightning AI. LitServe is a bare-bones inference engine with a focus on modularity rather than raw performance. LitServe supports text generation models as well as image generation, which is great for some use cases, but does not provide the caching mechanisms from a dedicated image generation solution. Since dedicated caching mechanisms are so crucial to performance, I also included an example for how to integrate SwarmUI/ComfyUI to utilize a more dedicated infrastructure that may already be running as part of your tech stack. Resulting in a Llama-3.1 capable of utilizing specific ComfyUI JSON configs, and many different settings. Lastly, I tested the response times for each over a small batch request to simulate a speed test. It becomes clear quickly how efficient caching mechanisms can greatly reduce the generation time, even in a scenario where another model is called. An average 4.5 second response time is not bad at all when you consider that an 8B model is calling a 12B parameter model for a secondary generation. Repo: https://github.com/tdolan21/tool-calling-playground LitServe: https://github.com/Lightning-AI/LitServe SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6455cc8d679315e4ef16fbec%2FM6Cfifn05BUzkCFd2QDIT.png", "fullname": "Tim Dolan", "name": "macadeliccc", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 152, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6455cc8d679315e4ef16fbec%2FFhl8PQ2daHSCs9bQkvRTo.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6455cc8d679315e4ef16fbec%2FFo3QQLzYVJMT-eqKxxUAX.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 }, { "reaction": "๐Ÿ”ฅ", "users": [ "louisbrulenaudet" ], "count": 1 } ]
2024-09-04T17:01:20.000Z
2024-09-04T17:01:20.418Z
[]
/posts/macadeliccc/254051507992365
949
0
267778050099092
[ { "type": "text", "value": "๐Ÿฅณ ๐—ง๐—ฟ๐—ฎ๐—ป๐˜€๐—ณ๐—ผ๐—ฟ๐—บ๐—ฒ๐—ฟ๐˜€ ๐—”๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐—ป๐—ผ๐˜„ ๐˜€๐˜‚๐—ฝ๐—ฝ๐—ผ๐—ฟ๐˜๐˜€ ๐— ๐˜‚๐—น๐˜๐—ถ-๐—ฎ๐—ด๐—ฒ๐—ป๐˜ ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ๐˜€!", "raw": "๐Ÿฅณ ๐—ง๐—ฟ๐—ฎ๐—ป๐˜€๐—ณ๐—ผ๐—ฟ๐—บ๐—ฒ๐—ฟ๐˜€ ๐—”๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐—ป๐—ผ๐˜„ ๐˜€๐˜‚๐—ฝ๐—ฝ๐—ผ๐—ฟ๐˜๐˜€ ๐— ๐˜‚๐—น๐˜๐—ถ-๐—ฎ๐—ด๐—ฒ๐—ป๐˜ ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ๐˜€!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Multi-agent systems have been introduced in Microsoft's framework Autogen. It simply means having several agents working together to solve your task instead of only one : this paradigm empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows to achieve efficient specialization.", "raw": "Multi-agent systems have been introduced in Microsoft's framework Autogen. It simply means having several agents working together to solve your task instead of only one : this paradigm empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows to achieve efficient specialization.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "You can now easily build hierarchical multi-agent systems with transformers.agents (not released yet, use the dev version)", "raw": "You can now easily build hierarchical multi-agent systems with transformers.agents (not released yet, use the dev version)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "To do so, encapsulate the agent in a ManagedAgent object. This object needs arguments agent, name, and a description, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.", "raw": "To do so, encapsulate the agent in a ManagedAgent object. This object needs arguments agent, name, and a description, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Cf the example in the image! We'll keep building on this paradigm in the upcoming weeks ๐Ÿš€", "raw": "Cf the example in the image! We'll keep building on this paradigm in the upcoming weeks ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Read more in the doc ๐Ÿ‘‰ ", "raw": "Read more in the doc ๐Ÿ‘‰ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/huggingface/transformers/blob/main/docs/source/en/agents_advanced.md", "href": "https://github.com/huggingface/transformers/blob/main/docs/source/en/agents_advanced.md", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Checkout an advanced multi-agent system that tops the GAIA leaderboard ๐Ÿ‘‰ ", "raw": "Checkout an advanced multi-agent system that tops the GAIA leaderboard ๐Ÿ‘‰ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/aymeric-roucher/GAIA/blob/main/gaia_multiagent.py", "href": "https://github.com/aymeric-roucher/GAIA/blob/main/gaia_multiagent.py", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿฅณ ๐—ง๐—ฟ๐—ฎ๐—ป๐˜€๐—ณ๐—ผ๐—ฟ๐—บ๐—ฒ๐—ฟ๐˜€ ๐—”๐—ด๐—ฒ๐—ป๐˜๐˜€ ๐—ป๐—ผ๐˜„ ๐˜€๐˜‚๐—ฝ๐—ฝ๐—ผ๐—ฟ๐˜๐˜€ ๐— ๐˜‚๐—น๐˜๐—ถ-๐—ฎ๐—ด๐—ฒ๐—ป๐˜ ๐˜€๐˜†๐˜€๐˜๐—ฒ๐—บ๐˜€! Multi-agent systems have been introduced in Microsoft's framework Autogen. It simply means having several agents working together to solve your task instead of only one : this paradigm empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories allows to achieve efficient specialization. You can now easily build hierarchical multi-agent systems with transformers.agents (not released yet, use the dev version) To do so, encapsulate the agent in a ManagedAgent object. This object needs arguments agent, name, and a description, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools. Cf the example in the image! We'll keep building on this paradigm in the upcoming weeks ๐Ÿš€ Read more in the doc ๐Ÿ‘‰ https://github.com/huggingface/transformers/blob/main/docs/source/en/agents_advanced.md Checkout an advanced multi-agent system that tops the GAIA leaderboard ๐Ÿ‘‰ https://github.com/aymeric-roucher/GAIA/blob/main/gaia_multiagent.py
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2F7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2FzE2JkQiVNMx9HS_vs1NWd.png" } ]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "ibrahim313", "John6666", "osanseviero", "Kaoeiri", "dsmonk", "Csplk", "KingNish", "whitebill", "Winnougan" ], "count": 9 }, { "reaction": "๐Ÿค—", "users": [ "louisbrulenaudet", "Kaoeiri", "KingNish" ], "count": 3 } ]
2024-09-04T16:49:06.000Z
2024-09-04T16:49:06.292Z
[]
/posts/m-ric/267778050099092
2,130
0
169227177418296
[ { "type": "text", "value": "the new version of Enigma, our code-instruct specialist, is out now:", "raw": "the new version of Enigma, our code-instruct specialist, is out now:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma", "href": null, "resource": { "type": "model", "id": "ValiantLabs/Llama3.1-8B-Enigma", "discussionNum": null }, "url": "https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " is trained on code-instruct and general chat data.", "raw": " is trained on code-instruct and general chat data.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- the updated code-instruct database is available now as well: ", "raw": "- the updated code-instruct database is available now as well: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/sequelbox/Tachibana", "href": null, "resource": { "type": "dataset", "id": "sequelbox/Tachibana", "discussionNum": null }, "url": "https://huggingface.co/datasets/sequelbox/Tachibana", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "more to come soon!", "raw": "more to come soon!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
the new version of Enigma, our code-instruct specialist, is out now: - https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma is trained on code-instruct and general chat data. - the updated code-instruct database is available now as well: https://huggingface.co/datasets/sequelbox/Tachibana more to come soon!
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63444f2687964b331809eb55%2FWvZivsvKsM_t0tBtakovK.png", "fullname": "t.d.a.g.", "name": "sequelbox", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 51, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "djuna" ], "count": 2 } ]
2024-09-04T16:23:27.000Z
2024-09-04T16:23:27.875Z
[]
/posts/sequelbox/169227177418296
711
0
317300660282714
[ { "type": "text", "value": "๐ŸฃAi2 Releasing OLMoE! ", "raw": "๐ŸฃAi2 Releasing OLMoE! ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "OLMoE-1B-7B-Instruct is a Mixture-of-Experts LLM with 1B active and 7B total parameters, and, OLMoE is 100% open-source in model, code-base, datasets!", "raw": "OLMoE-1B-7B-Instruct is a Mixture-of-Experts LLM with 1B active and 7B total parameters, and, OLMoE is 100% open-source in model, code-base, datasets!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿฆ–Paper: ", "raw": "๐Ÿฆ–Paper: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://arxiv.org/abs/2409.02060", "href": "https://arxiv.org/abs/2409.02060", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค—Model: ", "raw": "๐Ÿค—Model: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct", "href": null, "resource": { "type": "model", "id": "allenai/OLMoE-1B-7B-0924-Instruct", "discussionNum": null }, "url": "https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ’พDatasets: ", "raw": "๐Ÿ’พDatasets: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/allenai/OLMoE-mix-0924", "href": null, "resource": { "type": "dataset", "id": "allenai/OLMoE-mix-0924", "discussionNum": null }, "url": "https://huggingface.co/datasets/allenai/OLMoE-mix-0924", "code": null, "user": null, "label": null, "lang": null } ]
๐ŸฃAi2 Releasing OLMoE! OLMoE-1B-7B-Instruct is a Mixture-of-Experts LLM with 1B active and 7B total parameters, and, OLMoE is 100% open-source in model, code-base, datasets! ๐Ÿฆ–Paper: https://arxiv.org/abs/2409.02060 ๐Ÿค—Model: https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct ๐Ÿ’พDatasets: https://huggingface.co/datasets/allenai/OLMoE-mix-0924
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F642827944fe87caede802784%2Fa7s3Ub9Cy6-PuuaX8wwXm.png", "fullname": "VILARIN", "name": "vilarin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 67, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "orrinin", "YaTharThShaRma999", "John6666", "osanseviero", "den0620", "louisbrulenaudet" ], "count": 6 }, { "reaction": "๐Ÿš€", "users": [ "sequelbox" ], "count": 1 } ]
2024-09-04T15:48:41.000Z
2024-09-06T08:54:44.424Z
[]
/posts/vilarin/317300660282714
1,613
0
761761396766140
[ { "type": "text", "value": "If you want a clear understanding of the environmental impacts of AI throughout its entire lifecycle, this primer by ", "raw": "If you want a clear understanding of the environmental impacts of AI throughout its entire lifecycle, this primer by ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@sasha", "href": null, "resource": null, "url": null, "code": null, "user": "sasha", "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@brunatrevelin", "href": null, "resource": null, "url": null, "code": null, "user": "brunatrevelin", "label": null, "lang": null }, { "type": "text", "value": " and ", "raw": " and ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@meg", "href": null, "resource": null, "url": null, "code": null, "user": "meg", "label": null, "lang": null }, { "type": "text", "value": " is a must-read.", "raw": " is a must-read.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "It brilliantly explains which types of impacts occur, when they happen, and why they matter.", "raw": "It brilliantly explains which types of impacts occur, when they happen, and why they matter.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/blog/sasha/ai-environment-primer", "href": "https://huggingface.co/blog/sasha/ai-environment-primer", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
If you want a clear understanding of the environmental impacts of AI throughout its entire lifecycle, this primer by @sasha @brunatrevelin and @meg is a must-read. It brilliantly explains which types of impacts occur, when they happen, and why they matter. https://huggingface.co/blog/sasha/ai-environment-primer
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F647f36a8454af0237bd49574%2FjshkqBUTY-GZL8As8y6Aq.jpeg", "fullname": "Florent Daudens", "name": "fdaudens", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 384, "isFollowing": false }
[]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F651ea296c887c687e09158af%2Fju9Zx2xDBVhDLnLL1e1Mq.jpeg", "fullname": "Bruna Trevelin", "name": "brunatrevelin", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 36 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1626214544196-60c757ea5f9a76ab3f844f12.png", "fullname": "Margaret Mitchell", "name": "meg", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 98 }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F60edd0133e2c73a9a21455f5%2FyK1G-Fv-YjYb7v_chkz3p.jpeg", "fullname": "Sasha Luccioni", "name": "sasha", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 162 } ]
[ { "reaction": "โค๏ธ", "users": [ "brunatrevelin", "John6666", "not-lain", "BrigitteTousi", "louisbrulenaudet" ], "count": 5 } ]
2024-09-04T15:48:16.000Z
2024-09-04T15:48:16.042Z
[]
/posts/fdaudens/761761396766140
900
0
964871105613632
[ { "type": "text", "value": "The new Qwen-2 VL models seem to perform quite well in object detection. You can prompt them to respond with bounding boxes in a reference frame of 1k x 1k pixels and scale those boxes to the original image size.", "raw": "The new Qwen-2 VL models seem to perform quite well in object detection. You can prompt them to respond with bounding boxes in a reference frame of 1k x 1k pixels and scale those boxes to the original image size.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "You can try it out with my space ", "raw": "You can try it out with my space ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/maxiw/Qwen2-VL-Detection", "href": null, "resource": { "type": "space", "id": "maxiw/Qwen2-VL-Detection", "discussionNum": null }, "url": "https://huggingface.co/spaces/maxiw/Qwen2-VL-Detection", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
The new Qwen-2 VL models seem to perform quite well in object detection. You can prompt them to respond with bounding boxes in a reference frame of 1k x 1k pixels and scale those boxes to the original image size. You can try it out with my space https://huggingface.co/spaces/maxiw/Qwen2-VL-Detection
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6313a26b2c7ffdd9f50187ed%2FMTBOHg2bMcuOMWFLCZ86L.png", "fullname": "Maxi", "name": "maxiw", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 48, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6313a26b2c7ffdd9f50187ed%2FZ23y8kJLAGbYgaF95CfyX.png" } ]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "de-Rodrigo", "dsmonk", "tosouth", "hxypqr", "mrdbourke", "SvPolina", "akazakov", "Panerlu", "iiBLACKii" ], "count": 9 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "Greenbean", "denizaybey", "YaTharThShaRma999", "rwightman" ], "count": 5 }, { "reaction": "๐Ÿค—", "users": [ "thusinh1969" ], "count": 1 } ]
2024-09-04T14:06:12.000Z
2024-10-06T08:27:03.648Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6313a26b2c7ffdd9f50187ed%2FMTBOHg2bMcuOMWFLCZ86L.png", "fullname": "Maxi", "name": "maxiw", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 48, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F64660c1ca1a19b0623fcf84c%2FwKZW7gdXufDO8xJ4NsOVV.jpeg", "fullname": "YCX", "name": "fridayfairy", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F642cf7abab0cc792e43b8497%2FIo06Gn7ERvz2N9QMo0CBY.jpeg", "fullname": "Nguyแป…n Anh Nguyรชn", "name": "thusinh1969", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 7, "isFollowing": false } ]
/posts/maxiw/964871105613632
2,361
4
917987360905988
[ { "type": "text", "value": "A few weeks ago, we uploaded the MERIT Dataset ๐ŸŽ’๐Ÿ“ƒ๐Ÿ† into Hugging Face ๐Ÿค—!", "raw": "A few weeks ago, we uploaded the MERIT Dataset ๐ŸŽ’๐Ÿ“ƒ๐Ÿ† into Hugging Face ๐Ÿค—!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Now, we are excited to share the Merit Dataset paper via arXiv! ๐Ÿ“ƒ๐Ÿ’ซ", "raw": "Now, we are excited to share the Merit Dataset paper via arXiv! ๐Ÿ“ƒ๐Ÿ’ซ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/papers/2409.00447", "href": null, "resource": { "type": "paper", "id": "2409.00447", "discussionNum": null }, "url": "https://huggingface.co/papers/2409.00447", "code": null, "user": null, "label": "The MERIT Dataset: Modelling and Efficiently Rendering Interpretable\n Transcripts (2409.00447)", "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The MERIT Dataset is a fully synthetic, labeled dataset created for training and benchmarking LLMs on Visually Rich Document Understanding tasks. It is also designed to help detect biases and improve interpretability in LLMs, where we are actively working. ๐Ÿ”ง๐Ÿ”จ", "raw": "The MERIT Dataset is a fully synthetic, labeled dataset created for training and benchmarking LLMs on Visually Rich Document Understanding tasks. It is also designed to help detect biases and improve interpretability in LLMs, where we are actively working. ๐Ÿ”ง๐Ÿ”จ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "MERIT contains synthetically rendered students' transcripts of records from different schools in English and Spanish. We plan to expand the dataset into different contexts (synth medical/insurance documents, synth IDS, etc.) Want to collaborate? Do you have any feedback? ๐Ÿง", "raw": "MERIT contains synthetically rendered students' transcripts of records from different schools in English and Spanish. We plan to expand the dataset into different contexts (synth medical/insurance documents, synth IDS, etc.) Want to collaborate? Do you have any feedback? ๐Ÿง", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Resources:", "raw": "Resources:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Dataset: ", "raw": "- Dataset: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/de-Rodrigo/merit", "href": null, "resource": { "type": "dataset", "id": "de-Rodrigo/merit", "discussionNum": null }, "url": "https://huggingface.co/datasets/de-Rodrigo/merit", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Code and generation pipeline: ", "raw": "- Code and generation pipeline: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/nachoDRT/MERIT-Dataset", "href": "https://github.com/nachoDRT/MERIT-Dataset", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "PD: We are grateful to Hugging Face ๐Ÿค— for providing the fantastic tools and resources we find in the platform and, more specifically, to ", "raw": "PD: We are grateful to Hugging Face ๐Ÿค— for providing the fantastic tools and resources we find in the platform and, more specifically, to ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@nielsr", "href": null, "resource": null, "url": null, "code": null, "user": "nielsr", "label": null, "lang": null }, { "type": "text", "value": " for sharing the fine-tuning/inference scripts we have used in our benchmark.", "raw": " for sharing the fine-tuning/inference scripts we have used in our benchmark.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
A few weeks ago, we uploaded the MERIT Dataset ๐ŸŽ’๐Ÿ“ƒ๐Ÿ† into Hugging Face ๐Ÿค—! Now, we are excited to share the Merit Dataset paper via arXiv! ๐Ÿ“ƒ๐Ÿ’ซ https://huggingface.co/papers/2409.00447 The MERIT Dataset is a fully synthetic, labeled dataset created for training and benchmarking LLMs on Visually Rich Document Understanding tasks. It is also designed to help detect biases and improve interpretability in LLMs, where we are actively working. ๐Ÿ”ง๐Ÿ”จ MERIT contains synthetically rendered students' transcripts of records from different schools in English and Spanish. We plan to expand the dataset into different contexts (synth medical/insurance documents, synth IDS, etc.) Want to collaborate? Do you have any feedback? ๐Ÿง Resources: - Dataset: https://huggingface.co/datasets/de-Rodrigo/merit - Code and generation pipeline: https://github.com/nachoDRT/MERIT-Dataset PD: We are grateful to Hugging Face ๐Ÿค— for providing the fantastic tools and resources we find in the platform and, more specifically, to @nielsr for sharing the fine-tuning/inference scripts we have used in our benchmark.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1676563169736-noauth.jpeg", "fullname": "de Rodrigo", "name": "de-Rodrigo", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63ee535a190ddd6214f30dc2%2FcdxZSF1f69iGUkmtydACh.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63ee535a190ddd6214f30dc2%2FAio4dSOAFLkbSCPwz_DEO.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63ee535a190ddd6214f30dc2%2FQgeJUVQ07gHcMcfEBbWXm.png" } ]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1608042047613-5f1158120c833276f61f1a84.jpeg", "fullname": "Niels Rogge", "name": "nielsr", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 680 } ]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "louisbrulenaudet" ], "count": 2 }, { "reaction": "๐Ÿ”ฅ", "users": [ "David-Egea" ], "count": 1 } ]
2024-09-04T13:30:30.000Z
2024-09-04T13:34:02.689Z
[]
/posts/de-Rodrigo/917987360905988
989
0
191046582909567
[ { "type": "mention", "value": null, "raw": "@victor", "href": null, "resource": null, "url": null, "code": null, "user": "victor", "label": null, "lang": null }, { "type": "text", "value": " Sorry for the repetitiveness.", "raw": " Sorry for the repetitiveness.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I'm not sure if Post is the right place to report such an error, but it seems to be a server error unrelated to the Zero GPU space error the other day, so I don't know where else to report it.", "raw": "I'm not sure if Post is the right place to report such an error, but it seems to be a server error unrelated to the Zero GPU space error the other day, so I don't know where else to report it.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Since this morning, I have been getting a strange error when running inference from space in Gradio 3.x.", "raw": "Since this morning, I have been getting a strange error when running inference from space in Gradio 3.x.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Yntec (", "raw": "Yntec (", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/Yntec", "href": "https://huggingface.co/Yntec", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": ") discovered it, but he is not in the Pro subscription, so I am reporting it on behalf of him.", "raw": ") discovered it, but he is not in the Pro subscription, so I am reporting it on behalf of him.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The error message is as follows: 1girl and other prompts will show cached output, so experiment with unusual prompts.", "raw": "The error message is as follows: 1girl and other prompts will show cached output, so experiment with unusual prompts.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Thank you in advance.", "raw": "Thank you in advance.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/John6666/blitz_diffusion_error", "href": null, "resource": { "type": "space", "id": "John6666/blitz_diffusion_error", "discussionNum": null }, "url": "https://huggingface.co/spaces/John6666/blitz_diffusion_error", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/John6666/GPU-stresser-t2i-error", "href": null, "resource": { "type": "space", "id": "John6666/GPU-stresser-t2i-error", "discussionNum": null }, "url": "https://huggingface.co/spaces/John6666/GPU-stresser-t2i-error", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "code_fence", "value": null, "raw": "```\nValueError: Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF']\n```", "href": null, "resource": null, "url": null, "code": "ValueError: Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF']", "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
@victor Sorry for the repetitiveness. I'm not sure if Post is the right place to report such an error, but it seems to be a server error unrelated to the Zero GPU space error the other day, so I don't know where else to report it. Since this morning, I have been getting a strange error when running inference from space in Gradio 3.x. Yntec (https://huggingface.co/Yntec) discovered it, but he is not in the Pro subscription, so I am reporting it on behalf of him. The error message is as follows: 1girl and other prompts will show cached output, so experiment with unusual prompts. Thank you in advance. https://huggingface.co/spaces/John6666/blitz_diffusion_error https://huggingface.co/spaces/John6666/GPU-stresser-t2i-error ``` ValueError: Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF'] ```
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6640bbd0220cfa8cbfdce080%2FwiAHUu5ewawyipNs0YFBR.png", "fullname": "John Smith", "name": "John6666", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 398, "isFollowing": false }
[]
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5f17f0a0925b9863e28ad517%2FX7QKoiXbUtEZSG9jyvfk3.jpeg", "fullname": "Victor Mustar", "name": "victor", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 2607 } ]
[ { "reaction": "๐Ÿ‘€", "users": [ "victor", "julien-c", "AtAndDev" ], "count": 3 } ]
2024-09-04T12:58:53.000Z
2024-09-09T10:51:25.537Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63239b8370edc53f51cd5d42%2F88od0k-AAkxAIV-5ULwDs.png", "fullname": "Yn Tec", "name": "Yntec", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2008, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5f17f0a0925b9863e28ad517%2FX7QKoiXbUtEZSG9jyvfk3.jpeg", "fullname": "Victor Mustar", "name": "victor", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 2607, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6640bbd0220cfa8cbfdce080%2FwiAHUu5ewawyipNs0YFBR.png", "fullname": "John Smith", "name": "John6666", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 398, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1621947938344-noauth.png", "fullname": "Abubakar Abid", "name": "abidlabs", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 487, "isFollowing": false }, { "avatarUrl": "/avatars/6bd14f36bf31ddc8c86cddd6d39d920e.svg", "fullname": "Juandiego Morzan", "name": "jdmorzan", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/John6666/191046582909567
9,526
14
395501387301708
[ { "type": "text", "value": "I am integrating Azure Cosmos DB, the database system that backs GPT conversations into my workflow, and experimenting with new patterns to accelerate dataset evolution for evaluation and training of AI.", "raw": "I am integrating Azure Cosmos DB, the database system that backs GPT conversations into my workflow, and experimenting with new patterns to accelerate dataset evolution for evaluation and training of AI.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "While initially using it for research prompts and research outputs using my GPT-4o client here which can interface and search ArXiv, I am excited to try out some new features specifically for AI at scale. Research on memory augmentation is shown. ", "raw": "While initially using it for research prompts and research outputs using my GPT-4o client here which can interface and search ArXiv, I am excited to try out some new features specifically for AI at scale. Research on memory augmentation is shown. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/awacke1/GPT-4o-omni-text-audio-image-video", "href": null, "resource": { "type": "space", "id": "awacke1/GPT-4o-omni-text-audio-image-video", "discussionNum": null }, "url": "https://huggingface.co/spaces/awacke1/GPT-4o-omni-text-audio-image-video", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/awacke1/AzureCosmosDBUI", "href": null, "resource": { "type": "space", "id": "awacke1/AzureCosmosDBUI", "discussionNum": null }, "url": "https://huggingface.co/spaces/awacke1/AzureCosmosDBUI", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
I am integrating Azure Cosmos DB, the database system that backs GPT conversations into my workflow, and experimenting with new patterns to accelerate dataset evolution for evaluation and training of AI. While initially using it for research prompts and research outputs using my GPT-4o client here which can interface and search ArXiv, I am excited to try out some new features specifically for AI at scale. Research on memory augmentation is shown. https://huggingface.co/spaces/awacke1/GPT-4o-omni-text-audio-image-video https://huggingface.co/spaces/awacke1/AzureCosmosDBUI
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1656147940537-620630b603825909dcbeba35.jpeg", "fullname": "Aaron C Wacker", "name": "awacke1", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 185, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F620630b603825909dcbeba35%2F7YNtYZ38tpsms_UklntR1.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-04T11:44:05.000Z
2024-09-04T11:44:05.531Z
[]
/posts/awacke1/395501387301708
588
0
506001462483816
[ { "type": "text", "value": "๐Ÿš€ Introducing Hugging Face's Multilingual Speech-to-Speech! ๐ŸŽค", "raw": "๐Ÿš€ Introducing Hugging Face's Multilingual Speech-to-Speech! ๐ŸŽค", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ’ฌOur modular, cross-platform pipeline to run GPT4o-like experiences on device can now seamlessly switch languages mid-conversation with an imperceptible 100ms delay.", "raw": "๐Ÿ’ฌOur modular, cross-platform pipeline to run GPT4o-like experiences on device can now seamlessly switch languages mid-conversation with an imperceptible 100ms delay.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸŒŸ Building on an amazing early reception with 2600 stars on GitHub ๐ŸŒŸ ", "raw": "๐ŸŒŸ Building on an amazing early reception with 2600 stars on GitHub ๐ŸŒŸ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿš€ We are expanding the library to support multiple languages ", "raw": "๐Ÿš€ We are expanding the library to support multiple languages ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ”ฅ Try it out with a flag: --language fr ", "raw": "๐Ÿ”ฅ Try it out with a flag: --language fr ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿคฏ Or don't set the flag and let the system detect the language ", "raw": "๐Ÿคฏ Or don't set the flag and let the system detect the language ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ’ก What feature should we add next?", "raw": "๐Ÿ’ก What feature should we add next?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿš€ Introducing Hugging Face's Multilingual Speech-to-Speech! ๐ŸŽค ๐Ÿ’ฌOur modular, cross-platform pipeline to run GPT4o-like experiences on device can now seamlessly switch languages mid-conversation with an imperceptible 100ms delay. ๐ŸŒŸ Building on an amazing early reception with 2600 stars on GitHub ๐ŸŒŸ ๐Ÿš€ We are expanding the library to support multiple languages ๐Ÿ”ฅ Try it out with a flag: --language fr ๐Ÿคฏ Or don't set the flag and let the system detect the language ๐Ÿ’ก What feature should we add next?
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65d66b494bbd0d92b641cdbb%2F6-7dm7B-JxcoS1QlCPdMN.jpeg", "fullname": "Andres Marafioti", "name": "andito", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 61, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F65d66b494bbd0d92b641cdbb%2FWbpkWi8OlJGXnL1kzmcqK.mp4" } ]
[]
[ { "reaction": "๐Ÿค—", "users": [ "prithivMLmods", "osanseviero", "John6666", "THEFIG" ], "count": 4 }, { "reaction": "๐Ÿ˜Ž", "users": [ "de-Rodrigo" ], "count": 1 }, { "reaction": "๐Ÿ”ฅ", "users": [ "Aurelien-Morgan" ], "count": 1 }, { "reaction": "๐Ÿ‘", "users": [ "dashfunnydashdash" ], "count": 1 } ]
2024-09-04T07:54:29.000Z
2024-09-04T07:54:44.640Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65d66b494bbd0d92b641cdbb%2F6-7dm7B-JxcoS1QlCPdMN.jpeg", "fullname": "Andres Marafioti", "name": "andito", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 61, "isFollowing": false } ]
/posts/andito/506001462483816
1,590
1
314529831042259
[ { "type": "text", "value": "๐Ÿงญ Guided Reasoning", "raw": "๐Ÿงญ Guided Reasoning", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ‘‹Hi everyone, ", "raw": "๐Ÿ‘‹Hi everyone, ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We've been releasing Guided Reasoning:", "raw": "We've been releasing Guided Reasoning:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Our AI guides walk your favorite LLM through complex reasoning problems.", "raw": "Our AI guides walk your favorite LLM through complex reasoning problems.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸŽฏ Goals:", "raw": "๐ŸŽฏ Goals:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "1๏ธโƒฃ Reliability. AIs consistently follow reasoning methods.", "raw": "1๏ธโƒฃ Reliability. AIs consistently follow reasoning methods.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "2๏ธโƒฃ Self-explainability. AIs see reasoning protocols and can explain internal deliberation.", "raw": "2๏ธโƒฃ Self-explainability. AIs see reasoning protocols and can explain internal deliberation.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "3๏ธโƒฃ Contestability. Users may amend AI reasoning and revise plausibility assessments.", "raw": "3๏ธโƒฃ Contestability. Users may amend AI reasoning and revise plausibility assessments.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Try out Guided Reasoning with our light demo chatbot, powered by ๐Ÿค— HuggingFace's free Inference Api and small LLMs. (Sorry for poor latency and limited availability -- we are currently searching for ๐Ÿ’ธ compute sponsors to run more powerful models, faster, and optimize guided reasoning performance.)", "raw": "Try out Guided Reasoning with our light demo chatbot, powered by ๐Ÿค— HuggingFace's free Inference Api and small LLMs. (Sorry for poor latency and limited availability -- we are currently searching for ๐Ÿ’ธ compute sponsors to run more powerful models, faster, and optimize guided reasoning performance.)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Built on top of Logikon's open-source AI reasoning analytics.", "raw": "Built on top of Logikon's open-source AI reasoning analytics.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Demo chat app: ", "raw": "Demo chat app: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/logikon/benjamin-chat", "href": null, "resource": { "type": "space", "id": "logikon/benjamin-chat", "discussionNum": null }, "url": "https://huggingface.co/spaces/logikon/benjamin-chat", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Github: ", "raw": "Github: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/logikon-ai/logikon", "href": "https://github.com/logikon-ai/logikon", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Technical report: ", "raw": "Technical report: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://arxiv.org/abs/2408.16331", "href": "https://arxiv.org/abs/2408.16331", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โžก๏ธ Check it out and get involved! Looking forward to hearing from you.", "raw": "โžก๏ธ Check it out and get involved! Looking forward to hearing from you.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿงญ Guided Reasoning ๐Ÿ‘‹Hi everyone, We've been releasing Guided Reasoning: Our AI guides walk your favorite LLM through complex reasoning problems. ๐ŸŽฏ Goals: 1๏ธโƒฃ Reliability. AIs consistently follow reasoning methods. 2๏ธโƒฃ Self-explainability. AIs see reasoning protocols and can explain internal deliberation. 3๏ธโƒฃ Contestability. Users may amend AI reasoning and revise plausibility assessments. Try out Guided Reasoning with our light demo chatbot, powered by ๐Ÿค— HuggingFace's free Inference Api and small LLMs. (Sorry for poor latency and limited availability -- we are currently searching for ๐Ÿ’ธ compute sponsors to run more powerful models, faster, and optimize guided reasoning performance.) Built on top of Logikon's open-source AI reasoning analytics. Demo chat app: https://huggingface.co/spaces/logikon/benjamin-chat Github: https://github.com/logikon-ai/logikon Technical report: https://arxiv.org/abs/2408.16331 โžก๏ธ Check it out and get involved! Looking forward to hearing from you.
{ "avatarUrl": "/avatars/78be882adf32b808686713e9b457797d.svg", "fullname": "Gregor Betz", "name": "ggbetz", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 4, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "andito", "reuank", "scacean" ], "count": 3 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-04T07:23:45.000Z
2024-09-04T07:26:21.336Z
[]
/posts/ggbetz/314529831042259
1,145
0
182312801833822
[ { "type": "text", "value": " Fine-tuned Phi-3.5 Chatbot", "raw": " Fine-tuned Phi-3.5 Chatbot", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This project presents a fine-tuned version of Microsoft's Phi-3.5 model, optimized for enhanced conversational abilities and general knowledge tasks.", "raw": "This project presents a fine-tuned version of Microsoft's Phi-3.5 model, optimized for enhanced conversational abilities and general knowledge tasks.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Model Details", "raw": "Model Details", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Base model: microsoft/Phi-3.5-mini-instruct", "raw": "- Base model: microsoft/Phi-3.5-mini-instruct", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Fine-tuning method: PEFT (Parameter-Efficient Fine-Tuning)", "raw": "- Fine-tuning method: PEFT (Parameter-Efficient Fine-Tuning)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Training data: [Brief description of your dataset]", "raw": "- Training data: [Brief description of your dataset]", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " Features", "raw": " Features", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Improved response generation for a wide range of topics", "raw": "- Improved response generation for a wide range of topics", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Enhanced context understanding and coherence", "raw": "- Enhanced context understanding and coherence", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Optimized for deployment on Hugging Face Spaces", "raw": "- Optimized for deployment on Hugging Face Spaces", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Usage", "raw": "Usage", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This model can be used for various natural language processing tasks, including:", "raw": "This model can be used for various natural language processing tasks, including:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- General conversation", "raw": "- General conversation", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Question answering", "raw": "- Question answering", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Task instructions", "raw": "- Task instructions", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Creative writing", "raw": "- Creative writing", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Try out the model here : ", "raw": "Try out the model here : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/sagar007/phi3.5_mini_instruct_finetune", "href": null, "resource": { "type": "space", "id": "sagar007/phi3.5_mini_instruct_finetune", "discussionNum": null }, "url": "https://huggingface.co/spaces/sagar007/phi3.5_mini_instruct_finetune", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Limitations", "raw": "Limitations", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "While this fine-tuned model shows improved performance, users should be aware of potential biases and limitations inherent in language models. Always critically evaluate the model's outputs.", "raw": "While this fine-tuned model shows improved performance, users should be aware of potential biases and limitations inherent in language models. Always critically evaluate the model's outputs.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " Feedback", "raw": " Feedback", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I welcome any feedback, suggestions, or questions about this project. Feel free to open an issue or contribute to further improvements!", "raw": "I welcome any feedback, suggestions, or questions about this project. Feel free to open an issue or contribute to further improvements!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "#Phi35 #FineTuning #NLP #MachineLearning #HuggingFace", "raw": "#Phi35 #FineTuning #NLP #MachineLearning #HuggingFace", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Fine-tuned Phi-3.5 Chatbot This project presents a fine-tuned version of Microsoft's Phi-3.5 model, optimized for enhanced conversational abilities and general knowledge tasks. Model Details - Base model: microsoft/Phi-3.5-mini-instruct - Fine-tuning method: PEFT (Parameter-Efficient Fine-Tuning) - Training data: [Brief description of your dataset] Features - Improved response generation for a wide range of topics - Enhanced context understanding and coherence - Optimized for deployment on Hugging Face Spaces Usage This model can be used for various natural language processing tasks, including: - General conversation - Question answering - Task instructions - Creative writing Try out the model here : https://huggingface.co/spaces/sagar007/phi3.5_mini_instruct_finetune Limitations While this fine-tuned model shows improved performance, users should be aware of potential biases and limitations inherent in language models. Always critically evaluate the model's outputs. Feedback I welcome any feedback, suggestions, or questions about this project. Feel free to open an issue or contribute to further improvements! #Phi35 #FineTuning #NLP #MachineLearning #HuggingFace
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62a464cfe0de0c5c6d8b04a1%2F1gCs46R_bW9apQzLQUrn5.png", "fullname": "Sagar pallai", "name": "sagar007", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 8, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F62a464cfe0de0c5c6d8b04a1%2FU6PEKRIi2Dk8PfHT5Syyd.webp" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-04T05:28:56.000Z
2024-09-05T05:40:08.424Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6569216f9c96f1a47bf45788%2FmCLqmAs4dOjKdxNQVAp1w.png", "fullname": "Sica Rius", "name": "SicariusSicariiStuff", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 135, "isFollowing": false } ]
/posts/sagar007/182312801833822
464
1
329890206827914
[ { "type": "text", "value": "Hyperfast Contextual Custom LLM with Agents, Multitokens, Explainable AI, and Distillation ", "raw": "Hyperfast Contextual Custom LLM with Agents, Multitokens, Explainable AI, and Distillation ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://mltblog.com/4dNPSnB", "href": "https://mltblog.com/4dNPSnB", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "New additions to this ground-breaking system include multi-token distillation when processing prompts, agents to meet user intent, more NLP, and a command prompt menu accepting both standard prompts and various actions.", "raw": "New additions to this ground-breaking system include multi-token distillation when processing prompts, agents to meet user intent, more NLP, and a command prompt menu accepting both standard prompts and various actions.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I also added several illustrations, featuring xLLM in action with a full session and sample commands to fine-tune in real-time. All the code, input sources (anonymized corporate corpus from fortune 100 company), contextual backend tables including embeddings, are on GitHub. My system has zero weight, no transformer, and no neural network. It relies on explainable AI, does not require training, is fully reproducible, and fits in memory. Yet your prompts can retrieve relevant full text entities from the corpus with no latency โ€” including URLs, categories, titles, email addresses, and so on โ€” thanks to well-designed architecture.", "raw": "I also added several illustrations, featuring xLLM in action with a full session and sample commands to fine-tune in real-time. All the code, input sources (anonymized corporate corpus from fortune 100 company), contextual backend tables including embeddings, are on GitHub. My system has zero weight, no transformer, and no neural network. It relies on explainable AI, does not require training, is fully reproducible, and fits in memory. Yet your prompts can retrieve relevant full text entities from the corpus with no latency โ€” including URLs, categories, titles, email addresses, and so on โ€” thanks to well-designed architecture.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Read more, get the code, paper and everything for free, at ", "raw": "Read more, get the code, paper and everything for free, at ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://mltblog.com/4dNPSnB", "href": "https://mltblog.com/4dNPSnB", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Hyperfast Contextual Custom LLM with Agents, Multitokens, Explainable AI, and Distillation https://mltblog.com/4dNPSnB New additions to this ground-breaking system include multi-token distillation when processing prompts, agents to meet user intent, more NLP, and a command prompt menu accepting both standard prompts and various actions. I also added several illustrations, featuring xLLM in action with a full session and sample commands to fine-tune in real-time. All the code, input sources (anonymized corporate corpus from fortune 100 company), contextual backend tables including embeddings, are on GitHub. My system has zero weight, no transformer, and no neural network. It relies on explainable AI, does not require training, is fully reproducible, and fits in memory. Yet your prompts can retrieve relevant full text entities from the corpus with no latency โ€” including URLs, categories, titles, email addresses, and so on โ€” thanks to well-designed architecture. Read more, get the code, paper and everything for free, at https://mltblog.com/4dNPSnB
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F669c89e98f2dbc203f9e74ab%2FhigvnXEHeo_Ig2bgTpn47.png", "fullname": "Vincent Granville", "name": "vincentg64", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 17, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F669c89e98f2dbc203f9e74ab%2FZlwkNzh2GnMNGKVJASNfN.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "djuna", "Bruhn", "lilcheaty" ], "count": 4 }, { "reaction": "โค๏ธ", "users": [ "StephenGenusa" ], "count": 1 } ]
2024-09-03T16:49:54.000Z
2024-09-05T18:53:58.943Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F662162fd296b3d40f15367a4%2FjM74dtHuAGI6UlLGT7A9s.jpeg", "fullname": "Stephen Genusa", "name": "StephenGenusa", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 1, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F669c89e98f2dbc203f9e74ab%2FhigvnXEHeo_Ig2bgTpn47.png", "fullname": "Vincent Granville", "name": "vincentg64", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 17, "isFollowing": false } ]
/posts/vincentg64/329890206827914
1,448
2
407079979685500
[ { "type": "text", "value": "๐Ÿšจ ๐—›๐˜‚๐—บ๐—ฎ๐—ป ๐—™๐—ฒ๐—ฒ๐—ฑ๐—ฏ๐—ฎ๐—ฐ๐—ธ ๐—ณ๐—ผ๐—ฟ ๐—”๐—œ ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป๐—ถ๐—ป๐—ด: ๐—ก๐—ผ๐˜ ๐˜๐—ต๐—ฒ ๐—ด๐—ผ๐—น๐—ฑ๐—ฒ๐—ป ๐—ด๐—ผ๐—ผ๐˜€๐—ฒ ๐˜„๐—ฒ ๐˜๐—ต๐—ผ๐˜‚๐—ด๐—ต๐˜?", "raw": "๐Ÿšจ ๐—›๐˜‚๐—บ๐—ฎ๐—ป ๐—™๐—ฒ๐—ฒ๐—ฑ๐—ฏ๐—ฎ๐—ฐ๐—ธ ๐—ณ๐—ผ๐—ฟ ๐—”๐—œ ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป๐—ถ๐—ป๐—ด: ๐—ก๐—ผ๐˜ ๐˜๐—ต๐—ฒ ๐—ด๐—ผ๐—น๐—ฑ๐—ฒ๐—ป ๐—ด๐—ผ๐—ผ๐˜€๐—ฒ ๐˜„๐—ฒ ๐˜๐—ต๐—ผ๐˜‚๐—ด๐—ต๐˜?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Iโ€™ve just read a great paper where Cohere researchers raises significant questions about using Human feedback to evaluate AI language models.", "raw": "Iโ€™ve just read a great paper where Cohere researchers raises significant questions about using Human feedback to evaluate AI language models.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Human feedback is often regarded as the gold standard for judging AI performance, but it turns out, it might be more like fool's gold : the study reveals that our human judgments are easily swayed by factors that have nothing to do with actual AI performance.", "raw": "Human feedback is often regarded as the gold standard for judging AI performance, but it turns out, it might be more like fool's gold : the study reveals that our human judgments are easily swayed by factors that have nothing to do with actual AI performance.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:", "raw": "๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿง  Test several models: Llama-2, Falcon-40B, Cohere Command 6 and 52B ๐Ÿ™…โ€โ™‚๏ธ Refusing to answer tanks AI ratings more than getting facts wrong. We apparently prefer a wrong answer to no answer!", "raw": "๐Ÿง  Test several models: Llama-2, Falcon-40B, Cohere Command 6 and 52B ๐Ÿ™…โ€โ™‚๏ธ Refusing to answer tanks AI ratings more than getting facts wrong. We apparently prefer a wrong answer to no answer!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ’ช Confidence is key (even when it shouldn't be): More assertive AI responses are seen as more factual, even when they're not. This could be pushing AI development in the wrong direction, with systems like RLHF.", "raw": "๐Ÿ’ช Confidence is key (even when it shouldn't be): More assertive AI responses are seen as more factual, even when they're not. This could be pushing AI development in the wrong direction, with systems like RLHF.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸŽญ The assertiveness trap: As AI responses get more confident-sounding, non-expert annotators become less likely to notice when they're wrong or inconsistent.", "raw": "๐ŸŽญ The assertiveness trap: As AI responses get more confident-sounding, non-expert annotators become less likely to notice when they're wrong or inconsistent.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "And a consequence of the above:", "raw": "And a consequence of the above:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ”„ ๐—ฅ๐—Ÿ๐—›๐—™ ๐—บ๐—ถ๐—ด๐—ต๐˜ ๐—ฏ๐—ฎ๐—ฐ๐—ธ๐—ณ๐—ถ๐—ฟ๐—ฒ: Using human feedback to train AI (Reinforcement Learning from Human Feedback) could accidentally make AI more overconfident and less accurate.", "raw": "๐Ÿ”„ ๐—ฅ๐—Ÿ๐—›๐—™ ๐—บ๐—ถ๐—ด๐—ต๐˜ ๐—ฏ๐—ฎ๐—ฐ๐—ธ๐—ณ๐—ถ๐—ฟ๐—ฒ: Using human feedback to train AI (Reinforcement Learning from Human Feedback) could accidentally make AI more overconfident and less accurate.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This paper means we need to think carefully about how we evaluate and train AI systems to ensure we're rewarding correctness over apparences of it like confident talk.", "raw": "This paper means we need to think carefully about how we evaluate and train AI systems to ensure we're rewarding correctness over apparences of it like confident talk.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ›”๏ธ Chatbot Arenaโ€™s ELO leaderboard, based on crowdsourced answers from average joes like you and me, might become completely irrelevant as models will become smarter and smarter.", "raw": "โ›”๏ธ Chatbot Arenaโ€™s ELO leaderboard, based on crowdsourced answers from average joes like you and me, might become completely irrelevant as models will become smarter and smarter.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Read the paper ๐Ÿ‘‰ ", "raw": "Read the paper ๐Ÿ‘‰ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/papers/2309.16349", "href": null, "resource": { "type": "paper", "id": "2309.16349", "discussionNum": null }, "url": "https://huggingface.co/papers/2309.16349", "code": null, "user": null, "label": "Human Feedback is not Gold Standard (2309.16349)", "lang": null } ]
๐Ÿšจ ๐—›๐˜‚๐—บ๐—ฎ๐—ป ๐—™๐—ฒ๐—ฒ๐—ฑ๐—ฏ๐—ฎ๐—ฐ๐—ธ ๐—ณ๐—ผ๐—ฟ ๐—”๐—œ ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป๐—ถ๐—ป๐—ด: ๐—ก๐—ผ๐˜ ๐˜๐—ต๐—ฒ ๐—ด๐—ผ๐—น๐—ฑ๐—ฒ๐—ป ๐—ด๐—ผ๐—ผ๐˜€๐—ฒ ๐˜„๐—ฒ ๐˜๐—ต๐—ผ๐˜‚๐—ด๐—ต๐˜? Iโ€™ve just read a great paper where Cohere researchers raises significant questions about using Human feedback to evaluate AI language models. Human feedback is often regarded as the gold standard for judging AI performance, but it turns out, it might be more like fool's gold : the study reveals that our human judgments are easily swayed by factors that have nothing to do with actual AI performance. ๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€: ๐Ÿง  Test several models: Llama-2, Falcon-40B, Cohere Command 6 and 52B ๐Ÿ™…โ€โ™‚๏ธ Refusing to answer tanks AI ratings more than getting facts wrong. We apparently prefer a wrong answer to no answer! ๐Ÿ’ช Confidence is key (even when it shouldn't be): More assertive AI responses are seen as more factual, even when they're not. This could be pushing AI development in the wrong direction, with systems like RLHF. ๐ŸŽญ The assertiveness trap: As AI responses get more confident-sounding, non-expert annotators become less likely to notice when they're wrong or inconsistent. And a consequence of the above: ๐Ÿ”„ ๐—ฅ๐—Ÿ๐—›๐—™ ๐—บ๐—ถ๐—ด๐—ต๐˜ ๐—ฏ๐—ฎ๐—ฐ๐—ธ๐—ณ๐—ถ๐—ฟ๐—ฒ: Using human feedback to train AI (Reinforcement Learning from Human Feedback) could accidentally make AI more overconfident and less accurate. This paper means we need to think carefully about how we evaluate and train AI systems to ensure we're rewarding correctness over apparences of it like confident talk. โ›”๏ธ Chatbot Arenaโ€™s ELO leaderboard, based on crowdsourced answers from average joes like you and me, might become completely irrelevant as models will become smarter and smarter. Read the paper ๐Ÿ‘‰ https://huggingface.co/papers/2309.16349
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2F7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2FZeAJhy5RG9F0knqMsqwee.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "louisbrulenaudet" ], "count": 2 } ]
2024-09-03T14:45:11.000Z
2024-09-03T14:45:11.422Z
[]
/posts/m-ric/407079979685500
808
0
440844864868620
[ { "type": "text", "value": "Is AIโ€™s impact on elections being overblown? Three researchers think so in this opinion piece published in the MIT Tech Review.", "raw": "Is AIโ€™s impact on elections being overblown? Three researchers think so in this opinion piece published in the MIT Tech Review.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Highlights:", "raw": "Highlights:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€ข\tโ€œAI is being used to try to influence electoral processes, but these efforts have not been fruitful.โ€", "raw": "โ€ข\tโ€œAI is being used to try to influence electoral processes, but these efforts have not been fruitful.โ€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€ข\tโ€œWhy were these initial speculations about AI-enabled electoral interference so off (โ€ฆ) ? The short answer: Because they ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology.โ€", "raw": "โ€ข\tโ€œWhy were these initial speculations about AI-enabled electoral interference so off (โ€ฆ) ? The short answer: Because they ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology.โ€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ€ข\tโ€œYet we should remember that thereโ€™s a cost to overreaction based on ill-founded assumptions, especially when other critical issues go unaddressed.โ€", "raw": "โ€ข\tโ€œYet we should remember that thereโ€™s a cost to overreaction based on ill-founded assumptions, especially when other critical issues go unaddressed.โ€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ‘‰Read more here: ", "raw": "๐Ÿ‘‰Read more here: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/", "href": "https://technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Is AIโ€™s impact on elections being overblown? Three researchers think so in this opinion piece published in the MIT Tech Review. Highlights: โ€ข โ€œAI is being used to try to influence electoral processes, but these efforts have not been fruitful.โ€ โ€ข โ€œWhy were these initial speculations about AI-enabled electoral interference so off (โ€ฆ) ? The short answer: Because they ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology.โ€ โ€ข โ€œYet we should remember that thereโ€™s a cost to overreaction based on ill-founded assumptions, especially when other critical issues go unaddressed.โ€ ๐Ÿ‘‰Read more here: https://technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F647f36a8454af0237bd49574%2FjshkqBUTY-GZL8As8y6Aq.jpeg", "fullname": "Florent Daudens", "name": "fdaudens", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 384, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "BrigitteTousi", "jsulz", "alielfilali01" ], "count": 4 }, { "reaction": "๐Ÿง ", "users": [ "alielfilali01", "louisbrulenaudet" ], "count": 2 } ]
2024-09-03T13:44:50.000Z
2024-09-03T13:44:50.385Z
[]
/posts/fdaudens/440844864868620
1,538
0
513925031707884
[ { "type": "text", "value": "The Forward-Forward Algorithm๐Ÿค–", "raw": "The Forward-Forward Algorithm๐Ÿค–", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "FFA replaces the forward and backward passes in backpropagtion with two forward passes - one with positive (real) data and another with negative data. Each layer has its objective function - to increase or decrease a โ€œgoodness\" metric. The positive pass uses real data and adjusts weights to increase โ€œgoodnessโ€ in every hidden layer. The negative pass does the opposite. ", "raw": "FFA replaces the forward and backward passes in backpropagtion with two forward passes - one with positive (real) data and another with negative data. Each layer has its objective function - to increase or decrease a โ€œgoodness\" metric. The positive pass uses real data and adjusts weights to increase โ€œgoodnessโ€ in every hidden layer. The negative pass does the opposite. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I must say reading&Implementing a godfather paper feels quite fulfilling:)", "raw": "I must say reading&Implementing a godfather paper feels quite fulfilling:)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Thank you Prof. Geoffrey Hinton.", "raw": "Thank you Prof. Geoffrey Hinton.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Code: ", "raw": "Code: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/Jaykef/ai-algorithms/blob/main/mnist_the_forward_forward_algorithm.ipynb", "href": "https://github.com/Jaykef/ai-algorithms/blob/main/mnist_the_forward_forward_algorithm.ipynb", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
The Forward-Forward Algorithm๐Ÿค– FFA replaces the forward and backward passes in backpropagtion with two forward passes - one with positive (real) data and another with negative data. Each layer has its objective function - to increase or decrease a โ€œgoodness" metric. The positive pass uses real data and adjusts weights to increase โ€œgoodnessโ€ in every hidden layer. The negative pass does the opposite. I must say reading&Implementing a godfather paper feels quite fulfilling:) Thank you Prof. Geoffrey Hinton. Code: https://github.com/Jaykef/ai-algorithms/blob/main/mnist_the_forward_forward_algorithm.ipynb
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6438a9027de34e8ea7e4b257%2Fvib8QSd1AWMr_bR9ig_xJ.jpeg", "fullname": "Jaward Sesay", "name": "Jaward", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 191, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6438a9027de34e8ea7e4b257%2FFm7L4314h2q8rzjOpRQJJ.mp4" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6438a9027de34e8ea7e4b257%2FcNFytChGQoSCw4z7B0x80.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6438a9027de34e8ea7e4b257%2FcxgrGBBpgO1cKLOzo7yQp.jpeg" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6438a9027de34e8ea7e4b257%2FRMDyDc7_RhfW9yeH4MJJD.jpeg" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 }, { "reaction": "๐Ÿ‘", "users": [ "GigaBoy" ], "count": 1 } ]
2024-09-03T11:34:26.000Z
2024-09-03T11:34:26.783Z
[]
/posts/Jaward/513925031707884
550
0
865363319225333
[ { "type": "text", "value": "๐Œ๐ฒ ๐Ÿ๐ข๐ซ๐ฌ๐ญ ๐œ๐จ๐ฆ๐ฆ๐ฎ๐ง๐ข๐ญ๐ฒ ๐š๐ซ๐ญ๐ข๐œ๐ฅ๐ž! ๐’๐ž๐ฅ๐ž๐œ๐ญ๐ข๐ฏ๐ž ๐Ÿ๐ข๐ง๐ž-๐ญ๐ฎ๐ง๐ข๐ง๐  ๐ฐ๐ข๐ญ๐ก ๐’๐ฉ๐ž๐œ๐ญ๐ซ๐ฎ๐ฆ ๐ŸŽฏ ", "raw": "๐Œ๐ฒ ๐Ÿ๐ข๐ซ๐ฌ๐ญ ๐œ๐จ๐ฆ๐ฆ๐ฎ๐ง๐ข๐ญ๐ฒ ๐š๐ซ๐ญ๐ข๐œ๐ฅ๐ž! ๐’๐ž๐ฅ๐ž๐œ๐ญ๐ข๐ฏ๐ž ๐Ÿ๐ข๐ง๐ž-๐ญ๐ฎ๐ง๐ข๐ง๐  ๐ฐ๐ข๐ญ๐ก ๐’๐ฉ๐ž๐œ๐ญ๐ซ๐ฎ๐ฆ ๐ŸŽฏ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Full walkthrough on how to get started with Spectrum and TRL for efficient fine-tuning.", "raw": "Full walkthrough on how to get started with Spectrum and TRL for efficient fine-tuning.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ“” ๐Ÿ‘ฃ ", "raw": "๐Ÿ“” ๐Ÿ‘ฃ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/blog/anakin87/spectrum", "href": "https://huggingface.co/blog/anakin87/spectrum", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "---", "raw": "---", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Looking to fine-tune Language Models efficiently and save on computational resources?", "raw": "Looking to fine-tune Language Models efficiently and save on computational resources?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "One popular method is QLoRa, which quantizes the original model and trains low-rank adapters on top.", "raw": "One popular method is QLoRa, which quantizes the original model and trains low-rank adapters on top.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "It's quite effective and uses less GPU than full fine-tuning.", "raw": "It's quite effective and uses less GPU than full fine-tuning.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "However, QLoRa applies Low-Rank Adaptation uniformly across the entire model.", "raw": "However, QLoRa applies Low-Rank Adaptation uniformly across the entire model.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "What if we could identify the most informative layers and only fine-tune those? ๐Ÿค”", "raw": "What if we could identify the most informative layers and only fine-tune those? ๐Ÿค”", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This is exactly what Spectrum does! ๐Ÿ‘‡", "raw": "This is exactly what Spectrum does! ๐Ÿ‘‡", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ”ฌ Spectrum analyzes the weight matrices for all layers in a Language Model and calculates a Signal to Noise Ratio (SNR) for each one.", "raw": "๐Ÿ”ฌ Spectrum analyzes the weight matrices for all layers in a Language Model and calculates a Signal to Noise Ratio (SNR) for each one.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "(It uses Random Matrix Theory and Marchenko-Pastur distribution to distinguish signal from noise.)", "raw": "(It uses Random Matrix Theory and Marchenko-Pastur distribution to distinguish signal from noise.)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐ŸŽฏ Based on a chosen percentage (say, 25%), Spectrum selects the most informative layers of each type (mlp.down_proj, self_attn.o_proj, etc.).", "raw": "๐ŸŽฏ Based on a chosen percentage (say, 25%), Spectrum selects the most informative layers of each type (mlp.down_proj, self_attn.o_proj, etc.).", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "You can then โ„๏ธ freeze the rest of the model and focus your ๐Ÿ‹๏ธโ€โ™‚๏ธ training on the chosen layers.", "raw": "You can then โ„๏ธ freeze the rest of the model and focus your ๐Ÿ‹๏ธโ€โ™‚๏ธ training on the chosen layers.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ† Results/Evaluation", "raw": "๐Ÿ† Results/Evaluation", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Spectrum is competitive with full fine-tuning and beats QLoRA on benchmarks.", "raw": "- Spectrum is competitive with full fine-tuning and beats QLoRA on benchmarks.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- While QLoRA is more memory-efficient on a single GPU, Spectrum shines in distributed training setups.", "raw": "- While QLoRA is more memory-efficient on a single GPU, Spectrum shines in distributed training setups.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Great models trained with Spectrum: Dolphin models, Llama 3.1 Storm, numerous models by VAGO Solutions...", "raw": "- Great models trained with Spectrum: Dolphin models, Llama 3.1 Storm, numerous models by VAGO Solutions...", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "---", "raw": "---", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "For a practical guide, check out the article above.", "raw": "For a practical guide, check out the article above.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Œ๐ฒ ๐Ÿ๐ข๐ซ๐ฌ๐ญ ๐œ๐จ๐ฆ๐ฆ๐ฎ๐ง๐ข๐ญ๐ฒ ๐š๐ซ๐ญ๐ข๐œ๐ฅ๐ž! ๐’๐ž๐ฅ๐ž๐œ๐ญ๐ข๐ฏ๐ž ๐Ÿ๐ข๐ง๐ž-๐ญ๐ฎ๐ง๐ข๐ง๐  ๐ฐ๐ข๐ญ๐ก ๐’๐ฉ๐ž๐œ๐ญ๐ซ๐ฎ๐ฆ ๐ŸŽฏ Full walkthrough on how to get started with Spectrum and TRL for efficient fine-tuning. ๐Ÿ“” ๐Ÿ‘ฃ https://huggingface.co/blog/anakin87/spectrum --- Looking to fine-tune Language Models efficiently and save on computational resources? One popular method is QLoRa, which quantizes the original model and trains low-rank adapters on top. It's quite effective and uses less GPU than full fine-tuning. However, QLoRa applies Low-Rank Adaptation uniformly across the entire model. What if we could identify the most informative layers and only fine-tune those? ๐Ÿค” This is exactly what Spectrum does! ๐Ÿ‘‡ ๐Ÿ”ฌ Spectrum analyzes the weight matrices for all layers in a Language Model and calculates a Signal to Noise Ratio (SNR) for each one. (It uses Random Matrix Theory and Marchenko-Pastur distribution to distinguish signal from noise.) ๐ŸŽฏ Based on a chosen percentage (say, 25%), Spectrum selects the most informative layers of each type (mlp.down_proj, self_attn.o_proj, etc.). You can then โ„๏ธ freeze the rest of the model and focus your ๐Ÿ‹๏ธโ€โ™‚๏ธ training on the chosen layers. ๐Ÿ† Results/Evaluation - Spectrum is competitive with full fine-tuning and beats QLoRA on benchmarks. - While QLoRA is more memory-efficient on a single GPU, Spectrum shines in distributed training setups. - Great models trained with Spectrum: Dolphin models, Llama 3.1 Storm, numerous models by VAGO Solutions... --- For a practical guide, check out the article above.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F626505d493e0b04d75710566%2F9rfJc9ORXU9J5a42Ev3v6.png", "fullname": "Stefano Fiorucci", "name": "anakin87", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 66, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F626505d493e0b04d75710566%2FfVCMAKAU5KCYhbzCL_qBg.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "osanseviero" ], "count": 2 }, { "reaction": "๐Ÿš€", "users": [ "gsarti" ], "count": 1 } ]
2024-09-03T10:00:04.000Z
2024-09-05T08:17:30.967Z
[]
/posts/anakin87/865363319225333
1,084
1
578032749040253
[ { "type": "text", "value": "๐๐ž๐ฐ ๐‘๐ž๐ฅ๐ž๐š๐ฌ๐ž: ๐Œ๐š๐ฃ๐จ๐ซ ๐“๐Ž๐Œ ๐ƒ๐ข๐ ๐ข๐ญ๐š๐ฅ ๐„๐ฅ๐ž๐ฏ๐š๐ญ๐ข๐จ๐ง ๐Œ๐จ๐๐ž๐ฅ ๐„๐ฑ๐ฉ๐š๐ง๐ฌ๐ข๐จ๐ง ๐Ÿ—บ๏ธ", "raw": "๐๐ž๐ฐ ๐‘๐ž๐ฅ๐ž๐š๐ฌ๐ž: ๐Œ๐š๐ฃ๐จ๐ซ ๐“๐Ž๐Œ ๐ƒ๐ข๐ ๐ข๐ญ๐š๐ฅ ๐„๐ฅ๐ž๐ฏ๐š๐ญ๐ข๐จ๐ง ๐Œ๐จ๐๐ž๐ฅ ๐„๐ฑ๐ฉ๐š๐ง๐ฌ๐ข๐จ๐ง ๐Ÿ—บ๏ธ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Dataset: ", "raw": "Dataset: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/Major-TOM/Core-DEM", "href": null, "resource": { "type": "dataset", "id": "Major-TOM/Core-DEM", "discussionNum": null }, "url": "https://huggingface.co/datasets/Major-TOM/Core-DEM", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Today with European Space Agency - ESA and Adobe Research, we release a global expansion to Major TOM with GLO-30 DEM data.", "raw": "Today with European Space Agency - ESA and Adobe Research, we release a global expansion to Major TOM with GLO-30 DEM data.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "You can now instantly access nearly 2M of Major TOM samples with elevation data to build your next AI model for EO. ๐ŸŒ ", "raw": "You can now instantly access nearly 2M of Major TOM samples with elevation data to build your next AI model for EO. ๐ŸŒ ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ” Browse the data in our usual viewer app: ", "raw": "๐Ÿ” Browse the data in our usual viewer app: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/Major-TOM/MajorTOM-Core-Viewer", "href": null, "resource": { "type": "space", "id": "Major-TOM/MajorTOM-Core-Viewer", "discussionNum": null }, "url": "https://huggingface.co/spaces/Major-TOM/MajorTOM-Core-Viewer", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Fantastic work championed by Paul Borne--Pons ", "raw": "Fantastic work championed by Paul Borne--Pons ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@NewtNewt", "href": null, "resource": null, "url": null, "code": null, "user": "NewtNewt", "label": null, "lang": null }, { "type": "text", "value": " ๐Ÿš€", "raw": " ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐๐ž๐ฐ ๐‘๐ž๐ฅ๐ž๐š๐ฌ๐ž: ๐Œ๐š๐ฃ๐จ๐ซ ๐“๐Ž๐Œ ๐ƒ๐ข๐ ๐ข๐ญ๐š๐ฅ ๐„๐ฅ๐ž๐ฏ๐š๐ญ๐ข๐จ๐ง ๐Œ๐จ๐๐ž๐ฅ ๐„๐ฑ๐ฉ๐š๐ง๐ฌ๐ข๐จ๐ง ๐Ÿ—บ๏ธ Dataset: https://huggingface.co/datasets/Major-TOM/Core-DEM Today with European Space Agency - ESA and Adobe Research, we release a global expansion to Major TOM with GLO-30 DEM data. You can now instantly access nearly 2M of Major TOM samples with elevation data to build your next AI model for EO. ๐ŸŒ ๐Ÿ” Browse the data in our usual viewer app: https://huggingface.co/spaces/Major-TOM/MajorTOM-Core-Viewer Fantastic work championed by Paul Borne--Pons @NewtNewt ๐Ÿš€
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1678741407493-6304c06eeb6d777a838eab63.png", "fullname": "Mikolaj Czerkawski", "name": "mikonvergence", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 25, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6304c06eeb6d777a838eab63%2F7BtZPtS--GFa_2rLuTxFN.mp4" } ]
[ { "avatarUrl": "/avatars/83cf39dd0f5895e7d7e6ae5a80b47deb.svg", "fullname": "PaulBP", "name": "NewtNewt", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 1 } ]
[ { "reaction": "โค๏ธ", "users": [ "Tonic", "AtAndDev", "NewtNewt", "osanseviero", "Obenlia", "robmarkcole" ], "count": 6 }, { "reaction": "๐Ÿง ", "users": [ "Tonic", "AtAndDev", "bmorphism", "osanseviero" ], "count": 4 }, { "reaction": "๐Ÿš€", "users": [ "Tonic", "AtAndDev", "John6666", "osanseviero" ], "count": 4 }, { "reaction": "๐Ÿค", "users": [ "Tonic", "pduf", "AtAndDev" ], "count": 3 }, { "reaction": "๐Ÿ‘€", "users": [ "Tonic", "AtAndDev" ], "count": 2 }, { "reaction": "๐Ÿค—", "users": [ "Tonic", "AtAndDev" ], "count": 2 }, { "reaction": "๐Ÿ˜Ž", "users": [ "Tonic", "AtAndDev" ], "count": 2 } ]
2024-09-03T08:05:29.000Z
2024-09-03T08:05:29.710Z
[]
/posts/mikonvergence/578032749040253
2,202
0
622098077829042
[ { "type": "text", "value": "# Excited to Share: New LLM Tokenization - Convert Text to tokens and vice versa! ๐Ÿš€", "raw": "# Excited to Share: New LLM Tokenization - Convert Text to tokens and vice versa! ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I've just developed a powerful tool for anyone working with Language Models (LLMs) or diving into Natural Language Processing (NLP). ", "raw": "I've just developed a powerful tool for anyone working with Language Models (LLMs) or diving into Natural Language Processing (NLP). ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ” Introducing the LLM Tokenization - Convert Text to tokens and vice versa!!", "raw": "๐Ÿ” Introducing the LLM Tokenization - Convert Text to tokens and vice versa!!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Key Features:", "raw": "Key Features:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Convert text to tokens and token IDs", "raw": "- Convert text to tokens and token IDs", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Reverse engineer: convert token IDs back to text", "raw": "- Reverse engineer: convert token IDs back to text", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Support for popular models: LLama3 (Will add more models iteratively)", "raw": "- Support for popular models: LLama3 (Will add more models iteratively)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- User-friendly Gradio interface for easy interaction", "raw": "- User-friendly Gradio interface for easy interaction", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Whether you're debugging your NLP pipeline, exploring how different models tokenize text, or just curious about the inner workings of LLMs, this tool is for you!", "raw": "Whether you're debugging your NLP pipeline, exploring how different models tokenize text, or just curious about the inner workings of LLMs, this tool is for you!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ‘ฉโ€๐Ÿ’ป Tech Stack:", "raw": "๐Ÿ‘ฉโ€๐Ÿ’ป Tech Stack:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Python", "raw": "- Python", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Gradio for the web interface", "raw": "- Gradio for the web interface", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Hugging Face Transformers for tokenization", "raw": "- Hugging Face Transformers for tokenization", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The application is deployed in Hugging Face spaces as Gradio application", "raw": "The application is deployed in Hugging Face spaces as Gradio application", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ”— Try it out: ", "raw": "๐Ÿ”— Try it out: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://lnkd.in/g6R5z9k2", "href": "https://lnkd.in/g6R5z9k2", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "#NLP #MachineLearning #AI #PythonDevelopment #OpenSource", "raw": "#NLP #MachineLearning #AI #PythonDevelopment #OpenSource", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
# Excited to Share: New LLM Tokenization - Convert Text to tokens and vice versa! ๐Ÿš€ I've just developed a powerful tool for anyone working with Language Models (LLMs) or diving into Natural Language Processing (NLP). ๐Ÿ” Introducing the LLM Tokenization - Convert Text to tokens and vice versa!! Key Features: - Convert text to tokens and token IDs - Reverse engineer: convert token IDs back to text - Support for popular models: LLama3 (Will add more models iteratively) - User-friendly Gradio interface for easy interaction Whether you're debugging your NLP pipeline, exploring how different models tokenize text, or just curious about the inner workings of LLMs, this tool is for you! ๐Ÿ‘ฉโ€๐Ÿ’ป Tech Stack: - Python - Gradio for the web interface - Hugging Face Transformers for tokenization The application is deployed in Hugging Face spaces as Gradio application ๐Ÿ”— Try it out: https://lnkd.in/g6R5z9k2 #NLP #MachineLearning #AI #PythonDevelopment #OpenSource
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5f6ddf835e78cc6b0ed31e5d%2FLf6aTuebYrSBXEDE4q4to.jpeg", "fullname": "Prasanna Kumar V", "name": "vpkprasanna", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 5, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-03T06:19:33.000Z
2024-09-03T06:20:17.275Z
[]
/posts/vpkprasanna/622098077829042
498
0
281598302766823
[ { "type": "text", "value": "I started training a public LoRA style (2 seperate training each on 4x A6000).", "raw": "I started training a public LoRA style (2 seperate training each on 4x A6000).", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Experimenting captions vs non-captions. So we will see which yields best results for style training on FLUX.", "raw": "Experimenting captions vs non-captions. So we will see which yields best results for style training on FLUX.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Generated captions with multi-GPU batch Joycaption app.", "raw": "Generated captions with multi-GPU batch Joycaption app.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I am showing 5 examples of what Joycaption generates on FLUX dev. Left images are the original style images from the dataset.", "raw": "I am showing 5 examples of what Joycaption generates on FLUX dev. Left images are the original style images from the dataset.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I used my multi-GPU Joycaption APP (used 8x A6000 for ultra fast captioning) : ", "raw": "I used my multi-GPU Joycaption APP (used 8x A6000 for ultra fast captioning) : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.patreon.com/posts/110613301", "href": "https://www.patreon.com/posts/110613301", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I used my Gradio batch caption editor to edit some words and add activation token as ohwx 3d render : ", "raw": "I used my Gradio batch caption editor to edit some words and add activation token as ohwx 3d render : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.patreon.com/posts/108992085", "href": "https://www.patreon.com/posts/108992085", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The no caption dataset uses only ohwx 3d render as caption", "raw": "The no caption dataset uses only ohwx 3d render as caption", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json on 4X A6000 GPU and train 500 epochs โ€” 114 images : ", "raw": "I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json on 4X A6000 GPU and train 500 epochs โ€” 114 images : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.patreon.com/posts/110879657", "href": "https://www.patreon.com/posts/110879657", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Total step count is being 500 * 114 / 4 (4x GPU โ€” batch size 1) = 14250", "raw": "Total step count is being 500 * 114 / 4 (4x GPU โ€” batch size 1) = 14250", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Taking 37 hours currently if I donโ€™t terminate early", "raw": "Taking 37 hours currently if I donโ€™t terminate early", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Will save a checkpoint once every 25 epochs", "raw": "Will save a checkpoint once every 25 epochs", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Full Windows Kohya LoRA training tutorial : ", "raw": "Full Windows Kohya LoRA training tutorial : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://youtu.be/nySGu12Y05k", "href": "https://youtu.be/nySGu12Y05k", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Full cloud tutorial I am still editing", "raw": "Full cloud tutorial I am still editing", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Hopefully will share trained LoRA on Hugging Face and CivitAI along with full dataset including captions.", "raw": "Hopefully will share trained LoRA on Hugging Face and CivitAI along with full dataset including captions.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I got permission to share dataset but canโ€™t be used commercially.", "raw": "I got permission to share dataset but canโ€™t be used commercially.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Also I will hopefully share full workflow in the CivitAI and Hugging Face LoRA pages.", "raw": "Also I will hopefully share full workflow in the CivitAI and Hugging Face LoRA pages.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
I started training a public LoRA style (2 seperate training each on 4x A6000). Experimenting captions vs non-captions. So we will see which yields best results for style training on FLUX. Generated captions with multi-GPU batch Joycaption app. I am showing 5 examples of what Joycaption generates on FLUX dev. Left images are the original style images from the dataset. I used my multi-GPU Joycaption APP (used 8x A6000 for ultra fast captioning) : https://www.patreon.com/posts/110613301 I used my Gradio batch caption editor to edit some words and add activation token as ohwx 3d render : https://www.patreon.com/posts/108992085 The no caption dataset uses only ohwx 3d render as caption I am using my newest 4x_GPU_Rank_1_SLOW_Better_Quality.json on 4X A6000 GPU and train 500 epochs โ€” 114 images : https://www.patreon.com/posts/110879657 Total step count is being 500 * 114 / 4 (4x GPU โ€” batch size 1) = 14250 Taking 37 hours currently if I donโ€™t terminate early Will save a checkpoint once every 25 epochs Full Windows Kohya LoRA training tutorial : https://youtu.be/nySGu12Y05k Full cloud tutorial I am still editing Hopefully will share trained LoRA on Hugging Face and CivitAI along with full dataset including captions. I got permission to share dataset but canโ€™t be used commercially. Also I will hopefully share full workflow in the CivitAI and Hugging Face LoRA pages.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1672531901326-6345bd89fe134dfd7a0dba40.png", "fullname": "Furkan Gรถzรผkara", "name": "MonsterMMORPG", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 376, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2FuzDY7XcoU-5y-ObSoCLoN.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2FvW1OhzwcMn6gglsKc5XDp.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2Fidjp8LDSEHFhZ6PZCM7qd.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2Fd2kopHdFjRxmBYDCMr_17.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2F3flDh6GgZ0DvCPA49f_sI.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2FRW3IN9dJwqURTnwIII7T5.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2Fp2VR9MYj0zUj21J8Ut4Ez.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2Fbv4ALmepdH4Rsf87xTkmI.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2FavgizrEGzfrO8tjkNQxcj.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2F9Jjd-_y8Q6WwU50Wpds0p.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6345bd89fe134dfd7a0dba40%2FL1ghkrFru08rn3wJcU9HY.png" } ]
[]
[ { "reaction": "๐Ÿš€", "users": [ "MonsterMMORPG", "ajibawa-2023", "xziayro" ], "count": 3 }, { "reaction": "โž•", "users": [ "MonsterMMORPG", "Triangalogin", "pduf" ], "count": 3 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "MonsterMMORPG" ], "count": 2 }, { "reaction": "๐Ÿค—", "users": [ "MonsterMMORPG", "whiplashG" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "MonsterMMORPG", "erblicken" ], "count": 2 }, { "reaction": "๐Ÿ”ฅ", "users": [ "MonsterMMORPG" ], "count": 1 }, { "reaction": "โค๏ธ", "users": [ "MonsterMMORPG" ], "count": 1 }, { "reaction": "๐Ÿง ", "users": [ "MonsterMMORPG" ], "count": 1 }, { "reaction": "๐Ÿ˜Ž", "users": [ "MonsterMMORPG" ], "count": 1 }, { "reaction": "๐Ÿค", "users": [ "MonsterMMORPG" ], "count": 1 }, { "reaction": "๐Ÿคฏ", "users": [ "MonsterMMORPG" ], "count": 1 } ]
2024-09-02T23:09:39.000Z
2024-09-03T00:19:10.257Z
[]
/posts/MonsterMMORPG/281598302766823
2,393
0
810635856263958
[ { "type": "text", "value": "Spent a few minutes to build an alternative to Character AI on top of llama3.1 405B through SambaNova's super fast inference API ", "raw": "Spent a few minutes to build an alternative to Character AI on top of llama3.1 405B through SambaNova's super fast inference API ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Space: ", "raw": "Space: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/kz919/Persona-AI", "href": null, "resource": { "type": "space", "id": "kz919/Persona-AI", "discussionNum": null }, "url": "https://huggingface.co/spaces/kz919/Persona-AI", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "API referral link: ", "raw": "API referral link: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://sambanova.ai/fast-api?api_ref=907266", "href": "https://sambanova.ai/fast-api?api_ref=907266", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Spent a few minutes to build an alternative to Character AI on top of llama3.1 405B through SambaNova's super fast inference API Space: https://huggingface.co/spaces/kz919/Persona-AI API referral link: https://sambanova.ai/fast-api?api_ref=907266
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2FFTiirwS_L6IaLHmHwIo2g.png", "fullname": "Kaizhao Liang", "name": "kz919", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 34, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2F-sjYE0eR_9QmmXmV7Nzhy.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2FKjuAy-QnfL_R8TbjEByo0.png" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "kz919", "zolicsaki", "deki" ], "count": 3 }, { "reaction": "๐Ÿ˜Ž", "users": [ "kz919", "John6666", "zolicsaki" ], "count": 3 }, { "reaction": "๐Ÿš€", "users": [ "kz919", "zolicsaki" ], "count": 2 }, { "reaction": "๐Ÿค—", "users": [ "kz919", "zolicsaki" ], "count": 2 }, { "reaction": "๐Ÿคฏ", "users": [ "kz919", "zolicsaki" ], "count": 2 }, { "reaction": "๐Ÿง ", "users": [ "kz919", "zolicsaki" ], "count": 2 } ]
2024-09-02T21:33:14.000Z
2024-09-03T15:00:46.298Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F66c75fe82c2207bb1732c672%2FX_a8y4ZrSAQEylKpERMFL.jpeg", "fullname": "Scott Cawthon", "name": "Opa-Opa", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false } ]
/posts/kz919/810635856263958
1,584
3
887755095475831
[ { "type": "text", "value": "ML people on a long flight", "raw": "ML people on a long flight", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "(See picture)", "raw": "(See picture)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
ML people on a long flight (See picture)
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1657144463525-629a173153a72d997d3f57d0.jpeg", "fullname": "Santiago Viquez", "name": "santiviquez", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 84, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F629a173153a72d997d3f57d0%2FQgnsrMxm4_79msO6PHhtG.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-09-02T21:24:44.000Z
2024-11-06T22:29:34.200Z
[ { "avatarUrl": "/avatars/744eddaa7dfc34a57df9ce32a78059a0.svg", "fullname": "Tyrone Pierce", "name": "piercyy", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3, "isFollowing": false } ]
/posts/santiviquez/887755095475831
426
1
332713316648258
[ { "type": "text", "value": "๐Ÿ™‹๐Ÿปโ€โ™‚๏ธhey there folks ,", "raw": "๐Ÿ™‹๐Ÿปโ€โ™‚๏ธhey there folks ,", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โœ’๏ธInkubaLM has been trained from scratch using 1.9 billion tokens of data for five African languages, along with English and French data, totaling 2.4 billion tokens of data. It is capable of understanding and generating content in five African languages: Swahili, Yoruba, Hausa, isiZulu, and isiXhosa, as well as English and French.", "raw": "โœ’๏ธInkubaLM has been trained from scratch using 1.9 billion tokens of data for five African languages, along with English and French data, totaling 2.4 billion tokens of data. It is capable of understanding and generating content in five African languages: Swahili, Yoruba, Hausa, isiZulu, and isiXhosa, as well as English and French.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "model ", "raw": "model ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/lelapa/InkubaLM-0.4B", "href": null, "resource": { "type": "model", "id": "lelapa/InkubaLM-0.4B", "discussionNum": null }, "url": "https://huggingface.co/lelapa/InkubaLM-0.4B", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "demo ", "raw": "demo ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/Tonic/Inkuba-0.4B", "href": null, "resource": { "type": "space", "id": "Tonic/Inkuba-0.4B", "discussionNum": null }, "url": "https://huggingface.co/spaces/Tonic/Inkuba-0.4B", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿ™‹๐Ÿปโ€โ™‚๏ธhey there folks , โœ’๏ธInkubaLM has been trained from scratch using 1.9 billion tokens of data for five African languages, along with English and French data, totaling 2.4 billion tokens of data. It is capable of understanding and generating content in five African languages: Swahili, Yoruba, Hausa, isiZulu, and isiXhosa, as well as English and French. model https://huggingface.co/lelapa/InkubaLM-0.4B demo https://huggingface.co/spaces/Tonic/Inkuba-0.4B
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62a3bb1cd0d8c2c2169f0b88%2FeT2TS0IlQbZtz-F_zHLz9.jpeg", "fullname": "Joseph [open/acc] Pollack", "name": "Tonic", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 313, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿš€", "users": [ "monsoon-nlp", "AtAndDev", "KingNish", "louisbrulenaudet", "d0rj", "Moio" ], "count": 6 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "AtAndDev", "afrideva", "osanseviero" ], "count": 4 }, { "reaction": "๐Ÿค—", "users": [ "ijohn07" ], "count": 1 } ]
2024-09-02T20:29:07.000Z
2024-09-02T20:29:07.386Z
[]
/posts/Tonic/332713316648258
2,524
0
675665165365717
[ { "type": "text", "value": "Do you know how PCA and SVD are related?", "raw": "Do you know how PCA and SVD are related?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I explained it for everyone in this post!", "raw": "I explained it for everyone in this post!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Go and check it out: ", "raw": "Go and check it out: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/alexbodner_/status/1798357519678718062?s=46", "href": "https://x.com/alexbodner_/status/1798357519678718062?s=46", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Do you know how PCA and SVD are related? I explained it for everyone in this post! Go and check it out: https://x.com/alexbodner_/status/1798357519678718062?s=46
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F658880d499ed106ac888dd7a%2FwMv9-ZsJUw4QQnld_cci7.jpeg", "fullname": "Alexander Dylan Bodner", "name": "AlexBodner", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 28, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "andito" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "TDL123" ], "count": 1 } ]
2024-09-02T18:06:33.000Z
2024-09-02T18:06:33.192Z
[]
/posts/AlexBodner/675665165365717
1,269
0
389301188834529
[ { "type": "text", "value": "Hey everyone ๐Ÿค—!", "raw": "Hey everyone ๐Ÿค—!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check out this awesome new model for object segmentation!", "raw": "Check out this awesome new model for object segmentation!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/finegrain/finegrain-object-cutter", "href": null, "resource": { "type": "space", "id": "finegrain/finegrain-object-cutter", "discussionNum": null }, "url": "https://huggingface.co/spaces/finegrain/finegrain-object-cutter", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": ".", "raw": ".", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We (finegrain) have trained this new model in partnership with Nfinite and some of their synthetic data, the resulting model is incredibly accurate ๐Ÿš€.", "raw": "We (finegrain) have trained this new model in partnership with Nfinite and some of their synthetic data, the resulting model is incredibly accurate ๐Ÿš€.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Itโ€™s all open source under the MIT license (", "raw": "Itโ€™s all open source under the MIT license (", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/finegrain/finegrain-box-segmenter", "href": null, "resource": { "type": "model", "id": "finegrain/finegrain-box-segmenter", "discussionNum": null }, "url": "https://huggingface.co/finegrain/finegrain-box-segmenter", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "), complete with a test set tailored for e-commerce (", "raw": "), complete with a test set tailored for e-commerce (", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/finegrain/finegrain-product-masks-lite", "href": null, "resource": { "type": "dataset", "id": "finegrain/finegrain-product-masks-lite", "discussionNum": null }, "url": "https://huggingface.co/datasets/finegrain/finegrain-product-masks-lite", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "). Have fun experimenting with it!", "raw": "). Have fun experimenting with it!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Hey everyone ๐Ÿค—! Check out this awesome new model for object segmentation! https://huggingface.co/spaces/finegrain/finegrain-object-cutter. We (finegrain) have trained this new model in partnership with Nfinite and some of their synthetic data, the resulting model is incredibly accurate ๐Ÿš€. Itโ€™s all open source under the MIT license (https://huggingface.co/finegrain/finegrain-box-segmenter), complete with a test set tailored for e-commerce (https://huggingface.co/datasets/finegrain/finegrain-product-masks-lite). Have fun experimenting with it!
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1669043420538-6364f1784f773b7e4cede70c.jpeg", "fullname": "Laureฮทt Fainsin", "name": "1aurent", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 80, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6364f1784f773b7e4cede70c%2FJMY0ulmDOCo5-gaEBNspI.mp4" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "limiteinductive", "Sri-Vigneshwar-DJ", "John6666", "fdaudens", "TDL123", "1aurent", "deltheil", "piercus", "Mefistofele", "osanseviero", "vincentweisser", "victor", "dsmonk" ], "count": 13 }, { "reaction": "๐Ÿค—", "users": [ "liuxiao1037", "louisbrulenaudet" ], "count": 2 }, { "reaction": "๐Ÿ‘", "users": [ "MohammedEltoum", "Norod78" ], "count": 2 } ]
2024-09-02T15:30:18.000Z
2024-09-02T15:30:18.329Z
[]
/posts/1aurent/389301188834529
4,351
0
102743494418226
[ { "type": "text", "value": "๐Ÿค– ๐—ง๐—ต๐—ฒ ๐—”๐—œ ๐—ฆ๐—ฐ๐—ถ๐—ฒ๐—ป๐˜๐—ถ๐˜€๐˜: ๐—”๐—ด๐—ฒ๐—ป๐˜๐—ถ๐—ฐ, ๐—ณ๐˜‚๐—น๐—น๐˜†-๐—ฎ๐˜‚๐˜๐—ผ๐—บ๐—ฎ๐˜๐—ฒ๐—ฑ ๐—ฟ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฝ๐—ถ๐—ฝ๐—ฒ๐—น๐—ถ๐—ป๐—ฒ ๐—ณ๐—ผ๐—ฟ ๐˜‚๐—ป๐—ฑ๐—ฒ๐—ฟ $๐Ÿญ๐Ÿฑ ๐—ฝ๐—ฒ๐—ฟ ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ", "raw": "๐Ÿค– ๐—ง๐—ต๐—ฒ ๐—”๐—œ ๐—ฆ๐—ฐ๐—ถ๐—ฒ๐—ป๐˜๐—ถ๐˜€๐˜: ๐—”๐—ด๐—ฒ๐—ป๐˜๐—ถ๐—ฐ, ๐—ณ๐˜‚๐—น๐—น๐˜†-๐—ฎ๐˜‚๐˜๐—ผ๐—บ๐—ฎ๐˜๐—ฒ๐—ฑ ๐—ฟ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฝ๐—ถ๐—ฝ๐—ฒ๐—น๐—ถ๐—ป๐—ฒ ๐—ณ๐—ผ๐—ฟ ๐˜‚๐—ป๐—ฑ๐—ฒ๐—ฟ $๐Ÿญ๐Ÿฑ ๐—ฝ๐—ฒ๐—ฟ ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Researchers have just created an AI system that ๐—ฐ๐—ฎ๐—ป ๐—ฐ๐—ผ๐—ป๐—ฑ๐˜‚๐—ฐ๐˜ ๐—ฒ๐—ป๐˜๐—ถ๐—ฟ๐—ฒ ๐—ฟ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฝ๐—ฟ๐—ผ๐—ท๐—ฒ๐—ฐ๐˜๐˜€ ๐—ณ๐—ฟ๐—ผ๐—บ ๐˜€๐˜๐—ฎ๐—ฟ๐˜ ๐˜๐—ผ ๐—ณ๐—ถ๐—ป๐—ถ๐˜€๐—ต, ๐—ฝ๐—ผ๐˜๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น๐—น๐˜† ๐—ฟ๐—ฒ๐˜ƒ๐—ผ๐—น๐˜‚๐˜๐—ถ๐—ผ๐—ป๐—ถ๐˜‡๐—ถ๐—ป๐—ด ๐—ต๐—ผ๐˜„ ๐˜€๐—ฐ๐—ถ๐—ฒ๐—ป๐˜๐—ถ๐—ณ๐—ถ๐—ฐ ๐—ฑ๐—ถ๐˜€๐—ฐ๐—ผ๐˜ƒ๐—ฒ๐—ฟ๐—ถ๐—ฒ๐˜€ ๐—ฎ๐—ฟ๐—ฒ ๐—บ๐—ฎ๐—ฑ๐—ฒ.", "raw": "Researchers have just created an AI system that ๐—ฐ๐—ฎ๐—ป ๐—ฐ๐—ผ๐—ป๐—ฑ๐˜‚๐—ฐ๐˜ ๐—ฒ๐—ป๐˜๐—ถ๐—ฟ๐—ฒ ๐—ฟ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฝ๐—ฟ๐—ผ๐—ท๐—ฒ๐—ฐ๐˜๐˜€ ๐—ณ๐—ฟ๐—ผ๐—บ ๐˜€๐˜๐—ฎ๐—ฟ๐˜ ๐˜๐—ผ ๐—ณ๐—ถ๐—ป๐—ถ๐˜€๐—ต, ๐—ฝ๐—ผ๐˜๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น๐—น๐˜† ๐—ฟ๐—ฒ๐˜ƒ๐—ผ๐—น๐˜‚๐˜๐—ถ๐—ผ๐—ป๐—ถ๐˜‡๐—ถ๐—ป๐—ด ๐—ต๐—ผ๐˜„ ๐˜€๐—ฐ๐—ถ๐—ฒ๐—ป๐˜๐—ถ๐—ณ๐—ถ๐—ฐ ๐—ฑ๐—ถ๐˜€๐—ฐ๐—ผ๐˜ƒ๐—ฒ๐—ฟ๐—ถ๐—ฒ๐˜€ ๐—ฎ๐—ฟ๐—ฒ ๐—บ๐—ฎ๐—ฑ๐—ฒ.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "It doesn't just assist with specific tasks - it automates the entire research process, from generating ideas to writing and reviewing papers.", "raw": "It doesn't just assist with specific tasks - it automates the entire research process, from generating ideas to writing and reviewing papers.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "1 - brainstorm novel research directions, 2- write and execute code for experiments & visualize results, get references, and even 3- write up findings in a full academic paper format!", "raw": "1 - brainstorm novel research directions, 2- write and execute code for experiments & visualize results, get references, and even 3- write up findings in a full academic paper format!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "And it can do all this for under $15 per paper! ๐Ÿคฏ", "raw": "And it can do all this for under $15 per paper! ๐Ÿคฏ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:", "raw": "๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿง  Generates novel research ideas across multiple topics (e.g. diffusion modeling, transformers, learning dynamics aka โ€œgrokkingโ€)", "raw": "๐Ÿง  Generates novel research ideas across multiple topics (e.g. diffusion modeling, transformers, learning dynamics aka โ€œgrokkingโ€)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ‘จโ€๐Ÿ’ป Uses open-source coding assistant Aider to implement ideas and run experiments. This is especially important since this agentic assistant can iterate if it fails somewhere.", "raw": "๐Ÿ‘จโ€๐Ÿ’ป Uses open-source coding assistant Aider to implement ideas and run experiments. This is especially important since this agentic assistant can iterate if it fails somewhere.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ“Š Visualizes results and plans follow-up experiments (up to 5 rounds)", "raw": "๐Ÿ“Š Visualizes results and plans follow-up experiments (up to 5 rounds)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โœ๏ธ Writes full academic papers, including finding references using Semantic Search API", "raw": "โœ๏ธ Writes full academic papers, including finding references using Semantic Search API", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ•ต๏ธ Runs a simulated peer review process to evaluate paper quality", "raw": "๐Ÿ•ต๏ธ Runs a simulated peer review process to evaluate paper quality", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ’ฐ Total cost per paper is under $15. This system can generate \"hundreds of interesting, medium-quality papers\" in just a week !", "raw": "๐Ÿ’ฐ Total cost per paper is under $15. This system can generate \"hundreds of interesting, medium-quality papers\" in just a week !", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐—ฆ๐˜๐—ถ๐—น๐—น ๐—ป๐—ผ๐˜ ๐—ฟ๐—ฒ๐—ฎ๐—ฑ๐˜† ๐˜๐—ผ ๐—ณ๐—ถ๐—น๐—น ๐—œ๐—–๐—Ÿ๐—ฅ ๐˜„๐—ถ๐˜๐—ต ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ๐˜€:", "raw": "๐—ฆ๐˜๐—ถ๐—น๐—น ๐—ป๐—ผ๐˜ ๐—ฟ๐—ฒ๐—ฎ๐—ฑ๐˜† ๐˜๐—ผ ๐—ณ๐—ถ๐—น๐—น ๐—œ๐—–๐—Ÿ๐—ฅ ๐˜„๐—ถ๐˜๐—ต ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ๐˜€:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ” Ideas generated in one domain tend to be repetitive across different runs, and even different language model", "raw": "๐Ÿ” Ideas generated in one domain tend to be repetitive across different runs, and even different language model", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ‘€ Does not use vision capabilities to fix visual issues in plots", "raw": "๐Ÿ‘€ Does not use vision capabilities to fix visual issues in plots", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ’ญ Models occasionally hallucinate entire results tables", "raw": "๐Ÿ’ญ Models occasionally hallucinate entire results tables", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "โ‡’ Only few of the generated papers would actually meet the threshold for acceptance at a top AI conference", "raw": "โ‡’ Only few of the generated papers would actually meet the threshold for acceptance at a top AI conference", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ‘‰ย Read their paper: ", "raw": "๐Ÿ‘‰ย Read their paper: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/papers/2408.06292", "href": null, "resource": { "type": "paper", "id": "2408.06292", "discussionNum": null }, "url": "https://huggingface.co/papers/2408.06292", "code": null, "user": null, "label": "The AI Scientist: Towards Fully Automated Open-Ended Scientific\n Discovery (2408.06292)", "lang": null } ]
๐Ÿค– ๐—ง๐—ต๐—ฒ ๐—”๐—œ ๐—ฆ๐—ฐ๐—ถ๐—ฒ๐—ป๐˜๐—ถ๐˜€๐˜: ๐—”๐—ด๐—ฒ๐—ป๐˜๐—ถ๐—ฐ, ๐—ณ๐˜‚๐—น๐—น๐˜†-๐—ฎ๐˜‚๐˜๐—ผ๐—บ๐—ฎ๐˜๐—ฒ๐—ฑ ๐—ฟ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฝ๐—ถ๐—ฝ๐—ฒ๐—น๐—ถ๐—ป๐—ฒ ๐—ณ๐—ผ๐—ฟ ๐˜‚๐—ป๐—ฑ๐—ฒ๐—ฟ $๐Ÿญ๐Ÿฑ ๐—ฝ๐—ฒ๐—ฟ ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ Researchers have just created an AI system that ๐—ฐ๐—ฎ๐—ป ๐—ฐ๐—ผ๐—ป๐—ฑ๐˜‚๐—ฐ๐˜ ๐—ฒ๐—ป๐˜๐—ถ๐—ฟ๐—ฒ ๐—ฟ๐—ฒ๐˜€๐—ฒ๐—ฎ๐—ฟ๐—ฐ๐—ต ๐—ฝ๐—ฟ๐—ผ๐—ท๐—ฒ๐—ฐ๐˜๐˜€ ๐—ณ๐—ฟ๐—ผ๐—บ ๐˜€๐˜๐—ฎ๐—ฟ๐˜ ๐˜๐—ผ ๐—ณ๐—ถ๐—ป๐—ถ๐˜€๐—ต, ๐—ฝ๐—ผ๐˜๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น๐—น๐˜† ๐—ฟ๐—ฒ๐˜ƒ๐—ผ๐—น๐˜‚๐˜๐—ถ๐—ผ๐—ป๐—ถ๐˜‡๐—ถ๐—ป๐—ด ๐—ต๐—ผ๐˜„ ๐˜€๐—ฐ๐—ถ๐—ฒ๐—ป๐˜๐—ถ๐—ณ๐—ถ๐—ฐ ๐—ฑ๐—ถ๐˜€๐—ฐ๐—ผ๐˜ƒ๐—ฒ๐—ฟ๐—ถ๐—ฒ๐˜€ ๐—ฎ๐—ฟ๐—ฒ ๐—บ๐—ฎ๐—ฑ๐—ฒ. It doesn't just assist with specific tasks - it automates the entire research process, from generating ideas to writing and reviewing papers. 1 - brainstorm novel research directions, 2- write and execute code for experiments & visualize results, get references, and even 3- write up findings in a full academic paper format! And it can do all this for under $15 per paper! ๐Ÿคฏ ๐—ž๐—ฒ๐˜† ๐—ถ๐—ป๐˜€๐—ถ๐—ด๐—ต๐˜๐˜€: ๐Ÿง  Generates novel research ideas across multiple topics (e.g. diffusion modeling, transformers, learning dynamics aka โ€œgrokkingโ€) ๐Ÿ‘จโ€๐Ÿ’ป Uses open-source coding assistant Aider to implement ideas and run experiments. This is especially important since this agentic assistant can iterate if it fails somewhere. ๐Ÿ“Š Visualizes results and plans follow-up experiments (up to 5 rounds) โœ๏ธ Writes full academic papers, including finding references using Semantic Search API ๐Ÿ•ต๏ธ Runs a simulated peer review process to evaluate paper quality ๐Ÿ’ฐ Total cost per paper is under $15. This system can generate "hundreds of interesting, medium-quality papers" in just a week ! ๐—ฆ๐˜๐—ถ๐—น๐—น ๐—ป๐—ผ๐˜ ๐—ฟ๐—ฒ๐—ฎ๐—ฑ๐˜† ๐˜๐—ผ ๐—ณ๐—ถ๐—น๐—น ๐—œ๐—–๐—Ÿ๐—ฅ ๐˜„๐—ถ๐˜๐—ต ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ๐˜€: ๐Ÿ” Ideas generated in one domain tend to be repetitive across different runs, and even different language model ๐Ÿ‘€ Does not use vision capabilities to fix visual issues in plots ๐Ÿ’ญ Models occasionally hallucinate entire results tables โ‡’ Only few of the generated papers would actually meet the threshold for acceptance at a top AI conference ๐Ÿ‘‰ย Read their paper: https://huggingface.co/papers/2408.06292
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2F7p7-OmWM6PqqCs7ZStPGD.jpeg", "fullname": "Aymeric Roucher", "name": "m-ric", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 494, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F63d10d4e8eaa4831005e92b5%2FGZenzPhe-mYVWefu3CUn-.png" } ]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "wsuff", "Sri-Vigneshwar-DJ", "John6666", "Bruhn", "sugatoray", "KingNish", "louisbrulenaudet", "toshihikochen", "alielfilali01" ], "count": 9 }, { "reaction": "๐Ÿ‘€", "users": [ "Svngoku" ], "count": 1 }, { "reaction": "๐Ÿš€", "users": [ "Csplk" ], "count": 1 } ]
2024-09-02T15:22:25.000Z
2024-09-02T15:22:25.282Z
[]
/posts/m-ric/102743494418226
2,210
0
512165858999722
[ { "type": "text", "value": "Plugins in NiansuhAI", "raw": "Plugins in NiansuhAI", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Plugin Names:", "raw": "Plugin Names:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "1. WebSearch: Searches the web using search engines.", "raw": "1. WebSearch: Searches the web using search engines.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "2. Calculator: Evaluates mathematical expressions, extending the base Tool class.", "raw": "2. Calculator: Evaluates mathematical expressions, extending the base Tool class.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "3. WebBrowser: Extracts and summarizes information from web pages.", "raw": "3. WebBrowser: Extracts and summarizes information from web pages.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "4. Wikipedia: Retrieves information from Wikipedia using its API.", "raw": "4. Wikipedia: Retrieves information from Wikipedia using its API.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "5. Arxiv: Searches and fetches article information from Arxiv.", "raw": "5. Arxiv: Searches and fetches article information from Arxiv.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "6. WolframAlphaTool: Provides answers on math, science, technology, culture, society, and everyday life.", "raw": "6. WolframAlphaTool: Provides answers on math, science, technology, culture, society, and everyday life.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "These plugins currently support the GPT-4O-2024-08-06 model, which also supports image analysis.", "raw": "These plugins currently support the GPT-4O-2024-08-06 model, which also supports image analysis.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Try it now: ", "raw": "Try it now: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/spaces/NiansuhAI/chat", "href": "https://huggingface.co/spaces/NiansuhAI/chat", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Similar to: ", "raw": "Similar to: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://hf.co/chat", "href": "https://hf.co/chat", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Plugins in NiansuhAI Plugin Names: 1. WebSearch: Searches the web using search engines. 2. Calculator: Evaluates mathematical expressions, extending the base Tool class. 3. WebBrowser: Extracts and summarizes information from web pages. 4. Wikipedia: Retrieves information from Wikipedia using its API. 5. Arxiv: Searches and fetches article information from Arxiv. 6. WolframAlphaTool: Provides answers on math, science, technology, culture, society, and everyday life. These plugins currently support the GPT-4O-2024-08-06 model, which also supports image analysis. Try it now: https://huggingface.co/spaces/NiansuhAI/chat Similar to: https://hf.co/chat
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F64cba00d710645aa7b04f281%2Fa_-LPwd4wqRyi8sJ1QxjI.jpeg", "fullname": "Husnain", "name": "Niansuh", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 64, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "John6666", "TheDrunkenSnail", "leeloolee", "Sri-Vigneshwar-DJ", "Joseph717171", "Niansuh" ], "count": 6 }, { "reaction": "๐Ÿš€", "users": [ "Niansuh", "John6666", "Joseph717171" ], "count": 3 } ]
2024-09-02T12:57:09.000Z
2024-09-02T13:03:34.993Z
[]
/posts/Niansuh/512165858999722
2,414
0
680631748831020
[ { "type": "text", "value": "๐Ÿคฉ Amazing day. AWPortrait-FL finally here!", "raw": "๐Ÿคฉ Amazing day. AWPortrait-FL finally here!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿฆ– AWPortrait-FL is finetuned on FLUX.1-dev using the training set of AWPortrait-XL and nearly 2,000 fashion photography photos with extremely high aesthetic quality. ", "raw": "๐Ÿฆ– AWPortrait-FL is finetuned on FLUX.1-dev using the training set of AWPortrait-XL and nearly 2,000 fashion photography photos with extremely high aesthetic quality. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค—Model: ", "raw": "๐Ÿค—Model: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/Shakker-Labs/AWPortrait-FL", "href": null, "resource": { "type": "model", "id": "Shakker-Labs/AWPortrait-FL", "discussionNum": null }, "url": "https://huggingface.co/Shakker-Labs/AWPortrait-FL", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ™‡Demo: ", "raw": "๐Ÿ™‡Demo: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/vilarin/flux-labs", "href": null, "resource": { "type": "space", "id": "vilarin/flux-labs", "discussionNum": null }, "url": "https://huggingface.co/spaces/vilarin/flux-labs", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿคฉ Amazing day. AWPortrait-FL finally here! ๐Ÿฆ– AWPortrait-FL is finetuned on FLUX.1-dev using the training set of AWPortrait-XL and nearly 2,000 fashion photography photos with extremely high aesthetic quality. ๐Ÿค—Model: https://huggingface.co/Shakker-Labs/AWPortrait-FL ๐Ÿ™‡Demo: https://huggingface.co/spaces/vilarin/flux-labs
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F642827944fe87caede802784%2Fa7s3Ub9Cy6-PuuaX8wwXm.png", "fullname": "VILARIN", "name": "vilarin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 67, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2F8ZkmV-C5Uc-4U8N41_HmC.webp" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2FGUyhoP12XQZ-DqK5zXW5Y.webp" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2FMCP4knqBEFyKldRKShT9H.webp" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2FLUGWK_jOjGP8ngXg97Ee2.webp" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2Fpxq-PgFj2eYUPYic2e3N-.webp" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2F9lgsu9DkYO0LcqVv3pBPj.webp" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2F6Ldc5VkkAexuFcK-67Od3.webp" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "orrinin", "samapika", "dillfrescott", "John6666", "wanghaofan", "ijohn07", "Bruhn", "ajibawa-2023", "gshreyash", "tanfar", "lunarflu", "louisbrulenaudet", "ShakkerAi-Labs", "linoyts", "Despina", "KingNish", "victor", "sasikiran", "Pranavan", "nbroad", "Sri-Vigneshwar-DJ", "privategeek24", "traltyaziking", "TDL123", "koochikoo25", "Taylor658", "AtAndDev", "huangy1", "tayyabmehar27", "Mefistofele", "ibrahim313", "awplanet" ], "count": 32 }, { "reaction": "๐Ÿคฏ", "users": [ "ibrahim313" ], "count": 1 }, { "reaction": "๐Ÿ”ฅ", "users": [ "ibrahim313" ], "count": 1 }, { "reaction": "๐Ÿš€", "users": [ "ibrahim313" ], "count": 1 } ]
2024-09-01T13:21:19.000Z
2024-09-05T16:11:33.300Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F64282d3deb2891d3746a1f1e%2FV7xBCMfcShiMTjjJYaJBv.png", "fullname": "orrin", "name": "orrinin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F66d47c04b2302a63f24f1253%2FqWm-a6vYAmJgrhQ8kvner.jpeg", "fullname": "Samapika Priyadarshini", "name": "samapika", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F64aea8ff67511bd3d965697b%2FJxn52EmDF5RApJh8antxn.jpeg", "fullname": "Feynman Innovations", "name": "ajibawa-2023", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 138, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6340651b388c3fa40f9a5bc0%2Fav1C4_S7bHGxAzOu8lOmG.jpeg", "fullname": "Adam Molnar", "name": "lunarflu", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 333, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63a567cdce5763e06f7af435%2F6E6ijsMOl9ys__Aznx4Si.jpeg", "fullname": "DynamicWang", "name": "awplanet", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 35, "isFollowing": false }, { "avatarUrl": "/avatars/f3839f73cd47dff15be3bdb0dbd3d50d.svg", "fullname": "001Anas", "name": "Mohammad121", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/vilarin/680631748831020
6,012
6
922470981780593
[ { "type": "text", "value": "Understanding the json format response with HF's Serverless Inference API ๐Ÿค—", "raw": "Understanding the json format response with HF's Serverless Inference API ๐Ÿค—", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "As it stands, there seems to be an inconsistency with the OpenAI documentation on the question of implementing the JSON response format using the InferenceClient completion API.", "raw": "As it stands, there seems to be an inconsistency with the OpenAI documentation on the question of implementing the JSON response format using the InferenceClient completion API.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "After investigating the InferenceClient source code, I share the official solution using a JSON Schema. This consolidates the structure of the response and simplifies parsing as part of an automated process for extracting metadata, information:", "raw": "After investigating the InferenceClient source code, I share the official solution using a JSON Schema. This consolidates the structure of the response and simplifies parsing as part of an automated process for extracting metadata, information:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "code_fence", "value": null, "raw": "```python\nfrom huggingface_hub import InferenceClient\n\nclient = InferenceClient(\"meta-llama/Meta-Llama-3-70B-Instruct\")\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": \"I saw a puppy a cat and a raccoon during my bike ride in the park. What did I saw and when?\",\n },\n]\n\nresponse_format = {\n \"type\": \"json\",\n \"value\": {\n \"properties\": {\n \"location\": {\"type\": \"string\"},\n \"activity\": {\"type\": \"string\"},\n \"animals_seen\": {\"type\": \"integer\", \"minimum\": 1, \"maximum\": 5},\n \"animals\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}},\n },\n \"required\": [\"location\", \"activity\", \"animals_seen\", \"animals\"],\n },\n}\n\nresponse = client.chat_completion(\n messages=messages,\n response_format=response_format,\n max_tokens=500,\n)\n\nprint(response.choices[0].message.content)\n```", "href": null, "resource": null, "url": null, "code": "from huggingface_hub import InferenceClient\n\nclient = InferenceClient(\"meta-llama/Meta-Llama-3-70B-Instruct\")\n\nmessages = [\n {\n \"role\": \"user\",\n \"content\": \"I saw a puppy a cat and a raccoon during my bike ride in the park. What did I saw and when?\",\n },\n]\n\nresponse_format = {\n \"type\": \"json\",\n \"value\": {\n \"properties\": {\n \"location\": {\"type\": \"string\"},\n \"activity\": {\"type\": \"string\"},\n \"animals_seen\": {\"type\": \"integer\", \"minimum\": 1, \"maximum\": 5},\n \"animals\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}},\n },\n \"required\": [\"location\", \"activity\", \"animals_seen\", \"animals\"],\n },\n}\n\nresponse = client.chat_completion(\n messages=messages,\n response_format=response_format,\n max_tokens=500,\n)\n\nprint(response.choices[0].message.content)", "user": null, "label": null, "lang": "python" }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "As a reminder, json mode is activated with the OpenAI client as follows:", "raw": "As a reminder, json mode is activated with the OpenAI client as follows:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "code_fence", "value": null, "raw": "```python\nresponse = client.chat.completions.create(\n model=\"gpt-3.5-turbo-0125\",\n messages=[...],\n response_format={\"type\": \"json_object\"}\n)\n```", "href": null, "resource": null, "url": null, "code": "response = client.chat.completions.create(\n model=\"gpt-3.5-turbo-0125\",\n messages=[...],\n response_format={\"type\": \"json_object\"}\n)", "user": null, "label": null, "lang": "python" }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "One question remains unanswered, however, and will perhaps be answered by the community: it seems that an incompatibility persists for list of dictionaries generation, and currently, the production of simple dictionaries seems to be the only functional option.", "raw": "One question remains unanswered, however, and will perhaps be answered by the community: it seems that an incompatibility persists for list of dictionaries generation, and currently, the production of simple dictionaries seems to be the only functional option.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Understanding the json format response with HF's Serverless Inference API ๐Ÿค— As it stands, there seems to be an inconsistency with the OpenAI documentation on the question of implementing the JSON response format using the InferenceClient completion API. After investigating the InferenceClient source code, I share the official solution using a JSON Schema. This consolidates the structure of the response and simplifies parsing as part of an automated process for extracting metadata, information: ```python from huggingface_hub import InferenceClient client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct") messages = [ { "role": "user", "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I saw and when?", }, ] response_format = { "type": "json", "value": { "properties": { "location": {"type": "string"}, "activity": {"type": "string"}, "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5}, "animals": {"type": "array", "items": {"type": "string"}}, }, "required": ["location", "activity", "animals_seen", "animals"], }, } response = client.chat_completion( messages=messages, response_format=response_format, max_tokens=500, ) print(response.choices[0].message.content) ``` As a reminder, json mode is activated with the OpenAI client as follows: ```python response = client.chat.completions.create( model="gpt-3.5-turbo-0125", messages=[...], response_format={"type": "json_object"} ) ``` One question remains unanswered, however, and will perhaps be answered by the community: it seems that an incompatibility persists for list of dictionaries generation, and currently, the production of simple dictionaries seems to be the only functional option.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6459fa0f5b3111fbe83286e1%2FUhCa7JNbtTjC6dgOjZtH0.jpeg", "fullname": "Louis Brulรฉ Naudet", "name": "louisbrulenaudet", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 174, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "victor", "wsuff", "rreed-pha", "osanseviero" ], "count": 5 } ]
2024-09-01T12:11:31.000Z
2024-09-02T12:21:08.618Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5f17f0a0925b9863e28ad517%2FX7QKoiXbUtEZSG9jyvfk3.jpeg", "fullname": "Victor Mustar", "name": "victor", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 2607, "isFollowing": false } ]
/posts/louisbrulenaudet/922470981780593
1,871
1
147425380710766
[ { "type": "text", "value": "I am training a controlnet model for Flux. And some of my experiences:", "raw": "I am training a controlnet model for Flux. And some of my experiences:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Checkpoint-10000:", "raw": "Checkpoint-10000:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/kadirnar_ai/status/1829831750471606668", "href": "https://x.com/kadirnar_ai/status/1829831750471606668", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Checkpoint-12000:", "raw": "Checkpoint-12000:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/kadirnar_ai/status/1829889524962640001", "href": "https://x.com/kadirnar_ai/status/1829889524962640001", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Checkpoint-14000:", "raw": "Checkpoint-14000:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/kadirnar_ai/status/1829989622878744711", "href": "https://x.com/kadirnar_ai/status/1829989622878744711", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Checkpoint (16000-18000):", "raw": "Checkpoint (16000-18000):", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/kadirnar_ai/status/1830179551407665654", "href": "https://x.com/kadirnar_ai/status/1830179551407665654", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Dataset: ", "raw": "Dataset: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/kadirnar/fluxdev_controlnet_16k", "href": null, "resource": { "type": "dataset", "id": "kadirnar/fluxdev_controlnet_16k", "discussionNum": null }, "url": "https://huggingface.co/datasets/kadirnar/fluxdev_controlnet_16k", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "GPU: 1xA100(80GB)", "raw": "GPU: 1xA100(80GB)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "GPU Hours: 65 ", "raw": "GPU Hours: 65 ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
I am training a controlnet model for Flux. And some of my experiences: Checkpoint-10000: https://x.com/kadirnar_ai/status/1829831750471606668 Checkpoint-12000: https://x.com/kadirnar_ai/status/1829889524962640001 Checkpoint-14000: https://x.com/kadirnar_ai/status/1829989622878744711 Checkpoint (16000-18000): https://x.com/kadirnar_ai/status/1830179551407665654 Dataset: https://huggingface.co/datasets/kadirnar/fluxdev_controlnet_16k GPU: 1xA100(80GB) GPU Hours: 65
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1678181702571-619f7ba90df8731e0d8b6c54.jpeg", "fullname": "Kadir Nar", "name": "kadirnar", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 198, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F619f7ba90df8731e0d8b6c54%2FB5s69n0q8_HNlI6TRtlUI.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F619f7ba90df8731e0d8b6c54%2FvKCMZUw57mTfkMXQyJrjA.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F619f7ba90df8731e0d8b6c54%2F4DIOMrpRmxzXG7h8ZOKua.png" } ]
[]
[ { "reaction": "๐Ÿš€", "users": [ "tolgacangoz", "John6666", "gokaygokay", "Shinku", "AtAndDev", "xziayro", "Saugatkafley", "l3x13" ], "count": 8 }, { "reaction": "โค๏ธ", "users": [ "tolgacangoz", "louisbrulenaudet", "gokaygokay", "Pranavan", "AtAndDev", "xziayro", "osanseviero" ], "count": 7 }, { "reaction": "๐Ÿ”ฅ", "users": [ "tolgacangoz", "gokaygokay", "AtAndDev", "Sri-Vigneshwar-DJ" ], "count": 4 }, { "reaction": "๐Ÿ‘", "users": [ "jefinpaul", "Kazabra", "AtAndDev", "bomze" ], "count": 4 } ]
2024-09-01T09:55:15.000Z
2024-10-26T07:59:33.466Z
[ { "avatarUrl": "/avatars/bbdc1d48c816cb373013fb2d38501866.svg", "fullname": "ๆฒˆๆŸฏ", "name": "SKKK123", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/kadirnar/147425380710766
3,844
1
713480041248724
[ { "type": "text", "value": "Last Week in Medical AI: Top Research ", "raw": "Last Week in Medical AI: Top Research ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Papers/Models", "raw": "Papers/Models", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ… (August 25 - August 31, 2024)", "raw": "๐Ÿ… (August 25 - August 31, 2024)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- MultiMed: Multimodal Medical Benchmark", "raw": "- MultiMed: Multimodal Medical Benchmark", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- A Foundation model for generating chest X-ray images", "raw": "- A Foundation model for generating chest X-ray images", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- MEDSAGE: Medical Dialogue Summarization", "raw": "- MEDSAGE: Medical Dialogue Summarization", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Knowledge Graphs for Radiology Report Generation", "raw": "- Knowledge Graphs for Radiology Report Generation", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Exploring Multi-modal LLMs for Chest X-ray", "raw": "- Exploring Multi-modal LLMs for Chest X-ray", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Improving Clinical Note Generation", "raw": "- Improving Clinical Note Generation", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "...", "raw": "...", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check the full thread : ", "raw": "Check the full thread : ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://x.com/OpenlifesciAI/status/1829984701324448051", "href": "https://x.com/OpenlifesciAI/status/1829984701324448051", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Last Week in Medical AI: Top Research Papers/Models ๐Ÿ… (August 25 - August 31, 2024) - MultiMed: Multimodal Medical Benchmark - A Foundation model for generating chest X-ray images - MEDSAGE: Medical Dialogue Summarization - Knowledge Graphs for Radiology Report Generation - Exploring Multi-modal LLMs for Chest X-ray - Improving Clinical Note Generation ... Check the full thread : https://x.com/OpenlifesciAI/status/1829984701324448051
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F5f3fe13d79c1ba4c353d0c19%2FXswyGe3OtOdZ6g7rnrgfc.png", "fullname": "Aaditya Ura", "name": "aaditya", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 224, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F5f3fe13d79c1ba4c353d0c19%2FUU2oDlXjjqDPKToE74hxD.jpeg" } ]
[]
[ { "reaction": "โค๏ธ", "users": [ "aaditya", "jaebumskiyomi", "aiisthebest", "ai-everyday", "adityaSaligram", "dblasko", "victor", "JCDentonInTheFresh" ], "count": 8 }, { "reaction": "๐Ÿค—", "users": [ "aaditya", "John6666", "jaebumskiyomi", "aiisthebest", "JCDentonInTheFresh" ], "count": 5 }, { "reaction": "๐Ÿ”ฅ", "users": [ "aaditya", "charanhu", "JCDentonInTheFresh" ], "count": 3 }, { "reaction": "๐Ÿš€", "users": [ "aaditya", "JCDentonInTheFresh", "Taylor658" ], "count": 3 }, { "reaction": "๐Ÿ‘", "users": [ "aiisthebest" ], "count": 1 } ]
2024-08-31T21:42:43.000Z
2024-09-04T09:58:24.502Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6032802e1f993496bc14d9e3%2Fw6hr-DEQot4VVkoyRIBiy.png", "fullname": "Omar Sanseviero", "name": "osanseviero", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 2868, "isFollowing": false } ]
/posts/aaditya/713480041248724
3,002
1
416847424881120
[ { "type": "text", "value": "๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง ", "raw": "๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Check out this app where you convert: ", "raw": "Check out this app where you convert: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Pytorch/tensorflow summary -> needed VRAM ", "raw": "Pytorch/tensorflow summary -> needed VRAM ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "or ", "raw": "or ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Parameter count -> needed VRAM", "raw": "Parameter count -> needed VRAM", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Use it in: ", "raw": "Use it in: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "http://howmuchvram.com", "href": "http://howmuchvram.com", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "And everything is open source! Ask for new functionalities or contribute in:", "raw": "And everything is open source! Ask for new functionalities or contribute in:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/AlexBodner/How_Much_VRAM", "href": "https://github.com/AlexBodner/How_Much_VRAM", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful! ", "raw": "If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful! ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿ’พ๐Ÿง How much VRAM will you need for training your AI model? ๐Ÿ’พ๐Ÿง  Check out this app where you convert: Pytorch/tensorflow summary -> needed VRAM or Parameter count -> needed VRAM Use it in: http://howmuchvram.com And everything is open source! Ask for new functionalities or contribute in: https://github.com/AlexBodner/How_Much_VRAM If it's useful to you leave a star ๐ŸŒŸand share it to someone that will find the tool useful!
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F658880d499ed106ac888dd7a%2FwMv9-ZsJUw4QQnld_cci7.jpeg", "fullname": "Alexander Dylan Bodner", "name": "AlexBodner", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 28, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "tuanlda78202", "louisbrulenaudet", "AlexBodner", "Bruhn", "den0620", "victor", "AtAndDev" ], "count": 8 }, { "reaction": "๐Ÿš€", "users": [ "rmayormartins", "AlexBodner", "whitebill", "erkhem-gantulga", "AtAndDev" ], "count": 5 }, { "reaction": "๐Ÿ‘", "users": [ "mwz", "AtAndDev", "ajibawa-2023", "jchataigne" ], "count": 4 }, { "reaction": "๐Ÿง ", "users": [ "AntonioTepsich", "AtAndDev" ], "count": 2 } ]
2024-08-31T19:07:47.000Z
2024-09-02T13:27:12.929Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F66d47c04b2302a63f24f1253%2FqWm-a6vYAmJgrhQ8kvner.jpeg", "fullname": "Samapika Priyadarshini", "name": "samapika", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/avatars/781a110b5ac82d4fd4e28c9dd54e2667.svg", "fullname": "marcos", "name": "marcos9", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }, { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F658880d499ed106ac888dd7a%2FwMv9-ZsJUw4QQnld_cci7.jpeg", "fullname": "Alexander Dylan Bodner", "name": "AlexBodner", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 28, "isFollowing": false } ]
/posts/AlexBodner/416847424881120
3,782
3
660821907550330
[ { "type": "text", "value": "From Article 50 of the EU AI Act: ", "raw": "From Article 50 of the EU AI Act: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "\"2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.\"", "raw": "\"2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.\"", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "How might this be put into practice?", "raw": "How might this be put into practice?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "I'm interested to know how content might be deemed as being \"detectable\" as artificially generated. I wonder if this will require an image be detectable as AI generated if it was copied out of the site / application it was created on?", "raw": "I'm interested to know how content might be deemed as being \"detectable\" as artificially generated. I wonder if this will require an image be detectable as AI generated if it was copied out of the site / application it was created on?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Some sort of a watermark? LSB Stegranography? I wonder if openAI are already sneaking something like this into DALL-E images.", "raw": "Some sort of a watermark? LSB Stegranography? I wonder if openAI are already sneaking something like this into DALL-E images.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Some sort of hash, which allowing content to be looked up, and verified as AI generated?", "raw": "Some sort of hash, which allowing content to be looked up, and verified as AI generated?", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Would a pop up saying \"this output was generated with AI\"? suffice? Any ideas? Time is on the system provider's side, at least for now, as from what I can see this doesn't come into effect until August 2026.", "raw": "Would a pop up saying \"this output was generated with AI\"? suffice? Any ideas? Time is on the system provider's side, at least for now, as from what I can see this doesn't come into effect until August 2026.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "src: ", "raw": "src: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://artificialintelligenceact.eu/article/50/", "href": "https://artificialintelligenceact.eu/article/50/", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
From Article 50 of the EU AI Act: "2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated." How might this be put into practice? I'm interested to know how content might be deemed as being "detectable" as artificially generated. I wonder if this will require an image be detectable as AI generated if it was copied out of the site / application it was created on? Some sort of a watermark? LSB Stegranography? I wonder if openAI are already sneaking something like this into DALL-E images. Some sort of hash, which allowing content to be looked up, and verified as AI generated? Would a pop up saying "this output was generated with AI"? suffice? Any ideas? Time is on the system provider's side, at least for now, as from what I can see this doesn't come into effect until August 2026. src: https://artificialintelligenceact.eu/article/50/
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F651d4e73acd8e9168ac92b04%2FWMYCWKx9MM8Xxj8vXursD.png", "fullname": "Jonah Ramponi", "name": "jonah-ramponi", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-08-31T18:44:33.000Z
2024-09-01T08:41:50.378Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F60b4acc69c978cce68723b34%2FeEnAT3CgDcnYKa7PIj5FB.jpeg", "fullname": "Jannes Stubbemann", "name": "stubbi", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false } ]
/posts/jonah-ramponi/660821907550330
657
1
872552437419473
[ { "type": "text", "value": "I found this paper to be thought-provoking: \"Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling\" by Bansal, Hosseini, Agarwal, Tran, and Kazemi.", "raw": "I found this paper to be thought-provoking: \"Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling\" by Bansal, Hosseini, Agarwal, Tran, and Kazemi.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://arxiv.org/abs/2408.16737", "href": "https://arxiv.org/abs/2408.16737", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "The direct implication is that smaller models could be used to create cost-effective synthetic datasets. And on that note, in the Gemma terms of use, Google explicitly claims no rights on outputs generated from those models, which means one is free to synthgen from the Gemma line. Meta's Llama 3 licence forbids synthetic generation of outputs if used to improve other models. Relevant Mistral, Qwen, and Yi models under the Apache 2.0 license are unrestricted for this purpose.", "raw": "The direct implication is that smaller models could be used to create cost-effective synthetic datasets. And on that note, in the Gemma terms of use, Google explicitly claims no rights on outputs generated from those models, which means one is free to synthgen from the Gemma line. Meta's Llama 3 licence forbids synthetic generation of outputs if used to improve other models. Relevant Mistral, Qwen, and Yi models under the Apache 2.0 license are unrestricted for this purpose.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
I found this paper to be thought-provoking: "Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling" by Bansal, Hosseini, Agarwal, Tran, and Kazemi. https://arxiv.org/abs/2408.16737 The direct implication is that smaller models could be used to create cost-effective synthetic datasets. And on that note, in the Gemma terms of use, Google explicitly claims no rights on outputs generated from those models, which means one is free to synthgen from the Gemma line. Meta's Llama 3 licence forbids synthetic generation of outputs if used to improve other models. Relevant Mistral, Qwen, and Yi models under the Apache 2.0 license are unrestricted for this purpose.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F65c992424936ab38ecf706b0%2Faq7vuHFPO1S93fwJk0Cuq.jpeg", "fullname": "Jim Lai", "name": "grimjim", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 166, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "anakin87", "AtAndDev", "tachyon-beep", "tommulder", "victor", "gghfez", "louisbrulenaudet", "djuna" ], "count": 9 }, { "reaction": "๐Ÿ‘", "users": [ "trollek", "AymaneElfirdo", "ajibawa-2023" ], "count": 3 }, { "reaction": "๐Ÿ”ฅ", "users": [ "aaditya", "tommulder" ], "count": 2 } ]
2024-08-31T13:48:47.000Z
2024-09-02T14:49:59.726Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F630f3e4002ce39336c411048%2FFXJON7b-aRUiH0_V2uRsi.jpeg", "fullname": "alkinun", "name": "AtAndDev", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 19, "isFollowing": false }, { "avatarUrl": "/avatars/52a153d04d325469e1be69bce610ebe5.svg", "fullname": "ecyht2", "name": "ecyht2", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 3, "isFollowing": false } ]
/posts/grimjim/872552437419473
3,227
2
461803347660596
[ { "type": "text", "value": "Just tried LitServe from the good folks at ", "raw": "Just tried LitServe from the good folks at ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "mention", "value": null, "raw": "@LightningAI", "href": null, "resource": null, "url": null, "code": null, "user": "LightningAI", "label": null, "lang": null }, { "type": "text", "value": "!", "raw": "!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Between llama.cpp and vLLM, there is a small gap where a few large models are not deployable!", "raw": "Between llama.cpp and vLLM, there is a small gap where a few large models are not deployable!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "That's where LitServe comes in!", "raw": "That's where LitServe comes in!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "LitServe is a high-throughput serving engine for AI models built on FastAPI.", "raw": "LitServe is a high-throughput serving engine for AI models built on FastAPI.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Yes, built on FastAPI. That's where the advantage and the issue lie.", "raw": "Yes, built on FastAPI. That's where the advantage and the issue lie.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "It's extremely flexible and supports multi-modality and a variety of models out of the box.", "raw": "It's extremely flexible and supports multi-modality and a variety of models out of the box.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "But in my testing, it lags far behind in speed compared to vLLM.", "raw": "But in my testing, it lags far behind in speed compared to vLLM.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Also, no OpenAI API-compatible endpoint is available as of now.", "raw": "Also, no OpenAI API-compatible endpoint is available as of now.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "But as we move to multi-modal models and agents, this serves as a good starting point. However, itโ€™s got to become faster...", "raw": "But as we move to multi-modal models and agents, this serves as a good starting point. However, itโ€™s got to become faster...", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "GitHub: ", "raw": "GitHub: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/Lightning-AI/LitServe", "href": "https://github.com/Lightning-AI/LitServe", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Just tried LitServe from the good folks at @LightningAI! Between llama.cpp and vLLM, there is a small gap where a few large models are not deployable! That's where LitServe comes in! LitServe is a high-throughput serving engine for AI models built on FastAPI. Yes, built on FastAPI. That's where the advantage and the issue lie. It's extremely flexible and supports multi-modality and a variety of models out of the box. But in my testing, it lags far behind in speed compared to vLLM. Also, no OpenAI API-compatible endpoint is available as of now. But as we move to multi-modal models and agents, this serves as a good starting point. However, itโ€™s got to become faster... GitHub: https://github.com/Lightning-AI/LitServe
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FWXYLnjjJ4SROkoveIi7If.png", "fullname": "Kuldeep Singh Sidhu", "name": "singhsidhukuldeep", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 219, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F662bf5bfe93bb73804ef9344%2FXOmdrDLp3U0jXHvs912yB.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "victor", "theQuert" ], "count": 3 }, { "reaction": "๐Ÿš€", "users": [ "Norod78" ], "count": 1 } ]
2024-08-31T13:18:26.000Z
2024-08-31T13:18:26.748Z
[]
/posts/singhsidhukuldeep/461803347660596
869
0
817816295636972
[ { "type": "text", "value": "๐Ÿ™‹๐Ÿปโ€โ™‚๏ธHey there folks,", "raw": "๐Ÿ™‹๐Ÿปโ€โ™‚๏ธHey there folks,", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "just published a demo for Salesforce's new Function Calling Model ", "raw": "just published a demo for Salesforce's new Function Calling Model ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`Salesforce/xLAM`", "href": null, "resource": null, "url": null, "code": "Salesforce/xLAM", "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/Tonic/Salesforce-Xlam-7b-r", "href": null, "resource": { "type": "space", "id": "Tonic/Salesforce-Xlam-7b-r", "discussionNum": null }, "url": "https://huggingface.co/spaces/Tonic/Salesforce-Xlam-7b-r", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ", "raw": "- ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/Tonic/On-Device-Function-Calling", "href": null, "resource": { "type": "space", "id": "Tonic/On-Device-Function-Calling", "discussionNum": null }, "url": "https://huggingface.co/spaces/Tonic/On-Device-Function-Calling", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "just try em out, and it comes with ", "raw": "just try em out, and it comes with ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`on-device`", "href": null, "resource": null, "url": null, "code": "on-device", "user": null, "label": null, "lang": null }, { "type": "text", "value": "version too ! cool ! ๐Ÿš€", "raw": "version too ! cool ! ๐Ÿš€", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿ™‹๐Ÿปโ€โ™‚๏ธHey there folks, just published a demo for Salesforce's new Function Calling Model `Salesforce/xLAM` - https://huggingface.co/spaces/Tonic/Salesforce-Xlam-7b-r - https://huggingface.co/spaces/Tonic/On-Device-Function-Calling just try em out, and it comes with `on-device`version too ! cool ! ๐Ÿš€
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62a3bb1cd0d8c2c2169f0b88%2FeT2TS0IlQbZtz-F_zHLz9.jpeg", "fullname": "Joseph [open/acc] Pollack", "name": "Tonic", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 313, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 }, { "reaction": "๐Ÿ”ฅ", "users": [ "Deeran" ], "count": 1 } ]
2024-08-31T06:37:37.000Z
2024-08-31T06:37:37.030Z
[]
/posts/Tonic/817816295636972
791
0
880839703733954
[ { "type": "text", "value": "new synthetic general chat dataset! meet Supernova, a dataset using prompts from UltraFeedback and responses from Llama 3.1 405b Instruct: ", "raw": "new synthetic general chat dataset! meet Supernova, a dataset using prompts from UltraFeedback and responses from Llama 3.1 405b Instruct: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/sequelbox/Supernova", "href": null, "resource": { "type": "dataset", "id": "sequelbox/Supernova", "discussionNum": null }, "url": "https://huggingface.co/datasets/sequelbox/Supernova", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "new model(s) using the Supernova dataset will follow next week, along with Other Things. (One of these will be a newly updated version of Enigma, utilizing the next version of ", "raw": "new model(s) using the Supernova dataset will follow next week, along with Other Things. (One of these will be a newly updated version of Enigma, utilizing the next version of ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/sequelbox/Tachibana", "href": null, "resource": { "type": "dataset", "id": "sequelbox/Tachibana", "discussionNum": null }, "url": "https://huggingface.co/datasets/sequelbox/Tachibana", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " with approximately 2x the rows!)", "raw": " with approximately 2x the rows!)", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
new synthetic general chat dataset! meet Supernova, a dataset using prompts from UltraFeedback and responses from Llama 3.1 405b Instruct: https://huggingface.co/datasets/sequelbox/Supernova new model(s) using the Supernova dataset will follow next week, along with Other Things. (One of these will be a newly updated version of Enigma, utilizing the next version of https://huggingface.co/datasets/sequelbox/Tachibana with approximately 2x the rows!)
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F63444f2687964b331809eb55%2FWvZivsvKsM_t0tBtakovK.png", "fullname": "t.d.a.g.", "name": "sequelbox", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 51, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "djuna", "kristaller486", "osanseviero" ], "count": 4 } ]
2024-08-30T20:11:29.000Z
2024-08-30T20:11:58.460Z
[]
/posts/sequelbox/880839703733954
824
0
583635727849608
[ { "type": "text", "value": "Very excited to have made the list and been invited to OpenAI DevDay 2024 at the London event 30 October! Looking forward to seeing what the future of AI dev holds, connecting with other professionals in the field, and advocating for open source AI!", "raw": "Very excited to have made the list and been invited to OpenAI DevDay 2024 at the London event 30 October! Looking forward to seeing what the future of AI dev holds, connecting with other professionals in the field, and advocating for open source AI!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://openai.com/devday/", "href": "https://openai.com/devday/", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Very excited to have made the list and been invited to OpenAI DevDay 2024 at the London event 30 October! Looking forward to seeing what the future of AI dev holds, connecting with other professionals in the field, and advocating for open source AI! https://openai.com/devday/
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F656e3808d4de03a07d116850%2FJZh4lrjFueJZVqugjoloP.jpeg", "fullname": "Kenneth Hamilton", "name": "ZennyKenny", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 33, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-08-30T18:06:10.000Z
2024-08-30T18:06:10.577Z
[]
/posts/ZennyKenny/583635727849608
693
0
421434775993783
[ { "type": "text", "value": "๐Ÿ’พ๐Ÿง Want to know how much VRAM you will need for training your model? ๐Ÿ’พ๐Ÿง ", "raw": "๐Ÿ’พ๐Ÿง Want to know how much VRAM you will need for training your model? ๐Ÿ’พ๐Ÿง ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Now you can use this app in which you can input a torch/tensorflow summary or the parameters count and get an estimate of the required memory!", "raw": "Now you can use this app in which you can input a torch/tensorflow summary or the parameters count and get an estimate of the required memory!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Use it in: howmuchvram.com ", "raw": "Use it in: howmuchvram.com ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Also, everything is Open Source so you can contribute in repo: ", "raw": "Also, everything is Open Source so you can contribute in repo: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/AlexBodner/How_Much_VRAM", "href": "https://github.com/AlexBodner/How_Much_VRAM", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Leave it a starโญ", "raw": "Leave it a starโญ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
๐Ÿ’พ๐Ÿง Want to know how much VRAM you will need for training your model? ๐Ÿ’พ๐Ÿง  Now you can use this app in which you can input a torch/tensorflow summary or the parameters count and get an estimate of the required memory! Use it in: howmuchvram.com Also, everything is Open Source so you can contribute in repo: https://github.com/AlexBodner/How_Much_VRAM Leave it a starโญ
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F658880d499ed106ac888dd7a%2FwMv9-ZsJUw4QQnld_cci7.jpeg", "fullname": "Alexander Dylan Bodner", "name": "AlexBodner", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 28, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "Mehyaar", "MexIvanov", "Bruhn", "den0620", "AtAndDev" ], "count": 5 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "AtAndDev" ], "count": 2 } ]
2024-08-30T17:49:25.000Z
2024-08-30T17:49:25.232Z
[]
/posts/AlexBodner/421434775993783
1,586
0
416542379891081
[ { "type": "text", "value": "Shakker-Labs brings an amazing LoRA trained on FLUX.1-dev for blended realistic illustration by Muertu ๐Ÿ˜ the front character is in illustration style, while the background is realistic. ๐Ÿคฉ", "raw": "Shakker-Labs brings an amazing LoRA trained on FLUX.1-dev for blended realistic illustration by Muertu ๐Ÿ˜ the front character is in illustration style, while the background is realistic. ๐Ÿคฉ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿค™Model: ", "raw": "๐Ÿค™Model: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-blended-realistic-illustration", "href": "https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-blended-realistic-illustration", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "๐Ÿ™‡โ€โ™‚๏ธMy space for demo: ", "raw": "๐Ÿ™‡โ€โ™‚๏ธMy space for demo: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/vilarin/flux-lab-light", "href": null, "resource": { "type": "space", "id": "vilarin/flux-lab-light", "discussionNum": null }, "url": "https://huggingface.co/spaces/vilarin/flux-lab-light", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Shakker-Labs brings an amazing LoRA trained on FLUX.1-dev for blended realistic illustration by Muertu ๐Ÿ˜ the front character is in illustration style, while the background is realistic. ๐Ÿคฉ ๐Ÿค™Model: https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-blended-realistic-illustration ๐Ÿ™‡โ€โ™‚๏ธMy space for demo: https://huggingface.co/spaces/vilarin/flux-lab-light
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F642827944fe87caede802784%2Fa7s3Ub9Cy6-PuuaX8wwXm.png", "fullname": "VILARIN", "name": "vilarin", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 67, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2Foz-yfW-ou3-NT1uhcL6DK.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2FyMXRIQ4gDlK1fPBVLZYYO.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2Fpqpg26PSUbsrf1run2-LX.png" }, { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F642827944fe87caede802784%2F5REmkwmEdfOz-dcinakUU.png" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "orrinin", "YaTharThShaRma999", "John6666", "chris-fung", "djuna", "alielfilali01", "ngxson", "Felladrin", "wanghaofan", "AtAndDev", "keakohv", "louisbrulenaudet" ], "count": 12 }, { "reaction": "โค๏ธ", "users": [ "Amr-khaled", "AtAndDev" ], "count": 2 } ]
2024-08-30T16:51:15.000Z
2024-09-05T05:33:55.982Z
[]
/posts/vilarin/416542379891081
2,449
0
580555903414737
[ { "type": "text", "value": "Made a fun Space powered by Llama 405B for creating real, working react apps with the awesome plus that you can contribute to an open react dataset by upvoting or downvoting the response ๐Ÿค—.", "raw": "Made a fun Space powered by Llama 405B for creating real, working react apps with the awesome plus that you can contribute to an open react dataset by upvoting or downvoting the response ๐Ÿค—.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/cfahlgren1/llama-artifacts", "href": null, "resource": { "type": "space", "id": "cfahlgren1/llama-artifacts", "discussionNum": null }, "url": "https://huggingface.co/spaces/cfahlgren1/llama-artifacts", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/cfahlgren1/react-code-instructions", "href": null, "resource": { "type": "dataset", "id": "cfahlgren1/react-code-instructions", "discussionNum": null }, "url": "https://huggingface.co/datasets/cfahlgren1/react-code-instructions", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Made a fun Space powered by Llama 405B for creating real, working react apps with the awesome plus that you can contribute to an open react dataset by upvoting or downvoting the response ๐Ÿค—. https://huggingface.co/spaces/cfahlgren1/llama-artifacts https://huggingface.co/datasets/cfahlgren1/react-code-instructions
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F648a374f00f7a3374ee64b99%2FYPwSOrronoozwHbJchPn3.jpeg", "fullname": "Caleb Fahlgren", "name": "cfahlgren1", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 123, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F648a374f00f7a3374ee64b99%2FPcRRDxywqQxW04zptVdp3.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 }, { "reaction": "๐Ÿ”ฅ", "users": [ "cloudjumbo" ], "count": 1 } ]
2024-08-30T15:32:53.000Z
2024-08-30T15:35:08.779Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F648a374f00f7a3374ee64b99%2FYPwSOrronoozwHbJchPn3.jpeg", "fullname": "Caleb Fahlgren", "name": "cfahlgren1", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 123, "isFollowing": false } ]
/posts/cfahlgren1/580555903414737
1,130
1
255000504996462
[ { "type": "text", "value": "Here's a 1-minute video tutorial on how to fine-tune ", "raw": "Here's a 1-minute video tutorial on how to fine-tune ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/unsloth/llama-3-8b-bnb-4bit", "href": null, "resource": { "type": "model", "id": "unsloth/llama-3-8b-bnb-4bit", "discussionNum": null }, "url": "https://huggingface.co/unsloth/llama-3-8b-bnb-4bit", "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " with unsloth", "raw": " with unsloth", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Using Roller Coaster Tycoon peep thoughts as an example", "raw": "Using Roller Coaster Tycoon peep thoughts as an example", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Here's a 1-minute video tutorial on how to fine-tune https://huggingface.co/unsloth/llama-3-8b-bnb-4bit with unsloth Using Roller Coaster Tycoon peep thoughts as an example
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1672164046414-624b4a964056e2a6914a05c5.png", "fullname": "Dylan Ebert", "name": "dylanebert", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 1764, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F624b4a964056e2a6914a05c5%2FVkNapafGp7lrlLbyJU2e8.mp4" } ]
[]
[ { "reaction": "๐Ÿ”ฅ", "users": [ "victor", "John6666", "prithivMLmods", "KingNish", "thisisanshgupta", "Bruhn", "budotsmedia", "AtAndDev", "mambiux" ], "count": 9 } ]
2024-08-30T15:27:11.000Z
2024-08-30T15:27:11.620Z
[]
/posts/dylanebert/255000504996462
2,517
0
917996280846812
[ { "type": "text", "value": "AI in the News: Llama 10x growth, Apple & Nvidia in talks with OpenAI, universal basic income, AI & art", "raw": "AI in the News: Llama 10x growth, Apple & Nvidia in talks with OpenAI, universal basic income, AI & art", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "* Meta leads open-source AI boom, Llama downloads surge 10x year-over-year - VB", "raw": "* Meta leads open-source AI boom, Llama downloads surge 10x year-over-year - VB", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://venturebeat.com/ai/meta-leads-open-source-ai-boom-llama-downloads-surge-10x-year-over-year/", "href": "https://venturebeat.com/ai/meta-leads-open-source-ai-boom-llama-downloads-surge-10x-year-over-year/", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "* Apple, Nvidia Are in Talks to Invest in OpenAI - WSJ", "raw": "* Apple, Nvidia Are in Talks to Invest in OpenAI - WSJ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.wsj.com/tech/ai/openai-apple-funding-chatgpt-50754cd6?mod=rss_Technology", "href": "https://www.wsj.com/tech/ai/openai-apple-funding-chatgpt-50754cd6?mod=rss_Technology", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "* The Report Card on Guaranteed Income Is Still Incomplete - NYT", "raw": "* The Report Card on Guaranteed Income Is Still Incomplete - NYT", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.nytimes.com/2024/08/30/business/economy/the-report-card-on-guaranteed-income-is-still-incomplete.html", "href": "https://www.nytimes.com/2024/08/30/business/economy/the-report-card-on-guaranteed-income-is-still-incomplete.html", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "* Ethically dubious or a creative gift? How artists are grappling with AI in their work - The Guardian", "raw": "* Ethically dubious or a creative gift? How artists are grappling with AI in their work - The Guardian", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.theguardian.com/artanddesign/article/2024/aug/30/xanthe-dobbie-futuer-sex-love-sounds-ai-video-celebrity-clones", "href": "https://www.theguardian.com/artanddesign/article/2024/aug/30/xanthe-dobbie-futuer-sex-love-sounds-ai-video-celebrity-clones", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Want more? Subscribe to my daily newsletter!", "raw": "Want more? Subscribe to my daily newsletter!", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://linkedin.com/build-relation/newsletter-follow?entityUrn=7233909926606053377", "href": "https://linkedin.com/build-relation/newsletter-follow?entityUrn=7233909926606053377", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
AI in the News: Llama 10x growth, Apple & Nvidia in talks with OpenAI, universal basic income, AI & art * Meta leads open-source AI boom, Llama downloads surge 10x year-over-year - VB https://venturebeat.com/ai/meta-leads-open-source-ai-boom-llama-downloads-surge-10x-year-over-year/ * Apple, Nvidia Are in Talks to Invest in OpenAI - WSJ https://www.wsj.com/tech/ai/openai-apple-funding-chatgpt-50754cd6?mod=rss_Technology * The Report Card on Guaranteed Income Is Still Incomplete - NYT https://www.nytimes.com/2024/08/30/business/economy/the-report-card-on-guaranteed-income-is-still-incomplete.html * Ethically dubious or a creative gift? How artists are grappling with AI in their work - The Guardian https://www.theguardian.com/artanddesign/article/2024/aug/30/xanthe-dobbie-futuer-sex-love-sounds-ai-video-celebrity-clones Want more? Subscribe to my daily newsletter! https://linkedin.com/build-relation/newsletter-follow?entityUrn=7233909926606053377
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F647f36a8454af0237bd49574%2FjshkqBUTY-GZL8As8y6Aq.jpeg", "fullname": "Florent Daudens", "name": "fdaudens", "type": "user", "isPro": false, "isHf": true, "isMod": false, "followerCount": 384, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-08-30T13:55:59.000Z
2024-08-30T13:55:59.187Z
[]
/posts/fdaudens/917996280846812
447
0
357701279407928
[ { "type": "text", "value": "Sharing for anyone using Diffusers ", "raw": "Sharing for anyone using Diffusers ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`from_single_file`", "href": null, "resource": null, "url": null, "code": "from_single_file", "user": null, "label": null, "lang": null }, { "type": "text", "value": " loading and affected by the Runway SD 1.5 issue.", "raw": " loading and affected by the Runway SD 1.5 issue.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "If you have ", "raw": "If you have ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`runwayml/stable-diffusion-v1-5`", "href": null, "resource": null, "url": null, "code": "runwayml/stable-diffusion-v1-5", "user": null, "label": null, "lang": null }, { "type": "text", "value": " saved locally in your HF cache then loading single file checkpoints in the following way should still work. ", "raw": " saved locally in your HF cache then loading single file checkpoints in the following way should still work. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "code_fence", "value": null, "raw": "```\nfrom diffusers import StableDiffusionPipeline\n\npipe = StableDiffusionPipeline.from_single_file(\"<url or path to single file checkpoint>\")\n```", "href": null, "resource": null, "url": null, "code": "from diffusers import StableDiffusionPipeline\n\npipe = StableDiffusionPipeline.from_single_file(\"<url or path to single file checkpoint>\")", "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "If you do not have the model repo saved in your cache, then automatically inferring the pipeline config will not work since the reference repo ", "raw": "If you do not have the model repo saved in your cache, then automatically inferring the pipeline config will not work since the reference repo ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "inline_code", "value": null, "raw": "`runwayml/stable-diffusion-v1-5`", "href": null, "resource": null, "url": null, "code": "runwayml/stable-diffusion-v1-5", "user": null, "label": null, "lang": null }, { "type": "text", "value": " doesn't exist anymore. ", "raw": " doesn't exist anymore. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "You can use an alternative SD1.5 repo id to still configure your pipeline.", "raw": "You can use an alternative SD1.5 repo id to still configure your pipeline.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "code_fence", "value": null, "raw": "```\nfrom diffusers import StableDiffusionPipeline\n\npipe = StableDiffusionPipeline.from_single_file(\"<url or path to single file checkpoint>\", config=\"Lykon/DreamShaper\")\n```", "href": null, "resource": null, "url": null, "code": "from diffusers import StableDiffusionPipeline\n\npipe = StableDiffusionPipeline.from_single_file(\"<url or path to single file checkpoint>\", config=\"Lykon/DreamShaper\")", "user": null, "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "We're working on resolving the issue ASAP. ", "raw": "We're working on resolving the issue ASAP. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Sharing for anyone using Diffusers `from_single_file` loading and affected by the Runway SD 1.5 issue. If you have `runwayml/stable-diffusion-v1-5` saved locally in your HF cache then loading single file checkpoints in the following way should still work. ``` from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_single_file("<url or path to single file checkpoint>") ``` If you do not have the model repo saved in your cache, then automatically inferring the pipeline config will not work since the reference repo `runwayml/stable-diffusion-v1-5` doesn't exist anymore. You can use an alternative SD1.5 repo id to still configure your pipeline. ``` from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_single_file("<url or path to single file checkpoint>", config="Lykon/DreamShaper") ``` We're working on resolving the issue ASAP.
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F1630334896986-6126e46848005fa9ca5c578c.jpeg", "fullname": "Dhruv Nair", "name": "dn6", "type": "user", "isPro": true, "isHf": true, "isMod": false, "followerCount": 34, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "clem", "wardred1962", "Nymbo" ], "count": 4 }, { "reaction": "โค๏ธ", "users": [ "clem", "sayakpaul", "aaditya", "Nymbo" ], "count": 4 }, { "reaction": "๐Ÿ‘", "users": [ "John6666", "clem", "Nymbo" ], "count": 3 } ]
2024-08-30T05:39:38.000Z
2024-09-11T08:24:39.918Z
[ { "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6640bbd0220cfa8cbfdce080%2FwiAHUu5ewawyipNs0YFBR.png", "fullname": "John Smith", "name": "John6666", "type": "user", "isPro": true, "isHf": false, "isMod": false, "followerCount": 398, "isFollowing": false } ]
/posts/dn6/357701279407928
2,550
2
672926050183277
[ { "type": "text", "value": "The only 405B spaces still freely accessible are powered by SN fast api. ", "raw": "The only 405B spaces still freely accessible are powered by SN fast api. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/spaces/xianbao/SambaNova-fast", "href": null, "resource": { "type": "space", "id": "xianbao/SambaNova-fast", "discussionNum": null }, "url": "https://huggingface.co/spaces/xianbao/SambaNova-fast", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://sambanova.ai/fast-api?api_ref=907266", "href": "https://sambanova.ai/fast-api?api_ref=907266", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
The only 405B spaces still freely accessible are powered by SN fast api. https://huggingface.co/spaces/xianbao/SambaNova-fast https://sambanova.ai/fast-api?api_ref=907266
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2FFTiirwS_L6IaLHmHwIo2g.png", "fullname": "Kaizhao Liang", "name": "kz919", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 34, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F62140dcdcf7928035e8135ad%2FNNrFwd6s5BM2fTpWmX3px.jpeg" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666", "clem", "AtAndDev", "kz919", "Amr-khaled", "louisbrulenaudet" ], "count": 6 }, { "reaction": "๐Ÿ”ฅ", "users": [ "kz919", "alielfilali01", "KingNish", "tousif1988" ], "count": 4 }, { "reaction": "๐Ÿค—", "users": [ "kz919", "andito" ], "count": 2 }, { "reaction": "๐Ÿ˜Ž", "users": [ "kz919" ], "count": 1 } ]
2024-08-30T03:04:03.000Z
2024-08-30T03:04:03.244Z
[]
/posts/kz919/672926050183277
1,683
0
914900735326223
[ { "type": "text", "value": "The word 'Lead' has three definitions. When an LLM model tokenizes this word, it is always the same token. Imagine being able to put any particular embedding at any particular time into a 'Quantum State'. When an Embedding is in a Quantum State, the word token could have up to 3 different meanings (x1, x2, x3). The Quantum State gets collapsed based on the individual context surrounding the word. 'Jill lead Joy to the store' would collapse to x1. 'Jill and Joy stumbled upon a pile of lead' would collapse to x3. Very simple, right? This method produces OFF THE CHARTS results:", "raw": "The word 'Lead' has three definitions. When an LLM model tokenizes this word, it is always the same token. Imagine being able to put any particular embedding at any particular time into a 'Quantum State'. When an Embedding is in a Quantum State, the word token could have up to 3 different meanings (x1, x2, x3). The Quantum State gets collapsed based on the individual context surrounding the word. 'Jill lead Joy to the store' would collapse to x1. 'Jill and Joy stumbled upon a pile of lead' would collapse to x3. Very simple, right? This method produces OFF THE CHARTS results:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://www.youtube.com/watch?v=tuQI6A-EOqE", "href": "https://www.youtube.com/watch?v=tuQI6A-EOqE", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": " ", "raw": " ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
The word 'Lead' has three definitions. When an LLM model tokenizes this word, it is always the same token. Imagine being able to put any particular embedding at any particular time into a 'Quantum State'. When an Embedding is in a Quantum State, the word token could have up to 3 different meanings (x1, x2, x3). The Quantum State gets collapsed based on the individual context surrounding the word. 'Jill lead Joy to the store' would collapse to x1. 'Jill and Joy stumbled upon a pile of lead' would collapse to x3. Very simple, right? This method produces OFF THE CHARTS results: https://www.youtube.com/watch?v=tuQI6A-EOqE
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2Fnoauth%2FcA64Ix1vh75C7HoClUBhx.png", "fullname": "Richard A Aragon", "name": "TuringsSolutions", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 146, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿง ", "users": [ "maximuspowers", "maier-s", "nicolollo" ], "count": 3 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-08-29T21:39:19.000Z
2024-08-29T21:39:19.548Z
[]
/posts/TuringsSolutions/914900735326223
1,405
0
230212031259808
[ { "type": "text", "value": "Continuing my streak by releasing the Wikireading dataset: a large collection of scraped non-fiction books predominantly in Russian language.", "raw": "Continuing my streak by releasing the Wikireading dataset: a large collection of scraped non-fiction books predominantly in Russian language.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "resource", "value": null, "raw": "https://huggingface.co/datasets/its5Q/wikireading", "href": null, "resource": { "type": "dataset", "id": "its5Q/wikireading", "discussionNum": null }, "url": "https://huggingface.co/datasets/its5Q/wikireading", "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Here's the highlights:", "raw": "Here's the highlights:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- ~7B tokens, or ~28B characters, making it a great candidate for use in pretraining", "raw": "- ~7B tokens, or ~28B characters, making it a great candidate for use in pretraining", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Contains non-fiction works from many knowledge domains", "raw": "- Contains non-fiction works from many knowledge domains", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "- Includes both the original HTML and extracted text of book chapters", "raw": "- Includes both the original HTML and extracted text of book chapters", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Continuing my streak by releasing the Wikireading dataset: a large collection of scraped non-fiction books predominantly in Russian language. https://huggingface.co/datasets/its5Q/wikireading Here's the highlights: - ~7B tokens, or ~28B characters, making it a great candidate for use in pretraining - Contains non-fiction works from many knowledge domains - Includes both the original HTML and extracted text of book chapters
{ "avatarUrl": "/avatars/a692e2e2a3b0222e2f8cdfc44ac8d64c.svg", "fullname": "its5Q", "name": "its5Q", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 14, "isFollowing": false }
[]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "lukmanaj", "clem", "kristaller486", "nyuuzyou" ], "count": 4 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 }, { "reaction": "โค๏ธ", "users": [ "clem" ], "count": 1 } ]
2024-08-29T18:36:41.000Z
2024-08-29T18:36:41.732Z
[]
/posts/its5Q/230212031259808
1,278
0
672761214253429
[ { "type": "text", "value": "Thought this was an interesting graphic from the EAGLE blog post. It made me wonder if certain sampling methods have been shown to work better for certain tasks.", "raw": "Thought this was an interesting graphic from the EAGLE blog post. It made me wonder if certain sampling methods have been shown to work better for certain tasks.", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Does anyone know of any work looking at trends in the output token probability distribution by task type? (or similar) ", "raw": "Does anyone know of any work looking at trends in the output token probability distribution by task type? (or similar) ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Source: ", "raw": "Source: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://sites.google.com/view/eagle-llm", "href": "https://sites.google.com/view/eagle-llm", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Thought this was an interesting graphic from the EAGLE blog post. It made me wonder if certain sampling methods have been shown to work better for certain tasks. Does anyone know of any work looking at trends in the output token probability distribution by task type? (or similar) Source: https://sites.google.com/view/eagle-llm
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F651d4e73acd8e9168ac92b04%2FWMYCWKx9MM8Xxj8vXursD.png", "fullname": "Jonah Ramponi", "name": "jonah-ramponi", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": null, "isFollowing": false }
[ { "type": "image", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F651d4e73acd8e9168ac92b04%2F775TUAesRzcshWIVKmo_G.png" } ]
[]
[ { "reaction": "๐Ÿ‘€", "users": [ "John6666" ], "count": 1 } ]
2024-08-29T18:06:07.000Z
2024-08-29T18:06:44.888Z
[]
/posts/jonah-ramponi/672761214253429
497
0
858442795091051
[ { "type": "text", "value": "Automated web scraping with playwright is becoming easier by the day. Now, using ollama tool calling, its possible to perform very high accuracy web scraping (in some cases 100% accurate) through just asking an LLM to scrape the content for you. ", "raw": "Automated web scraping with playwright is becoming easier by the day. Now, using ollama tool calling, its possible to perform very high accuracy web scraping (in some cases 100% accurate) through just asking an LLM to scrape the content for you. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "This can be completed in a multistep process similar to cohere's platform. If you have tried the cohere playground with web scraping, this will feel very similar. In my experience, the Llama 3.1 version is much better due to the larger context window. Both tools are great, but the difference is the ollama + playwright version is completely controlled by you. ", "raw": "This can be completed in a multistep process similar to cohere's platform. If you have tried the cohere playground with web scraping, this will feel very similar. In my experience, the Llama 3.1 version is much better due to the larger context window. Both tools are great, but the difference is the ollama + playwright version is completely controlled by you. ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "All you need to do is wrap your scraper in a function:", "raw": "All you need to do is wrap your scraper in a function:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "code_fence", "value": null, "raw": "```\n async def query_web_scraper(url: str) -> dict:\n scraper = WebScraper(headless=False)\n return await scraper.query_page_content(url)\n```", "href": null, "resource": null, "url": null, "code": " async def query_web_scraper(url: str) -> dict:\n scraper = WebScraper(headless=False)\n return await scraper.query_page_content(url)", "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "and then make your request:", "raw": "and then make your request:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "code_fence", "value": null, "raw": "```\n# First API call: Send the query and function description to the model\nresponse = ollama.chat(\n model=model,\n messages=messages,\n tools=[\n {\n 'type': 'function',\n 'function': {\n 'name': 'query_web_scraper',\n 'description': 'Scrapes the content of a web page and returns the structured JSON object with titles, articles, and associated links.',\n 'parameters': {\n 'type': 'object',\n 'properties': {\n 'url': {\n 'type': 'string',\n 'description': 'The URL of the web page to scrape.',\n },\n },\n 'required': ['url'],\n },\n },\n },\n ]\n)\n```", "href": null, "resource": null, "url": null, "code": "# First API call: Send the query and function description to the model\nresponse = ollama.chat(\n model=model,\n messages=messages,\n tools=[\n {\n 'type': 'function',\n 'function': {\n 'name': 'query_web_scraper',\n 'description': 'Scrapes the content of a web page and returns the structured JSON object with titles, articles, and associated links.',\n 'parameters': {\n 'type': 'object',\n 'properties': {\n 'url': {\n 'type': 'string',\n 'description': 'The URL of the web page to scrape.',\n },\n },\n 'required': ['url'],\n },\n },\n },\n ]\n)", "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "To learn more:", "raw": "To learn more:", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Github w/ Playground: ", "raw": "Github w/ Playground: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://github.com/tdolan21/tool-calling-playground/blob/main/notebooks/ollama-playwright-web-scraping.ipynb", "href": "https://github.com/tdolan21/tool-calling-playground/blob/main/notebooks/ollama-playwright-web-scraping.ipynb", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "text", "value": "Complete Guide: ", "raw": "Complete Guide: ", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "link", "value": null, "raw": "https://medium.com/@tdolan21/building-an-llm-powered-web-scraper-with-ollama-and-playwright-6274d5d938b5", "href": "https://medium.com/@tdolan21/building-an-llm-powered-web-scraper-with-ollama-and-playwright-6274d5d938b5", "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null }, { "type": "new_line", "value": null, "raw": "\n", "href": null, "resource": null, "url": null, "code": null, "user": null, "label": null, "lang": null } ]
Automated web scraping with playwright is becoming easier by the day. Now, using ollama tool calling, its possible to perform very high accuracy web scraping (in some cases 100% accurate) through just asking an LLM to scrape the content for you. This can be completed in a multistep process similar to cohere's platform. If you have tried the cohere playground with web scraping, this will feel very similar. In my experience, the Llama 3.1 version is much better due to the larger context window. Both tools are great, but the difference is the ollama + playwright version is completely controlled by you. All you need to do is wrap your scraper in a function: ``` async def query_web_scraper(url: str) -> dict: scraper = WebScraper(headless=False) return await scraper.query_page_content(url) ``` and then make your request: ``` # First API call: Send the query and function description to the model response = ollama.chat( model=model, messages=messages, tools=[ { 'type': 'function', 'function': { 'name': 'query_web_scraper', 'description': 'Scrapes the content of a web page and returns the structured JSON object with titles, articles, and associated links.', 'parameters': { 'type': 'object', 'properties': { 'url': { 'type': 'string', 'description': 'The URL of the web page to scrape.', }, }, 'required': ['url'], }, }, }, ] ) ``` To learn more: Github w/ Playground: https://github.com/tdolan21/tool-calling-playground/blob/main/notebooks/ollama-playwright-web-scraping.ipynb Complete Guide: https://medium.com/@tdolan21/building-an-llm-powered-web-scraper-with-ollama-and-playwright-6274d5d938b5
{ "avatarUrl": "/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F6455cc8d679315e4ef16fbec%2FM6Cfifn05BUzkCFd2QDIT.png", "fullname": "Tim Dolan", "name": "macadeliccc", "type": "user", "isPro": false, "isHf": false, "isMod": false, "followerCount": 152, "isFollowing": false }
[ { "type": "video", "url": "/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6455cc8d679315e4ef16fbec%2FhVNJY2mBa3mNtCXWFGaKf.mp4" } ]
[]
[ { "reaction": "๐Ÿ‘", "users": [ "RalFinger", "xsa-dev", "wsuff", "alielfilali01", "Bruhn" ], "count": 5 }, { "reaction": "๐Ÿ‘€", "users": [ "John6666", "alielfilali01", "louisbrulenaudet" ], "count": 3 } ]
2024-08-29T16:24:10.000Z
2024-08-29T18:52:11.663Z
[]
/posts/macadeliccc/858442795091051
1,596
0