temp-org

AI & ML interests

None defined yet.

Recent Activity

temp-org's activity

victor
posted an update about 1 month ago
Qwen/QwQ-32B-Preview shows us the future (and it's going to be exciting)...

I tested it against some really challenging reasoning prompts and the results are amazing 🤯.

Check this dataset for the results: victor/qwq-misguided-attention
victor
posted an update about 1 month ago
A perfect example of why Qwen/Qwen2.5-Coder-32B-Instruct is insane:

Introducing: AI Video Composer 🔥
huggingface-projects/ai-video-composer

Drag and drop your assets (images/videos/audio) to create any video you want using natural language!

It works by asking the model to output a valid FFmpeg command. This can be quite complex, but most of the time Qwen2.5-Coder-32B gets it right (that thing is a beast). It's an update of an old project made with GPT-4; back then (~1.5 years ago) it was almost impossible to make it work with open models, but not anymore. Let's go open weights 🚀.
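
For intuition, here's a minimal sketch of that core loop, assuming huggingface_hub's InferenceClient and a deliberately simplified prompt (the real Space's prompting and validation are more involved, and the file names are hypothetical):

from huggingface_hub import InferenceClient

# Hypothetical user assets and request (illustrative only)
assets = ["clip1.mp4", "clip2.mp4", "music.mp3"]
request = "Concatenate the two clips and add the music as background audio."

prompt = (
    "You are an FFmpeg expert. Given these files: "
    + ", ".join(assets)
    + f". Task: {request} "
    "Reply with a single valid ffmpeg command and nothing else."
)

client = InferenceClient("Qwen/Qwen2.5-Coder-32B-Instruct")
response = client.chat_completion(
    messages=[{"role": "user", "content": prompt}], max_tokens=256
)
# The returned text is an ffmpeg command you would then execute on the assets
print(response.choices[0].message.content)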
victor
posted an update about 1 month ago
Qwen2.5-72B is now the default HuggingChat model.
This model is so good that you should try it! I often get better rephrasing results from it than from Sonnet or GPT-4.
fffiloni
posted an update about 2 months ago
victor
posted an update 3 months ago
victor
posted an update 3 months ago
NEW - Inference Playground

Maybe, like me, you've always wanted a super easy way to compare llama3.2-1B vs. llama3.2-3B, or the same model at different temperatures?

Trying and comparing warm Inference API models has never been easier!
Just go to https://hf.co/playground, set your token and you're ready to go.
We'll keep improving, feedback welcome 😊
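
If you'd rather script the same comparison, here's a minimal sketch using huggingface_hub's InferenceClient (the exact repo ids are assumptions based on the model names in the post):

from huggingface_hub import InferenceClient

client = InferenceClient()
messages = [{"role": "user", "content": "Explain beam search in one paragraph."}]

# Ask both models the same question at the same temperature, then compare
for model in ["meta-llama/Llama-3.2-1B-Instruct", "meta-llama/Llama-3.2-3B-Instruct"]:
    out = client.chat_completion(messages, model=model, max_tokens=200, temperature=0.7)
    print(f"--- {model} ---\n{out.choices[0].message.content}\n")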
fffiloni
posted an update 3 months ago
Visionary Walter Murch (editor for Francis Ford Coppola), in 1999:

“So let's suppose a technical apotheosis some time in the middle of the 21st century, when it somehow becomes possible for one person to make an entire feature film, with virtual actors. Would this be a good thing?

If the history of oil painting is any guide, the broadest answer would be yes, with the obvious caution to keep a wary eye on the destabilizing effect of following too intently a hermetically personal vision. One need only look at the unraveling of painting or classical music in the 20th century to see the risks.

Let's go even further, and force the issue to its ultimate conclusion by supposing the diabolical invention of a black box that could directly convert a single person's thoughts into a viewable cinematic reality. You would attach a series of electrodes to various points on your skull and simply think the film into existence.

And since we are time-traveling, let us present this hypothetical invention as a Faustian bargain to the future filmmakers of the 21st century. If this box were offered by some mysterious cloaked figure in exchange for your eternal soul, would you take it?

The kind of filmmakers who would accept, even leap, at the offer are driven by the desire to see their own vision on screen in as pure a form as possible. They accept present levels of collaboration as the evil necessary to achieve this vision. Alfred Hitchcock, I imagine, would be one of them, judging from his description of the creative process: 'The film is already made in my head before we start shooting.'”
—
Read "A Digital Cinema of the Mind? Could Be" by Walter Murch: https://archive.nytimes.com/www.nytimes.com/library/film/050299future-film.html

victor
posted an update 5 months ago
🙋 Calling all Hugging Face users! We want to hear from YOU!

What feature or improvement would make the biggest impact on Hugging Face?

Whether it's the Hub, better documentation, new integrations, or something completely different – we're all ears!

Your feedback shapes the future of Hugging Face. Drop your ideas in the comments below! 👇
victor
posted an update 5 months ago
How good are you at spotting AI-generated images?

Find out by playing Fake Insects 🐞, a game where you need to identify which insects are fake (AI-generated). Good luck & share your best score in the comments!

victor/fake-insects
victor
posted an update 5 months ago
Famous Hugging Face organisations' activity. Guess which one has the word "Open" in it 😂
multimodalart
posted an update 6 months ago
victor
posted an update 6 months ago
victor
posted an update 7 months ago
Together MoA is a really interesting approach based on open source models!

"We introduce Mixture of Agents (MoA), an approach to harness the collective strengths of multiple LLMs to improve state-of-the-art quality. And we provide a reference implementation, Together MoA, which leverages several open-source LLM agents to achieve a score of 65.1% on AlpacaEval 2.0, surpassing prior leader GPT-4o (57.5%)."

Read more here: https://www.together.ai/blog/together-moa

PS: they provide some demo code (https://github.com/togethercomputer/MoA/blob/main/bot.py); if someone releases a Space for it, it could take off 🚀.
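
The core idea is easy to sketch: several proposer models answer independently, then an aggregator model synthesizes their outputs. A minimal, hedged version of that loop (model ids and the aggregation prompt are illustrative, not the reference implementation):

from huggingface_hub import InferenceClient

client = InferenceClient()
question = "What are the trade-offs of mixture-of-experts models?"

# Step 1: several "proposer" LLMs answer independently
proposers = [
    "Qwen/Qwen2.5-72B-Instruct",
    "meta-llama/Llama-3.1-70B-Instruct",
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
]
answers = [
    client.chat_completion(
        [{"role": "user", "content": question}], model=m, max_tokens=300
    ).choices[0].message.content
    for m in proposers
]

# Step 2: an aggregator LLM merges the candidate answers into one reply
aggregation_prompt = (
    "Synthesize the following candidate answers into a single, accurate reply.\n\n"
    + "\n\n".join(f"Answer {i + 1}: {a}" for i, a in enumerate(answers))
    + f"\n\nQuestion: {question}"
)
final = client.chat_completion(
    [{"role": "user", "content": aggregation_prompt}],
    model=proposers[0],
    max_tokens=400,
)
print(final.choices[0].message.content)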
victor
posted an update 7 months ago
Congrats to @alvdansen for one of the nicest SD LoRAs ever. It's so sharp and beautiful!
Check the model page to try it on your own prompts: alvdansen/BandW-Manga
And follow @alvdansen for more 😙
victor
posted an update 7 months ago
> We introduced a new model designed for the code generation task. Its test accuracy on the HumanEval base dataset surpasses that of GPT-4 Turbo (April 2024): 90.9% vs. 90.2%.

@Bin12345 would you be interested in a ZeroGPU Space for Bin12345/AutoCoder?
victor
posted an update 7 months ago
✨ Tools are now available in HuggingChat (https://hf.co/chat)

In short, Tools let HuggingChat plug in any ZeroGPU Space as a tool it can call, offering limitless possibilities.

For the release we plugged in 6 tools that you can use right now with Command R+; we plan to expand to more models.

We'll also allow you to add your own tools (any ZeroGPU space is compatible). For more info check out this discussion: huggingchat/chat-ui#470

Kudos to @nsarrazin @Saghen and @mishig for the release <3
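
Under the hood, a tool is essentially a Space endpoint the model can call. For intuition, here's a minimal sketch of invoking a Gradio Space programmatically with gradio_client (the Space id, input, and api_name are illustrative assumptions, not HuggingChat's actual plumbing):

from gradio_client import Client

# Any public Gradio/ZeroGPU Space exposes a programmatic API like this
tool = Client("huggingface-projects/ai-video-composer")  # illustrative Space id
result = tool.predict(
    "Trim the first 5 seconds of the video",  # illustrative input
    api_name="/predict",  # illustrative endpoint name; check the Space's API page
)
print(result)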
fffiloni
posted an update 7 months ago
🇫🇷
What impact is AI having on the film, audiovisual, and video game industries?
A forward-looking study for industry professionals
— CNC & BearingPoint | 09/04/2024

While Artificial Intelligence (AI) has long been used in the film, audiovisual, and video game sectors, the new applications of generative AI are upending our view of what a machine is capable of and carry unprecedented transformative potential. They impress with the quality of what they produce and consequently spark many debates, somewhere between expectation and apprehension.

The CNC has therefore decided to launch a new AI Observatory to better understand the uses of AI and its real impact on the image industries. As part of this Observatory, the CNC set out to draw up an initial assessment by mapping the current and potential uses of AI at each step of the creation and distribution of a work, identifying the associated opportunities and risks, particularly in terms of professions and employment. This CNC / BearingPoint study presented its main findings on March 6, during the CNC conference "Creating, producing, and distributing in the age of artificial intelligence".

The CNC is publishing the expanded version of its map of AI uses in the film, audiovisual, and video game industries.

Link to the full map: https://www.cnc.fr/documents/36995/2097582/Cartographie+des+usages+IA_rapport+complet.pdf/96532829-747e-b85e-c74b-af313072cab7?t=1712309387891
radames
posted an update 8 months ago
Thanks to @OzzyGT for pushing the new Anyline preprocessor to https://github.com/huggingface/controlnet_aux. Now you can use the TheMistoAI/MistoLine ControlNet fully end-to-end with Diffusers.

Here's a demo for you: radames/MistoLine-ControlNet-demo
Super resolution version: radames/Enhance-This-HiDiffusion-SDXL

from controlnet_aux import AnylineDetector
from PIL import Image

# Load the Anyline edge-detection model from the MistoLine repository
anyline = AnylineDetector.from_pretrained(
    "TheMistoAI/MistoLine", filename="MTEED.pth", subfolder="Anyline"
).to("cuda")

# Extract line-art conditioning from a source image
source = Image.open("source.png")
result = anyline(source, detect_resolution=1280)
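
From there, a minimal sketch of feeding the Anyline output into the MistoLine ControlNet with Diffusers (the prompt and conditioning scale are illustrative; check the model card for recommended settings):

import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Load the MistoLine ControlNet and an SDXL base pipeline
controlnet = ControlNetModel.from_pretrained(
    "TheMistoAI/MistoLine", torch_dtype=torch.float16, variant="fp16"
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Condition generation on the line art extracted above
image = pipe(
    "a detailed ink illustration of a city street",
    image=result,
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("output.png")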
radames
posted an update 8 months ago
At Google I/O 2024, we're collaborating with the Google Visual Blocks team (https://visualblocks.withgoogle.com) to release custom Hugging Face nodes. Visual Blocks for ML is a browser-based tool that allows users to create machine learning pipelines using a visual interface. We're launching nodes with Transformers.js, running models in the browser, as well as server-side nodes running Transformers pipeline tasks and LLMs using our hosted inference. With @Xenova @JasonMayes

You can learn more about it here: https://huggingface.co/blog/radames/hugging-face-google-visual-blocks

Source-code for the custom nodes:
https://github.com/huggingface/visual-blocks-custom-components
radames
posted an update 8 months ago
AI-town now runs on Hugging Face Spaces with our API for LLMs and embeddings, including the open-source Convex backend, all in one container. Easy to duplicate and configure on your own.

Demo: radames/ai-town
Instructions: https://github.com/radames/ai-town-huggingface