Stefano Fiorucci PRO

anakin87

AI & ML interests

Contributing to Haystack LLM framework 🏗️. Language Models: orchestration, post-training, synthetic data...


Organizations

deepset · Blog-explorers · ZeroGPU Explorers · Hugging Face Discord Community

anakin87's activity

reacted to tomaarsen's post with ❤️ 3 days ago
That didn't take long! Nomic AI has fine-tuned the new ModernBERT-base encoder model into a strong embedding model for search, classification, clustering and more!

Details:
🤖 Based on ModernBERT-base with 149M parameters.
📊 Outperforms both nomic-embed-text-v1 and nomic-embed-text-v1.5 on MTEB!
🏎️ Immediate FA2 and unpadding support for super efficient inference.
🪆 Trained with Matryoshka support, i.e. 2 valid output dimensionalities: 768 and 256.
➡️ Maximum sequence length of 8192 tokens!
2️⃣ Trained in 2 stages: unsupervised contrastive data -> high-quality labeled datasets.
➕ Integrated in Sentence Transformers, Transformers, LangChain, LlamaIndex, Haystack, etc.
🏛️ Apache 2.0 licensed: fully commercially permissible.

Try it out here: nomic-ai/modernbert-embed-base
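
If you want to try the Matryoshka property, here is a minimal sketch with Sentence Transformers; the `truncate_dim` argument is part of the library, while the "search_query: "/"search_document: " prefixes are an assumption carried over from the usual Nomic convention (check the model card):

```python
from sentence_transformers import SentenceTransformer

# Load at a truncated Matryoshka dimensionality (256 instead of 768).
model = SentenceTransformer("nomic-ai/modernbert-embed-base", truncate_dim=256)

# Assumption: Nomic-style task prefixes, as in nomic-embed-text.
query = model.encode(["search_query: What is Matryoshka representation learning?"])
docs = model.encode([
    "search_document: Matryoshka models concentrate information in the leading dimensions.",
    "search_document: ModernBERT supports sequences of up to 8192 tokens.",
])
print(model.similarity(query, docs))  # 1x2 similarity matrix
```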

Very nice work by Zach Nussbaum and colleagues at Nomic AI.
reacted to anton-l's post with 🔥 15 days ago
Introducing 📐 FineMath: the best public math pre-training dataset with 50B+ tokens!
HuggingFaceTB/finemath

Math remains challenging for LLMs, and by training on FineMath we see considerable gains over other math datasets, especially on GSM8K and MATH.

We built the dataset by:
🛠️ carefully extracting math data from Common Crawl;
🔎 iteratively filtering and recalling high-quality math pages using a classifier trained on synthetic annotations to identify math reasoning and deduction.

We conducted a series of ablations comparing the performance of Llama-3.2-3B-Base after continued pre-training on FineMath and observed notable gains compared to the baseline model and other public math datasets.

We hope this helps advance the performance of LLMs on math and reasoning! 🚀
We're also releasing all the ablation models as well as the evaluation code.

HuggingFaceTB/finemath-6763fb8f71b6439b653482c2
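
To take a quick look at the data, a streaming load avoids downloading everything; the "finemath-4plus" config name is taken from the dataset card naming and should be double-checked on the Hub:

```python
from datasets import load_dataset

# Stream a few samples instead of downloading 50B+ tokens.
ds = load_dataset("HuggingFaceTB/finemath", "finemath-4plus",
                  split="train", streaming=True)

for sample in ds.take(3):
    print(sample["text"][:200], "\n---")
```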
reacted to lewtun's post with 🔥 18 days ago
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"

We're open sourcing the full recipe and sharing a detailed blog post.

In our blog post we cover:

📈 Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test-time.

🎄 Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets.

🧭 Search and Learn: A lightweight toolkit for implementing search strategies with LLMs, built for speed with vLLM.
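
To make the core idea concrete, here is a framework-agnostic sketch of weighted best-of-N with a process reward model (PRM). `generate`, `prm_score`, and `extract_answer` are hypothetical stand-ins; the actual recipes (beam search, DVTS) live in the search-and-learn repo:

```python
from collections import defaultdict
from typing import Callable

def weighted_best_of_n(
    problem: str,
    generate: Callable[[str], str],          # samples one solution, e.g. via vLLM
    prm_score: Callable[[str, str], float],  # step-wise reward model score
    extract_answer: Callable[[str], str],    # pulls the final answer out of a solution
    n: int = 16,
) -> str:
    """Sample n solutions and return the final answer whose
    candidates accumulate the highest total reward."""
    totals: dict[str, float] = defaultdict(float)
    for _ in range(n):
        solution = generate(problem)
        totals[extract_answer(solution)] += prm_score(problem, solution)
    return max(totals, key=totals.get)
```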

Here are the links:

- Blog post: HuggingFaceH4/blogpost-scaling-test-time-compute

- Code: https://github.com/huggingface/search-and-learn

Enjoy!
reacted to DawnC's post with 👍 22 days ago
💡 Curious about dog breeds? 🐕 Meet PawMatchAI!
I've created this fun and interactive project to help you recognize dog breeds, find the perfect pup for your lifestyle, and even compare different breeds! Recently upgraded with smarter AI detection - it can now better distinguish between dogs and non-dogs (no more confusing cats for huskies! 😺➡️🐕).

๐Ÿพ What's cool about it?
Smart breed recognition powered by AI
Lifestyle-based breed recommendations
Detailed breed comparisons
And now with enhanced non-dog filtering!

🌟 Why try it?
Whether you're a dog lover, considering a new furry friend, or just curious, PawMatchAI makes discovering breeds fun and informative! As someone passionate about both AI and pets, I'm combining my two loves while working toward my goal of contributing to the AI industry.

🔎 Got feedback?
While it's not perfect, your input helps make it better! I'd love to hear your thoughts as I continue improving this project on my journey into AI development.

👉 Try it now: DawnC/PawMatchAI

🎯 Your support matters!
Every like 👍 or comment 📝 helps fuel my passion for AI development and keeps me motivated to create more helpful tools. Let's make the AI journey fun and impactful together!

#AI #MachineLearning #DeepLearning #Pytorch #ComputerVision
reacted to Narsil's post with ❤️ 23 days ago
Performance leap: TGI v3 is out. Processes 3x more tokens, 13x faster than vLLM on long prompts. Zero config!



3x more tokens.

By reducing our memory footprint, we're able to ingest many more tokens, and more dynamically, than before. A single L4 (24GB) can handle 30k tokens on Llama 3.1-8B, while vLLM barely reaches 10k. A lot of work went into reducing the runtime's footprint, and its effects are best seen in smaller, constrained environments.
13x faster

On long prompts (200k+ tokens), conversation replies take 27.5s in vLLM, while it takes only 2s in TGI. How so? We keep the initial conversation around, so when a new reply comes in, we can answer almost instantly. The overhead of the lookup is ~5µs. Thanks @Daniël de Kok for the beast data structure.
Zero config

That's it. Remove all the flags you are using and you're likely to get the best performance. By evaluating the hardware and model, TGI carefully selects automatic values to give the best performance. In production, we don't have any flags anymore in our deployments. We kept all existing flags around; they may come in handy in niche scenarios.

Read more: https://huggingface.co/docs/text-generation-inference/conceptual/chunking
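
On the client side, a minimal sketch with `huggingface_hub`, assuming a TGI v3 server is already running locally (e.g. launched with nothing but a model id, per the zero-config promise):

```python
from huggingface_hub import InferenceClient

# Assumes a local TGI endpoint, e.g.:
#   text-generation-launcher --model-id meta-llama/Llama-3.1-8B-Instruct
client = InferenceClient("http://localhost:8080")

response = client.chat_completion(
    messages=[{"role": "user", "content": "Why does prefix caching speed up long chats?"}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```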
replied to their post 23 days ago
posted an update 23 days ago
Tulu 3 SFT Mixture by AllenAI is a massive, high-quality multilingual dataset for fine-tuning Language Models.

Unfortunately, it was missing the "language" column.

I added it using good old fastText.

Check out the dataset here 👉 anakin87/tulu-3-sft-mixture-with-language
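
For the curious, a rough sketch of the approach; the fastText checkpoint and the choice of classifying the first user turn are assumptions, not necessarily the exact recipe used:

```python
import fasttext
from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Language-identification model (labels look like "__label__eng_Latn").
model_path = hf_hub_download("facebook/fasttext-language-identification", "model.bin")
lid = fasttext.load_model(model_path)

def detect_language(sample):
    # Classify the first user turn; fastText chokes on newlines.
    text = sample["messages"][0]["content"].replace("\n", " ")
    (label,), _ = lid.predict(text)
    return {"language": label.removeprefix("__label__")}

ds = load_dataset("allenai/tulu-3-sft-mixture", split="train")
ds = ds.map(detect_language)
```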

reacted to dvilasuero's post with ❤️ 27 days ago
๐ŸŒ Announcing Global-MMLU: an improved MMLU Open dataset with evaluation coverage across 42 languages, built with Argilla and the Hugging Face community.

Global-MMLU is the result of months of work with the goal of advancing Multilingual LLM evaluation. It's been an amazing open science effort with collaborators from Cohere For AI, Mila - Quebec Artificial Intelligence Institute, EPFL, Massachusetts Institute of Technology, AI Singapore, National University of Singapore, KAIST, Instituto Superior Técnico, Carnegie Mellon University, CONICET, and University of Buenos Aires.

๐Ÿท๏ธ +200 contributors used Argilla MMLU questions where regional, dialect, or cultural knowledge was required to answer correctly. 85% of the questions required Western-centric knowledge!

Thanks to this annotation process, the open dataset contains two subsets:

1. 🗽 Culturally Agnostic: no specific regional or cultural knowledge is required.
2. ⚖️ Culturally Sensitive: requires dialect, cultural, or geographic knowledge to answer correctly.

Moreover, we provide high-quality translations of 25 out of 42 languages, thanks again to the community and professional annotators leveraging Argilla on the Hub.

I hope this will ensure a better understanding of the limitations and challenges of making open AI useful for many languages.

Dataset: CohereForAI/Global-MMLU
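
A quick-start sketch for loading and splitting the dataset; the "en" config and the `cultural_sensitivity_label` column/values are assumptions based on the dataset card, so verify them on the Hub:

```python
from datasets import load_dataset

ds = load_dataset("CohereForAI/Global-MMLU", "en", split="test")

# Assumption: a per-question label marking the two subsets.
agnostic = ds.filter(lambda x: x["cultural_sensitivity_label"] == "CA")
sensitive = ds.filter(lambda x: x["cultural_sensitivity_label"] == "CS")
print(len(agnostic), len(sensitive))
```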
posted an update about 1 month ago
๐Ÿ๐Ÿ๐Ÿ ๐€ ๐’๐ฐ๐š๐ซ๐ฆ ๐จ๐Ÿ ๐€๐ ๐ž๐ง๐ญ๐ฌ ๐ฐ๐ข๐ญ๐ก ๐‹๐ฅ๐š๐ฆ๐š 3.2, ๐†๐๐“-4๐จ ๐ฆ๐ข๐ง๐ข ๐š๐ง๐ ๐‚๐ฅ๐š๐ฎ๐๐ž 3.5 ๐’๐จ๐ง๐ง๐ž๐ญ

๐“๐‹;๐ƒ๐‘: I reimplemented the Swarm concept using Haystack, but made it work with both open and proprietary models ๐Ÿ’ซ

โœ๏ธ blog article: https://haystack.deepset.ai/blog/swarm-of-agents
๐Ÿ““ notebook: https://haystack.deepset.ai/cookbook/swarm


Some time ago OpenAI published Swarm: an educational framework for building multi-agent systems.

Their approach focuses on two main concepts:
・ Routines: Each agent follows specific 📜 instructions and uses 🛠️ tools to execute them.
・ Handoffs 🤝: Agents can transfer control to one another using tool/function calling.


When I first read these ideas, I thought: simple but powerful! And they pair well with the recent unified tool support in Haystack.

๐Ÿง‘โ€๐Ÿ’ป So, I decided to re-implement these concepts using Haystack, and in just a few lines of code, I had a working prototype.

🆒 Bonus feature: this implementation isn't tied to a single model provider - different agents can be powered by different models!

I replicated the ACME customer service example from the original article, with 3 Agents:
🐝 Triage Agent - Llama 3.2 running on Ollama
🐝 Sales Agent - Anthropic Claude 3.5 Sonnet
🐝 Issues and Repairs Agent - OpenAI GPT-4o mini


Want to see the full implementation and give it a try? Check out the blog post and notebook! ✨
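
Before opening the notebook, here is the gist as a framework-agnostic sketch; this is not the Haystack implementation from the blog post, and `llm_call` is a hypothetical stand-in for whichever chat model powers each agent:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    instructions: str                     # the agent's "routine"
    llm_call: Callable[[str, str], str]   # (system prompt, user message) -> reply
    tools: dict[str, Callable] = field(default_factory=dict)

def run_swarm(agents: dict[str, Agent], start: str, user_msg: str) -> str:
    current = agents[start]
    while True:
        reply = current.llm_call(current.instructions, user_msg)
        # Handoff: by convention here, "HANDOFF:<agent_name>" transfers
        # control; real implementations use tool/function calls instead.
        if reply.startswith("HANDOFF:"):
            current = agents[reply.removeprefix("HANDOFF:").strip()]
            continue
        return reply
```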
reacted to davanstrien's post with ❤️ about 2 months ago
replied to their post 2 months ago

๐Ÿ’ก ๐Œ๐š๐ ๐ฉ๐ข๐ž ๐ฐ๐ข๐ญ๐ก ๐ฌ๐ฒ๐ฌ๐ญ๐ž๐ฆ ๐ฆ๐ž๐ฌ๐ฌ๐š๐ ๐ž

I had another idea: use the system message to steer generation towards a specific language.

The system message should be in the target language, like:
"You are an artificial intelligence that answers users' questions in TARGET_LANGUAGE in a useful and detailed way. The user asks complex questions in TARGET_LANGUAGE."

It is a simple approach, but it might work...

It turns out the authors had a similar idea, which they included in the latest revision of their paper. 🎉
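
For illustration, a sketch of how such a prompt can be assembled (Italian as the target language; assumes access to the gated Llama 3 tokenizer, and the generation step is omitted):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

system = ("You are an artificial intelligence that answers users' questions "
          "in Italian in a useful and detailed way. The user asks complex "
          "questions in Italian.")

# Render only the system turn, then open a user header: the model's
# continuation becomes a synthetic user query in the target language.
prompt = tokenizer.apply_chat_template(
    [{"role": "system", "content": system}], tokenize=False
)
prompt += "<|start_header_id|>user<|end_header_id|>\n\n"
```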


๐Ÿช Resources

Magpie paper and repository: https://huggingface.co/papers/2406.08464 https://github.com/magpie-align/magpie

Magpie demo by @davanstrien: https://huggingface.co/spaces/davanstrien/magpie

Magpie Ollama Datagen by @mrm8488: https://github.com/mrm8488/magpie-ollama-datagen

magpie-ultra dataset - massive dataset built with Magpie by Argilla: https://huggingface.co/datasets/argilla/magpie-ultra-v0.1

โš—๏ธ distilabel framework - framework for synthetic data generation and AI feedback at scale: https://distilabel.argilla.io/latest/

posted an update 2 months ago
Ok, you're finally convinced that synthetic data works... ⚗️

๐๐จ๐ฐ ๐ฒ๐จ๐ฎ ๐ฐ๐š๐ง๐ญ ๐ญ๐จ ๐ ๐ž๐ง๐ž๐ซ๐š๐ญ๐ž ๐š๐ง ๐ข๐ง๐ฌ๐ญ๐ซ๐ฎ๐œ๐ญ๐ข๐จ๐ง ๐๐š๐ญ๐š๐ฌ๐ž๐ญ ๐Ÿ๐จ๐ซ ๐Ÿ๐ข๐ง๐ž-๐ญ๐ฎ๐ง๐ข๐ง๐  ๐ข๐ง ๐š ๐ฅ๐š๐ง๐ ๐ฎ๐š๐ ๐ž ๐จ๐ญ๐ก๐ž๐ซ ๐ญ๐ก๐š๐ง ๐„๐ง๐ ๐ฅ๐ข๐ฌ๐ก.
But how do you get started?

I explore how to do this with Magpie in my new article
https://huggingface.co/blog/anakin87/multilingual-magpie

---

๐Ÿฆโ€โฌ› ๐–๐ก๐š๐ญ ๐ข๐ฌ ๐Œ๐š๐ ๐ฉ๐ข๐ž?

It's a recent technique for creating synthetic instruction datasets.

Magpie is based on a simple but ingenious idea 👇
if you prompt an instruction-tuned model with a pre-query template, you can make it generate a plausible user query/instruction

Here's an example:
model: Llama-3-8B-Instruct
pre-query template: "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"
generated user instruction: "What are some of the responsibilities of a commercial pilot?"

You can then feed this instruction back into the same model to get the assistant response.

By repeating this process, it's possible to generate large synthetic datasets with relatively little effort.
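
As a minimal sketch with transformers (assumes access to the gated Llama 3 checkpoint; sampling settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Pre-query template: stop right after the user header, so the model
# "fills in" a plausible user instruction.
pre_query = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>"

inputs = tokenizer(pre_query, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=1.0)
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```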

🪄 The authors demonstrate that using these datasets for Supervised Fine-Tuning (SFT) can yield strong performance, even competitive with the original instruct model.


๐Ÿง—๐†๐ž๐ง๐ž๐ซ๐š๐ญ๐ข๐ง๐  ๐ง๐จ๐ง-๐„๐ง๐ ๐ฅ๐ข๐ฌ๐ก ๐๐š๐ญ๐š

Most Language Models are primarily trained on English texts, so they tend to produce data in English.

How can we overcome this?

Earlier approaches were complex or costly.

Then @mrm8488 found a simple solution: add the target language to the pre-query template.
For Spanish, the template becomes "<|begin_of_text|><|start_header_id|>user<|end_header_id|>spanish:".

This method works for Spanish and German!

โŒ Unfortunately, it does not work well for other languages (๐Ÿ‡ฎ๐Ÿ‡น, ๐Ÿ‡ณ๐Ÿ‡ฑ, ...)

posted an update 3 months ago
๐Ÿ•ต๐Ÿป ๐€๐ ๐ž๐ง๐ญ๐ข๐œ ๐‘๐€๐† ๐ฐ๐ข๐ญ๐ก ๐Ÿฆ™ ๐‹๐ฅ๐š๐ฆ๐š 3.2

I was excited to explore Llama 3.2, but as a simple 🇪🇺 EU guy, I don't have access to Meta's multimodal models 😿

🤔 So I thought: why not challenge the small 3B text model with Agentic RAG?

🎯 The plan:
- Build a system that tries to answer questions using a knowledge base.
- If the documents don't contain the answer, use Web search for additional context.


Check out my experimental notebook here: 📓 https://colab.research.google.com/github/deepset-ai/haystack-cookbook/blob/main/notebooks/llama32_agentic_rag.ipynb


My stack:
🏗️ haystack (https://haystack.deepset.ai/): open-source LLM orchestration framework
🦙 meta-llama/Llama-3.2-3B-Instruct
🦆🌐 free DuckDuckGo API, integrated with Haystack

✨ The results? Encouraging - a few months ago, this level of performance from a small model would've been unthinkable!
This probably reflects the impressive IFEval score of the model (comparable to Llama 3.1 8B).
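
The routing logic boils down to something like this framework-agnostic sketch; `rag_answer` and `web_search` are hypothetical stand-ins, while the notebook wires the same idea up with Haystack components:

```python
from typing import Callable

UNANSWERABLE = "NO_ANSWER"  # sentinel the prompt asks the model to emit

def agentic_rag(question: str,
                rag_answer: Callable[[str], str],
                web_search: Callable[[str], str]) -> str:
    # First attempt: answer strictly from the local knowledge base.
    answer = rag_answer(question)
    if UNANSWERABLE not in answer:
        return answer
    # Fallback: fetch web context, then answer again with it appended.
    web_context = web_search(question)
    return rag_answer(f"{question}\n\nAdditional context:\n{web_context}")
```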
posted an update 4 months ago
๐Œ๐ฒ ๐Ÿ๐ข๐ซ๐ฌ๐ญ ๐œ๐จ๐ฆ๐ฆ๐ฎ๐ง๐ข๐ญ๐ฒ ๐š๐ซ๐ญ๐ข๐œ๐ฅ๐ž! ๐’๐ž๐ฅ๐ž๐œ๐ญ๐ข๐ฏ๐ž ๐Ÿ๐ข๐ง๐ž-๐ญ๐ฎ๐ง๐ข๐ง๐  ๐ฐ๐ข๐ญ๐ก ๐’๐ฉ๐ž๐œ๐ญ๐ซ๐ฎ๐ฆ ๐ŸŽฏ

Full walkthrough on how to get started with Spectrum and TRL for efficient fine-tuning.
📔 👣 https://huggingface.co/blog/anakin87/spectrum

---

Looking to fine-tune Language Models efficiently and save on computational resources?

One popular method is QLoRA, which quantizes the original model and trains low-rank adapters on top.
It's quite effective and uses less GPU memory than full fine-tuning.

However, QLoRA applies Low-Rank Adaptation uniformly across the entire model.

What if we could identify the most informative layers and only fine-tune those? 🤔

This is exactly what Spectrum does! 👇

🔬 Spectrum analyzes the weight matrices for all layers in a Language Model and calculates a Signal-to-Noise Ratio (SNR) for each one.
(It uses Random Matrix Theory and the Marchenko-Pastur distribution to distinguish signal from noise.)

🎯 Based on a chosen percentage (say, 25%), Spectrum selects the most informative layers of each type (mlp.down_proj, self_attn.o_proj, etc.).

You can then ❄️ freeze the rest of the model and focus your 🏋️‍♂️ training on the chosen layers.
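
In PyTorch terms, the freezing step might look like this sketch; the layer names are illustrative, and in practice Spectrum writes the selected parameters to a YAML file you would read in:

```python
import torch.nn as nn

def freeze_except(model: nn.Module, selected: list[str]) -> None:
    for name, param in model.named_parameters():
        # Train a parameter only if it belongs to a selected module.
        param.requires_grad = any(name.startswith(s) for s in selected)

# Illustrative selection (e.g. the top-SNR layers of each type):
selected = [
    "model.layers.7.mlp.down_proj",
    "model.layers.12.self_attn.o_proj",
]
# freeze_except(model, selected)  # then fine-tune as usual, e.g. with TRL
```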


๐Ÿ† Results/Evaluation
- Spectrum is competitive with full fine-tuning and beats QLoRA on benchmarks.
- While QLoRA is more memory-efficient on a single GPU, Spectrum shines in distributed training setups.
- Great models trained with Spectrum: Dolphin models, Llama 3.1 Storm, numerous models by VAGO Solutions...

---

For a practical guide, check out the article above.
reacted to grimjim's post with 👀 4 months ago
I found this paper to be thought-provoking: "Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling" by Bansal, Hosseini, Agarwal, Tran, and Kazemi.
https://arxiv.org/abs/2408.16737
The direct implication is that smaller models could be used to create cost-effective synthetic datasets. And on that note, in the Gemma terms of use, Google explicitly claims no rights on outputs generated from those models, which means one is free to synthgen from the Gemma line. Meta's Llama 3 license forbids synthetic generation of outputs if used to improve other models. Relevant Mistral, Qwen, and Yi models under the Apache 2.0 license are unrestricted for this purpose.
replied to their post 4 months ago
posted an update 4 months ago
💬 🇮🇹 Phi 3.5 mini ITA: a Small Language Model for Italian

Lately, I've spent some time fine-tuning language models.

Now I am happy to release Phi 3.5 mini ITA: a fine-tuned version of Phi-3.5-mini-instruct to improve performance on the Italian language.

🔹 Small (3.82B parameters) but capable model
🔹 128k context length

Chat with it on 🤗 Spaces: anakin87/Phi-3.5-mini-ITA
Model card: anakin87/Phi-3.5-mini-ITA

๐Ÿ—ƒ๏ธ Data
Supervised fine-tuning using a good mix of English and Italian data:
- mlabonne/FineTome-100k by @mlabonne
- efederici/capybara-claude-15k-ita by @efederici
๐Ÿ™ Thanks to the authors for the datasets.


🎯 Targeted training with Spectrum
I used Spectrum, a relatively new technique for parameter-efficient learning.
The idea is to train only the layers of the model with high Signal-to-Noise Ratio (SNR) and ❄️ freeze the rest.
I trained the top 30% of model layers.

๐Ÿ“ Spectrum paper: https://arxiv.org/abs/2406.06623


📊 Vibe check and performance on Italian benchmarks seem encouraging
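
A quick way to try the model with transformers (generation settings are illustrative):

```python
from transformers import pipeline

chat = pipeline("text-generation", model="anakin87/Phi-3.5-mini-ITA", device_map="auto")

messages = [{"role": "user", "content": "Spiegami brevemente cos'è il fine-tuning."}]
output = chat(messages, max_new_tokens=128)
print(output[0]["generated_text"][-1]["content"])  # assistant reply
```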
reacted to efederici's post with ❤️ 4 months ago
Finally, I can post! 🚀

I created a Capybara-inspired Italian dataset by translating the initial instruction and running it through a pipeline to generate conversations. I used Claude Sonnet for translation and instruction generation, and Opus for generating the answers.

I hope this dataset proves useful for people working on 🇮🇹 language models.

โ› Open sourcing the dataset here: efederici/capybara-claude-15k-ita
reacted to gabrielmbmb's post with ❤️ 5 months ago
distilabel 1.3.0 is out! This release contains many core improvements and new tasks that helped us build argilla/magpie-ultra-v0.1!

Distributed pipeline execution with Ray, new Magpie tasks, reward models, components for dataset diversity based on sentence embeddings, Argilla 2.0 compatibility and many more features!

Check the new release in GitHub: https://github.com/argilla-io/distilabel

reacted to Ameeeee's post with 🔥 5 months ago
โค๏ธโ€๐Ÿ”ฅย Just released version 2.0 of Argilla!

This small revolution includes:

🔌 You can now integrate with the Hugging Face Hub and get started in under five minutes.
🪂 A single Dataset class is now designed to handle multiple tasks.
🔧 It's 100 times simpler to configure your dataset now with the new SDK!
📖 The documentation has been revamped to be cleaner and more user-friendly.
🌍 A new feature automates splitting annotation tasks among a team.
✍️ The layout has been made more flexible to accommodate many use cases.

Check out the release highlights for more details: https://github.com/argilla-io/argilla/releases/tag/v2.0.0