2A2I

community
Activity Feed

AI & ML interests

Arabic LLMs & Diffusion Models

Recent Activity

2A2I's activity

alielfilali01
posted an update 5 days ago
~75% on the challenging GPQA with only 40M parameters 🔥🥳

GREAT ACHIEVEMENT! Or is it?

This new work, "Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation", takes the mystery out of many models whose results I personally suspected, especially on leaderboards other than the English one, like the Open Arabic LLM Leaderboard OALL/Open-Arabic-LLM-Leaderboard.

The authors of this work, first started by training a model on the GPQA data, which, unsurprisingly, led to the model achieving 100% performance.

Afterward, they trained what they referred to as a 'legitimate' model on legitimate data (MedMCQA). However, they introduced a distillation loss from the earlier, 'cheated' model.

What they discovered was fascinating: the knowledge of GPQA leaked through this distillation loss, even though the legitimate model was never explicitly trained on GPQA during this stage.

This raises important questions about the careful use of distillation in model training, especially when the training data is opaque. As they demonstrated, it’s apparently possible to (intentionally or unintentionally) leak test data through this method.
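The leak mechanism described above can be sketched as a standard distillation objective: the student is trained on legitimate hard labels plus a KL term pulling it toward the teacher's soft predictions, and anything the teacher memorized (e.g. a benchmark) can ride along through that KL term. A minimal NumPy sketch; the temperature `T` and mixing weight `alpha` are illustrative defaults, not values from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Blend of hard-label cross-entropy on the 'legitimate' data and
    KL divergence toward the teacher's soft labels. If the teacher
    memorized a benchmark, that signal can leak through the KL term."""
    p_student = softmax(student_logits)
    ce = -np.log(p_student[np.arange(len(labels)), labels] + 1e-12).mean()
    q_teacher = softmax(teacher_logits, T)
    q_student = softmax(student_logits, T)
    kl = (q_teacher * (np.log(q_teacher + 1e-12)
                       - np.log(q_student + 1e-12))).sum(axis=-1).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

Note that even with `alpha` small, every gradient step still carries some of the teacher's distribution, which is why the contamination is hard to spot from the training data alone.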

Find out more: Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation (2412.15255)
alielfilali01
posted an update 22 days ago
Unpopular opinion: Open Source takes courage!

Not everyone is brave enough to release what they have done (the way they've done it) into the wild to be judged!
It really requires a high level of "knowing wth you are doing"! It's kind of a superpower!

Cheers to the heroes here who see this!
alielfilali01
posted an update 26 days ago
Apparently I forgot to put this here!

Well, this is a bit late, but consider giving our recent blog a read if you are interested in evaluation.

You don't have to be into Arabic NLP to read it; the main contribution we are introducing is a new evaluation measure for NLG. We made the first application of this measure on Arabic for now, and we will be working with colleagues from the community to expand it to other languages.

Blog:
Rethinking LLM Evaluation with 3C3H: AraGen Benchmark and Leaderboard
https://huggingface.co/blog/leaderboard-3c3h-aragen

Space:
inceptionai/AraGen-Leaderboard

Give it a read and let me know your thoughts 🤗
alielfilali01
posted an update about 2 months ago
Unpopular opinion: o1-preview is more stupid than 4o, and Qwen2.5-72B-Instruct is extremely underrated!
alielfilali01
posted an update 2 months ago
I feel like this incredible resource hasn't gotten the attention it deserves in the community!

@clefourrier and generally the Hugging Face evaluation team put together a fantastic guidebook covering a lot about EVALUATION, from basics to advanced tips.

link : https://github.com/huggingface/evaluation-guidebook

I haven't finished it yet, but I'm enjoying every piece of it so far. Huge thanks @clefourrier and the team for this invaluable resource!
alielfilali01
posted an update 3 months ago
Why is nobody talking about the new training corpus released by MBZUAI today?

TxT360 is a 15+ trillion-token corpus outperforming FineWeb on several metrics. Ablation studies were done up to 1T tokens.

Read blog here : LLM360/TxT360
Dataset : LLM360/TxT360
alielfilali01
posted an update 3 months ago
Don't you think we should add an "Evaluation" tag for datasets that are meant to be benchmarks and not for training?

At least then, when someone is collecting a group of datasets from an organization, or let's say the whole Hub, they can filter on that tag and avoid contaminating their "training" data.
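A minimal sketch of what that filtering could look like once such a tag exists (the `tags` field and the tag name here are hypothetical, not an existing Hub feature):

```python
def exclude_benchmarks(datasets, tag="evaluation"):
    """Keep only datasets NOT tagged as evaluation benchmarks,
    e.g. when assembling a training mixture from the Hub."""
    return [d for d in datasets if tag not in d.get("tags", [])]

# Hypothetical dataset records, as they might come back from a Hub listing
corpus = [
    {"id": "org/web-crawl", "tags": ["text"]},
    {"id": "org/gpqa-style-benchmark", "tags": ["text", "evaluation"]},
]
train_pool = exclude_benchmarks(corpus)  # benchmark is dropped
```

The point is that the exclusion becomes a one-liner instead of relying on every team to maintain their own blocklist of benchmark names.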
alielfilali01
posted an update 3 months ago
We need a fork feature for models and datasets, similar to "Duplicate this space" in Spaces! Don't you think?

Sometimes you just want to save something in your profile privately and work on it later without the hassle of "load_.../push_to_hub" in a code file.

I know this is super lazy 😅 But it is what it is ...

tag : @victor
alielfilali01
posted an update 3 months ago
@mariagrandury (SomosNLP) and team release the Spanish leaderboard!!!
It is impressive how they chose to design this leaderboard and how it supports 4 languages (all spoken in Spain, of course).

Check it out from this link :
la-leaderboard/la-leaderboard
pain
posted an update 3 months ago
alielfilali01
posted an update 4 months ago
alielfilali01
posted an update 4 months ago
Are the servers down or what? Am I the only one experiencing this error:
HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/...../)

Internal Error - We're working hard to fix this as soon as possible!
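If anyone else hits this: transient 500s usually resolve on their own, so retrying with backoff is often enough to tide you over. A generic wrapper like this (illustrative, not part of huggingface_hub) works around any flaky call:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying with exponential backoff on transient errors
    (e.g. HTTP 500s from an API that is briefly down). Re-raises the
    last error if all attempts fail."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Usage would be something like `with_retries(lambda: load_dataset("org/name"))`, narrowing `retry_on` to the HTTP error type you actually see.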
alielfilali01
posted an update 4 months ago
Datapluck: Portability Tool for Huggingface Datasets

"I found myself recently whipping up notebooks just to pull huggingface datasets locally, annotate or operate changes and update them again. This happened often enough that I made a cli tool out of it, which I've been using successfully for the last few months.

While huggingface uses open formats, I found the official toolchain relatively low-level and not adapted to quick operations such as what I am doing."
~ @omarkamali

Link : https://omarkama.li/blog/datapluck
alielfilali01
posted an update 5 months ago
Any idea if this "scheduled"/"dynamic" batch size is available in HF Trainer? I've never seen it personally.
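The schedule itself is trivial to express; the hard part would be wiring it into Trainer, which (as far as I know) builds its dataloader with a fixed batch size, so you'd have to rebuild the dataloader per stage. A hypothetical piecewise schedule, with illustrative breakpoints:

```python
def scheduled_batch_size(step, schedule=((0, 32), (1000, 64), (5000, 128))):
    """Return the batch size for a training step from a piecewise
    schedule of (start_step, batch_size) pairs, sorted by start_step.
    The last pair whose start_step <= step wins."""
    size = schedule[0][1]
    for start, batch_size in schedule:
        if step >= start:
            size = batch_size
    return size
```

The usual motivation is warming up with small batches for stability, then growing the batch late in training for throughput.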
alielfilali01
posted an update 7 months ago
I'm officially considered #gpu_poor 💀
But I'm #data_rich 😎
alielfilali01
posted an update 7 months ago
Did you know you can't push a model to the Hub with an id over 96 chars 🫠
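A pre-flight check you can run before a long upload, assuming the 96-character limit mentioned above applies to the full repo id (the constant and helper name here are mine, not from the Hub client):

```python
MAX_REPO_ID_LEN = 96  # limit observed when pushing to the Hub

def check_repo_id(repo_id):
    """Fail fast, before any upload work, if the id is too long."""
    if len(repo_id) > MAX_REPO_ID_LEN:
        raise ValueError(
            f"repo id is {len(repo_id)} chars; "
            f"the Hub rejects ids over {MAX_REPO_ID_LEN}"
        )
    return repo_id
```

Cheap to call right before `push_to_hub`, and it saves you from discovering the limit only after the weights have finished uploading.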
alielfilali01
posted an update 7 months ago
The 100 models milestone on the OALL/Open-Arabic-LLM-Leaderboard is successfully reached within 10 days after the leaderboard's release 🥳

meta-llama/Meta-Llama-3-70B-Instruct is still the king of the leaderboard 👑 with a 3.46-point lead over its successor CohereForAI/c4ai-command-r-plus, who took the 2nd place 🥈 from his younger brother CohereForAI/c4ai-command-r-v01, who lives today on the 5th floor just behind Ashmal/MBZUAI-oryx (3rd place 🥉, AFAIK an experimental model from MBZUAI) and https://huggingface.co/core42/jais-30b-chat-v3 (4th place) from Core42.

PS: I should consider a career in sports commentary 😂
Would you recommend me to beIN Sports 😀?
alielfilali01
posted an update 8 months ago
alielfilali01
posted an update 8 months ago
Yesterday was just CRAZY! HF x LangChain, PaliGemma and Google I/O ... which made me totally forget to post here about our newly released leaderboard (the Open Arabic LLM Leaderboard - OALL).

Here's a quick update for our community that is waiting for new results. Some of you noticed that since the release yesterday, the finished evaluations tab has stayed at 14 models up until now (May 15th, 12 PM). For those concerned, rest assured: we had a minor memory issue in our cluster yesterday that we overlooked. The problem is now fixed, and 7 models are currently being evaluated in parallel, so expect to hit the 20 milestone today! 🎉

Check the discussion below for more info:

OALL/Open-Arabic-LLM-Leaderboard#3
alielfilali01
posted an update 8 months ago
Is it just me, or is it real that whenever APPLE releases an open model, they accompany it with a library!? First was MLX, about a month ago AXLEARN, and now CORENET! Could it be just coincidence, or is Apple playing some game? If yes, then what is it ...? What do you think? Maybe I'm just hallucinating now 😁