AI & ML interests

None defined yet.

Recent Activity

t0-0 updated a dataset about 3 hours ago: llm-jp/leaderboard-requests
t0-0 updated a dataset about 4 hours ago: llm-jp/leaderboard-contents
t0-0 updated a dataset about 4 hours ago: llm-jp/leaderboard-results

llm-jp's activity

AkimfromParis posted an update 3 days ago
💵 Polymarket is leveraging the “Chatbot Arena LLM Leaderboard” on HuggingFace for online gambling on “Top AI model on January 31?”. 🤗

As of January 3rd, 2025:
1. Gemini (83%)
2. ChatGPT (13%)
3. Other (2%)
4. Claude (2%)
5. Grok (1%)
6. Llama (<1%)

🇺🇸 The market opinion follows historical data. It's clearly biased towards the historical US AI giants, yet Polymarket is forbidden in the USA and for US citizens.

🇨🇳 In the “Other”, you might have Chinese AI labs that are probably the future AI leaders (Qwen, DeepSeek, Yi).

⚖️ In the market resolution, if two models are tied in the evaluation, alphabetical order will be used (e.g. if Google and xAI were tied, “Google” would resolve to “Yes” and “xAI” would resolve to “No”). 🙃

That might be a violation of the Chatbot Arena policy? And maybe of HuggingFace's? @clem
Or maybe the authors and contributors should get a cut each month as “market makers”. @weichiang @angelopoulos
AkimfromParis posted an update 11 days ago
🇺🇸 🇨🇦 🇬🇧 Nobel Prize winners against USSR & Japanese AI pioneers ☭🇯🇵

🇩🇪 Prof. Jürgen Schmidhuber:  “The #NobelPrize in Physics 2024 for Hopfield & Hinton turns out to be a Nobel Prize for plagiarism. They republished methodologies developed in #Ukraine and #Japan by Ivakhnenko and Amari in the 1960s & 1970s, as well as other techniques, without citing the original inventors.”

1965 - First Deep Learning - USSR ☭ (now Ukraine 🇺🇦)
Ivakhnenko and Lapa introduced the first deep learning method: deep MLPs that learn internal representations of the input data.

1967/68 - Deep Learning by Stochastic Gradient Descent - Japan 🇯🇵
Shun-Ichi Amari trained MLPs with many layers in a non-incremental, end-to-end fashion from scratch by stochastic gradient descent (SGD).

1969 - Rectified linear unit - Japan 🇯🇵
In 1969, Kunihiko Fukushima introduced ReLU in the context of visual feature extraction in hierarchical neural networks.

1970 - Backpropagation - Finland 🇫🇮 😃
In 1970, Seppo Linnainmaa was the first to publish the reverse mode of automatic differentiation, now known as backpropagation.

1972 - Recurrent Neural Network - Japan 🇯🇵
In 1972, Shun-Ichi Amari published a learning recurrent neural network based on the Lenz-Ising model (Amari's net was later called the "Hopfield network"; Hopfield republished it in 1982 without citing Amari's papers).

1979 - First Convolutional neural network - Japan 🇯🇵
The CNN architecture, also known as the Neocognitron, was introduced in 1979 by Kunihiko Fukushima.

https://people.idsia.ch/~juergen/deep-learning-history.html#AMH2
  • 11 replies
AkimfromParis posted an update about 2 months ago
🇯🇵 The Open Japanese LLM Leaderboard created by LLM-jp 🌸 in partnership with HuggingFace 🤗 was released today!

Blog: https://huggingface.co/blog/leaderboard-japanese
Space: llm-jp/open-japanese-llm-leaderboard

🌍 The leaderboard is available in both Japanese and English
📚 Based on the evaluation tool llm-jp-eval, with more than 20 datasets for Japanese LLMs (a small loading sketch follows this list)
📊 The leaderboard showcases all the metrics for NLP experts, plus averages for NLP beginners
💻 For the comfort of users, we chose a horizontal UI and implemented light and dark themes in Gradio
🔬 The radar chart provides a very interesting visualization of metrics!
🌱 We are using the Japanese research platform, MDX, so please be patient!
⚡ LLMs bigger than 70B will be evaluated soon…
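For anyone who wants to inspect the numbers behind the Space directly, here is a minimal sketch using the Hugging Face `datasets` library. It assumes the llm-jp/leaderboard-contents dataset listed in the recent activity above is publicly readable and exposes a "train" split; the actual split and column names may differ from what the leaderboard UI displays.

```python
# Minimal sketch, not the leaderboard's own tooling.
# Assumptions: "llm-jp/leaderboard-contents" (listed in the recent activity above)
# is publicly readable and has a "train" split; split/column names may differ.
from datasets import load_dataset

ds = load_dataset("llm-jp/leaderboard-contents", split="train")

print(ds.column_names)  # which metrics/averages the leaderboard exposes
print(ds[0])            # one model's row of scores
```

From there, the per-metric columns could be fed to any plotting library to recreate something like the radar chart mentioned above.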

How do you say “GPUs Go Brrr” in Japanese? -> GPUがブンブン~! (pronounced "GPU ga bunbun!") 🔥
  • 4 replies
AkimfromParis posted an update 3 months ago
Philosopher Gilles Deleuze in 1985-86 on the society of control, probabilities, and power. Visionary words in an era of autoregressive models:

"The biopolitics of populations appears when right sets about administering life, says Foucault, administering life in any open multiplicities whatever. You see the importance of the difference between discipline and biopolitics. The one is in an open space, with large multiplicities to which limits are not assignable. They can only be treated by the calculus of probabilities, hence the development of the calculus of probabilities and the meaning [sens] of the social control of probabilities, the probabilities of marriage in a nation, the probabilities of mortality, probabilities of natality. Natality, nuptiality, mortality …

... When Foucault directly addresses the question of power, namely, one of his great theses: no, power does not repress, or it represses only secondarily. What does it do? It does something much more profound and, doubtless, more formidable than repressing: it forms, it shapes. It does not silence, it does worse: it makes speak. It disciplines, it standardizes [normalise]. But repression is entirely secondary in relation to the positive operations of power.

Power does not repress, it disciplines, it manages, it controls, it standardizes, etcetera. It does not silence, it makes speak. It does not prevent acting, it makes act."

From the Deleuze Seminars at Université Paris 8 translated by Purdue University -> https://deleuze.cla.purdue.edu/