Gradio-Themes-Party

company

AI & ML interests

None defined yet.

Recent Activity

Gradio-Themes's activity

akhaliq posted an update about 1 month ago
Google drops Gemini 2.0 Flash Thinking

A new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans (with its thoughts visible), can solve complex problems at Flash speeds, and more.

Now available in anychat. Try it out: akhaliq/anychat
akhaliq posted an update about 2 months ago
QwQ-32B-Preview is now available in anychat

A reasoning model that is competitive with OpenAI o1-mini and o1-preview

try it out: akhaliq/anychat
akhaliq posted an update about 2 months ago
New model drop in anychat

allenai/Llama-3.1-Tulu-3-8B is now available

try it here: akhaliq/anychat
akhaliq posted an update about 2 months ago
anychat

supports ChatGPT, Gemini, Perplexity, Claude, Meta Llama, and Grok, all in one app

try it out here: akhaliq/anychat
louisbrulenaudet posted an update 2 months ago
I’ve published a new dataset to simplify model merging 🤗

This dataset makes it easier to find compatible architectures for model merging with @arcee_ai’s mergekit, streamlining the automation of high-performance merge searches 📖

Dataset: louisbrulenaudet/mergekit-configs
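
As a quick way to explore it, here is a minimal sketch using the standard datasets library; the split name is an assumption, and the column layout is printed rather than assumed:

from datasets import load_dataset

# Load the merge-configuration dataset (split name "train" is an assumption).
configs = load_dataset("louisbrulenaudet/mergekit-configs", split="train")

# Inspect the schema before relying on any particular column.
print(configs.column_names)
print(configs[0])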
louisbrulenaudet posted an update 3 months ago
Introducing Lemone-router, a series of classification models designed to produce an optimal multi-agent system for different branches of tax law.

Trained on a base of 49k lines, comprising synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B that were further refined through evol-instruct tuning, manual curation, and authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts:

label2id = {
    "Bénéfices professionnels": 0,
    "Contrôle et contentieux": 1,
    "Dispositifs transversaux": 2,
    "Fiscalité des entreprises": 3,
    "Patrimoine et enregistrement": 4,
    "Revenus particuliers": 5,
    "Revenus patrimoniaux": 6,
    "Taxes sur la consommation": 7
}

id2label = {
    0: "Bénéfices professionnels",
    1: "Contrôle et contentieux",
    2: "Dispositifs transversaux",
    3: "Fiscalité des entreprises",
    4: "Patrimoine et enregistrement",
    5: "Revenus particuliers",
    6: "Revenus patrimoniaux",
    7: "Taxes sur la consommation"
}

It achieves the following results on the evaluation set:
- Loss: 0.4734
- Accuracy: 0.9191

Link to the collection: louisbrulenaudet/lemone-router-671cce21d6410f3570514762
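
As a sketch of how such a router can be used, the snippet below runs one of the classifiers through the transformers text-classification pipeline; the checkpoint id is a placeholder for a model from the collection:

from transformers import pipeline

# Placeholder checkpoint id: substitute an actual model from the collection.
router = pipeline("text-classification", model="louisbrulenaudet/lemone-router-l")

question = "Quelles sont les modalités de déclaration de la TVA pour une PME ?"
# Expected output: one of the 8 labels above, e.g. "Taxes sur la consommation".
print(router(question))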
louisbrulenaudet posted an update 3 months ago
🚨 I have $3,500 in Azure credits, including access to an H100 (96 GB), expiring on November 12, 2024.

I won’t be able to use it all myself, so I’m reaching out to the @huggingface community: are there any open-source projects with data ready for some compute power?

Let’s collaborate and make the most of it together 🔗
louisbrulenaudet posted an update 4 months ago
My biggest release of the year: a series of 7 specialized embedding models for information retrieval within tax documents is now available for free on Hugging Face 🤗

These new models aim to offer an open source alternative for in-domain semantic search from large text corpora and will improve RAG systems and context addition for large language models.

Trained on more than 43 million tax tokens derived from semi-synthetic and raw-synthetic data, enriched by various methods (in particular MSFT's evol-instruct by @intfloat ), and corrected by humans, this project is the fruit of hundreds of hours of work and is the culmination of a global effort to open up legal technologies that has only just begun.

A big thank you to Microsoft for Startups for giving me access to state-of-the-art infrastructure to train these models, and to @julien-c , @clem 🤗, @thomwolf and the whole HF team for the inference endpoint API and the generous provision of Meta LLama-3.1-70B. Special thanks also to @tomaarsen for his invaluable advice on training embedding models and Loss functions ❤️

Models are available on my personal HF page, in the Lemone-embed collection: louisbrulenaudet/lemone-embed-66fdc24000df732b395df29b
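
For illustration, a minimal semantic-search sketch with sentence-transformers; the checkpoint id is a placeholder for one of the 7 models in the collection:

from sentence_transformers import SentenceTransformer, util

# Placeholder checkpoint id: pick an actual model from the Lemone-embed collection.
model = SentenceTransformer("louisbrulenaudet/lemone-embed-l")

corpus = [
    "Les plus-values de cession de valeurs mobilières sont soumises au prélèvement forfaitaire unique.",
    "La TVA est exigible dès la livraison du bien pour les ventes de marchandises.",
]
query = "Quel est le régime d'imposition des plus-values mobilières ?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Return the single best-matching passage for the query.
print(util.semantic_search(query_embedding, corpus_embeddings, top_k=1))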
Wauplin posted an update 4 months ago
What a great milestone to celebrate! The huggingface_hub library is slowly becoming a cornerstone of the Python ML ecosystem when it comes to interacting with the @huggingface Hub. It wouldn't be there without the hundreds of community contributions and feedback! No matter if you are loading a model, sharing a dataset, running remote inference or starting jobs on our infra, you are for sure using it! And this is only the beginning so give a star if you wanna follow the project 👉 https://github.com/huggingface/huggingface_hub
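
For anyone new to the library, a minimal sketch of the kind of everyday calls the post alludes to (the repo and model ids are just examples):

from huggingface_hub import InferenceClient, hf_hub_download

# Download a single file from a model repository (cached locally).
config_path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(config_path)

# Run remote inference against a hosted model.
client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")
print(client.text_generation("The huggingface_hub library is", max_new_tokens=20))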
Wauplin posted an update 4 months ago
🚀 Exciting News! 🚀

We've just released 𝚑𝚞𝚐𝚐𝚒𝚗𝚐𝚏𝚊𝚌𝚎_𝚑𝚞𝚋 v0.25.0 and it's packed with powerful new features and improvements!

✨ 𝗧𝗼𝗽 𝗛𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀:

• 📁 𝗨𝗽𝗹𝗼𝗮𝗱 𝗹𝗮𝗿𝗴𝗲 𝗳𝗼𝗹𝗱𝗲𝗿𝘀 with ease using huggingface-cli upload-large-folder. Designed for your massive models and datasets. Much recommended if you struggle to upload your Llama 70B fine-tuned model 🤡
• 🔎 𝗦𝗲𝗮𝗿𝗰𝗵 𝗔𝗣𝗜: new search filters (gated status, inference status) and fetch trending score.
• ⚡𝗜𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝗖𝗹𝗶𝗲𝗻𝘁: major improvements simplifying chat completions and handling async tasks better.

We’ve also introduced tons of bug fixes and quality-of-life improvements - thanks to the awesome contributions from our community! 💪

💡 Check out the release notes: Wauplin/huggingface_hub#8

Want to try it out? Install the release with:

pip install huggingface_hub==0.25.0
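
If you prefer Python over the CLI, the same feature should be reachable through HfApi.upload_large_folder (assuming huggingface_hub >= 0.25.0); the repo id and local path below are placeholders:

from huggingface_hub import HfApi

api = HfApi()
api.upload_large_folder(
    repo_id="username/my-llama-70b-finetune",   # placeholder repo id
    repo_type="model",
    folder_path="./my-llama-70b-finetune",      # placeholder local folder
)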

louisbrulenaudet posted an update 4 months ago
The Romulus model series has been released on Hugging Face, continually pre-trained on 34,864,949 tokens of French laws and intended to serve as a foundation for fine-tuning on labeled data 🤗

The training code, dataset, and model weights are open and freely available on HF, and training ran on an H100 provided by Microsoft for Startups, using Unsloth AI by @danielhanchen and @shimmyshimmer 🦥

Link to the base model: louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1

Link to the instruct model: louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1-Instruct

Link to the dataset: louisbrulenaudet/Romulus-cpt-fr

Please note that, as they stand, these models have not been aligned to produce usable text and will certainly need to be fine-tuned for the desired tasks in order to produce satisfactory results.
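
For reference, a minimal sketch of loading the instruct model with transformers; the generation settings are illustrative, and the assumption that the repository ships a chat template is mine:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumes the tokenizer defines a chat template; fall back to a plain prompt otherwise.
messages = [{"role": "user", "content": "Résume l'article 1240 du Code civil."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))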
louisbrulenaudet posted an update 4 months ago
An example application of LegalKit is the production of knowledge graphs; here is a demo Space 🔗

With the update of the French legal code data model uploaded to 🤗 and the introduction of a column dedicated to HTML text, it's now easy to extract links between different articles and produce complex graphs with just a few lines of Python.

This simplified demo highlights the ease of implementation and creative potential, and enables the generation of complete data sets, although requiring a powerful graphics card for display. The framework used for the moment is D3.js, but perhaps other solutions are possible. I'd be delighted to hear your suggestions, and look forward to hearing from the community.

Link to the 🤗 Space: louisbrulenaudet/legalkit-knowledge-graph
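
For the curious, a rough sketch of the underlying idea; the dataset id and the column names ("id", "html") are assumptions and need to be adapted to the actual LegalKit schema:

import networkx as nx
from bs4 import BeautifulSoup
from datasets import load_dataset

# Placeholder dataset id and column names: adapt to the actual LegalKit schema.
articles = load_dataset("louisbrulenaudet/code-civil", split="train")

graph = nx.DiGraph()
for article in articles:
    soup = BeautifulSoup(article["html"], "html.parser")
    # Treat every hyperlink in the article body as an edge to another article.
    for link in soup.find_all("a", href=True):
        graph.add_edge(article["id"], link["href"])

print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")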
Niansuh posted an update 5 months ago
Plugins in NiansuhAI

Plugin Names:
1. WebSearch: Searches the web using search engines.
2. Calculator: Evaluates mathematical expressions, extending the base Tool class.
3. WebBrowser: Extracts and summarizes information from web pages.
4. Wikipedia: Retrieves information from Wikipedia using its API.
5. Arxiv: Searches and fetches article information from Arxiv.
6. WolframAlphaTool: Provides answers on math, science, technology, culture, society, and everyday life.

These plugins currently support the GPT-4o-2024-08-06 model, which also supports image analysis.

Try it now: https://huggingface.co/spaces/NiansuhAI/chat

Similar to: https://hf.co/chat
louisbrulenaudet posted an update 5 months ago
Understanding the JSON response format with HF's Serverless Inference API 🤗

As it stands, there seems to be an inconsistency with the OpenAI documentation on how to request a JSON-formatted response through the InferenceClient chat-completion API.

After investigating the InferenceClient source code, I share the official solution using a JSON Schema. This enforces a consistent response structure and simplifies parsing as part of an automated process for extracting metadata or other information:
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")

messages = [
    {
        "role": "user",
        "content": "I saw a puppy a cat and a raccoon during my bike ride in the park. What did I saw and when?",
    },
]

response_format = {
    "type": "json",
    "value": {
        "properties": {
            "location": {"type": "string"},
            "activity": {"type": "string"},
            "animals_seen": {"type": "integer", "minimum": 1, "maximum": 5},
            "animals": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["location", "activity", "animals_seen", "animals"],
    },
}

response = client.chat_completion(
    messages=messages,
    response_format=response_format,
    max_tokens=500,
)

print(response.choices[0].message.content)

As a reminder, JSON mode is activated with the OpenAI client as follows:
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[...],
    response_format={"type": "json_object"},
)

One question remains open, however, and will perhaps be answered by the community: an incompatibility seems to persist when generating lists of dictionaries, and for now producing simple (flat) dictionaries appears to be the only working option.
Hev832 posted an update 5 months ago
Today I made Shadow Chat, which lets you chat with Shadow the Hedgehog (I was just bored, so I made this lol)

try it now here:
Hev832/Shadow_Chat