FrenchLlama - The French Grammar Tutor

This is a custom-built language model, fine-tuned from Meta's Llama 3.2 3B Instruct. I trained the first iteration on a dataset of 1,500+ examples explaining French grammar. It's essentially my experiment to see how well a small LLM can learn to produce coherent, domain-specific outputs. The version you are looking at is a quantization for use in apps like LM Studio. Because the 3B model is heavier than the 1B one, a few more quants are available.

As an added note, I will be updating this every once in a while. As of December 29, 2024, I have decided to include all versions as downloadable GGUFs.

I used the following settings during training:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./llama_fine_tuned",   # where checkpoints are written
    per_device_train_batch_size=3,
    gradient_accumulation_steps=4,     # effective batch size of 12
    learning_rate=1e-3,
    num_train_epochs=5,
    weight_decay=0.01,
    warmup_steps=100,
    save_steps=50,
    fp16=False,
    bf16=True,                         # bfloat16 mixed precision
    logging_steps=50,
    save_total_limit=2,                # keep only the two most recent checkpoints
    report_to='none'
)
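
For reference, here is a minimal sketch of how these arguments might be wired into the standard Hugging Face Trainer. This is not my exact training script: the tokenized_train_dataset variable is a placeholder for the tokenized grammar dataset, and the base model name is the gated Llama 3.2 3B Instruct checkpoint on the Hub.

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
)

# Base model (gated on the Hugging Face Hub; request access first).
model_name = "meta-llama/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

trainer = Trainer(
    model=model,
    args=training_args,                     # the TrainingArguments defined above
    train_dataset=tokenized_train_dataset,  # placeholder: your tokenized grammar examples
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()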

Moreover, I've noticed that using a system prompt to steer the output in the right direction has a noticeable impact on both the base Llama 3.2 1B model and FrenchLlama, and the same holds for the 3B versions of the models. This is the system prompt I recommend:

You are FrenchLlama, a highly knowledgeable and polite AI assistant whose primary purpose is to help users improve their understanding of French grammar. You are an expert in all aspects of French grammar, including verb tenses, moods, sentence structure, prepositions, pronouns, and the correct use of different clauses. You always respond in clear, concise, and accurate ways, and you always provide examples that are relevant to the context.

Your main goal is to assist users with any questions or tasks relating to French grammar. You do this by providing helpful responses that clarify the definitions, rules, and nuances of the French language. When a user asks a question, you must first try to understand their intent, and ask clarifying questions if needed. If their question is too vague, or it does not directly relate to French grammar, you should politely ask them to rephrase the question or ask for more specifics.

You are not meant to be a chatbot, and you should never respond to questions that do not directly relate to French grammar. You should always be professional and polite, and you should always focus on providing accurate grammar explanations. You should also make an effort to explain why and when a specific grammatical construct or rule is used, instead of just stating the rule. If you provide examples, you should always make sure that those examples are accurate and relevant to the topic the user is asking about. When you are asked to perform a writing task, such as a love letter, you should always try to create a piece of writing that is both beautiful and creative, and that also incorporates a variety of different grammar rules.
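
If you want to try the system prompt locally, here is a minimal sketch using llama-cpp-python with one of the GGUF quants. The model filename below is just an example; point it at whichever quant you downloaded.

from llama_cpp import Llama

# Shortened here for readability; paste the full system prompt from above.
SYSTEM_PROMPT = (
    "You are FrenchLlama, a highly knowledgeable and polite AI assistant whose "
    "primary purpose is to help users improve their understanding of French grammar. ..."
)

llm = Llama(
    model_path="./FrenchLlama-3.2-3B-Instruct-Q8_0.gguf",  # example filename
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is the 'passé composé'?"},
    ],
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])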

Available Versions (3B)

Example Outputs

The 1B family of FrenchLlama models was, in all honesty, a bit of a disaster in my eyes, but it was also tons of fun seeing them come to life and actually produce results! The very first 3B release of FrenchLlama shows a significant improvement in its French grammar explanations, with noticeably fewer quirks. I'm convinced that with the right data and training settings, small LLMs can be fine-tuned to the point of matching SOTA-level LLMs at very specific tasks. This could pave the way for router-based architectures to take off, where a network of field-specific LLMs handles inference depending on the user's input. Unlike MoE (Mixture of Experts), these architectures (I call my version CAMR - Context-Aware Model Routing) use a routing model that mounts the appropriate fine-tuned LLM (ideally a small one) to answer domain-specific inquiries. Because most people can't run massive LLMs on local hardware, mounting small LLMs based on the router's interpretation of the user input can democratize the use of local LLMs while still delivering higher-quality responses.
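
Here is a rough sketch of what CAMR could look like in practice. The keyword-based routing and the model paths are hypothetical stand-ins; a real router would more likely be a small classifier or embedding model.

from llama_cpp import Llama

# Hypothetical registry of small, domain-specific GGUF fine-tunes.
DOMAIN_MODELS = {
    "french_grammar": "./FrenchLlama-3.2-3B-Instruct-Q8_0.gguf",
    "general": "./Llama-3.2-3B-Instruct-Q8_0.gguf",
}

def route(prompt: str) -> str:
    # Keyword matching stands in for a proper routing model here.
    french_markers = ("french", "passé", "subjonctif", "grammaire")
    if any(marker in prompt.lower() for marker in french_markers):
        return "french_grammar"
    return "general"

def answer(prompt: str) -> str:
    # Mount only the model the router selected, keeping memory usage
    # within what consumer hardware can handle.
    llm = Llama(model_path=DOMAIN_MODELS[route(prompt)], n_ctx=4096)
    result = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
    )
    return result["choices"][0]["message"]["content"]

print(answer("How does the passé composé work in French?"))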

  1. Verb Tense Explanations

User: What is the 'passé composé'?

FrenchLlama: The 'passé composé' (past completed) is one of the main tenses in French. It's used to describe an action that was completed at a specific time in the past. The tense is formed by combining the auxiliary verb 'avoir' or 'être' with the past participle of the main verb.

For example:

- 'J'ai mangé une pomme.' (I ate an apple.)
- 'Elle est allée au parc.' (She went to the park.)

The choice between 'avoir' and 'être' as the auxiliary depends on the main verb. If the main verb is a regular verb that ends in '-ir,' use 'avoir'; if it's a regular verb that ends in '-re,' use 'être'; or use an irregular verb, which will have its own rules.

The past participle must agree with the subject in gender and number when used as an adjective. When used with the auxiliary verb to form the tense, it remains the same regardless of gender and number of the subject.

For example: 

- 'Elle a mangé la pomme.' (She ate the apple.) The 'mangé' agrees with 'elle', which is feminine singular. 
- 'Ils ont mangé les pommes.' (They ate the apples.) The 'mangé' does not change, even though there are masculine plural subjects.

This first example is a pretty decent explanation of the passé composé tense in French. Key mistakes to point out:

  1. The literal English translation of the tense is 'compound past' (or 'composed past'), not 'past completed' as claimed by FrenchLlama.
  2. FrenchLlama incorrectly explains that the choice between the two auxiliary verbs 'avoir' and 'être' depends on the suffix of the main verb. In truth, a specific set of verbs requires 'être' (mostly intransitive verbs of motion or change of state, plus all pronominal verbs), while everything else takes 'avoir'.
  3. FrenchLlama also says that irregular verbs have their own rules, which is a fabrication in this context.
  4. The remainder of the explanation is balderdash. With 'être', the past participle agrees with the subject; with 'avoir', it agrees with the direct object only when that object precedes the verb. FrenchLlama hallucinated the rest of the response.