---
base_model: nvidia/Mistral-NeMo-Minitron-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
pipeline_tag: text-generation
---

# Uploaded model

- **Developed by:** prithivMLmods
- **License:** apache-2.0
- **Finetuned from model:** nvidia/Mistral-NeMo-Minitron-8B-Instruct

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.


# Run with Ollama 🦙

### Download and Install Ollama

To get started, download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows, macOS, or Linux system.
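Once the installer finishes, you can confirm the CLI is available from a terminal:

```bash
ollama --version
```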

### Run Your Own Model in Minutes

### Steps to Run GGUF Models

#### 1. Create the Model File
   - Create a plain-text Modelfile and name it appropriately, for example, `nemo-minitron`.

#### 2. Add the `FROM` Line
   - Include a `FROM` line that points to the base GGUF model file. For instance:

     ```bash
     FROM Nemo-Minitron-8B-Instruct-GGUF
     ```

   - Make sure the GGUF model file is in the same directory as your Modelfile (a fuller Modelfile sketch follows below).
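A Modelfile can optionally set sampling parameters and a default system prompt in addition to the `FROM` line. A minimal sketch of a `nemo-minitron` Modelfile, assuming the GGUF weights file is named `Nemo-Minitron-8B-Instruct-GGUF` as above (the parameter values and system prompt are only examples):

```bash
# Modelfile for nemo-minitron
FROM ./Nemo-Minitron-8B-Instruct-GGUF

# Optional sampling parameters (example values)
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Optional default system prompt (example)
SYSTEM "You are a concise, helpful assistant."
```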

#### 3. Create the Model
   - Use the following command in your terminal to build the model from your Modelfile:

     ```bash
     ollama create nemo-minitron -f ./nemo-minitron
     ```

   - Upon success, a confirmation message will appear.

   - To verify that the model was created successfully, run:

     ```bash
     ollama list
     ```

      Ensure that `nemo-minitron` appears in the list of models.

---

## Running the Model

To run the model, use:

```bash
ollama run nemo-minitron
```
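While Ollama is running, it also exposes a local HTTP API (on port 11434 by default), so the same model can be called from scripts or other tools. A minimal sketch using `curl`; the prompt text is only an example:

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "nemo-minitron",
  "prompt": "Write a mini passage about SpaceX.",
  "stream": false
}'
```

With `"stream": false` the response is returned as a single JSON object instead of a token-by-token stream.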

### Sample Usage

In the command prompt, run:

```bash
D:\>ollama run nemo-minitron
```

Example interaction:

```plaintext
>>> write a mini passage about space x
Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration.
With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in
the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have
demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented
efficiency. As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes
increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without
sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X
plays a pivotal role in pushing the boundaries of human exploration and settlement.
```
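For non-interactive use, a prompt can also be passed directly on the command line, in which case the model prints a single completion and exits; the prompt below is only an example:

```bash
ollama run nemo-minitron "Summarize reusable rockets in two sentences."
```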

---