---
license: apache-2.0
language:
- en
datasets:
- Magpie-Align/Magpie-Reasoning-V1-150K-CoT-QwQ
library_name: transformers
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
---
![image](./image.webp)

# Sky-T1-32B-Preview Fine-Tuned Model

## Model Details

- **Developed by:** Daemontatox
- **Model type:** Text Generation
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** [NovaSky-AI/Sky-T1-32B-Preview](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview)
- **Training dataset:** [Magpie-Reasoning-V1-150K-CoT-QwQ](https://huggingface.co/datasets/Magpie-Align/Magpie-Reasoning-V1-150K-CoT-QwQ)
- **Training framework:** [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library

## Model Description

This model is a fine-tuned version of **NovaSky-AI/Sky-T1-32B-Preview**, optimized for text-generation tasks that require reasoning. It was trained on the **Magpie-Reasoning-V1-150K-CoT-QwQ** dataset, which focuses on reasoning and chain-of-thought (CoT) problems. Training was accelerated with **Unsloth**, giving roughly a 2x speedup over standard Hugging Face fine-tuning.
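
The dataset is publicly available on the Hub and can be inspected directly. The snippet below is a minimal sketch; the exact preprocessing applied before fine-tuning is not documented in this card.

```python
from datasets import load_dataset

# Load the reasoning / chain-of-thought dataset used for fine-tuning
dataset = load_dataset("Magpie-Align/Magpie-Reasoning-V1-150K-CoT-QwQ", split="train")

# Inspect the columns and one sample to see how prompts and CoT responses are structured
print(dataset.column_names)
print(dataset[0])
```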

## Intended Use

This model is intended for **text generation** tasks, particularly those requiring reasoning and logical coherence. It can be used for:

- Chain-of-thought reasoning
- Question answering
- Content generation
- Educational tools

## Training Details

- **Training framework:** Unsloth + Hugging Face TRL (see the training sketch below)
- **Training speed:** roughly 2x faster than standard Hugging Face fine-tuning
- **Dataset:** Magpie-Reasoning-V1-150K-CoT-QwQ
- **Base model:** NovaSky-AI/Sky-T1-32B-Preview

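The full training script is not included in this card. The sketch below shows how a comparable Unsloth + TRL supervised fine-tuning run is typically set up; the hyperparameters (sequence length, LoRA rank, batch size, learning rate) and the 4-bit/LoRA setup are illustrative assumptions, not the exact values used for this model. Depending on the installed TRL version, some arguments may need to move into an `SFTConfig`.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model with Unsloth (4-bit loading is an assumption to reduce memory)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NovaSky-AI/Sky-T1-32B-Preview",
    max_seq_length=4096,   # illustrative value
    load_in_4bit=True,     # assumption: QLoRA-style fine-tuning
)

# Attach LoRA adapters (rank, alpha, and target modules are illustrative)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Magpie-Align/Magpie-Reasoning-V1-150K-CoT-QwQ", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: examples pre-formatted into a single text column
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```
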
## How to Use

You can use this model with the Hugging Face `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer (bfloat16 + device_map="auto" help fit a 32B model in GPU memory)
model = AutoModelForCausalLM.from_pretrained(
    "Daemontatox/Sky-T1-32B-Preview-Finetuned",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Daemontatox/Sky-T1-32B-Preview-Finetuned")

# Generate text
input_text = "Explain the concept of chain-of-thought reasoning."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

# Decode and print the output
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
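
Because the base model is Qwen2-based, the tokenizer most likely ships with a chat template; if so, prompts can also be formatted with `apply_chat_template`. This is a sketch that assumes the model and tokenizer from the example above are already loaded.

```python
# Assumes `model` and `tokenizer` are loaded as in the previous example
messages = [
    {"role": "user", "content": "Solve step by step: a train travels 120 km in 1.5 hours. What is its average speed?"}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)

# Print only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```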

## Limitations

- The model may generate incorrect or nonsensical responses when inputs are ambiguous or fall outside its training domain.
- It is trained primarily on English data, so performance may degrade in other languages.

## Ethical Considerations

- **Bias:** The model may inherit biases present in its training data. Exercise caution when deploying it in sensitive applications.
- **Misuse:** The model should not be used to generate harmful, misleading, or unethical content.

## Citation

```bibtex
@misc{novasky-sky-t1-32b-preview,
  author = {NovaSky-AI},
  title = {Sky-T1-32B-Preview},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview}},
}

@misc{unsloth,
  author = {Unsloth Team},
  title = {Unsloth: Faster Training for Transformers},
  year = {2023},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/unslothai/unsloth}},
}

```


## Acknowledgements
Thanks to **NovaSky-AI** for the base model.

Thanks to **Unsloth** for the faster training framework.

Thanks to **Hugging Face** for the TRL library and tools.