Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Emollama-chat-13b - GGUF
- Model creator: https://huggingface.co/lzw1008/
- Original model: https://huggingface.co/lzw1008/Emollama-chat-13b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Emollama-chat-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q2_K.gguf) | Q2_K | 4.52GB |
| [Emollama-chat-13b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [Emollama-chat-13b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [Emollama-chat-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [Emollama-chat-13b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [Emollama-chat-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q3_K.gguf) | Q3_K | 5.9GB |
| [Emollama-chat-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [Emollama-chat-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [Emollama-chat-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [Emollama-chat-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q4_0.gguf) | Q4_0 | 6.86GB |
| [Emollama-chat-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [Emollama-chat-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [Emollama-chat-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q4_K.gguf) | Q4_K | 7.33GB |
| [Emollama-chat-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [Emollama-chat-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q4_1.gguf) | Q4_1 | 7.61GB |
| [Emollama-chat-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q5_0.gguf) | Q5_0 | 8.36GB |
| [Emollama-chat-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [Emollama-chat-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q5_K.gguf) | Q5_K | 8.6GB |
| [Emollama-chat-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [Emollama-chat-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q5_1.gguf) | Q5_1 | 9.1GB |
| [Emollama-chat-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q6_K.gguf) | Q6_K | 9.95GB |
| [Emollama-chat-13b.Q8_0.gguf](https://huggingface.co/RichardErkhov/lzw1008_-_Emollama-chat-13b-gguf/blob/main/Emollama-chat-13b.Q8_0.gguf) | Q8_0 | 12.88GB |
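As a rough guide, Q4_K_M and Q5_K_M are common trade-offs between size and quality. The table above can also be turned into a small helper that picks the largest quantization fitting a memory budget. This is a convenience sketch, not part of any tooling: the numbers are the file sizes from the table (duplicate aliases such as Q3_K_M/Q4_K_M/Q5_K_M omitted), and actual runtime memory is higher because of the KV cache and other overhead.

```python
# File sizes (GB) taken from the quantization table above.
QUANT_SIZES_GB = {
    "Q2_K": 4.52, "IQ3_XS": 4.99, "IQ3_S": 5.27, "Q3_K_S": 5.27,
    "IQ3_M": 5.57, "Q3_K": 5.9, "Q3_K_L": 6.45, "IQ4_XS": 6.54,
    "Q4_0": 6.86, "IQ4_NL": 6.9, "Q4_K_S": 6.91, "Q4_K": 7.33,
    "Q4_1": 7.61, "Q5_0": 8.36, "Q5_K_S": 8.36, "Q5_K": 8.6,
    "Q5_1": 9.1, "Q6_K": 9.95, "Q8_0": 12.88,
}

def largest_quant_under(budget_gb):
    """Return the largest quant whose file fits within budget_gb, or None.

    Note: this compares file sizes only; leave headroom for the KV cache.
    """
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None
```

For example, with an 8 GB budget this picks `Q4_1` (7.61 GB), leaving the rest for the KV cache.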




Original model description:
---
license: mit
language:
- en
---

# Introduction

Emollama-chat-13b is part of the [EmoLLMs](https://github.com/lzw108/EmoLLMs) project, the first open-source large language model (LLM) series for
comprehensive affective analysis with instruction-following capability. This model is fine-tuned from the Meta LLaMA2-chat-13B foundation model on the full AAID instruction-tuning data.
It can be used for affective classification tasks (e.g. sentiment polarity
or categorical emotions) and regression tasks (e.g. sentiment strength or emotion intensity).

# Ethical Consideration

Recent studies have indicated that LLMs may introduce potential
biases, such as gender gaps. Incorrect predictions and over-generalization
further illustrate the risks of current LLMs. Many challenges therefore
remain in applying the model to real-world affective analysis systems.

## Models in EmoLLMs

The EmoLLMs series includes Emollama-7b, Emollama-chat-7b, Emollama-chat-13b, Emoopt-13b, Emobloom-7b, Emot5-large, and Emobart-large.

- **Emollama-7b**: This model is fine-tuned from LLaMA2-7B.
- **Emollama-chat-7b**: This model is fine-tuned from LLaMA2-chat-7B.
- **Emollama-chat-13b**: This model is fine-tuned from LLaMA2-chat-13B.
- **Emoopt-13b**: This model is fine-tuned from OPT-13B.
- **Emobloom-7b**: This model is fine-tuned from Bloomz-7b1-mt.
- **Emot5-large**: This model is fine-tuned from T5-large.
- **Emobart-large**: This model is fine-tuned from bart-large.

All models are trained on the full AAID instruction tuning data.



## Usage

You can use the Emollama-chat-13b model in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model:

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = LlamaTokenizer.from_pretrained('lzw1008/Emollama-chat-13b')
# device_map='auto' places the weights on available GPUs (requires `accelerate`).
model = LlamaForCausalLM.from_pretrained('lzw1008/Emollama-chat-13b', device_map='auto')
```

In this example, `LlamaTokenizer` loads the tokenizer and `LlamaForCausalLM` loads the model. The `device_map='auto'` argument places the model on the GPU automatically
when one is available (this requires the `accelerate` package).
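Once the model is loaded, inference follows the usual Transformers pattern: tokenize a prompt, call `model.generate`, and decode only the newly generated tokens. The sketch below is illustrative, assuming the model and tokenizer from the snippet above; the generation settings and the helper names are ours, not values prescribed by the model card. The small parser extracts a numeric answer from outputs like those shown in the prompt examples (e.g. `>>0.896`).

```python
import re

def generate_response(model, tokenizer, prompt, max_new_tokens=64):
    """Run generation and return only the newly generated text.

    max_new_tokens is an illustrative default, not a prescribed value.
    """
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Slice off the prompt tokens so only the model's answer remains.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

def parse_score(response):
    """Extract the first real number from a response such as '>>0.896'."""
    match = re.search(r"-?\d+(?:\.\d+)?", response)
    return float(match.group()) if match else None
```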

## Prompt examples
 

### Emotion intensity

    Human: 
    Task: Assign a numerical value between 0 (least E) and 1 (most E) to represent the intensity of emotion E expressed in the text.
    Text: @CScheiwiller can't stop smiling 😆😆😆
    Emotion: joy
    Intensity Score:

    Assistant:
    >>0.896

### Sentiment strength

    Human:
    Task: Evaluate the valence intensity of the writer's mental state based on the text, assigning it a real-valued score from 0 (most negative) to 1 (most positive).
    Text: Happy Birthday shorty. Stay fine stay breezy stay wavy @daviistuart 😘
    Intensity Score:

    Assistant:
    >>0.879

### Sentiment classification

    Human:
    Task: Categorize the text into an ordinal class that best characterizes the writer's mental state, considering various degrees of positive and negative sentiment intensity. 3: very positive mental state can be inferred. 2: moderately positive mental state can be inferred. 1: slightly positive mental state can be inferred. 0: neutral or mixed mental state can be inferred. -1: slightly negative mental state can be inferred. -2: moderately negative mental state can be inferred. -3: very negative mental state can be inferred
    Text: Beyoncé resentment gets me in my feelings every time. 😩
    Intensity Class:

    Assistant:
    >>-3: very negative emotional state can be inferred

### Emotion classification

    Human:
    Task: Categorize the text's emotional tone as either 'neutral or no emotion' or identify the presence of one or more of the given emotions (anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, trust).
    Text: Whatever you decide to do make sure it makes you #happy.
    This text contains emotions:

    Assistant:
    >>joy, love, optimism

The task description can be adjusted according to the specific task.
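The Human/Assistant layout above can be assembled programmatically. The helper below is a convenience sketch (the function name and parameters are ours, not part of the EmoLLMs API); it reproduces the format of the examples, with an optional extra field for tasks like emotion intensity that add an `Emotion:` line.

```python
def build_prompt(task, text, answer_field="Intensity Score", extra=None):
    """Assemble a prompt in the Human/Assistant layout shown above.

    answer_field varies by task (e.g. "Intensity Score",
    "Intensity Class", "This text contains emotions").
    """
    lines = ["Human:", f"Task: {task}", f"Text: {text}"]
    if extra:  # e.g. {"Emotion": "joy"} for the emotion intensity task
        lines += [f"{k}: {v}" for k, v in extra.items()]
    lines.append(f"{answer_field}:")
    return "\n".join(lines) + "\n\nAssistant:"
```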

## License

The EmoLLMs series is licensed under the MIT License. For more details, please see the MIT license.

## Citation

If you use the series of EmoLLMs in your work, please cite our paper:

```bibtex
@article{liu2024emollms,
  title={EmoLLMs: A Series of Emotional Large Language Models and Annotation Tools for Comprehensive Affective Analysis},
  author={Liu, Zhiwei and Yang, Kailai and Zhang, Tianlin and Xie, Qianqian and Yu, Zeping and Ananiadou, Sophia},
  journal={arXiv preprint arXiv:2401.08508},
  year={2024}
}
```