|
--- |
|
language: |
|
- pt |
|
license: apache-2.0 |
|
library_name: transformers |
|
tags: |
|
- portugues |
|
- portuguese |
|
- QA |
|
- instruct |
|
- phi |
|
- gguf |
|
- f16 |
|
base_model: microsoft/Phi-3-mini-4k-instruct |
|
datasets: |
|
- rhaymison/superset |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# Phi3 Portuguese Tom Cat 4k Instruct GGUF
|
|
|
<p align="center"> |
|
  <img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/tom-cat.webp" width="50%" style="margin-left:auto; margin-right:auto; display:block"/>
|
</p> |
|
|
|
This GGUF model, derived from Phi3 Tom Cat 4k, has been quantized to f16. It was fine-tuned from microsoft/Phi-3-mini-4k-instruct on a superset of 300,000 instructions in Portuguese, primarily for instruction-following tasks, and aims to help fill the gap in models available in Portuguese.
|
|
|
Remember that verbs are important in your prompt. Tell the model how to act or behave so you can guide it toward the response you want.

Directives like these help models (even smaller ones, like this 4b model) perform much better.
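As a minimal sketch of this advice, the helper below builds a directive prompt that states a role ("aja como...") before the task and wraps it in the `[INST]` template the model was tuned on. The `build_prompt` function and its arguments are illustrative, not part of the model's API:

```python
# Build a directive prompt: state a role before the task, then wrap it
# in the [INST] instruction template used by this model.
def build_prompt(role: str, task: str) -> str:
    instruction = f"aja como {role} e {task}"
    return (
        "<s>[INST] Abaixo está uma instrução que descreve uma tarefa, "
        "juntamente com uma entrada que fornece mais contexto.\n"
        "Escreva uma resposta que complete adequadamente o pedido.\n"
        f"### instrução: {instruction}\n[/INST]"
    )

prompt = build_prompt("um professor de matemática", "me explique porque 2 + 2 = 4?")
print(prompt)
```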
|
|
|
|
|
```python
!git lfs install
!pip install langchain langchain-community langchain-core llama-cpp-python

!git clone https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct-f16-gguf

from langchain_community.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = LlamaCpp(
    # Point at the .gguf file inside the cloned repository (adjust the
    # filename to match the actual .gguf file), not the directory itself.
    model_path="/content/phi-3-portuguese-tom-cat-4k-instruct-f16-gguf/phi-3-portuguese-tom-cat-4k-instruct-f16.gguf",
    n_gpu_layers=40,
    n_batch=512,
    verbose=True,
)

template = """<s>[INST] Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto.
Escreva uma resposta que complete adequadamente o pedido.
### {question}
[/INST]"""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "instrução: aja como um professor de matemática e me explique porque 2 + 2 = 4?"
response = llm_chain.run({"question": question})
print(response)
```
|
|
|
|
|
|
|
### Comments |
|
|
|
Any ideas, help, or reports are always welcome.
|
|
|
email: [email protected] |
|
|
|
<div style="display:flex; flex-direction:row; justify-content:left"> |
|
<a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank"> |
|
<img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white"> |
|
</a> |
|
<a href="https://github.com/rhaymisonbetini" target="_blank"> |
|
<img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white"> |
|
</a> |
</div>