---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- phi
- text-generation
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Ishika08
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit

This phi model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## How to Use the Model for Inference

You can run inference with the model via Hugging Face's Inference API by following the steps below:

### 1. Install Required Libraries

Ensure that you have the `requests` library installed:

```bash
pip install requests
```

### 2. Query the Model via the Hugging Face Inference API

```python
import requests

# API URL for the model hosted on Hugging Face
API_URL = "/static-proxy?url=https%3A%2F%2Fapi-inference.huggingface.co%2Fmodels%2FIshika08%2Fphi-4_fine-tuned_mdl"

# Set up your Hugging Face API token (replace <your_hf_token> with your own token)
HEADERS = {"Authorization": "Bearer <your_hf_token>"}

# The input you want to pass to the model
payload = {
    "inputs": "What is the capital of France? Tell me some of the tourist places in bullet points."
}

# Make the request to the API
response = requests.post(API_URL, headers=HEADERS, json=payload)

# Print the response from the model
print(response.json())
```

Example output:

```json
{
    "generated_text": "Paris is the capital of France. Some of the famous tourist places include:\n- Eiffel Tower\n- Louvre Museum\n- Notre-Dame Cathedral\n- Sacré-Cœur Basilica"
}
```
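
The hosted model may need to load before it can serve requests; while it does, the Inference API typically returns HTTP 503. Below is a minimal retry sketch, reusing the `API_URL`, `HEADERS`, and `payload` defined above; the retry count and wait time are illustrative choices, not part of the original example:

```python
import time

import requests

def query_with_retry(payload, max_retries=5, wait_seconds=20):
    """POST to the Inference API, retrying while the model is still loading (HTTP 503)."""
    for _ in range(max_retries):
        response = requests.post(API_URL, headers=HEADERS, json=payload)
        if response.status_code != 503:
            response.raise_for_status()  # surface other HTTP errors
            return response.json()
        time.sleep(wait_seconds)  # give the model time to load
    raise RuntimeError("Model did not become available in time")

print(query_with_retry(payload))
```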


## Using the `InferenceClient` from `huggingface_hub`
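
This approach requires the `huggingface_hub` package:

```bash
pip install huggingface_hub
```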


```python
from huggingface_hub import InferenceClient

# Initialize the client with the model name and your Hugging Face token
client = InferenceClient(model="Ishika08/phi-4_fine-tuned_mdl", token="<your_hf_token>")

# Perform inference (text generation in this case)
response = client.text_generation("What is the capital of France? Tell me about Eiffel Tower history in bullet points.")

# Print the response from the model
print(response)
```
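
`text_generation` also accepts generation parameters such as `max_new_tokens` and `temperature`; the values below are illustrative:

```python
# Optional: bound the output length and adjust sampling
response = client.text_generation(
    "What is the capital of France? Tell me about Eiffel Tower history in bullet points.",
    max_new_tokens=200,
    temperature=0.7,
)
print(response)
```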




[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)