Before running this project, make sure you have the following dependencies installed:
- FAISS
- Ollama

## How to Get Started with the Project

1. Clone this repository.

```
git clone https://huggingface.co/foduucom/Voice-Assistant-using-RAG
```
2. Create a conda environment.

```
conda create -n VoiceAI python==3.10
conda activate VoiceAI
```
3. Install the Python dependencies using pip:

```
pip install -r requirements.txt
```

or install them individually:

```
pip install torch transformers speechrecognition pyttsx3 soundfile playsound TTS langchain faiss-cpu
```
For Ollama, follow the installation instructions on its official website: https://ollama.com/library/llama3.
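Once Ollama itself is installed, the Llama 3 model referenced above can be fetched locally. A minimal example, assuming the `ollama` CLI is already on your PATH:

```shell
# Download the Llama 3 model used by the assistant (run once after installing Ollama).
ollama pull llama3

# Optional: confirm the model is now available locally.
ollama list
```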

## Setup
To run the voice assistant, execute the following command in your terminal:

```
python Voice_Assistant.py
```

The assistant will start listening for your voice input. Speak clearly into your microphone to ask questions or give commands. The assistant will process your input and respond with synthesized speech.

- To use a different knowledge base, replace `KnowledgeBase.pdf` with your own PDF file and update the filename in the script.
- You can experiment with different embedding models by changing the `model_name` in the `HuggingFaceEmbeddings` initialization.
- To use a different Ollama model, update the `model` parameter in the `Ollama` initialization.
- Try other TTS frameworks, such as Melo TTS, Coqui TTS, or Mars5 TTS.
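The embedding- and LLM-swap tips above can be sketched as follows. This is an illustrative fragment, not the project's actual script: the import paths vary with the installed LangChain version, and the model names shown (`sentence-transformers/all-MiniLM-L6-v2`, `llama3`) are example values, not requirements.

```python
# Illustrative sketch of the two customization points described above;
# not the project's actual code. Newer LangChain releases move these
# classes to the `langchain_community` package.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import Ollama

# Change `model_name` to try a different embedding model (example name assumed).
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Change `model` to any model already pulled into your local Ollama install.
llm = Ollama(model="llama3")
```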

## Troubleshooting