cheesyFishes committed · commit 32dc4e2 · 1 parent: a3b7a80

Update README.md

README.md CHANGED

````diff
@@ -8,21 +8,24 @@ pinned: false
 ---
 
 
-# 🗂️ LlamaIndex 🦙
+# 🗂️ LlamaIndex 🦙
 
 LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLM's with external data.
 
-
+PyPI:
 - LlamaIndex: https://pypi.org/project/llama-index/.
 - GPT Index (duplicate): https://pypi.org/project/gpt-index/.
 
-Documentation: https://gpt-index.readthedocs.io
+Documentation: https://gpt-index.readthedocs.io/.
 
 Twitter: https://twitter.com/gpt_index.
 
 Discord: https://discord.gg/dGcwcsnxhU.
 
-
+### Ecosystem
+
+- LlamaHub (community library of data loaders): https://llamahub.ai
+- LlamaLab (cutting-edge AGI projects using LlamaIndex): https://github.com/run-llama/llama-lab
 
 
 ## 💻 Example Usage
@@ -38,20 +41,33 @@ To build a simple vector store index:
 import os
 os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'
 
-from llama_index import
+from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
 documents = SimpleDirectoryReader('data').load_data()
-index =
+index = GPTVectorStoreIndex.from_documents(documents)
 ```
 
-
+
+To query:
 ```python
-
-
-# load from disk
-index = GPTSimpleVectorIndex.load_from_disk('index.json')
+query_engine = index.as_query_engine()
+query_engine.query("<question_text>?")
 ```
 
-
+
+By default, data is stored in-memory.
+To persist to disk (under `./storage`):
+
 ```python
-index.
+index.storage_context.persist()
 ```
+
+To reload from disk:
+```python
+from llama_index import StorageContext, load_index_from_storage
+
+# rebuild storage context
+storage_context = StorageContext.from_defaults(persist_dir='./storage')
+# load index
+index = load_index_from_storage(storage_context)
+```
+
````
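
Read end to end, the updated usage section builds an index, queries it, persists it, and reloads it. The sketch below simply strings those snippets from the new README into one script; it assumes a llama-index release that exposes `GPTVectorStoreIndex`, `StorageContext`, and `load_index_from_storage` as shown in the diff, and reuses the README's example locations (`data` for input documents, `./storage` for persistence).

```python
import os
os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'

from llama_index import (
    GPTVectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)

# build an in-memory vector store index from the documents in ./data
documents = SimpleDirectoryReader('data').load_data()
index = GPTVectorStoreIndex.from_documents(documents)

# query the index
query_engine = index.as_query_engine()
print(query_engine.query("<question_text>?"))

# persist the index to ./storage ...
index.storage_context.persist()

# ... and later rebuild it from disk without re-ingesting the documents
storage_context = StorageContext.from_defaults(persist_dir='./storage')
index = load_index_from_storage(storage_context)
```

After the reload, the restored `index` can hand out a fresh query engine via `index.as_query_engine()` just as in the in-memory case.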