---
title: README
emoji: πŸ¦™
colorFrom: yellow
colorTo: purple
sdk: static
pinned: false
---
# πŸ—‚οΈ LlamaIndex πŸ¦™
LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.
PyPI:
- LlamaIndex: https://pypi.org/project/llama-index/.
- GPT Index (duplicate): https://pypi.org/project/gpt-index/.
Documentation: https://gpt-index.readthedocs.io/.
Twitter: https://twitter.com/llama_index.
Discord: https://discord.gg/dGcwcsnxhU.
### Ecosystem
- LlamaHub (community library of data loaders): https://llamahub.ai
- LlamaLab (cutting-edge AGI projects using LlamaIndex): https://github.com/run-llama/llama-lab
## πŸ’» Example Usage
```bash
pip install llama-index
```
Examples are in the `examples` folder. Indices are in the `indices` folder.
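The snippet below reads documents from a local `data/` directory. A minimal setup sketch (the file name and contents here are purely illustrative):

```shell
# Create the data directory that SimpleDirectoryReader will scan,
# with one small text file to index.
mkdir -p data
echo "LlamaIndex connects LLMs with external data." > data/example.txt
```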
To build a simple vector store index:
```python
import os
os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader('data').load_data()
index = GPTVectorStoreIndex.from_documents(documents)
```
To query:
```python
query_engine = index.as_query_engine()
response = query_engine.query("<question_text>?")
```
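The query returns a response object whose synthesized answer and retrieved source chunks can be inspected. A sketch, assuming the index and query engine built above (the question text is illustrative, and the exact `source_nodes` attributes may vary by version):

```python
# Assumes `query_engine` from the snippet above and a valid OpenAI key.
response = query_engine.query("What is this document about?")
print(response)  # the synthesized answer text

# source_nodes lists the retrieved chunks with their relevance scores
for source in response.source_nodes:
    print(source.score, source.node.get_text()[:80])
```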
By default, data is stored in-memory.
To persist to disk (under `./storage`):
```python
index.storage_context.persist()
```
To reload from disk:
```python
from llama_index import StorageContext, load_index_from_storage
# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir='./storage')
# load index
index = load_index_from_storage(storage_context)
```
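Putting the pieces together, a common pattern is to load the persisted index when it exists and rebuild it otherwise. A sketch, assuming the same `data/` directory and OpenAI key as above:

```python
import os

from llama_index import (
    GPTVectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)

PERSIST_DIR = "./storage"

if os.path.exists(PERSIST_DIR):
    # Reuse the previously persisted index.
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)
else:
    # Build from documents and persist for next time.
    documents = SimpleDirectoryReader("data").load_data()
    index = GPTVectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=PERSIST_DIR)

query_engine = index.as_query_engine()
```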