---
title: README
emoji: 🦙
colorFrom: yellow
colorTo: purple
sdk: static
pinned: false
---
# 🗂️ LlamaIndex 🦙

LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.
PyPI:
- LlamaIndex: https://pypi.org/project/llama-index/
- GPT Index (duplicate): https://pypi.org/project/gpt-index/

Documentation: https://gpt-index.readthedocs.io/

Twitter: https://twitter.com/gpt_index

Discord: https://discord.gg/dGcwcsnxhU
### Ecosystem

- LlamaHub (community library of data loaders): https://llamahub.ai
- LlamaLab (cutting-edge AGI projects using LlamaIndex): https://github.com/run-llama/llama-lab
## 💻 Example Usage

```sh
pip install llama-index
```
Examples are in the `examples` folder. Indices are in the `indices` folder (see list of indices below).

To build a simple vector store index:

```python
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents)
```
To query:

```python
query_engine = index.as_query_engine()
response = query_engine.query("<question_text>?")
```
By default, data is stored in-memory.
To persist to disk (under `./storage`):

```python
index.storage_context.persist()
```
To reload from disk:

```python
from llama_index import StorageContext, load_index_from_storage

# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir="./storage")

# load index
index = load_index_from_storage(storage_context)
```