2024-10-30 ODSC West
Python: docs.llamaindex.ai
TypeScript: ts.llamaindex.ai
World's best parser of complex documents
Free for 1000 pages/day!
cloud.llamaindex.ai
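A minimal sketch of calling LlamaParse from Python, assuming the llama-parse package is installed and a LLAMA_CLOUD_API_KEY from cloud.llamaindex.ai is set in the environment; the PDF path is a placeholder:

# pip install llama-parse
from llama_parse import LlamaParse

# Parse a complex PDF into LlamaIndex Document objects
# (reads LLAMA_CLOUD_API_KEY from the environment)
parser = LlamaParse(result_type="markdown")  # plain "text" is also supported
documents = parser.load_data("./complex_report.pdf")  # placeholder path
print(documents[0].text[:500])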
Turn-key RAG API for Enterprises
Available as SaaS or private cloud deployment
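A hedged sketch of querying a managed LlamaCloud index from Python; the index and project names are placeholders, and the llama-index-indices-managed-llama-cloud package plus a LLAMA_CLOUD_API_KEY are assumed:

from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

# Connect to an index already created in LlamaCloud (names are placeholders)
index = LlamaCloudIndex("my-enterprise-index", project_name="Default")

# Retrieval and synthesis run against the managed index
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the onboarding policy."))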
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
⚠️ Single-shot
⚠️ No query understanding/planning
⚠️ No tool use
⚠️ No reflection or error correction
⚠️ No memory (stateless)
✅ Multi-turn
✅ Query / task planning layer
✅ Tool interface for external environment
✅ Reflection
✅ Memory for personalization
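A minimal sketch of those pieces using LlamaIndex's ReActAgent; the multiply tool, model name, and questions are illustrative:

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI

# A tool the agent can call to act on its environment (illustrative)
def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the result."""
    return a * b

multiply_tool = FunctionTool.from_defaults(fn=multiply)

# ReActAgent layers planning, tool calls, and reflection on tool output over the LLM
agent = ReActAgent.from_tools([multiply_tool], llm=OpenAI(model="gpt-4o-mini"), verbose=True)

# Multi-turn: built-in chat memory carries the first answer into the second turn
print(agent.chat("What is 12.3 times 4.56?"))
print(agent.chat("Now multiply that result by 10."))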
and then go further
Agentic strategies
Full agent
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step
from llama_index.llms.openai import OpenAI

class OpenAIGenerator(Workflow):
    @step()
    async def generate(self, ev: StartEvent) -> StopEvent:
        # Pull the query off the start event, send it to the LLM, and finish
        query = ev.get("query")
        llm = OpenAI()
        response = await llm.acomplete(query)
        return StopEvent(result=str(response))

w = OpenAIGenerator(timeout=10, verbose=False)
result = await w.run(query="What's LlamaIndex?")  # top-level await works in a notebook
print(result)
# Visualize every possible path through the workflow as an HTML file
from llama_index.utils.workflow import draw_all_possible_flows
draw_all_possible_flows(OpenAIGenerator)
pip install llama-deploy
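A hedged sketch of serving the workflow above with llama-deploy (these class and function names follow the early llama-deploy docs and may have changed; host, port, and service name are placeholders, and a control plane started via deploy_core is assumed to be running):

from llama_deploy import ControlPlaneConfig, WorkflowServiceConfig, deploy_workflow

# Register OpenAIGenerator as a service behind the running control plane
# (host/port/service_name are placeholders)
await deploy_workflow(
    workflow=OpenAIGenerator(),
    workflow_config=WorkflowServiceConfig(
        host="127.0.0.1", port=8002, service_name="openai_generator"
    ),
    control_plane_config=ControlPlaneConfig(),
)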
Follow me on Bluesky:
@seldo.com