Agent memory: the ability to store and retrieve information over time
Maintains context across multiple tasks and remembers past interactions
Stateless:
LLMs are designed to treat each input as a new, independent request, without retaining any memory of past interactions.
Stateful:
Agent memory allows AI systems to maintain context and learn from past interactions, becoming more effective over time
STM (short-term memory) allows the agent to maintain context and recall recent interactions within a single conversation or session
- Model’s context window
--> As the conversation grows and information accumulates, it becomes increasingly difficult for the model to handle all of it effectively
- Does not persist once the current task ends
Ex: an agent may solve a complex problem today, but if that solution is never stored, it will have to start from scratch the next time
- In-memory (Ex: LangGraph uses a checkpointer)
- In a temporary storage (Ex: Redis cache)
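The in-memory option above can be sketched with a simple session-scoped buffer. This is an illustrative toy (names like `ShortTermMemory` are my own, not LangGraph's API); a real agent would use a checkpointer or a Redis cache as noted.

```python
from collections import deque

class ShortTermMemory:
    """Session-scoped buffer: holds only the last k messages, mimicking a
    model's bounded context window. Contents vanish when the session ends."""
    def __init__(self, k=4):
        self.buffer = deque(maxlen=k)  # oldest messages fall off automatically

    def add(self, role, text):
        self.buffer.append({"role": role, "text": text})

    def context(self):
        # What would be sent to the LLM on the next turn
        return list(self.buffer)

stm = ShortTermMemory(k=2)
stm.add("user", "Hi")
stm.add("assistant", "Hello!")
stm.add("user", "What's the weather?")
# Only the 2 most recent messages remain; "Hi" has been evicted
```

The `maxlen` eviction is the simplest possible policy; it illustrates why STM alone forgets early context once information accumulates.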
LTM (long-term memory) enables the agent to store and recall information across multiple sessions, allowing for personalization and learning over time
- In persistent storage
- Databases
- Knowledge graphs
- Vector databases
RAG is a common technique used to integrate LTM with the agent's knowledge base
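A minimal sketch of that RAG-style retrieval, using a toy bag-of-words "embedding" and cosine similarity in place of a real embedding model and vector database (all class and function names here are my own illustrations):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real system would use a learned embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LongTermMemory:
    """Stand-in for a vector database: store facts, retrieve the most
    similar ones for a query, and inject them into the agent's prompt."""
    def __init__(self):
        self.records = []  # would live in persistent storage in practice

    def store(self, text):
        self.records.append((embed(text), text))

    def retrieve(self, query, top_k=1):
        q = embed(query)
        ranked = sorted(self.records, key=lambda r: cosine(q, r[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]

ltm = LongTermMemory()
ltm.store("User prefers dark mode in the app")
ltm.store("User's favorite language is Python")
ltm.retrieve("favorite language")  # -> ["User's favorite language is Python"]
```

The key idea is the same as production RAG: memories are embedded once at write time, and at read time the query is embedded and matched by similarity rather than exact lookup.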
Mermaid chart
flowchart TD
A[User sends new message] --> B[Append message to short-term buffer]
B --> C{Buffer size > k?}
C -- No --> D[Send short-term buffer + long-term summary to LLM]
C -- Yes --> E[Extract oldest messages beyond k]
E --> F[Summarize extracted messages with previous long-term summary]
F --> G[Update long-term summary]
G --> H[Remove extracted messages from short-term buffer]
H --> D
D --> I[LLM generates response]
I --> J[Append response to short-term buffer]
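The flow above can be sketched as a summary buffer. This is a toy version: the "summarization" step just concatenates evicted messages, where a real agent would call an LLM to compress them (the class name `SummaryBuffer` is my own):

```python
class SummaryBuffer:
    """Keep the last k messages verbatim; fold anything older into a
    running long-term summary, as in the flowchart above."""
    def __init__(self, k=3):
        self.k = k
        self.buffer = []   # short-term buffer
        self.summary = ""  # long-term summary

    def add_message(self, msg):
        self.buffer.append(msg)
        if len(self.buffer) > self.k:            # Buffer size > k?
            evicted = self.buffer[: -self.k]     # extract oldest messages beyond k
            self.buffer = self.buffer[-self.k:]  # remove them from the buffer
            # Summarize evicted messages with the previous summary
            # (an LLM call in practice; plain concatenation here)
            self.summary = (self.summary + " " + " ".join(evicted)).strip()

    def prompt(self):
        # What gets sent to the LLM: long-term summary + recent verbatim messages
        return {"summary": self.summary, "recent": self.buffer}

sb = SummaryBuffer(k=3)
for m in ["m1", "m2", "m3", "m4", "m5"]:
    sb.add_message(m)
# sb.prompt() -> {"summary": "m1 m2", "recent": ["m3", "m4", "m5"]}
```

This keeps the prompt bounded: recent turns stay verbatim for fidelity, while older context survives only in compressed form.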
Web Scraping: a technique where bots, also known as crawlers or spiders, automatically browse the web to collect data
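A minimal sketch of the crawler's core step, extracting the links a bot would follow next. To stay self-contained it parses a hard-coded HTML string with Python's standard-library `html.parser`; a real crawler would fetch pages over HTTP first:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Crawler building block: collect every href on a page so the bot
    knows which pages to visit next."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/about">About</a> <a href="/blog">Blog</a></body></html>'
collector = LinkCollector()
collector.feed(page)
# collector.links -> ["/about", "/blog"]
```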
LLMs need Web Scraping:
LLMs benefit Web Scraping: