by 🎙️ @aakash7539 & @priyas
A tech startup with roots in Chennai, now operating in San Francisco, Bangalore, and Chennai. We want to build the best AI out there.
We are the fastest in the world at deploying Llamas: we deployed Llama-2 within 24 hours of launch and Llama-3 within 1 hour!
We have built two products: Tune Studio and Tune Chat.
999,999,999…
Tokens Generated
6,768,000+
Over 6.5 million conversations with AIs that can read files, generate images, send emails, surf the internet, and more.
431,300+
Signups on Tune AI
*Email*
follow this thread
*Analyst*
Hey, I found the discrepancies
― Dr. Seuss
Multi-agent framework in a nutshell
Internet Search 🔍
Maps 🗺️📍
Documents 📃
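Tools like these are typically exposed to the model as declarations. Here is a minimal sketch of how such tools might be described for an OpenAI-style chat-completion API; the names, descriptions, and parameter fields are illustrative assumptions, not Tune's actual schema.

```python
# Hypothetical tool declarations in the OpenAI-style JSON-schema format.
# The LLM never runs these; it only reads the names and descriptions
# to decide which one to request.
tools = [
    {
        "type": "function",
        "function": {
            "name": "internet_search",
            "description": "Search the web and return the top results.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search terms."},
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "lookup_document",
            "description": "Fetch the text of a stored document by its id.",
            "parameters": {
                "type": "object",
                "properties": {
                    "doc_id": {"type": "string", "description": "Document id."},
                },
                "required": ["doc_id"],
            },
        },
    },
]
```

The `description` strings matter most: the model uses them to pick the right tool for a given user request.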
Call LLM with Tools
LLM responds with a function call
You run function
LLM gives response
Allows access to application logic and data directly through the chat completion API.
Enhances the LLM's ability to provide accurate and relevant responses.
Identifies the function using its description from the list of available tools.
Provides the function name and needed arguments in a JSON response.
You call the required function with the provided arguments.
Ensures the correct data or action is retrieved or completed.
LLM uses the results from the function to generate a response.
Response is detailed and tailored to the user's query.
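The steps above can be sketched end to end. The snippet below simulates the middle of the loop with a hard-coded model response instead of a live API call; the tool names, the `run_tool_call` helper, and the exact response shape are assumptions modeled on OpenAI-style tool calls, not a specific Tune API.

```python
import json

# Hypothetical local tools the application exposes to the model.
def internet_search(query: str) -> str:
    return f"Top result for '{query}'"

def lookup_document(doc_id: str) -> str:
    return f"Contents of document {doc_id}"

# Registry mapping tool names to callables.
TOOLS = {
    "internet_search": internet_search,
    "lookup_document": lookup_document,
}

def run_tool_call(tool_call: dict) -> str:
    """Execute the function the LLM selected.

    `tool_call` mimics the shape of an OpenAI-style tool call:
    a function name plus JSON-encoded arguments.
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return TOOLS[name](**args)

# Simulated model output: the LLM identified a tool and supplied
# its arguments as JSON (steps 1-2 of the flow).
fake_llm_response = {
    "function": {
        "name": "internet_search",
        "arguments": json.dumps({"query": "Llama-3 launch date"}),
    }
}

# Step 3: you run the function with the provided arguments.
result = run_tool_call(fake_llm_response)

# Step 4: in a real loop, `result` would be appended to the conversation
# as a tool message and sent back to the LLM for the final answer.
```

In production the `fake_llm_response` would come from the chat-completion API, and the loop repeats until the model responds with plain text instead of another tool call.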
Function Calling Flow
Tune Assistant Design
tune.beehiiv.com