Note: this is mostly for me.
I started using Mastra for some projects recently.
It's a framework for working with agents.
You don't need a framework; I've also built agents without one. But here are some things Mastra does that might be helpful:
Managing your chat history. Often you want an ongoing "conversation" with the model. Give Mastra a storage location (a database URL or file) and it:
- tracks all the past messages
- manages context size, trimming older messages if needed
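Concretely, wiring up storage looks roughly like this. I'm using the LibSQL adapter here; treat the exact package and option names as approximate and check the Mastra docs for your setup:

```typescript
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

// Point Mastra at a storage location: a local SQLite file is fine
// for development, a database URL for anything real.
export const mastra = new Mastra({
  storage: new LibSQLStore({
    url: "file:./mastra.db", // or a remote libsql/Turso URL
  }),
});
```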
It also handles tool calling. You create a tool, which is basically just a function wrapped in Mastra's syntax:
```typescript
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const weatherTool = createTool({
  id: "Get Weather Information",
  description: "Fetches the current weather information for a given city",
  inputSchema: z.object({
    city: z.string(),
  }),
  execute: async ({ context: { city } }) => {
    // Tool logic here (e.g., an API call)
    console.log("Using tool to fetch weather information for", city);
    return { temperature: 20, conditions: "Sunny" }; // Example return
  },
});
```
and then create an agent with a prompt and give it access to that tool:

```typescript
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

const weatherAgent = new Agent({
  name: "Weather Agent",
  instructions: `You are a helpful weather assistant that provides accurate weather information.
Your primary function is to help users get weather details for specific locations. When responding:
- Always ask for a location if none is provided
- If the location name isn't in English, please translate it
- Include relevant details like humidity, wind conditions, and precipitation
- Keep responses concise but informative
Use the weatherTool to fetch current weather data.`,
  model: openai("gpt-4o-mini"),
  tools: { weatherTool },
});
```
Then you can just make simple calls to the agent, and the tool calling is handled under the hood:
```typescript
const agent = await mastra.getAgent("weatherAgent");
const result = await agent.generate("What is the weather in London?");
console.log(result.text); // the model's final reply
```
And they have a lot of things preconfigured that save time, e.g. maxSteps:

```typescript
const result = await agent.generate("Your prompt here", {
  maxSteps: 10, // Allow up to 10 tool-call steps/iterations
});
```
It's the sort of thing everyone ends up needing, and it's nice to have it handled.
The last thing I really like is the workflow visualization Mastra has.
I've found one big challenge of agent work is that the scope of what you're doing grows quickly, and there are many ways it can go wrong. AND testing often takes longer (every run means multiple API calls to model providers).
So I like that Mastra has a UI for visualizing progress through your workflows.
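(If you haven't seen Mastra workflows: they're graphs of typed steps chained together, which is what the UI renders. Roughly, from my reading of the docs — treat the exact names as approximate:)

```typescript
import { createWorkflow, createStep } from "@mastra/core/workflows";
import { z } from "zod";

// A step has typed input/output schemas plus an execute function.
const fetchWeatherStep = createStep({
  id: "fetch-weather",
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ temperature: z.number() }),
  execute: async ({ inputData }) => {
    return { temperature: 20 }; // stubbed for the example
  },
});

// Chain steps with .then(); the dev UI draws this graph and shows
// each step's status as a run progresses.
export const weatherWorkflow = createWorkflow({
  id: "weather-workflow",
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ temperature: z.number() }),
})
  .then(fetchWeatherStep)
  .commit();
```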
I'd like to see them take it a step further and help us test the steps of a workflow: replaying a workflow step with the same inputs, and maybe even instantly turning the inputs/outputs of successful runs into unit tests.
They do have some tools for helping with evals which look quite interesting, though I've not tried them yet.
P.S. I wrote this at midnight on a Friday after a beer, so apologies if it's a little sloppy.