Mistral AI provides language models, embeddings, and agent APIs. Braintrust instruments Mistral calls so you can inspect prompts, responses, streaming behavior, embeddings, fill-in-the-middle completions, and agent operations.
Add your Mistral API key to your organization’s AI providers or to a project’s AI providers before you start.
Setup
You can either trace the native @mistralai/mistralai SDK or use the Braintrust gateway through the OpenAI SDK.

# pnpm
pnpm add braintrust @mistralai/mistralai
pnpm add braintrust openai
# npm
npm install braintrust @mistralai/mistralai
npm install braintrust openai
Set your environment variables:

BRAINTRUST_API_KEY=<your-braintrust-api-key>
MISTRAL_API_KEY=<your-mistral-api-key>
# For organizations on the EU data plane, use https://api-eu.braintrust.dev
# For self-hosted deployments, use your data plane URL
# BRAINTRUST_API_URL=<your-braintrust-api-url>
If you only use the Braintrust gateway, your application code only needs BRAINTRUST_API_KEY.

For Python, you can likewise either trace the native mistralai SDK or use the Braintrust gateway through the OpenAI SDK:

pip install braintrust openai "mistralai>=1.12.4"
API keys are stored as one-way cryptographic hashes, never in plaintext.
Instrument Mistral
Use wrapMistral() or Braintrust's import hook if you want to trace the native @mistralai/mistralai SDK. Use the Braintrust gateway with the OpenAI SDK if you already have an OpenAI-compatible client. Native SDK tracing requires @mistralai/mistralai>=1.0.0.

Automatic instrumentation
Run your app with Braintrust's import hook to patch the native Mistral SDK automatically.

import { initLogger } from "braintrust";
import { Mistral } from "@mistralai/mistralai";

initLogger({
  projectName: "mistral-example", // Replace with your project name
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

const response = await client.chat.complete({
  model: "mistral-small-latest",
  messages: [{ role: "user", content: "Explain tracing in one sentence." }],
});

console.log(response.choices?.[0]?.message?.content);
Run with the import hook:

node --import braintrust/hook.mjs trace-mistral-auto.ts
If you're using a bundler, see Trace LLM calls for plugin and loader setup.

Manual wrapper
Use wrapMistral() when you want to instrument only selected Mistral clients instead of patching the SDK globally.

import { initLogger, wrapMistral } from "braintrust";
import { Mistral } from "@mistralai/mistralai";

initLogger({
  projectName: "mistral-example", // Replace with your project name
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const client = wrapMistral(new Mistral({ apiKey: process.env.MISTRAL_API_KEY }));

const response = await client.chat.complete({
  model: "mistral-small-latest",
  messages: [{ role: "user", content: "Explain tracing in one sentence." }],
});

console.log(response.choices?.[0]?.message?.content);
Braintrust gateway
Use the Braintrust gateway with the OpenAI SDK.

import { initLogger } from "braintrust";
import OpenAI from "openai";

initLogger({
  projectName: "mistral-example", // Replace with your project name
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const client = new OpenAI({
  baseURL: "https://gateway.braintrust.dev/v1",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const response = await client.responses.create({
  model: "mistral-small-latest",
  input: "Explain tracing in one sentence.",
});

console.log(response.output_text);
In Python, use braintrust.auto_instrument() if you want to trace the native mistralai SDK alongside Braintrust's other Python integrations, or wrap_mistral() when you want to trace only specific Mistral clients. Requires mistralai>=1.12.4.

Automatic instrumentation
braintrust.auto_instrument() patches the native Mistral SDK. Call it before creating your Mistral client.

Braintrust traces these native Mistral SDK operations:
- Chat completions, including sync, async, and streaming calls
- Fill-in-the-middle completions, including sync, async, and streaming calls
- Embeddings
- Mistral Agents calls, including sync, async, and streaming calls
import os

import braintrust

braintrust.auto_instrument()

braintrust.init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="mistral-example",  # Replace with your project name
)

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-small-latest",
    messages=[
        {"role": "user", "content": "Explain tracing in one sentence."},
    ],
)

print(response.choices[0].message.content)
Manual wrapper
Use wrap_mistral() if you want to instrument only selected Mistral clients instead of patching the SDK globally.

import os

from braintrust import init_logger
from braintrust.integrations.mistral import wrap_mistral
from mistralai import Mistral

init_logger(
    api_key=os.environ["BRAINTRUST_API_KEY"],
    project="mistral-example",  # Replace with your project name
)

client = wrap_mistral(Mistral(api_key=os.environ["MISTRAL_API_KEY"]))

response = client.chat.complete(
    model="mistral-small-latest",
    messages=[
        {"role": "user", "content": "Explain tracing in one sentence."},
    ],
)

print(response.choices[0].message.content)
If you want global Mistral-only patching without braintrust.auto_instrument(), import MistralIntegration from braintrust.integrations.mistral and call MistralIntegration.setup() before you create the client.

from braintrust.integrations.mistral import MistralIntegration

MistralIntegration.setup()
Braintrust gateway
Use the Braintrust gateway with the OpenAI SDK.

import os

from braintrust import init_logger
from openai import OpenAI

init_logger(
    project="mistral-example",  # Replace with your project name
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

client = OpenAI(
    base_url="https://gateway.braintrust.dev/v1",
    api_key=os.environ["BRAINTRUST_API_KEY"],
)

response = client.responses.create(
    model="mistral-small-latest",
    input="Explain tracing in one sentence.",
)

print(response.output_text)
Examples
In Braintrust, a Mistral trace typically includes the root LLM span plus any child work created by streaming or agent execution.
Braintrust captures:
- Prompt input and model output
- Model name and request metadata
- Streaming timing, including time to first token when available
- Embeddings and fill-in-the-middle requests for the native TypeScript and Python SDKs
- Agent operations for the native Python SDK
- Parent-child relationships when Mistral calls happen inside an existing Braintrust span
Evaluate with Mistral
You can evaluate Mistral-powered tasks with Braintrust the same way you evaluate other model providers. The example below uses the Braintrust gateway so the same pattern works in both TypeScript and Python.
import { Eval } from "braintrust";
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://gateway.braintrust.dev/v1",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

Eval("Mistral evaluation", {
  data: () => [
    { input: "What is 2 + 2?", expected: "4" },
    { input: "What is the capital of France?", expected: "Paris" },
  ],
  task: async (input) => {
    const response = await client.responses.create({
      model: "mistral-small-latest",
      input,
    });
    return response.output_text;
  },
  scores: [
    ({ output, expected }) => ({
      name: "accuracy",
      score: output === expected ? 1 : 0,
    }),
  ],
});
For more evaluation patterns, see Create experiments.
Resources