Tool/function calling
Tool calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model is merely coming up with the arguments to a tool, and actually running a tool (or not) is up to the user. For example, if you want to extract output matching some schema from unstructured text, you could give the model an “extraction” tool that takes parameters matching the desired schema, then treat the generated output as your final result. If you actually do want to execute called tools, you can use the Tool Calling Agent.
Note that not all chat models support tool calling currently.
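For instance, a minimal sketch of the extraction pattern described above, assuming an OpenAI model (the schema, prompt, and tool name here are purely illustrative; .bindTools() and DynamicStructuredTool are covered in detail later in this guide):

import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";

// A hypothetical "extraction" tool: we only want the generated arguments,
// so the function body is never meaningfully executed.
const extractionTool = new DynamicStructuredTool({
  name: "extract_person",
  description: "Extracts information about a person from text.",
  schema: z.object({
    name: z.string().describe("The person's name."),
    age: z.number().describe("The person's age."),
  }),
  func: async () => "",
});

const extractionModel = new ChatOpenAI({
  model: "gpt-3.5-turbo-0125",
  temperature: 0,
}).bindTools([extractionTool]);

const extracted = await extractionModel.invoke("Alice is 29 years old.");
// The generated arguments are the structured output we were after:
console.log(extracted.tool_calls?.[0]?.args);
// { name: "Alice", age: 29 }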
A tool call object includes a name, arguments, and an optional id.
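In TypeScript terms, that standardized shape looks roughly like this (a sketch based on the description above; the exported type may carry additional fields):

// The standardized tool call shape, as described above.
type ToolCall = {
  name: string; // the name of the tool to call
  args: Record<string, any>; // parsed arguments for the tool
  id?: string; // optional provider-assigned identifier
};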
Many LLM providers, including Anthropic, Google Vertex, Mistral, OpenAI, and others, support variants of a tool calling feature. These features typically allow requests to the LLM to include available tools and their schemas, and for responses to include calls to these tools.
For instance, given a search engine tool, an LLM might handle a query by first calling the search engine tool, generating the required parameters in the right format. The system calling the LLM can receive these generated parameters and use them to execute the tool, then pass the output back to the LLM to inform its response. LangChain includes a suite of built-in tools and supports several methods for defining your own custom tools. Tool calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.
Providers adopt different conventions for formatting tool schemas and tool calls. For instance, Anthropic returns tool calls as parsed structures within a larger content block:
[
{
"text": "<thinking>\nI should use a tool.\n</thinking>",
"type": "text"
},
{
"id": "id_value",
"input": { "arg_name": "arg_value" },
"name": "tool_name",
"type": "tool_use"
}
]
whereas OpenAI separates tool calls into a distinct parameter, with arguments as JSON strings:
{
"tool_calls": [
{
"id": "id_value",
"function": {
"arguments": "{\"arg_name\": \"arg_value\"}",
"name": "tool_name"
},
"type": "function"
}
]
}
LangChain implements standard interfaces for defining tools, passing them to LLMs, and representing tool calls.
Passing tools to LLMs
Chat models that support tool calling features implement a .bindTools() method, which receives a list of LangChain tool objects and binds them to the chat model in its expected format. Subsequent invocations of the chat model will include tool schemas in its calls to the LLM.
Let’s walk through a few examples. You can use any tool calling model!
Pick your chat model:
- Anthropic
- OpenAI
- MistralAI
- FireworksAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic @langchain/core
yarn add @langchain/anthropic @langchain/core
pnpm add @langchain/anthropic @langchain/core
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-sonnet-20240229",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai @langchain/core
yarn add @langchain/openai @langchain/core
pnpm add @langchain/openai @langchain/core
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-3.5-turbo-0125",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai @langchain/core
yarn add @langchain/mistralai @langchain/core
pnpm add @langchain/mistralai @langchain/core
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community @langchain/core
yarn add @langchain/community @langchain/core
pnpm add @langchain/community @langchain/core
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/firefunction-v1",
temperature: 0
});
A number of models implement helper methods that will take care of formatting and binding different function-like objects to the model. Let’s take a look at how we might take the following Zod function schema and get different models to invoke it:
import { z } from "zod";
/**
* Note that the descriptions here are crucial, as they will be passed along
* to the model along with the class name.
*/
const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});
We can use the .bindTools() method to handle the conversion from LangChain tool to our model provider’s specific format and bind it to the model (i.e., passing it in each time the model is invoked). Let’s create a DynamicStructuredTool implementing a tool based on the above schema, then bind it to the model:
import { ChatOpenAI } from "@langchain/openai";
import { DynamicStructuredTool } from "@langchain/core/tools";
const calculatorTool = new DynamicStructuredTool({
name: "calculator",
description: "Can perform mathematical operations.",
schema: calculatorSchema,
func: async ({ operation, number1, number2 }) => {
// Functions must return strings
if (operation === "add") {
return `${number1 + number2}`;
} else if (operation === "subtract") {
return `${number1 - number2}`;
} else if (operation === "multiply") {
return `${number1 * number2}`;
} else if (operation === "divide") {
return `${number1 / number2}`;
} else {
throw new Error("Invalid operation.");
}
},
});
const llmWithTools = llm.bindTools([calculatorTool]);
Now, let’s invoke it! We expect the model to use the calculator to answer the question:
const res = await llmWithTools.invoke("What is 3 * 12");
console.log(res.tool_calls);
[
{
name: "calculator",
args: { operation: "multiply", number1: 3, number2: 12 },
id: "call_Ri9s27J17B224FEHrFGkLdxH"
}
]
See a LangSmith trace for the above here.
We can see that the response message contains a tool_calls field when the model decides to call the tool. This will be in LangChain’s standardized format.
The .tool_calls attribute should contain valid tool calls. Note that on occasion, model providers may output malformed tool calls (e.g., arguments that are not valid JSON). When parsing fails in these cases, the message will contain instances of InvalidToolCall objects in the .invalid_tool_calls attribute. An InvalidToolCall can have a name, string arguments, identifier, and error message.
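As a quick sketch, you might inspect both fields on a response like this (reusing the llmWithTools binding from above):

// Inspect parsed tool calls and any malformed ones, best effort.
const message = await llmWithTools.invoke("What is 3 * 12");
for (const toolCall of message.tool_calls ?? []) {
  console.log(toolCall.name, toolCall.args, toolCall.id);
}
for (const invalidCall of message.invalid_tool_calls ?? []) {
  // Here, args is the raw string that failed to parse.
  console.log(invalidCall.name, invalidCall.args, invalidCall.error);
}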
Streaming
When tools are called in a streaming context, message chunks will be populated with tool call chunk objects in a list via the .tool_call_chunks attribute. A ToolCallChunk includes optional string fields for the tool name, args, and id, and includes an optional integer field index that can be used to join chunks together. Fields are optional because portions of a tool call may be streamed across different chunks (e.g., a chunk that includes a substring of the arguments may have null values for the tool name and id).
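For reference, a sketch of that shape in TypeScript (based on the fields just described; the actual exported type may differ slightly):

// All fields are optional because a chunk may carry only part of a call.
type ToolCallChunk = {
  name?: string; // tool name, if present in this chunk
  args?: string; // a substring of the JSON-encoded arguments
  id?: string; // call identifier, if present in this chunk
  index?: number; // joins chunks belonging to the same call
};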
Because message chunks inherit from their parent message class, an AIMessageChunk with tool call chunks will also include .tool_calls and .invalid_tool_calls fields. These fields are parsed best-effort from the message’s tool call chunks.
Note that not all providers currently support streaming for tool calls. If this is the case for your specific provider, the model will yield a single chunk with the entire call when you call .stream().
const stream = await llmWithTools.stream("What is 308 / 29");
for await (const chunk of stream) {
console.log(chunk.tool_call_chunks);
}
[
{
name: "calculator",
args: "",
id: "call_rGqPR1ivppYUeBb0iSAF8HGP",
index: 0
}
]
[ { name: undefined, args: '{"', id: undefined, index: 0 } ]
[ { name: undefined, args: "operation", id: undefined, index: 0 } ]
[ { name: undefined, args: '":"', id: undefined, index: 0 } ]
[ { name: undefined, args: "divide", id: undefined, index: 0 } ]
[ { name: undefined, args: '","', id: undefined, index: 0 } ]
[ { name: undefined, args: "number", id: undefined, index: 0 } ]
[ { name: undefined, args: "1", id: undefined, index: 0 } ]
[ { name: undefined, args: '":', id: undefined, index: 0 } ]
[ { name: undefined, args: "308", id: undefined, index: 0 } ]
[ { name: undefined, args: ',"', id: undefined, index: 0 } ]
[ { name: undefined, args: "number", id: undefined, index: 0 } ]
[ { name: undefined, args: "2", id: undefined, index: 0 } ]
[ { name: undefined, args: '":', id: undefined, index: 0 } ]
[ { name: undefined, args: "29", id: undefined, index: 0 } ]
[ { name: undefined, args: "}", id: undefined, index: 0 } ]
[]
Note that using the concat method on message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain’s various tool output parsers support streaming.
For example, below we accumulate tool call chunks:
const streamWithAccumulation = await llmWithTools.stream(
"What is 32993 - 2339"
);
let final;
for await (const chunk of streamWithAccumulation) {
if (!final) {
final = chunk;
} else {
final = final.concat(chunk);
}
}
console.log(final.tool_calls);
[
{
name: "calculator",
args: { operation: "subtract", number1: 32993, number2: 2339 },
id: "call_WMhL5X0fMBBZPNeyUZY53Xuw"
}
]
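Since the accumulated message now contains a complete, parsed tool call, you could execute it yourself. A minimal sketch, reusing the calculatorTool defined earlier:

// Run the accumulated tool call through the calculator tool.
const accumulatedCall = final?.tool_calls?.[0];
if (accumulatedCall !== undefined) {
  console.log(await calculatorTool.invoke(accumulatedCall.args));
  // "30654"
}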
Few-shotting with tools
You can give the model examples of how you would like tools to be called in order to guide generation by inputting manufactured tool call turns. For example, given the above calculator tool, we could define a new operator, 🦜. Let’s see what happens when we use it naively:
const res = await llmWithTools.invoke("What is 3 🦜 12");
console.log(res.content);
console.log(res.tool_calls);
It seems like you've used an emoji (🦜) in your expression, which I'm not familiar with in a mathematical context. Could you clarify what operation you meant by using the parrot emoji? For example, did you mean addition, subtraction, multiplication, or division?
[]
It doesn’t quite know how to interpret 🦜 as an operation. Now, let’s try giving it an example in the form of manufactured messages to steer it towards divide:
import { HumanMessage, AIMessage, ToolMessage } from "@langchain/core/messages";
const res = await llmWithTools.invoke([
new HumanMessage("What is 333382 🦜 1932?"),
new AIMessage({
content: "",
tool_calls: [
{
id: "12345",
name: "calulator",
args: {
number1: 333382,
number2: 1932,
operation: "divide",
},
},
],
}),
new ToolMessage({
tool_call_id: "12345",
content: "The answer is 172.558.",
}),
new AIMessage("The answer is 172.558."),
new HumanMessage("What is 3 🦜 12"),
]);
console.log(res.tool_calls);
[
{
name: "calculator",
args: { operation: "divide", number1: 3, number2: 12 },
id: "call_BDuJv8QkDZ7N7Wsd6v5VDeVa"
}
]
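As noted at the start of this guide, the model only generates tool arguments; actually running the tool is up to you. For completeness, here is a minimal sketch of one manual round trip, reusing llmWithTools and calculatorTool from above and passing the tool’s output back via a ToolMessage (the prompt and message list are illustrative, and error handling is omitted):

import { HumanMessage, ToolMessage } from "@langchain/core/messages";

const question = new HumanMessage("What is 3 * 12");
const aiMessage = await llmWithTools.invoke([question]);
const pendingCall = aiMessage.tool_calls?.[0];
if (pendingCall !== undefined) {
  // Execute the tool ourselves with the generated arguments...
  const toolOutput = await calculatorTool.invoke(pendingCall.args);
  // ...then pass the result back so the model can answer in natural language.
  const finalResponse = await llmWithTools.invoke([
    question,
    aiMessage,
    new ToolMessage({ tool_call_id: pendingCall.id ?? "", content: toolOutput }),
  ]);
  console.log(finalResponse.content);
}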
Next steps
- Agents: For more on how to execute tasks with these populated parameters, check out the Tool Calling Agent.
- Structured output chains: Some models have constructors that handle creating a structured output chain for you (OpenAI, Mistral).
- Tool use: See how to construct chains and agents that actually call the invoked tools in these guides.