How Do LLMs Use Tools?

"Tool use overcomes a lot of the core limitations of LLMs... Maybe you give it a calculator so it doesn't have to rely on its internal, unreliable arithmetic abilities. Maybe you let it search the web or view your calendar, or you give it (read-only!) access to a company database so it can pull up information or search technical documentation."
Matthew Carrigan, Hugging Face Engineer

A language model using external tools as part of a reasoning loop, via Tools Fail: Detecting Silent Errors in Faulty Tools (Sun et al.)

Tool use in LLMs refers to a model's ability to access and invoke external software — such as APIs, databases, search engines, or other functions — in order to accomplish a task.

Language models are stateless text predictors. But with tool use, they can:

  • Run searches
  • Call APIs
  • Access live data
  • Perform calculations
  • Control applications or workflows

This turns them into agents: systems that reason, plan, and take action using a growing toolkit.

How it works:

  1. Tool definitions (name, description, and parameter schema) are passed to the model
  2. The model predicts which tool to call, and with what arguments
  3. The developer or runtime executes the tool
  4. The tool's output is (optionally) returned to the model for further reasoning, as in the sketch below
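
To make the loop concrete, here is a minimal Python sketch. The model call is stubbed out (`fake_model_call` stands in for whatever chat API you use), and the `get_weather` tool, its schema, and the message format are illustrative assumptions rather than any particular provider's interface.

    import json

    # Illustrative tool: a plain Python function the runtime can execute.
    def get_weather(city: str) -> str:
        # A real system would call a weather API; here the data is canned.
        return json.dumps({"city": city, "forecast": "sunny", "temp_c": 21})

    # Step 1: a tool definition (name, description, parameter schema) passed to the model.
    TOOLS = [{
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]

    def fake_model_call(messages, tools):
        """Stand-in for a real LLM API: proposes a tool call the first time,
        then answers in plain text once a tool result is in the context."""
        tool_msgs = [m for m in messages if m["role"] == "tool"]
        if tool_msgs:
            result = json.loads(tool_msgs[-1]["content"])
            return {"content": f"It's {result['forecast']} and {result['temp_c']} C in {result['city']}."}
        return {"tool_call": {"name": "get_weather", "arguments": {"city": "Paris"}}}

    # Steps 2-4: the model proposes a call, the runtime executes it, the result goes back.
    messages = [{"role": "user", "content": "What's the weather in Paris?"}]
    while True:
        reply = fake_model_call(messages, TOOLS)
        if "tool_call" not in reply:               # model produced a final answer
            print(reply["content"])
            break
        call = reply["tool_call"]
        output = get_weather(**call["arguments"])  # the runtime, not the model, runs the code
        messages.append({"role": "tool", "name": call["name"], "content": output})
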

Most tools are described and invoked via structured formats like JSON, and protocols such as JSON-RPC or the Model Context Protocol (MCP) make these integrations standardized and model-agnostic.
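
For illustration, here is roughly what a JSON-RPC 2.0 request for a tool call looks like when built in Python. The `tools/call` method and parameter shape follow MCP's tool-calling convention, but treat the exact field names as an assumption and check the current spec before relying on them.

    import json

    # A JSON-RPC 2.0 request asking an MCP-style server to run a tool.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_weather",
            "arguments": {"city": "Paris"},
        },
    }

    print(json.dumps(request, indent=2))  # this is what goes over the wire to the server
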

Why this matters:

Tool use lets LLMs solve real problems that require up-to-date or external knowledge — making them interactive, composable, and extensible.

FAQ

What is tool use in LLMs?
It means the model can interact with external software — like calling an API — as part of its response process.
How do LLMs decide which tool to use?
The model is trained or fine-tuned to recognize when a tool is needed, and it selects the appropriate one from a list of registered tools.
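
As a rough sketch, the "list of registered tools" is usually just an array of name/description/parameter-schema objects included with the request; the model's reply then names one of them. The field names below are illustrative, not tied to any specific provider's API.

    # Illustrative tool registry: the model sees these descriptions and schemas
    # and picks one name plus arguments.
    registered_tools = [
        {
            "name": "search_web",
            "description": "Search the web and return the top results.",
            "parameters": {"type": "object", "properties": {"query": {"type": "string"}}},
        },
        {
            "name": "query_db",
            "description": "Run a read-only SQL query against the analytics database.",
            "parameters": {"type": "object", "properties": {"sql": {"type": "string"}}},
        },
    ]

    # A typical model response is just a structured choice, e.g.:
    # {"tool_call": {"name": "search_web", "arguments": {"query": "LLM tool use"}}}
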
Is LLM tool use secure?
The model only *suggests* a tool call; it doesn't run code itself. Execution happens in the developer's runtime or a sandboxed orchestrator, so overall security depends on how those tools are scoped and permissioned.
What kinds of tools can be used by LLMs?
Anything from REST APIs and SQL databases to weather lookups, search engines, Python functions, or workflow engines.
