
What Is Function Calling in LLMs

Authensor

Function calling is the mechanism by which a language model produces structured output that triggers the execution of external functions. Instead of replying only in natural language, the model emits a JSON object specifying a function name and its arguments. The host application then executes that function and returns the result to the model.
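Concretely, a model's function call looks something like the following. This is a minimal sketch: the function name `get_weather` and its arguments are illustrative, not tied to any specific provider's API.

```python
import json

# Hypothetical structured output from the model: instead of prose,
# it emits a function name plus JSON-encoded arguments.
model_output = {
    "name": "get_weather",
    "arguments": json.dumps({"city": "Berlin", "unit": "celsius"}),
}

# The host application parses the arguments and dispatches the call.
args = json.loads(model_output["arguments"])
print(model_output["name"], args)
```

The key point is that the model never runs anything itself; it only produces this structured intent, which the host is free to execute, modify, or reject.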

OpenAI introduced function calling as a first-class API feature in 2023. Anthropic followed with tool use in the Claude API. Google added it to Gemini. The implementations differ in format, but the concept is the same: give the model a list of available functions with their schemas, and the model decides when and how to call them.
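Across providers, a function definition typically pairs a name and description with a JSON Schema for the parameters. The shape below is a sketch in that common style; the field names and the `get_weather` example are illustrative, not copied from any one vendor's API.

```python
# A hypothetical function definition in the common JSON Schema style.
weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```

The description fields matter: the model relies on them to decide when the function is relevant and how to fill its arguments.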

A function calling flow works as follows:

  1. The developer defines functions with names, descriptions, and JSON Schema parameter definitions.
  2. These definitions are sent to the model alongside the user's message.
  3. The model determines that it needs to call a function to complete the task.
  4. The model outputs a structured function call with the chosen function name and arguments.
  5. The host application executes the function.
  6. The function result is sent back to the model.
  7. The model incorporates the result into its response.

Function calling is the foundation of agentic behavior. Without it, a model can only generate text. With it, a model can query databases, call APIs, manage files, and perform any operation exposed through a function interface.

The safety implications are significant. The model's function calls are influenced by everything in its context window, including user messages, system prompts, and retrieved content. If any of that content is adversarial, the attacker can steer the model into calling functions the user never intended, with arguments the user never supplied.

This is why function calls must pass through a policy evaluation layer before execution. The model decides what to call, but the policy engine decides whether to allow it. This separation between intent and execution is the foundation of safe agentic systems. The model proposes, the policy engine disposes.
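A policy layer of this kind can be sketched as a check that sits between the model's proposed call and its execution. The allowlist and the example rule below are assumptions for illustration, not a description of any particular product.

```python
# Illustrative policy gate: the model proposes a call, this layer disposes.
ALLOWED_FUNCTIONS = {"get_weather", "search_docs"}   # assumed allowlist

def policy_check(name: str, args: dict) -> bool:
    """Return True only if the proposed call passes every rule."""
    if name not in ALLOWED_FUNCTIONS:
        return False
    if name == "search_docs" and len(args.get("query", "")) > 200:
        return False                                 # example rule: bound query size
    return True

def execute_call(name: str, args: dict, registry: dict):
    # Execution happens only after the policy engine approves the intent.
    if not policy_check(name, args):
        raise PermissionError(f"Call to {name!r} blocked by policy")
    return registry[name](**args)
```

Keeping the policy rules outside the model is the point of the design: even if adversarial context manipulates what the model proposes, it cannot change what the policy engine permits.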
