Tool Definition Drift: When Your Agent's Toolset Outgrows Its Prompt
You started the agent with three tools. The system prompt described all three by name, gave each a one-line example, and told the model when to pick which. Six months later the toolset has grown to 28. Product asked for a "look up the customer" tool, then a "look up the customer's last invoice" tool, then six variants of "search the docs". The system prompt was never updated to match. It still names three tools.
The trace logs show what happens next. The model calls a tool that does not exist. It picks search_docs_v1 when the right answer was search_docs_legal. Half the toolset never gets called at all because no string in the system prompt nudges the model toward it. You did not break anything. The prompt and the toolset just drifted apart.
This is one of the highest-impact bugs in production agents and one of the easiest to miss. The toolset is in code. The prompt is in code. They are not the same code, they are not in the same file, and nothing in your CI fails when one moves without the other. The rest of this post walks through detection, measurement, and regenerating the prompt from the toolset.
A few distinct shapes show up in traces. They are not the same bug, but they share the same root cause.
Tool-call hallucinations. The model emits tool_use for a name your dispatcher does not recognize. Often the name is a plausible mash-up of two real tools: search_customer_invoices when you have search_customers and get_invoice_by_id. The model is generalizing from the prompt's examples and inventing a tool that fits the gap. The Anthropic API returns the response with a tool_use block. Your code throws a "no such tool" error. The user sees a generic failure.
Orphan tools. A tool exists in the registry, gets shipped to the model on every request, and is never called by the model in production. Sometimes that is correct (the tool is for an edge case that rarely fires). Often it means the tool's description is so weak that the model never picks it, or another tool has a stronger description and always wins. Either way you are paying input tokens to send the schema on every request and getting nothing back.
Ambiguous selection. Two tools could plausibly answer the same question. The model picks one consistently and ignores the other, or worse, picks them at random. Take search_docs_v1 and search_docs_legal. If the prompt does not tell the model which corpus is which, the choice is a coin flip. The user sees inconsistent answers depending on which doc store the model picked that minute.
The shared root is that the prompt does not accurately describe the toolset the model has.
The first move is mechanical. Get the toolset and the prompt into the same data structure and diff them. You want to know, for every tool in the registry, whether the system prompt mentions it at all, and for every tool name in the system prompt, whether it still exists.
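A minimal sketch of that diff, assuming the registry is a dict mapping tool name to schema and the prompt is a plain string. The regex that pulls names out of the prompt is a heuristic, so expect a few false positives in in_prompt_only if your prompt uses snake_case for non-tool identifiers:

```python
import re


def diff_prompt_against_registry(system_prompt: str, registry: dict) -> dict:
    """Diff the tool names the prompt mentions against the tools that actually exist."""
    registry_names = set(registry)

    # Heuristic: treat any snake_case identifier in the prompt as a tool name.
    # Maintain an allowlist of known non-tool identifiers if this over-matches.
    prompt_names = set(re.findall(r"\b[a-z][a-z0-9]*(?:_[a-z0-9]+)+\b", system_prompt))

    return {
        "in_registry_only": sorted(registry_names - prompt_names),  # tools the prompt forgot
        "in_prompt_only": sorted(prompt_names - registry_names),    # dead names still in the prompt
        "both": sorted(registry_names & prompt_names),              # the healthy intersection
    }
```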
Run this in CI on every change to the system prompt or the tool registry. in_registry_only is the list of tools the prompt forgot. in_prompt_only is the list of tools the prompt still names that no longer exist (these are the ones the model is most likely to hallucinate, because the prompt keeps reinforcing the dead name). both is your healthy intersection.
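Wired into CI, the check is a single test. The import paths here are placeholders for wherever your prompt and registry actually live:

```python
def test_prompt_matches_tool_registry():
    # Hypothetical module paths; point these at your own prompt and registry.
    from myagent.prompt import SYSTEM_PROMPT
    from myagent.tools import TOOL_REGISTRY

    diff = diff_prompt_against_registry(SYSTEM_PROMPT, TOOL_REGISTRY)

    # Dead names are the urgent case: the prompt keeps reinforcing tools that no longer exist.
    assert not diff["in_prompt_only"], f"prompt names nonexistent tools: {diff['in_prompt_only']}"

    # Forgotten tools are the slower rot; start strict and relax per-tool if you must.
    assert not diff["in_registry_only"], f"registry tools missing from prompt: {diff['in_registry_only']}"
```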
The diff catches the static drift but not the second-order kind. A tool can be in the prompt by name and still be misdescribed. The prompt might say search_docs returns "the top 5 results" when the implementation actually returns 10, and the model will trust the prompt over the runtime behavior every time.
The runtime version of the diff is a coverage histogram. For every tool in the registry, count how often it was selected over the last N requests. The shape of that distribution tells you which tools are orphans, which are dominant, and which are flapping.
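A sketch of the counting, assuming each trace records the names of the tools it selected under a tool_calls key; adapt the field names to whatever your logging pipeline actually emits:

```python
from collections import Counter


def tool_coverage(traces: list[dict], registry: dict, window: int = 1000) -> list[tuple[str, int]]:
    """Selection counts per registered tool over the last `window` requests."""
    counts = Counter()
    for trace in traces[-window:]:
        counts.update(trace.get("tool_calls", []))

    # Include every registered tool so orphans show up as explicit zeros
    # rather than silently missing from the histogram.
    return sorted(((name, counts[name]) for name in registry), key=lambda p: p[1], reverse=True)


# The zero column is the orphan list flagged by the alert below.
# orphans = [name for name, n in tool_coverage(traces, registry) if n == 0]
```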
A healthy toolset has a long tail. Two or three tools dominate, the rest fire occasionally for edge cases. What you do not want is a step function: three tools at 95% of all calls, 25 tools at zero. That zero column is the orphan list, and every one of those tools is paying input-token rent on every request. Per Anthropic's tool-use documentation, tool definitions count toward input tokens. A forgotten tool with a 200-token schema costs you on every call until you remove it.
Set an alert: any tool with zero calls over the last 1,000 requests gets flagged. A human reviews it and decides whether to delete it from the registry or rewrite its description so the model has a reason to pick it. Either path works; ignoring the orphan does not.
Tool descriptions rot the same way comments rot. The first version was hand-written by the engineer who added the tool. They knew the call sites, the failure modes, and the right vocabulary, so they wrote a clear paragraph. The second version was added in a hurry by someone fixing a different bug. They copy-pasted the first description and changed a word. By the tenth tool, the descriptions are inconsistent in tense, length, and terminology. Two tools have an <example> block, six have none, the rest have a half-finished one.
The model reads all of them. It treats the inconsistency as signal. A long, detailed description outranks a short one, regardless of which tool is actually right for the request. A description that uses the same vocabulary as the user's question outranks a description that does not, regardless of which tool is right. The ranking has shifted away from tool capability and onto the prose quality of whoever wrote the description on a Tuesday.
Two fixes work. The first is a description style guide that every new tool has to match (length range, required sections, vocabulary). The second is to skip the prose and generate the description from the schema directly.
If the tool's input schema is rich enough, the description writes itself. You walk the JSON Schema, pull the title, the parameter names and descriptions, and any examples, and template them into a deterministic format. Every tool ends up with the same shape, which kills the prose-quality bias.
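A sketch of that walk, assuming each registry entry follows the Anthropic-style tool shape (name, description, input_schema with properties and required); the top-level examples key is an extra assumption on top of that:

```python
def describe_tool(schema: dict) -> str:
    """Render one tool's description deterministically from its schema."""
    summary = schema.get("description") or schema.get("title")
    lines = [f"{schema['name']}: {summary}" if summary else schema["name"]]

    input_schema = schema.get("input_schema", {})
    required = set(input_schema.get("required", []))
    for param, spec in input_schema.get("properties", {}).items():
        flag = "required" if param in required else "optional"
        line = f"  - {param} ({spec.get('type', 'any')}, {flag})"
        param_desc = spec.get("description", "")
        lines.append(f"{line}: {param_desc}" if param_desc else line)

    for example in schema.get("examples", []):
        lines.append(f"  Example: {example}")

    return "\n".join(lines)


def render_tool_section(registry: dict) -> str:
    """The tool portion of the system prompt, regenerated from the registry on every request."""
    return "\n\n".join(describe_tool(schema) for schema in registry.values())
```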
Now your tool registry stores the schema. The system prompt is generated at request time from the schemas, with a one-line preamble per tool. If you add a parameter to search_customers, the description regenerates. The prompt and the tools cannot drift, because there is only one source.
You pay a small cost: the auto-generated descriptions are blander than hand-written ones. A hand-written description can say "use this for cross-team docs only, not the legal corpus", which the schema cannot express. The trade is consistency for craft. Hand-writing wins under ten tools. Somewhere between ten and twenty, the consistency dividend starts beating individual craft, and past twenty, generation pulls ahead by a wide margin.
If your toolset is large enough that the schema-vs-prompt diff keeps growing, the next move is to stop showing every tool to the model. A tool router is a small upstream classifier (an embedding-similarity match, or a cheap-model classification) that picks 3 to 5 candidate tools per request and only those go into the prompt.
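A minimal embedding-similarity version, assuming you already have an embed(text) -> vector function and that in practice you embed and cache the tool descriptions once rather than per request. escalate_to_human is a hypothetical name for the fallback tool discussed below:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def route_tools(query: str, registry: dict, embed, top_k: int = 5, min_score: float = 0.2) -> list[str]:
    """Pick the top_k tools whose descriptions look most similar to the query."""
    q = embed(query)
    scored = sorted(
        ((cosine(q, embed(describe_tool(schema))), name) for name, schema in registry.items()),
        reverse=True,
    )
    candidates = [name for score, name in scored[:top_k] if score >= min_score]

    # Always keep an escape hatch in the set so the model can say "none of these match".
    if "escalate_to_human" not in candidates:
        candidates.append("escalate_to_human")
    return candidates
```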
The model now sees the 5 tools that look most relevant to the query, not all 28. Hallucination drops because the prompt is shorter and more focused, and orphan tools stop paying token rent on every request. Ambiguous selection drops too: the router does the first cut, leaving only tools that pass a similarity threshold.
The cost is misroute risk. The router excludes the right tool from the candidate set and the model has no way to recover. Mitigate it in two ways. First, evaluate the router with a held-out set of (query, expected_tool) pairs and watch top-5 recall. Second, keep a "fallback" tool always in the candidate set that lets the model say "none of these match, escalate." A span attribute carrying the selected tool names per request lets you spot the misroutes in production.
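A sketch of that recall check over a hand-labeled set, reusing the route_tools function above:

```python
def top_k_recall(eval_set: list[tuple[str, str]], registry: dict, embed, k: int = 5) -> float:
    """Fraction of held-out queries whose expected tool made it into the candidate set."""
    if not eval_set:
        return 0.0
    hits = sum(expected in route_tools(query, registry, embed, top_k=k) for query, expected in eval_set)
    return hits / len(eval_set)
```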
Run the schema-vs-prompt diff on your current agent in CI. If the diff returns anything in in_prompt_only, ship a fix today; those names are actively pulling the model toward a tool that does not exist. Then build the coverage histogram off your last 1,000 traces. Anything in the registry with zero calls is either dead code or has a description the model never picks, so decide today which it is, and delete or rewrite accordingly.
Once the current state is clean, put the next checkpoint on the calendar. Tool registries grow whether you watch them or not, and the drift is silent until a user complains. A two-hour audit every quarter beats an outage when the model invents a tool it heard about three releases ago.
The prompt is part of the toolset. Keep them in the same source, regenerate one from the other where you can, and watch the drift number rather than waiting for a hallucinated tool name to surface it for you.
Prompt Engineering Pocket Guide: Techniques for Getting the Most from LLMs covers the prompt-shape questions hiding behind every tool-use design: how long a tool description should be, when a one-tool-per-question router beats a 28-tool prompt, and what happens to selection accuracy as you cross 10, 20, 50 tools. The book is short on purpose; it is the chapters you would have written after a year of running the harness.
