YC recently put out a video about the agent economy - the idea that
agents are becoming autonomous economic actors, choosing tools and
services without human input.
It got me thinking: how do you actually optimize for agent discovery?
With humans you can do SEO, copywriting, word of mouth. But an agent
just looks at the tools available in its context and picks one based
on the description, the schema, and the examples.
Has anyone experimented with this? Does better documentation
measurably increase how often agents call your tool? Does the
wording of your tool description matter across different models
(ZLM vs Claude vs Gemini)?
Three things that actually moved the needle:
Negative boundaries work better than positive claims. "Generates reports from structured receipts. Does NOT execute code, modify files, or make API calls" gets called correctly way more often than "A powerful report generation tool."

Trigger words matter more than you'd think. I maintain explicit trigger lists per skill: specific phrases that should activate it. Without those, the agent pattern-matches on vibes and gets it wrong ~30% of the time. With explicit triggers, that drops to under 5%.
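To make "explicit trigger lists" concrete, here's a minimal sketch of the idea. The skill names and phrases are made up for illustration; the point is that matching is literal string containment, not vibes:

```python
# Hypothetical trigger lists: each skill activates only on specific
# phrases, checked before the model free-associates over descriptions.
TRIGGERS = {
    "expense_report": ["expense report", "receipts", "reimbursement"],
    "code_review": ["review this pr", "review my code", "diff feedback"],
}

def candidate_skills(request: str) -> list[str]:
    """Return skills whose trigger phrases literally appear in the request."""
    text = request.lower()
    return [skill for skill, phrases in TRIGGERS.items()
            if any(phrase in text for phrase in phrases)]
```

So `candidate_skills("Build an expense report from these receipts")` narrows the choice to one skill before the agent ever reasons about it.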
Schema is the real interface. Clean parameter names with sensible defaults beat elaborate descriptions. If your tool takes query: string vs search_query_input_text: string, the first one gets called more reliably across models.
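Putting both points together, a tool definition might look like this. I'm using the common JSON-Schema function-calling shape as an example; the tool name, fields, and defaults are illustrative:

```python
# Sketch of a tool definition: negative-boundary description plus a
# minimal parameter name with a sensible default. Shape follows the
# widely used JSON-Schema function-calling convention.
report_tool = {
    "name": "generate_report",
    "description": (
        "Generates reports from structured receipts. "
        "Does NOT execute code, modify files, or make API calls."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            # "query", not "search_query_input_text"
            "query": {"type": "string", "description": "What to report on"},
            "format": {"type": "string", "enum": ["pdf", "csv"], "default": "pdf"},
        },
        "required": ["query"],
    },
}
```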
But here's the thing the "agent economy" framing gets wrong: you don't want fully autonomous tool selection. An agent choosing freely between 50 tools is like giving a junior developer admin access to everything — it'll work sometimes and break spectacularly other times. What works better is constraining the agent's scope upfront. Give it 3-5 relevant skills for the task, not your entire toolkit. Or build workflow skills that chain multiple tools in a fixed sequence — the agent handles the content, the workflow handles the routing.
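A workflow skill in the sense above can be sketched like this, assuming an `llm` callable for the content step; the function names are placeholders, not a real framework:

```python
# Hypothetical workflow skill: routing is fixed in code, so the agent
# never chooses between tools. It only fills in content at one step.
def normalize(receipt: dict) -> dict:
    # Tool 1: deterministic cleanup of a raw receipt record.
    return {"vendor": receipt["vendor"].strip().title(),
            "amount": round(float(receipt["amount"]), 2)}

def render_report(summary: str) -> str:
    # Tool 2: deterministic formatting of the final output.
    return f"EXPENSE REPORT\n==============\n{summary}"

def expense_workflow(receipts: list[dict], llm) -> str:
    parsed = [normalize(r) for r in receipts]               # fixed step
    summary = llm(f"Summarize these line items: {parsed}")  # agent writes content
    return render_report(summary)                           # fixed step
```

The model handles the one step that needs language; the sequence itself can't be mis-routed.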
The uncomfortable truth: you're not optimizing for "discovery" in the human sense. There's no brand loyalty, no trust built over time. Every single invocation is a cold start where the model reads your description and decides. That's actually freeing — it means the best-described tool wins, regardless of who built it.