This is granularity taken to its logical conclusion. Your tools become so atomic that they work with types you didn't know existed when you built them.
#### When to use this:
* External APIs where you want the agent to have full user-level access (HealthKit, HomeKit, GraphQL endpoints)
* Systems that add new capabilities over time
* When you want the agent to be able to do anything the API supports
#### When static mapping is fine:
* Intentionally constrained agents with limited scope
* When you need tight control over exactly what the agent can access
* Simple APIs with stable, well-known endpoints
The pattern: one tool to discover what's available, one tool to interact with any discovered capability. Let the API validate inputs rather than duplicating validation in your enum definitions.
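As a minimal sketch of this pattern, here is a pair of tools backed by a hypothetical capability registry (`CAPABILITIES`, `discover_capabilities`, and `invoke_capability` are illustrative names, not a real API; in practice the registry would come from the live system, e.g. a GraphQL introspection query or HealthKit's type list):

```python
import json

# Hypothetical capability registry. In a real system this is fetched from
# the API at runtime, so new capabilities appear without redeploying tools.
CAPABILITIES = {
    "step_count": {"params": ["start_date", "end_date"]},
    "heart_rate": {"params": ["start_date", "end_date"]},
}

def discover_capabilities() -> str:
    """Tool 1: tell the agent what the API currently supports."""
    return json.dumps(CAPABILITIES)

def invoke_capability(name: str, params: dict) -> dict:
    """Tool 2: interact with any discovered capability.

    Inputs pass through untouched; the backing API, not an enum in the
    tool schema, decides whether `name` and `params` are valid.
    """
    if name not in CAPABILITIES:
        # Surface the API's own error so the agent can adjust and retry.
        return {"error": f"unknown capability: {name}"}
    return {"capability": name, "params": params, "status": "ok"}
```

Because validation lives behind `invoke_capability`, a capability added to the registry tomorrow is usable by the agent today's tools were shipped with.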
### CRUD completeness
For every entity in your system, verify the agent has full create, read, update, delete (CRUD) capability:
* **Create:** Can the agent make new instances?
* **Read:** Can the agent see what exists?
* **Update:** Can the agent modify instances?
* **Delete:** Can the agent remove instances?
**The audit:** List every entity in your system and verify all four operations are available to the agent.
**Common failure:** You build `create_note` and `read_notes` but forget `update_note` and `delete_note`. User asks the agent to "fix that typo in my meeting notes" and the agent can't help.
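The audit is mechanical enough to script. A small sketch, assuming your tools follow a `verb_entity` naming convention (the `TOOLS` set and `audit_crud` helper here are hypothetical, for illustration only):

```python
# Hypothetical tool registry following a verb_entity naming convention.
TOOLS = {"create_note", "read_notes", "update_note", "delete_note",
         "create_tag", "read_tags"}

CRUD_VERBS = ("create", "read", "update", "delete")

def audit_crud(entities, tools):
    """Return the missing CRUD verbs for each entity, if any."""
    missing = {}
    for entity in entities:
        gaps = [verb for verb in CRUD_VERBS
                if not any(t.startswith(f"{verb}_{entity}") for t in tools)]
        if gaps:
            missing[entity] = gaps
    return missing

# audit_crud(["note", "tag"], TOOLS) flags that tags can be created and
# read but never updated or deleted — exactly the gap described above.
```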
## Anti-patterns
### Common approaches that aren't fully agent-native
These aren't necessarily wrong—they may be appropriate for your use case. But they're worth recognizing as different from the architecture this document describes.
#### Agent as router
The agent figures out what the user wants, then calls the right function. The agent's intelligence is used to *route*, not to *act*. This can work, but you're using a fraction of what agents can do.
#### Build the app, then add agent
You build features the traditional way (as code), then expose them to an agent. The agent can only do what your features already do. You won't get emergent capability.
#### Request/response thinking
Agent gets input, does one thing, returns output. This misses the loop: Agent gets an outcome to achieve, operates until it's done, handles unexpected situations along the way.
#### Defensive tool design
You over-constrain tool inputs because you're used to defensive programming. Strict enums, validation at every layer. This is safe, but it prevents the agent from doing things you didn't anticipate.
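To make the contrast concrete, here are two versions of the same tool's parameter schema, one defensive and one open (the `log_workout` tool and its fields are hypothetical):

```python
# Defensive schema: the enum freezes the tool to categories known at
# build time. A user who took up rock climbing is out of luck.
defensive_schema = {
    "name": "log_workout",
    "parameters": {
        "type": {"enum": ["running", "cycling", "swimming"]},
        "minutes": {"type": "integer"},
    },
}

# Open schema: accept a free-form string and let the backing API
# validate, so new workout types work without redeploying the tool.
open_schema = {
    "name": "log_workout",
    "parameters": {
        "type": {"type": "string",
                 "description": "Any workout type the API accepts"},
        "minutes": {"type": "integer"},
    },
}
```

The open version trades schema-level safety for reach; the API still rejects nonsense, but the rejection happens at runtime where the agent can read the error and recover.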
#### Happy path in code, agent just executes
Traditional software handles edge cases in code—you write the logic for what happens when X goes wrong. Agent-native lets the agent handle edge cases with judgment. If your code handles all the edge cases, the agent is just a caller.
### Specific anti-patterns
#### Agent executes your workflow instead of pursuing outcomes
You wrote the logic, agent just calls it. Decisions live in code, not agent judgment.