Strictly typed APIs were a huge step forward. They made teams faster, integrations safer, and docs less of a guessing game. They also gave us a very comforting illusion: if the types line up, the world is stable.
But that comfort comes with a cost: the moment your product actually starts to evolve, your API becomes brittle.
Meanwhile, the dominant “developers” touching your API are increasingly language models: agents that operate on intent, partial information, and evolving plans. In that world, a perfect schema is not a perfect interface. It’s often just a more expensive way to be wrong.
1) The hidden tax of “perfect contracts”
When you treat your schema as the truth, every new feature is a schema negotiation. Every experiment is a breaking change waiting to happen. Every consumer becomes a veto player.
You end up optimizing for internal consistency instead of external learning. And the API becomes “correct” while the product becomes slow.
2) The shift: stop typing the universe
The alternative is not “no structure.” It’s moving away from strictly typed APIs toward loosely described APIs—where the core stays flexible, and structure only hardens where it actually matters.
What changed is that we finally have a practical translation layer: LLMs. They can map messy, high-level intent (“schedule a follow-up with the customer, include the last two invoices, avoid Monday”) into concrete tool calls and payloads—as long as you give them a stable set of capabilities and a few anchors to hold onto.
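To make that concrete, here is a minimal sketch of what “a stable set of capabilities and a few anchors” might look like as data handed to a model. Everything here is illustrative: `callModel`, the capability fields, and the tool-call shape are assumptions, not a real SDK.

```ts
// A capability is the only "contract" the caller sees: a stable name, a loose
// description of intent, and a few anchor fields that must always be present.
interface Capability {
  name: string;
  description: string;
  anchors: string[];
}

interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

const capabilities: Capability[] = [
  {
    name: "schedule_follow_up",
    description: "Schedule a follow-up with a customer, optionally attaching documents.",
    anchors: ["customer_id", "not_before"],
  },
];

// Build the prompt that turns messy intent into a concrete call.
function buildPrompt(intent: string): string {
  const tools = capabilities
    .map((c) => `- ${c.name}: ${c.description} (required: ${c.anchors.join(", ")})`)
    .join("\n");
  return `Available capabilities:\n${tools}\n\nUser intent: ${intent}\n` +
    `Respond with a JSON tool call: {"tool": ..., "args": {...}}`;
}

// Placeholder for whatever model client you actually use.
declare function callModel(prompt: string): Promise<string>;

async function intentToToolCall(intent: string): Promise<ToolCall> {
  const raw = await callModel(buildPrompt(intent));
  return JSON.parse(raw) as ToolCall; // validation happens at the edge, not here
}
```

The point of the sketch is that the caller never sees a full schema, only names, descriptions, and anchors; everything else stays negotiable.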
And if there are language models on both ends—an agent calling the API, and a model-backed gateway sitting in front of it—the interface doesn’t have to be a one-shot contract. It can be a negotiation: the caller proposes an action, the server-side model responds with constraints, clarifying questions, a repaired payload, or a safer alternative, and they converge on a valid solution.
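One way that negotiation could be shaped on the wire, sketched as a small TypeScript protocol. The response statuses and the `negotiate` helper below are assumptions for illustration, not a standard.

```ts
// The server-side model's possible moves in the negotiation.
type NegotiationResponse =
  | { status: "accepted"; result: unknown }
  | { status: "clarify"; questions: string[] }                               // needs more intent
  | { status: "repaired"; payload: Record<string, unknown>; note: string }   // fixed payload offered
  | { status: "alternative"; suggestion: string };                           // safer action proposed

// The caller's loop: propose, read the server's move, adjust, try again.
async function negotiate(
  propose: (payload: Record<string, unknown>) => Promise<NegotiationResponse>,
  revise: (feedback: NegotiationResponse) => Promise<Record<string, unknown>>,
  initial: Record<string, unknown>,
  maxRounds = 3,
): Promise<unknown> {
  let payload = initial;
  for (let round = 0; round < maxRounds; round++) {
    const response = await propose(payload);
    if (response.status === "accepted") return response.result;
    // Fold the server's move (repair, questions, or alternative) into the next attempt.
    payload = response.status === "repaired" ? response.payload : await revise(response);
  }
  throw new Error("negotiation did not converge");
}
```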
3) Structure belongs at the edges
Most systems already have “shape pressure.” It shows up in very specific places:
- Endpoints: where you expose a contract to the outside world and need predictable behavior.
- Databases: where consistency and constraints protect the business.
- Legacy APIs: where you don’t get to renegotiate the world.
- Legacy UIs: where the front-end expects structure because it was built that way.
- High-stakes flows: payments, auth, compliance, anything with real blast radius.
Everywhere else? Let it breathe. Your internal representation can change, your interaction model can shift, and your API can become a living interface instead of a frozen artifact.
This is where SLMs (small language models) become extremely useful. You can run small, fast models close to the boundary to do repetitive, domain-specific work: normalize payloads, choose adapters, detect missing fields, generate helpful errors, and enforce policy—without paying “big model” latency and cost on every request.
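A sketch of what such a boundary shim might look like, with a hypothetical `normalizeWithSlm` standing in for the small-model call that maps loose keys onto the strict edge. The required fields and the error format are assumptions.

```ts
interface BoundaryResult {
  ok: boolean;
  payload?: Record<string, unknown>;
  errors?: { field: string; hint: string }[]; // hints an agent can act on directly
}

// Structure hardens only where it matters: this edge insists on two fields.
const REQUIRED_AT_THIS_EDGE = ["customer_id", "amount"];

// Hypothetical small-model call, e.g. mapping "customer" -> "customer_id".
declare function normalizeWithSlm(
  payload: Record<string, unknown>,
  expectedFields: string[],
): Promise<Record<string, unknown>>;

async function acceptAtBoundary(raw: Record<string, unknown>): Promise<BoundaryResult> {
  // Cheap, local normalization before any strict check.
  const payload = await normalizeWithSlm(raw, REQUIRED_AT_THIS_EDGE);

  const errors = REQUIRED_AT_THIS_EDGE
    .filter((field) => payload[field] === undefined)
    .map((field) => ({
      field,
      hint: `Missing "${field}". Include it on the next attempt; other fields can stay loose.`,
    }));

  return errors.length ? { ok: false, errors } : { ok: true, payload };
}
```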
4) Anti-fragile APIs don’t just change — they learn
The bigger unlock is that your API becomes observable. Not in the “log the request” sense, but in the “watch patterns of interaction and let those patterns shape the interface” sense.
Instead of guessing what the right structure is upfront, you observe paths: what users ask for, what agents call, what fields actually matter, what breaks, what’s consistently missing, what combinations form real workflows.
With LLMs in the loop, those traces become training data: you can turn “real calls + outcomes” into eval sets, prompt upgrades, fine-tunes, or even specialized SLMs that handle the boring-but-critical edge cases better than a generic model.
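A minimal sketch of that trace-to-eval step, assuming a simple trace record captured at the boundary; all of the shapes here are illustrative.

```ts
interface Trace {
  intent: string;                                          // what the caller was trying to do
  toolCall: { tool: string; args: Record<string, unknown> };
  outcome: "success" | "rejected" | "repaired";
  finalArgs?: Record<string, unknown>;                     // args that eventually succeeded, if any
}

interface EvalCase {
  input: string;                                           // intent to replay against a model
  expected: Record<string, unknown>;                       // the args reality told us were right
}

// Keep only traces where we know the correct answer: either the first attempt
// succeeded, or the negotiation converged on repaired args.
function tracesToEvals(traces: Trace[]): EvalCase[] {
  return traces
    .filter((t) => t.outcome === "success" || (t.outcome === "repaired" && t.finalArgs))
    .map((t) => ({
      input: t.intent,
      expected: t.outcome === "success" ? t.toolCall.args : t.finalArgs!,
    }));
}
```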
That negotiation loop is also a teaching loop. The API-side model can “teach” the caller by returning executable corrections (e.g. “use customer_id, not customer”), minimal examples, and policy-aware error messages that an agent can incorporate into its next attempt. The caller can “teach” the server by surfacing intent and failure modes in a structured way—feeding the evals and fine-tunes that make the boundary smarter over time.
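On the caller side, an executable correction can be applied mechanically before the next attempt. The `Correction` shape below is an assumption for illustration, not a defined format.

```ts
// An "executable correction" the API-side model might return,
// e.g. a field rename like "use customer_id, not customer".
interface Correction {
  rename?: Record<string, string>;     // { "customer": "customer_id" }
  example?: Record<string, unknown>;   // minimal payload that would have passed
}

function applyCorrection(
  payload: Record<string, unknown>,
  correction: Correction,
): Record<string, unknown> {
  const next: Record<string, unknown> = { ...payload };
  for (const [from, to] of Object.entries(correction.rename ?? {})) {
    if (from in next) {
      next[to] = next[from];
      delete next[from];
    }
  }
  return next;
}
```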
In other words: the API evolves like a product. It becomes anti-fragile—getting better from variance, not punished by it.
5) The ecosystem opportunity
If this paradigm is real, the opportunity is not just “a different way to write APIs.” It’s the entire supporting API ecosystem getting rebuilt around evolution:
- LLM tool routing: not just versioning, but “what can you do?” plus model-aware dispatch.
- Fine-tune loops: turn traces into evals → improvements → safer automation over time.
- SLM adapters and shims: cheap, local translation between loose intent and strict legacy shapes.
- Contract snapshots from reality: schemas derived from traces, not from meetings (see the sketch after this list).
- Edge validators: structure + policy enforcement at endpoints and critical boundaries, not everywhere.
- Docs that stay alive: documentation that updates as usage changes (because it watches usage).
- Agent-native clients: tool interfaces optimized for multi-turn calls and token efficiency.
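As a concrete instance of the “contract snapshots from reality” idea, here is a sketch that derives a snapshot from observed payloads: fields that appear in nearly every call become required, the rest stay optional. The threshold and return shape are assumptions.

```ts
// Derive a contract snapshot from real traffic rather than a meeting.
function snapshotSchema(
  payloads: Record<string, unknown>[],
  requiredThreshold = 0.95,
): { required: string[]; optional: string[] } {
  if (payloads.length === 0) return { required: [], optional: [] };

  // Count how often each field actually shows up.
  const counts = new Map<string, number>();
  for (const p of payloads) {
    for (const key of Object.keys(p)) {
      counts.set(key, (counts.get(key) ?? 0) + 1);
    }
  }

  const required: string[] = [];
  const optional: string[] = [];
  for (const [key, n] of counts) {
    (n / payloads.length >= requiredThreshold ? required : optional).push(key);
  }
  return { required, optional };
}
```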
Once you stop pretending the interface is fixed, you can build tooling that treats change as the default. And once you treat change as the default, the ecosystem stops being defensive and starts compounding.