Field Note: When integration gets easier, responsibility gets harder
Lately, a few conversations and examples, particularly around MCP and the momentum building behind agentic AI and integration patterns, have pushed me to revisit some older mental models.
There’s a lot to like in what’s emerging. The composability is real. The primitives (Identity, Permission, Scope, Context...) feel cleaner than what we’ve had before, and they are discoverable and self-descriptive by design. For once, connecting systems, tools, and capabilities doesn’t feel like an exercise in brute force.
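To make that self-description concrete, here is a rough sketch of the kind of tool descriptor an MCP server hands back when a client lists its tools. The tool itself and its parameters are invented for illustration; only the overall shape (a name, a description, and a JSON Schema for inputs) reflects the protocol.

```typescript
// A sketch of an MCP-style tool descriptor, shaped like an entry in a
// server's tools/list response. The tool ("archive_invoice") and its
// parameters are hypothetical; the shape (name, description, inputSchema
// as JSON Schema) follows the MCP specification.
const archiveInvoiceTool = {
  name: "archive_invoice",
  description: "Move a paid invoice into cold storage.",
  inputSchema: {
    type: "object",
    properties: {
      invoiceId: { type: "string", description: "Internal invoice identifier." },
      reason: { type: "string", description: "Why the invoice is being archived." },
    },
    required: ["invoiceId"],
  },
};

console.log(JSON.stringify(archiveInvoiceTool, null, 2));
```

Because every tool carries its own schema, a client, or an agent, can discover what exists and how to call it without reading anyone’s docs. That is a large part of why the composability feels so cheap.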
What that’s triggered for me, though, isn’t concern so much as recognition. I’ve seen this phase before.
A familiar pattern
Most major technology shifts follow a similar arc. The early phase is about possibility: what can now be done that previously couldn’t be. The demos work. The integrations look elegant. Velocity increases and friction drops.
Only later does the second phase arrive: the point where these patterns stop being experiments and start becoming part of long-lived systems, real operating models, and organisational accountability. That’s usually where the interesting problems show up. Not because the technology fails, but because responsibility finally has to land somewhere. And although none of this is unique to AI, it applies to AI with unusual force.
Where the complexity actually lives
In my experience, the hard parts rarely sit in the core capability. They sit in the glue. Who owns an automated decision once it crosses system boundaries? What happens when an agent behaves “correctly” but produces an outcome no one feels accountable for? How do permissions, identity, escalation paths, and failure modes evolve once autonomy increases?
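I don’t have a clean answer, but it helps to make the question tangible. Here is a purely hypothetical sketch, a thought experiment rather than a proposal: an “accountability envelope” stamped onto every automated action that crosses a system boundary. Every name in it is invented; nothing like this exists in MCP or any agent framework I know of.

```typescript
// Hypothetical sketch only: a small "accountability envelope" attached to
// every automated action that crosses a system boundary, so that ownership,
// scope, and escalation are recorded at the moment of action rather than
// reconstructed after something goes wrong. All names here are invented
// for illustration.
interface AccountabilityEnvelope {
  actor: string;       // which agent or service performed the action
  onBehalfOf: string;  // the human or team whose authority it used
  scope: string[];     // the permissions it believed it was exercising
  owner: string;       // who is accountable for the outcome
  escalateTo: string;  // where a human lands if the outcome is contested
  decidedAt: Date;     // when the decision was made
}

function recordAction(action: string, envelope: AccountabilityEnvelope): void {
  // In a real system this would go to an audit log; here we just print it.
  console.log(`[${envelope.decidedAt.toISOString()}] ${action}`, envelope);
}

recordAction("archive_invoice", {
  actor: "billing-agent-v2",
  onBehalfOf: "finance-team",
  scope: ["invoices:write"],
  owner: "finance-team",
  escalateTo: "finance-oncall",
  decidedAt: new Date(),
});
```

The data structure is the easy part. Deciding who fills in owner and escalateTo, and who is bound by them, is exactly the question the early demos never have to answer.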
These aren’t questions that appear in early demos. They emerge slowly, often only once something has gone wrong — or once a board starts asking uncomfortable but reasonable questions.
In previous waves, the biggest costs didn’t come from moving too slowly. They came from moving fast without being explicit about boundaries, ownership, and long-term consequences.
This isn’t about slowing down
It’s worth being clear about what this isn’t. I’m rarely one to argue for slowing down, and this isn’t an argument against the likes of MCP, agentic AI, or rapid integration. If anything, the progress here is encouraging. We finally seem to be getting better at building connective tissue that doesn’t feel brittle from day one.
The question isn’t whether to move fast. It’s how deliberately we do so once early success turns into dependency.
Speed without sequencing tends to defer complexity rather than remove it.
What I’m still thinking through
There are a few questions I keep circling back to, and I don’t have settled answers yet. How do these patterns behave once they’re no longer optional, but assumed? What does “safe by default” actually mean in systems where agency is distributed? Where should responsibility sit as autonomy increases — with platforms, builders, operators, or governance structures that don’t yet exist?
These feel like the next layer of work, and they’re harder to talk about than capability or performance.
Provisional close
For now, I’m less interested in predictions than in paying attention.
When integration becomes easy, responsibility doesn’t disappear — it just moves. Understanding where it lands feels like the real work that follows this phase of progress.
I’m still thinking this through.