What it means to be AI-native
The last few years have produced a lot of software that uses AI. Using AI a lot is not the same thing as being AI-native. Many (most?) companies still treat AI like a very smart temporary contractor. You bring it in for a task, paste in some context, get an answer, and dismiss it. If the answer is good, great. If it’s wrong, you try again with a better prompt. If it forgets something important, that’s your fault for not pasting enough. If it does something dangerous, you tighten the instructions and hope next time goes better, because you are responsible for supervising every action. That workflow can be useful. But it is also fundamentally shallow.
For me, an AI-native operating environment starts from a different premise: the limiting factor is usually not model capability. It is infrastructure. The hard problem is not getting the model to generate text or code. The hard problem is creating the conditions under which an agent can operate autonomously, repeatedly, safely, and with enough context to be genuinely useful without being micromanaged.
That changes the human role. You stop building things yourself and become an architect of infrastructure, letting AI do most of the building. The human job is to make sure the AI system has the right context and structure to be effective: working with it continuously so that its memory system keeps pace with what has been built, and its controls framework keeps pace with the trust being placed in it. You then keep expanding, making sure the system knows the right things in the right quantities, and has the permissions and control infrastructure to do remarkable things safely.
AI-native is not using Claude / Gemini / ChatGPT a lot. It is building an operating environment around AI tools: memory systems, retrieval pipelines, control hooks, daemons, workflow skills, review mechanisms, and safety layers that let those tools function less like a chat interface and more like an operator. The distinction matters because raw model capability is unreliable in exactly the ways infrastructure can compensate for.
At first everything is closely monitored, but crucially, every time a problem is detected the response must be a thorough root cause analysis and the addition of infrastructure controls that systemically prevent the failure from repeating. Over time this compounds: from an AI system that gives a useful answer most of the time but can’t be left alone, to one that gives useful answers often enough to become as trusted as a human. The crucial difference is that success isn’t more skilful prompting; it’s more deliberate and continuous infrastructure design (with regular, AI-driven, rebuilds too).
| Not AI-native | AI-native |
|---|---|
| You ask the model to do tasks | You maintain the memory, controls, and permissions that let the model operate |
| Context is assembled ad hoc in prompts | Context is continuously accumulated, normalised, indexed, and retrieved |
| Safety is “be careful” in instructions | Safety is enforced structurally with hooks, state checks, gates, and audit trails |
| Failures are corrected socially | Failures are systematically analysed and controls are introduced which make the failure mechanically difficult to repeat |
| Trust is binary: on or off | Trust expands in concentric rings as controls mature |
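The “safety is enforced structurally” row can be made concrete. The sketch below is a minimal, entirely hypothetical Python example (the action names, allowlist, and `ActionGate` class are all invented for illustration): every action an agent proposes passes through a gate that checks it mechanically and appends an audit record, so “be careful” becomes a state check rather than an instruction.

```python
import json
import time
from dataclasses import dataclass, field

@dataclass
class ActionGate:
    """Structural safety: actions are checked and logged, not politely requested."""
    allowed: set = field(default_factory=lambda: {"read_file", "run_tests"})
    blocked: set = field(default_factory=lambda: {"delete_repo", "send_email"})
    audit_log: list = field(default_factory=list)

    def check(self, action: str, args: dict) -> bool:
        """Return True only if the action may proceed; always append an audit record."""
        if action in self.blocked:
            verdict = "deny"
        elif action in self.allowed:
            verdict = "allow"
        else:
            verdict = "escalate"  # unknown actions go to a human, never through
        self.audit_log.append({
            "ts": time.time(),
            "action": action,
            "args": json.dumps(args, sort_keys=True),
            "verdict": verdict,
        })
        return verdict == "allow"

gate = ActionGate()
assert gate.check("read_file", {"path": "notes.md"}) is True
assert gate.check("delete_repo", {"name": "prod"}) is False
assert gate.check("reboot_server", {}) is False  # unknown → escalate, not allow
assert [r["verdict"] for r in gate.audit_log] == ["allow", "deny", "escalate"]
```

The design choice that matters is the default: anything not explicitly allowed escalates rather than proceeds, and the audit trail is written unconditionally, so failures leave evidence for the root cause analysis described above.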
Principles
To operate effectively in this style, a few principles apply:
- Make AI the process spine. If there are multiple steps or systems, never copy-paste or manually coordinate across them yourself. The AI should do that, ideally using Application Programming Interfaces (APIs) or plug-ins.
- Prioritise controls strongly. If something isn’t working, don’t just fix it. Understand how it happened, which controls failed, and how to restructure so that it physically can’t happen again.
- Rebuild periodically. Things will (and should) evolve organically. Like any system, an AI-built one accumulates tech debt, and because it evolves faster than human-built systems, the debt accumulates faster too. The good news is that rebuilds can also be executed very quickly. Spot when they are needed and don’t put them off.
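The table’s “trust expands in concentric rings as controls mature” idea can also be sketched. This is a hypothetical illustration (the ring names, control names, and actions are all invented): each wider ring of permitted actions is unlocked only when the control that makes it safe actually exists.

```python
# Hypothetical sketch: trust expands ring by ring as controls mature.
# (ring name, control that must exist, actions unlocked at this ring)
RINGS = [
    ("observe", None,          {"read", "summarise"}),
    ("propose", "audit_log",   {"draft_change"}),
    ("act",     "review_gate", {"apply_change"}),
    ("operate", "rollback",    {"deploy"}),
]

def permitted_actions(controls_in_place: set) -> set:
    """Walk outward ring by ring; stop at the first ring whose control is missing."""
    actions = set()
    for _, required_control, unlocked in RINGS:
        if required_control is not None and required_control not in controls_in_place:
            break
        actions |= unlocked
    return actions

# With only an audit log, the agent may read, summarise, and draft — not apply.
print(sorted(permitted_actions({"audit_log"})))  # ['draft_change', 'read', 'summarise']
```

Trust here is neither on nor off: adding a control widens what the system may do, and removing one narrows it again, which is exactly the non-binary behaviour the table describes.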
Four opportunities for mature organisations
- Tool access. Many orgs’ information-security postures block a lot of AI tools. This is a genuinely tricky balance: we need to be safe, but increasingly AI value is unlocked through agentic workflows, and the best of these stitch multiple systems and tools together. Which tools are right for each task is an ongoing evolution, so tool access needs to keep up.
- Data layer. For many firms, data is stored in multiple places. To leverage that data effectively it’s important to have clean, well-documented data, but also to have Model Context Protocol (MCP) servers in place. Without these steps, only specialists can leverage the data. Once interfaces like MCP servers are developed, AI can be used to democratise data, unlocking substantial business value.
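To make the data-layer point slightly more concrete, here is a deliberately simplified sketch. It does not use the real MCP SDK; the manifest shape, tool names, and the `sales_db` source are all invented. The point it illustrates is the shift: a data source exposed as a small set of described, typed tools that an agent can discover and call, rather than a database only specialists can query.

```python
# Hypothetical tool manifest for a data-layer server (MCP-like in spirit only;
# every name and field here is invented for illustration).
MANIFEST = {
    "server": "sales_db",
    "tools": [
        {
            "name": "list_tables",
            "description": "Return the names of all documented tables.",
            "input_schema": {"type": "object", "properties": {}},
        },
        {
            "name": "run_readonly_query",
            "description": "Run a read-only SQL query against documented tables.",
            "input_schema": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
                "required": ["sql"],
            },
        },
    ],
}

def discover(manifest: dict) -> list:
    """What an agent sees: tool names plus human-written descriptions."""
    return [(t["name"], t["description"]) for t in manifest["tools"]]

for name, desc in discover(MANIFEST):
    print(f"{name}: {desc}")
```

The human-written descriptions are doing the democratising work here: they are the documentation that lets an agent, and therefore a non-specialist, use the data safely.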
- Systems are as important as tools. Much of what many orgs focus on is making sure people are confidently using AI tools, and that is a crucial step. But AI use is too often discussed in tool-level isolation (for example, more people logging into Gemini). To make broad-scale, systemic AI adoption safe, infrastructure (hooks, daemons, MCP servers, memory and instruction systems, and so on) needs to be treated on a par with tool use. AI has the potential to be more than a transactional tool, but it can only fulfil that potential when we prioritise and invest in both the tools and the systems and controls that support them safely.
- Subject matter expertise should be leveraged. The old human-based process of (1) ‘subject matter expert designs’, (2) ‘they write a set of specs or user stories’, then (3) ‘Tech builds’ isn’t the optimal model in the world of AI. It puts an unnecessarily large separation between the end user / subject matter expert and the AI system. The AI system is perfectly capable of understanding the subject matter expert (SME), and with the right control infrastructure it can build what the SME needs directly. Because of this, I believe the most effective way to genuinely become AI-native is to develop it from the ground up, by empowering and encouraging subject matter experts to directly develop AI systems. That means helping our people overcome their fear of the unfamiliar, but also helping them understand how to do so securely, by partnering with them to build appropriate systems and controls.
Practical next steps
A few options for how some of this could be approached:
- Get people using AI enthusiastically. Run hackathons and similar events to get everyone using AI tools. Encourage system builds, not just tool usage. Give people development time and budgets, not just for courses but also for personal tinkering, ‘vibe coding’ and the like.
- Unblock tool access. Two streams:
  - Find a way to get a far wider range of AI tools into people’s hands, and design a process for the rapid approval of new tools as they become available. This is hard, but given the pace of market change the target should ideally be days, not weeks.
  - Appreciating that the above won’t be perfect, consider creating sandboxed environments and personal training budgets so people can use AI more in their personal lives.
- Data access. Get all key data layers exposed through MCP servers that are in place and working effectively.
- Continue iterating where there is genuine business value. Identify two to five key improvements that could be made using AI. Align resources behind them (personal AI budgets and tools are a great way to flush out people with interest). Give those teams strong exec support, and strongly encourage them to follow the AI-native principles above.