Applied Intelligence


Where AI sits determines its impact.

The gap between what AI can do and what organisations are doing with it is not a technology problem. It is a placement problem. And the difference between incremental gains and a fundamentally different kind of company comes down to one decision.

Where companies are

The tools are everywhere. The impact is not.

Every leadership team we sit down with is navigating some version of the same situation. They have invested in AI. They have tools, pilots, enthusiastic individuals doing impressive things on their own. And yet the organisation, as a whole, does not feel different.

The language varies. The pattern holds.

“We have the tools—we installed the platforms—but there is no strategy connecting them.”

“There are pockets of innovation, but they stay siloed.”

“Our prototypes work, but they do not make it into how we actually operate.”

“We are seeing results in specific areas, but the gains do not spread.”

“Teams are too busy to slow down and change how they work.”

Three levels

Where you place AI in your organisation is the single biggest decision you will make about it.

There are three ways companies are deploying AI today. All of them are valid. They are not equivalent, and conflating them is where most strategies go wrong.

A complete AI strategy includes all three. Individual tools have their place. AI features inside applications are useful. But the third level is where the real leverage lives, and from what we have seen, it is where almost nobody is spending their time.

Individual use

People moving back and forth between ChatGPT, Gemini, or Copilot and their day-to-day work. Useful for speeding up individual tasks. But the person is still doing the work—shuttling information between systems, making the decisions, executing manually. The company does not get smarter. One person gets faster. When they leave, the capability leaves with them.

AI in apps

Salesforce AI, Zendesk AI, Notion AI. Intelligence embedded inside a specific application, useful for accelerating a function. But it is stuck in a silo. It only knows what that one application knows. It can only act on what that one application can reach. Your CRM gets smarter. Your business does not.

AI in systems

Intelligence built into how the organisation actually operates—with access to the people, the business context, the process knowledge, and the tools. The AI sits in the same seat as your team. It can observe, decide, and act across systems, toward a goal. This is where you get order-of-magnitude change—a fundamentally different way of operating.

In practice

A company that could not scale its most important function.

A computer vision company that builds hardware for sports tracking was growing fast. Their product was technically complex. It changed constantly. And their customer support operation—more than thirty people—was buckling under the weight of it.

Training was perpetual. Every product update made the previous round of training obsolete. Hiring was expensive and slow. Answers went stale faster than the team could learn them. Customer satisfaction was falling, churn was climbing, and the support function was becoming an existential constraint on the business.

They spent four months looking for a solution. They evaluated startups, established vendors, and the AI features built into their existing helpdesk platform. The best any of them could promise was a partial reduction in ticket volume—the leading platform in the space advertises roughly 50 percent resolution. For a company that needed a complete rethinking of how support worked, that was not enough.

So they built it differently. They did not add AI to their helpdesk. They built a system of AI agents that sat in the same position as their support team—with access to the same email, SMS, inventory systems, customer databases, and CRM. The agents had the same process knowledge, the same rules, the same business context. People supervised the system rather than performing every step themselves.

32 → 1: thirty-two support staff reduced to one person supervising

99%: accuracy after three months of training

$50K/mo: autonomous refund and replacement decisions trusted to the system

2×: revenue doubled, growth that would have required sixty-four people
The counterfactual

The difference was not the AI. It was where the AI sat.

It is worth understanding what would have happened under the conventional approach. At the company's original revenue, they employed thirty-two support agents. Double the revenue and they would need sixty-four. Apply the best-in-class AI-in-app resolution rate of fifty percent, and they are back to thirty-two people—right where they started, plus the cost and complexity of another platform to manage.

Both approaches used the same underlying technology. The difference was where the AI sat. Inside an application, you get an incremental improvement on an existing process. Inside the system, you get a different operating model entirely.
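The counterfactual above reduces to a few lines of arithmetic. A minimal sketch, assuming the two figures stated in the text: support headcount scales linearly with revenue, and the best AI-in-app tool resolves roughly half of tickets.

```python
# Counterfactual headcount arithmetic from the case above.
# Assumptions (stated in the text): headcount scales linearly with
# revenue, and the best in-app AI resolves ~50% of ticket volume.

BASE_AGENTS = 32          # headcount at original revenue
REVENUE_MULTIPLIER = 2    # revenue doubles
APP_AI_RESOLUTION = 0.5   # best-in-class in-app resolution rate

# AI in an app: headcount grows with revenue, then half the tickets
# are deflected -- which lands right back at the starting headcount.
agents_at_double_revenue = BASE_AGENTS * REVENUE_MULTIPLIER
agents_with_app_ai = agents_at_double_revenue * (1 - APP_AI_RESOLUTION)

# AI in the system: one person supervising, regardless of revenue.
agents_with_system_ai = 1

print(agents_at_double_revenue)  # 64
print(agents_with_app_ai)        # 32.0 -- same as before, plus a new platform
print(agents_with_system_ai)     # 1
```

The point the arithmetic makes: a percentage improvement inside the existing operating model only offsets growth, while changing the operating model removes headcount from the scaling equation entirely.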

AI in an app

50% ticket resolution

32 agents at current revenue

64 agents at double revenue

Apply 50%—back to 32

Same operating model, same constraints

AI in the system

One person supervising

99% accuracy

Seconds instead of hours

Revenue doubled without adding headcount

A fundamentally different way of operating

The intelligence layer

The layer where everything actually gets done.

Every organisation operates in two layers. The application layer is the tools—your CRM, your inventory system, your email, your spreadsheets, your databases. These accomplish individual tasks.

The intelligence layer is where everything actually gets done. It is where people apply judgment, follow processes, make decisions, and draw on the institutional knowledge that makes the company work. Traditionally, people are this layer. They sit above the applications and orchestrate across them.

The shift is this: AI agents can operate in that same layer. As peers to your people, with access to the same tools, the same business context, the same process knowledge. They can observe what is happening, make decisions based on real business rules, and take action across systems—toward a goal, not just within a single application.

How to tell which layer your AI is in

Your AI systems sit in the same seat as people. They have access to the same tools. They understand the business context—the processes, the rules, the exceptions, the judgment calls. They can orchestrate across systems toward a goal. If any of those are missing, the AI is operating in the application layer. It is useful, but it is siloed.

Digital workers

Training, not deployment.

You would not hire a person, give them no context about the business, no training, no feedback, and expect them to perform well on day one. You would onboard them. You would give them the business context. You would observe how they work, correct mistakes, and build their competence over time.

AI agents that operate in the intelligence layer work the same way. They start at some level of accuracy and improve through structured training. They need feedback loops. They need guardrails—the same way different people in the organisation have different levels of access and authority. They need to understand the goal, not just the task.

From what we have seen, companies that treat this as a deployment problem—build it, ship it, move on—are the ones whose systems plateau at sixty percent accuracy and never improve. Companies that treat it as a training problem, with dedicated time for onboarding, feedback, and iteration, watch accuracy climb week over week until the system outperforms what came before.

01 Assess: identify the highest-value seats in the organisation

02 Build: engineer the agents, the integrations, the knowledge infrastructure

03 Train: onboard the system the way you would onboard a person

04 Manage: keep it current as the business evolves
Compounding

The first build is the hardest. Everything after it is different.

The most compelling thing about building AI into systems is what happens after the first module is working.

Every build creates infrastructure that carries forward. The knowledge base deepens. The integrations mature. The organisational learning—what works, what does not, how the business actually operates—accumulates in a form that makes the next build faster, cheaper, and more capable.

In one engagement, a sales agent that handled the full ordering workflow—from initial enquiry through product mockup, inventory check, and checkout—became the foundation for capabilities no one had planned for. Automated reordering for existing customers. Personalised product catalogues generated on demand. Proactive outreach based on purchase history. None of these were in the original scope. They became possible because the connecting tissue—the integrations, the knowledge, the business context—already existed.

This is what we mean when we talk about capability compounding. The first sprint is the hardest and produces the least value. The second builds on what the first established. By the third, the system starts to hum—and the organisation begins operating in ways that no amount of budget or headcount could have made possible before.

The ceiling is far higher than most companies realise. The question is not whether AI can do more. It is whether the organisation is structured to let it.
Applied Intelligence

The companies that figure this out first become a different kind of company.

The gap between what AI can do and what most organisations are doing with it is wide, and it is growing. Every quarter, the technology gets more capable. The question of where to put it—and how to structure the organisation around what becomes possible—is only becoming more important.

From what we have seen, the companies that are pulling ahead are not the ones spending the most on AI. They are the ones that figured out where to place it. They built it into the layer where their business actually operates. They treated the deployment as a training problem. And they stayed with it long enough for the capabilities to compound. That is the work we do. And it is still early.