Why More AI Isn't the Answer
Apr 7, 2026
There is a version of AI adoption in investment operations that looks like progress and functions like a cost trap.
It starts with a reasonable instinct. A fund operations team is stretched. Reconciliations are slow, reporting is manual, compliance checks consume analyst hours that should go elsewhere. Someone suggests plugging in an AI tool. Within weeks, the team is running queries, generating outputs, automating workflows through a large language model. It feels like a transformation.
Then the bills arrive.
AI labs are in the business of selling tokens. Every query, every reconciliation, every report generated through an LLM consumes them. At low volume, this is unremarkable. At operational scale — portfolio reporting running nightly, NAV reconciliation across dozens of funds, compliance checks on every transaction, investor reporting on a monthly cycle — token consumption compounds quickly. What began as a productivity solution starts to look like a headcount problem that wasn't actually solved, just redenominated.
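The compounding is easy to underestimate because each individual query is cheap. A back-of-the-envelope sketch makes the scaling visible; every figure below is hypothetical (run counts, tokens per invocation, and pricing are invented for illustration, not drawn from any real fund or AI provider):

```python
# Hypothetical monthly run counts for a mid-sized fund operation.
# None of these numbers come from a real deployment.
RUNS_PER_MONTH = {
    "nightly_portfolio_report": 30,        # once per night
    "nav_reconciliation": 30 * 40,         # nightly, across 40 funds
    "compliance_check": 50_000,            # one per transaction
    "investor_report": 200,                # monthly cycle, 200 investors
}

TOKENS_PER_RUN = 8_000        # assumed average tokens per LLM invocation
PRICE_PER_1K_TOKENS = 0.01    # assumed blended $ price per 1K tokens

total_runs = sum(RUNS_PER_MONTH.values())
monthly_cost = total_runs * TOKENS_PER_RUN / 1_000 * PRICE_PER_1K_TOKENS

print(f"{total_runs:,} runs/month -> ${monthly_cost:,.2f}/month in tokens")
```

The point is not the specific dollar figure but the shape of the curve: if the LLM sits inside every execution, the token bill is a linear function of operational volume, so every new mandate, fund, or transaction stream raises the floor.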
The answer is not to use less AI. It is to use it differently.
The right mental model is not an AI that runs every time a workflow runs. It is an AI that runs once to design the workflow, and then steps back.
Think of it this way. A brilliant engineer who understands your domain deeply does not re-derive the logic for a reconciliation every time it needs to run. They reason through the problem once, understand its structure, and write clean code that executes reliably at whatever frequency the business requires. The code runs daily, weekly, monthly, without consuming their attention. They return only when something changes — a new data source, a rule update, a structural shift in the underlying process.
That is the architecture that actually scales.
In an agentic system designed for this constraint, the agent handles the cognitively expensive work once: understanding the problem, mapping the logic, generating deterministic code that runs reliably and leaves a complete audit trail. After that, the workflow operates autonomously. The agent re-engages only when the underlying data, rules, or requirements shift — which represents a fraction of the compute you would burn running the full reasoning loop on every execution.
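The pattern can be sketched in a few lines. Everything below is a simplified illustration, not any platform's actual API: the "agent" is a stub standing in for the one-time LLM reasoning step, and change detection is reduced to hashing the rule set so the expensive path only fires when the rules actually shift:

```python
import hashlib
import json
from datetime import datetime, timezone

def spec_fingerprint(rules: dict) -> str:
    """Hash the rule set so changes can be detected cheaply on each run."""
    return hashlib.sha256(json.dumps(rules, sort_keys=True).encode()).hexdigest()

class Workflow:
    """Design once with the agent; thereafter run deterministic code."""

    def __init__(self, rules: dict):
        self.fingerprint = spec_fingerprint(rules)
        self.code = self._engage_agent(rules)  # the expensive step, run once
        self.audit_log = []

    def _engage_agent(self, rules: dict):
        # Placeholder for the one-time LLM call that would reason through
        # the problem and emit deterministic reconciliation code.
        tolerance = rules["tolerance"]
        return lambda ours, theirs: [
            acct for acct in ours
            if abs(ours[acct] - theirs.get(acct, 0.0)) > tolerance
        ]

    def run(self, ours: dict, theirs: dict, rules: dict):
        if spec_fingerprint(rules) != self.fingerprint:
            # Rules changed: the rare, expensive path. Re-engage the agent.
            self.fingerprint = spec_fingerprint(rules)
            self.code = self._engage_agent(rules)
        # The common path: pure deterministic execution, no tokens consumed.
        breaks = self.code(ours, theirs)
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "breaks": breaks,
        })
        return breaks

wf = Workflow({"tolerance": 0.01})
breaks = wf.run({"A": 100.0, "B": 50.0}, {"A": 100.0, "B": 49.5},
                {"tolerance": 0.01})
# "B" is flagged: its 0.5 difference exceeds the tolerance.
```

Nightly runs hit only the deterministic branch and append to the audit trail; the agent is paid for again only when the fingerprint changes, which is the cost profile the paragraph above describes.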
This is not great news for AI labs whose business models run on consumption. But it is the only architecture that allows a fund operations team to grow assets under management without growing headcount or absorbing a token bill that scales with every new mandate.
The distinction matters more than it might appear. Most generic AI tools are not designed with this constraint in mind. They are built to answer questions, not to generate and deploy operational code. They excel at the conversational layer and struggle at the handoff — the point where reasoning should produce something durable, auditable, and self-executing rather than another response to be read and acted on manually.
The hard part, and where purpose-built platforms differ from general-purpose tools, is knowing when to deploy an agent and when to simply run the code that already works.
Getting that distinction right is what separates AI adoption that compounds in value from AI adoption that compounds in cost.