AI in finance: At the borders
This article is authored by Prof. Vidhu Shekhar, Associate Professor of Finance and Accounting at S.P. Jain Institute of Management and Research (SPJIMR).
Across much of the global economy, artificial intelligence has moved from experimentation to deep integration. In information technology, AI systems write and review code, manage cloud infrastructure, and respond autonomously to cyber threats. In logistics, algorithms dynamically reroute inventory across warehouses, making millions of operational decisions each day without human intervention. Large language models, the most visible face of the current AI wave, now draft contracts, generate marketing material, and synthesise research at scale.
In these domains, AI does more than assist human judgement. It makes decisions. Authority has been delegated to systems, and performance is evaluated through aggregate outcomes rather than individual responsibility.
Finance stands apart. Despite heavy investment and persistent claims of ‘AI-driven finance’, artificial intelligence remains largely at the borders of the financial system. A recent KPMG survey captures the gap. Seventy-one per cent of financial firms report using AI to some degree, yet only about forty-one per cent have moved beyond pilots to moderate or large-scale adoption. The technology is widespread and often useful, but structurally peripheral. It supports finance professionals rather than replacing them.
This is not a story of technological immaturity or regulatory drag. It reflects constraints that other sectors simply do not face.
Authority under uncertainty
At its core, finance is a system for allocating capital under uncertainty, where decisions are binding, losses are asymmetric, and responsibility must be owned.
Every significant financial decision carries authority. Someone or some institution decides to lend, to underwrite, to invest, or to assume risk on a balance sheet. That authority cannot be probabilistic or diffuse. It must be attributable before the decision is made, not reconstructed after the fact. When things go wrong, losses appear on balance sheets, propagate through counterparties, and test institutional stability.
This is the deepest reason AI remains peripheral in finance. Artificial intelligence excels where authority has already been delegated to machines. In information technology, algorithms already control infrastructure. In marketing, systems determine exposure and pricing. In logistics, software routes goods. AI improves decisions that machines were already authorised to make.
In finance, that delegation never occurred. Even a highly accurate model cannot own a credit decision or an investment outcome. Accuracy does not confer legitimacy. The constraint is not merely that AI cannot always explain itself. It is that explanations do not carry authority.
Irreversibility and systemic risk
A second boundary lies in the nature of financial error. In many AI-intensive sectors, mistakes are local and reversible. A recommendation algorithm can be retrained overnight. A routing error can be corrected mid-journey. A mispriced advertisement can be withdrawn. Failure is expected and absorbed.
Financial errors behave differently. A poor lending decision cannot be recalled once capital is deployed. A misjudged risk exposure cannot be unwound without affecting markets. Errors propagate through leverage, correlation, and confidence.
The 2008 financial crisis remains instructive, and not because the models were opaque or exotic; many were widely understood and widely trusted. The failure was systemic: under stress, shared assumptions produced correlated errors that amplified risk across institutions. This makes finance hostile to autonomous decision-making systems. Even interpretable models, when deployed at scale, can magnify instability rather than contain it.
Learning limits in reflexive systems
A third boundary is epistemic. Financial markets are forward-looking and reflexive. Prices embed expectations about the future, not stable patterns from the past. The act of prediction alters behaviour, erodes signals, and reshapes the environment being modelled.
AI systems learn from historical data. That approach works well in domains with stable underlying processes. Finance is not such a domain. Regimes shift. Incentives adapt. Strategies that work attract capital and then stop working. Trading signals that appear robust in backtests often decay once they become widely known and the market responds.
This does not render AI useless in finance. It clarifies where its strengths lie. Machine learning excels at optimising execution, improving risk surveillance, detecting anomalies, and strengthening infrastructure rather than discovering durable alpha.
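To make that division of labour concrete, here is a minimal sketch of machine learning as surveillance rather than authority: an isolation forest flags unusual trades for a human risk desk to review. The data is synthetic and the feature names, contamination rate, and thresholds are illustrative assumptions, not a production risk model.

```python
# A sketch of AI as surveillance, not authority: the model flags
# anomalies; a human risk desk owns every resulting decision.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic trade features: notional size, leverage, counterparty
# concentration. Stand-ins for a real surveillance feature set.
normal_trades = rng.normal([1.0, 2.0, 0.1], [0.2, 0.5, 0.05], size=(1000, 3))
stressed_trades = rng.normal([5.0, 8.0, 0.6], [0.5, 1.0, 0.1], size=(10, 3))
trades = np.vstack([normal_trades, stressed_trades])

# `contamination` encodes an assumed share of anomalies, a judgment
# that a risk committee, not the model, has to own.
detector = IsolationForest(contamination=0.01, random_state=42).fit(trades)
flags = detector.predict(trades)  # -1 = anomalous, 1 = normal

# The model queues items for review; it decides nothing.
for idx in np.where(flags == -1)[0]:
    print(f"Trade {idx} flagged for human review: {trades[idx].round(2)}")
```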
Where AI actually fits
Seen through this lens, current AI deployment in finance is coherent. In banking and credit, production systems rely heavily on decision trees and gradient-boosted models: structured combinations of simple rules that balance predictive power with interpretability. In India, the Reserve Bank’s guidelines on algorithmic lending reinforce this preference, but the logic precedes regulation. Institutions that absorb losses demand models that can be justified and defended.
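As a rough illustration of that pattern, the sketch below trains a gradient-boosted classifier on synthetic credit data. The feature names, labels, and approval threshold are assumptions for exposition, not any institution's actual scorecard. The point is structural: the model supplies a probability, while the cut-off and the justification material remain explicit artefacts the institution owns.

```python
# A sketch of the interpretable-by-construction pattern: a boosted
# model scores risk; the threshold and reason codes stay inspectable.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic applicant features (all names illustrative).
income = rng.normal(50, 15, n)        # annual income, thousands
debt_ratio = rng.uniform(0, 1, n)     # debt-to-income
delinquencies = rng.poisson(0.5, n)   # past missed payments
X = np.column_stack([income, debt_ratio, delinquencies])

# Synthetic default labels driven by the same features.
logit = -2.0 - 0.03 * income + 3.0 * debt_ratio + 0.8 * delinquencies
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier(max_depth=3, n_estimators=100).fit(X, y)

# The model supplies a probability; credit policy owns the threshold.
p_default = model.predict_proba(X[:1])[0, 1]
decision = "decline" if p_default > 0.20 else "approve"

# Global importances double as defensible reason-code material.
for name, imp in zip(["income", "debt_ratio", "delinquencies"],
                     model.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
print(f"Applicant 0: P(default)={p_default:.2f} -> {decision}")
```

Note the design choice: nothing in the pipeline delegates the lending decision itself; the model's output is one input to a rule a human institution can state, audit, and defend.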
In insurance, neural networks may assess images such as vehicle damage, but final decisions revert to rules and human oversight. AI assists perception rather than controlling payouts. In wealth management, machine learning supports segmentation, reporting, and anomaly detection. Portfolio construction and discretionary investment decisions remain human responsibilities.
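A schematic of that "AI assists perception, rules and humans decide" pattern might look like the following, where `vision_model_score` is a hypothetical stand-in for a real image model and the rupee thresholds are invented for illustration.

```python
# A sketch of the perception-assist pattern: a model estimates damage,
# while deterministic rules and a human adjuster keep decision authority.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    estimated_repair_cost: float   # supplied by the perception model
    policy_limit: float

def vision_model_score(image_bytes: bytes) -> float:
    """Stand-in for a neural network mapping a damage photo to an
    estimated repair cost. Here it is a fixed stub."""
    return 42_000.0  # hypothetical estimate in rupees

def route_claim(claim: Claim) -> str:
    """Deterministic rules decide; the model only supplied an input."""
    if claim.estimated_repair_cost > claim.policy_limit:
        return "deny: exceeds policy limit (human review mandatory)"
    if claim.estimated_repair_cost > 25_000:
        return "escalate: human adjuster must approve payout"
    return "fast-track: auto-approve small claim, audited post hoc"

claim = Claim("CLM-001", vision_model_score(b"<photo>"), policy_limit=100_000.0)
print(route_claim(claim))  # -> escalate: human adjuster must approve payout
```

Here the neural network never controls the payout; it narrows the adjuster's attention, which is exactly the boundary the article describes.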
Generative AI fits this pattern well. Large language models draft reports, summarise regulatory filings, and assist client communication. They are fluent, fast, and increasingly capable of synthesising complex material. But they do not decide which risks to take or which assets to buy. They cannot be cross-examined in court or held accountable for a credit denial. The distinction matters: AI assists in writing up the work of finance, not in doing it.
Across the sector, AI penetrates where discretion, liability, and systemic impact are lowest. It stops where authority begins.
Borders, not immaturity
This is why describing AI in finance as being ‘at the borders’ is analytically precise. A border is a jurisdictional limit, not a temporary lag. AI strengthens the infrastructure around financial decisions through monitoring, documentation, stress testing, and operational efficiency. It does not cross into the domain where authority and liability reside.
Explainability requirements and regulatory caution matter, but they are consequences rather than causes. The Financial Stability Board’s 2024 assessment flagged model opacity as a systemic risk. The Reserve Bank of India’s evolving stance on algorithmic credit reflects the same concern. Even without regulation, institutions that absorb losses would resist delegating authority to systems that cannot bear responsibility.
What this means going forward
Recognising this boundary is an argument for realism. The real opportunity lies in AI as decision infrastructure, not decision authority.
Better risk monitoring, continuous stress testing, explainable systems, and AI-assisted analysis can expand human capacity without eroding accountability. Progress will be cumulative and institutional rather than revolutionary. It will require collaboration between technologists and regulators to build audit-ready frameworks and evolve validation norms for hybrid systems.
In India, where digital public infrastructure has leapfrogged legacy systems in payments and identity verification, the next frontier is not replacing human judgement but augmenting it within an accountable architecture.
Finance is not lagging behind other sectors. It operates under constraints that others do not face and cannot afford to ignore. AI is everywhere in finance, but it will remain, by design and by necessity, at the borders. The constraint on AI in finance is accountability rather than intelligence.
Disclaimer: The opinions expressed above are of the author and may not reflect the views of DSIJ.
