The Invisible Hand’s New Glove: AI and the Art of Market Manipulation
Dr. Navneet Sharma, Advisor (Governance & Policy), Helicon Consulting

Sayali Shirke / 21 Aug 2025

As India races towards its audacious goal of becoming a $5 trillion economy and achieving Viksit Bharat @2047, the debate on how technology will shape its markets has moved beyond the celebratory rhetoric of efficiency and disruption. Artificial Intelligence, with its capacity for hyper-personalised pricing, algorithmic collusion, and predictive behavioural nudges, is no longer just a tool of competitive advantage; it has become the ‘new glove’ worn by Adam Smith’s invisible hand, capable of shaping outcomes in ways that blur the line between market-driven efficiency and market manipulation across markets, including capital markets.

Robust capital markets - transparent, deep, and accessible - are essential to channel resources into sectors and firms with the highest productivity potential. AI, in its best use, can enhance capital allocation by refining risk assessment, detecting fraud, and enabling smarter investment decisions. In its worst use, it can distort price signals, misprice assets, and create bubbles detached from real economic productivity. The question is not whether AI will transform Indian markets, but whether this transformation will tilt them towards open, pro-competitive growth or entrench anti-competitive power. 

A recent Wharton simulation of AI trading agents has made market watchers take notice: even simple reinforcement-learning bots, left to their own devices, learned to tacitly coordinate prices in ways that looked very much like collusion. The paper, now circulating as NBER Working Paper No. w34054, shows that AI-powered traders can, without explicit instruction to cooperate, evolve strategies that raise collective profits at the expense of market competitiveness and price efficiency. This finding has been amplified by sharp-edged coverage in the global press: Fortune warned of ‘artificial stupidity’ where unsupervised bots form cartels, while Bloomberg reported the experiment as a wake-up call for regulators, even as some commentators argued that profit-seeking coordination is a core market dynamic. Together, these pieces demand that India think hard about what algorithmic markets mean for Dalal Street and the broader financial ecosystem.
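To see how such coordination can emerge without any ‘agreement’, consider a toy sketch in the spirit of the experiment described above (this is not the NBER paper’s actual setup; the demand function, parameters, and quote grid are illustrative assumptions): two Q-learning agents repeatedly post quotes, observe only each other’s last quote, and chase their own profit.

```python
# Toy illustration of tacit algorithmic coordination (assumptions, not the NBER setup):
# two Q-learning agents post quotes in a simple duopoly-style game and frequently
# settle above the static Nash quote (~1.5 under this toy demand), i.e. they learn
# mutually restrained pricing with no instruction to cooperate.
import numpy as np

rng = np.random.default_rng(0)

PRICES = np.linspace(1.0, 2.0, 11)   # discrete quote grid
COST = 1.0                           # marginal cost
N_ACTIONS = len(PRICES)
ALPHA, GAMMA, EPISODES = 0.1, 0.95, 50_000

def profits(p1, p2):
    """Order flow tilts towards the cheaper quote (logit-style split)."""
    d1 = np.exp(-4 * p1) / (np.exp(-4 * p1) + np.exp(-4 * p2))
    return (p1 - COST) * d1, (p2 - COST) * (1.0 - d1)

# Each agent's state is the rival's last quote; one Q-table per agent.
Q = [np.zeros((N_ACTIONS, N_ACTIONS)) for _ in range(2)]
state = [0, 0]

for t in range(EPISODES):
    eps = max(0.02, 1.0 - t / (0.8 * EPISODES))          # decaying exploration
    acts = [rng.integers(N_ACTIONS) if rng.random() < eps
            else int(np.argmax(Q[i][state[i]])) for i in range(2)]
    rewards = profits(PRICES[acts[0]], PRICES[acts[1]])
    next_state = [acts[1], acts[0]]                      # each observes the rival
    for i in range(2):
        target = rewards[i] + GAMMA * Q[i][next_state[i]].max()
        Q[i][state[i], acts[i]] += ALPHA * (target - Q[i][state[i], acts[i]])
    state = next_state

final_quotes = [float(PRICES[int(np.argmax(Q[i][state[i]]))]) for i in range(2)]
print("Learned quotes:", final_quotes, "| static Nash ≈ 1.5 | joint-profit peak = 2.0")
```

The point is not the specific numbers but the mechanism: reward-seeking agents that can observe one another’s quotes may drift towards mutually restrained pricing without exchanging a single message a regulator could subpoena.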

At the most basic level, the experiment reminds us of two simple truths. First, markets change when the cost of information and execution falls; AI radically lowers both. Second, competition policy and market regulation were designed for a world of human traders and paper trails. When decision-making migrates to self-learning algorithms that adapt in milliseconds, regulators cannot rely on the same investigative tools or legal constructs alone. India is not immune. We have seen rapid growth in algorithmic trading, a rising presence of sophisticated trading desks, and growing interest in machine-assisted strategies beyond institutional investors. SEBI’s public consultation on guidelines for ‘responsible usage of AI/ML in the Indian securities market’ is therefore timely and necessary. The regulator’s draft, which stresses model governance, mandatory disclosure, testing, monitoring and data security, recognises both the promise and the peril of this technology.

The policy questions India must answer are practical and immediate. If algorithms can arrive at collusive outcomes without humans ‘talking’ to one another, how should regulators prove wrongdoing? Traditional antitrust enforcement focuses on agreements and intent; algorithmic ‘tacit collusion’ strains that framework. The CCI has already signalled awareness: it launched a market study on AI and competition and its leadership has publicly warned that AI could enable ‘cartels without human communication.’ That is an important shift in mindset: CCI is asking not only whether competitors spoke, but whether market design and tools enable anti-competitive outcomes. 

The recently released 25th Report of the Standing Committee on Finance takes cognizance of competition concerns arising from artificial intelligence and algorithmic collusion. The CCI informed the Committee that: ‘Digital markets often transcend national borders, with major players operating globally. Anticompetitive practices by these firms may have localized effects in India but originate from actions taken in other jurisdictions and addressing such cross-jurisdictional issues requires CCI to collaborate with international competition authorities.’

Yet there is another side to the coin. Economists and some market commentators rightly point out that strategies that look like coordination can also deliver improved market efficiency under certain conditions. If algorithms reduce frictions, narrow bid-ask spreads, and accelerate price discovery, retail and institutional investors can benefit in terms of lower costs and faster execution. Let us not forget that markets naturally evolve: if intelligent agents remove arbitrage opportunities and enforce more efficient prices, that is partly ‘how markets work.’ The difficulty for policymakers is to distinguish efficiency-enhancing dynamics from foreclosure or coordinated harm. The Indian debate must be nuanced enough to avoid hampering innovation that benefits investors, while firm enough to stop emergent harms that threaten market integrity. 

Translating this into an operational agenda for the country requires several steps. First, SEBI needs to deepen and accelerate its work on ‘model governance.’ Algorithms used for trading should be registered, tested under stress scenarios, and audited periodically. SEBI’s consultation rightly contemplates ‘white-box’ and ‘black-box’ distinctions; where models are opaque, compensatory controls - traceable logs, mandatory kill-switches, and third-party audits - should be mandatory. Regulators must be able to rewind the tape and see how a decision was reached. This is not about policing code for its own sake; it is about reconstructing causal chains when markets move oddly.
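What such compensatory controls could look like in practice is sketched below, under stated assumptions: the class names (OrderDecision, AuditLog, KillSwitch), thresholds, and log format are hypothetical illustrations, not SEBI-prescribed requirements.

```python
# Minimal sketch of compensatory controls for an opaque trading model:
# every decision goes to an append-only, hash-chained audit log (so the tape
# can be rewound), and a kill-switch halts the strategy when limits are breached.
# All names and thresholds here are illustrative assumptions.
import json, time, hashlib
from dataclasses import dataclass, asdict

@dataclass
class OrderDecision:
    symbol: str
    side: str           # "BUY" or "SELL"
    qty: int
    model_version: str
    features_hash: str  # hash of model inputs, so the decision can be replayed

class AuditLog:
    """Append-only, hash-chained decision log for post-hoc reconstruction."""
    def __init__(self, path="decisions.log"):
        self.path, self.prev_hash = path, "0" * 64

    def record(self, decision: OrderDecision):
        entry = {"ts": time.time(), "prev": self.prev_hash, **asdict(decision)}
        raw = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(raw.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(raw + "\n")

class KillSwitch:
    """Halts trading when a simple loss or order-rate limit is breached."""
    def __init__(self, max_daily_loss=1_000_000, max_orders_per_min=500):
        self.max_daily_loss = max_daily_loss
        self.max_orders_per_min = max_orders_per_min
        self.daily_pnl, self.orders_this_min = 0.0, 0
        self.halted = False

    def check(self):
        if (self.daily_pnl < -self.max_daily_loss
                or self.orders_this_min > self.max_orders_per_min):
            self.halted = True
        return not self.halted

def route_order(decision: OrderDecision, audit: AuditLog, ks: KillSwitch):
    if not ks.check():
        raise RuntimeError("Kill-switch engaged: strategy halted pending review")
    audit.record(decision)
    ks.orders_this_min += 1
    # ... hand off to the actual execution layer here ...
```

The design choice worth noting is the hash chain: because each log entry commits to the previous one, an investigator can verify that the decision trail has not been edited after the fact.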

Second, market infrastructure must be strengthened. Stock exchanges and the clearing ecosystem should build real-time surveillance tools that monitor for coordinated patterns not explained by fundamentals: recurrent price clustering at non-fundamental levels, mutual restraint in aggressive liquidity provision, or episodic gaps in arbitrage responses across venues. Exchanges are often the first to spot anomalies; they should be equipped and mandated to share such signals with SEBI and other investigators. 
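One of the surveillance signals mentioned above, recurrent price clustering at non-fundamental levels, can be illustrated with a rough sketch; the thresholds and the ‘fair value’ input are placeholder assumptions, not actual exchange practice.

```python
# Rough sketch of a clustering alert: flag when an unusually large share of
# trades prints on a single tick that sits away from a fair-value estimate.
# Thresholds and the fair-value input are illustrative assumptions.
import numpy as np

def clustering_alert(trade_prices, fair_value, tick=0.05,
                     min_share=0.4, max_dev_ticks=0):
    """Return (flag, details) if trades cluster on one off-fair-value tick."""
    ticks = np.round(np.asarray(trade_prices) / tick).astype(int)
    values, counts = np.unique(ticks, return_counts=True)
    mode_tick = values[counts.argmax()]
    mode_share = counts.max() / len(ticks)
    dev_ticks = abs(mode_tick - round(fair_value / tick))
    flag = mode_share >= min_share and dev_ticks > max_dev_ticks
    return flag, {
        "mode_price": mode_tick * tick,
        "mode_share": round(float(mode_share), 3),
        "ticks_from_fair_value": int(dev_ticks),
    }

# Example: 45% of trades printing two ticks above a fair-value estimate
flag, details = clustering_alert(
    trade_prices=[100.10] * 45
    + list(np.random.default_rng(1).normal(100.0, 0.1, 55)),
    fair_value=100.0,
)
print(flag, details)
```

A real surveillance system would of course combine many such signals across venues and over time; the sketch only shows how a single anomaly can be made measurable and shareable with SEBI.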

Third, competition policy needs adaptation. The Competition Commission of India, based on the findings of its AI study, may collaborate with SEBI. Enforcement of competition law has typically been ex-post and forensic; algorithmic markets call for a hybrid approach that blends ex-ante safeguards with sharper ex-post tools. 

Fourth, there is a role for disclosure and industry standards. Market participants should be required to disclose when AI/ML agents are being used for trading, the governance framework around those agents, and the testing regimes undertaken. Disclosure will not solve collusion, but it increases accountability and enables market participants and regulators to better understand systemic risk.
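A disclosure requirement of this kind might collect a record along the following lines; the field names and values are hypothetical illustrations, not any prescribed SEBI format.

```python
# Hypothetical AI/ML trading disclosure record (illustrative fields only).
ai_trading_disclosure = {
    "participant": "Example Broking Ltd (hypothetical)",
    "agent_id": "momentum-ml-v3",
    "uses_ai_ml": True,
    "model_class": "gradient-boosted trees with ML-assisted execution",
    "human_oversight": "pre-trade risk checks; trader sign-off on strategy changes",
    "governance": {
        "model_owner": "Head of Quant Trading",
        "last_independent_audit": "2025-06-30",
        "kill_switch_tested": True,
    },
    "testing": {
        "backtest_window": "2019-01-01 to 2025-03-31",
        "stress_scenarios": ["volatility spike", "exchange outage drill"],
    },
}
```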

Fifth, legal doctrines must evolve. Courts and tribunals in India will sooner or later face cases where algorithms are implicated in anti-competitive outcomes. Legislators and judges should be primed to interpret the Competition Act and securities laws in a manner that recognises algorithmic agency and the unique evidentiary challenges it presents. That could mean shifting burdens in certain cases, allowing for presumptions where patterns are striking, or mandating greater transparency from dominant market players. 

Throughout, two traps need to be avoided. One is paranoia that freezes innovation; the other is complacency that assumes markets self-correct. India’s policy conversation, evident in the 25th Report of the Standing Committee on Finance, MEITY’s public consultations on AI governance, SEBI’s consultation paper, and the CCI’s market study, is moving in the right direction. But the pace can be quickened. Our markets are increasingly digitised and globally interconnected. Delays in rulemaking or enforcement will make it harder to unwind undesirable equilibria later.

Finally, there is a democratic dimension. Algorithmic markets affect ordinary savers as much as institutional traders. When algorithms conspire, intentionally or emergently, to raise prices or damp liquidity, it is small investors who are most likely to suffer. Regulatory design should therefore prioritise market integrity and fairness over narrow innovation arguments. That means rigorous public consultation, clear rules of the road, and investment in the regulatory capacity needed to supervise code at scale. 

This tension matters because competitive markets are not an end in themselves; they are the primary engine for efficient and effective capital utilisation. India’s growth story cannot be fuelled by misallocated capital chasing short-term arbitrage or rent-seeking. In the sprint towards Viksit Bharat @2047, the quality of our capital allocation will matter no less than the quantity of our growth.
