Essay · 22 April 2026

The configuration layer

Why the valuable Indian AI work over the next three years is not building frontier models but wrapping the ones that exist for the institutions and principals who will not build that wrapper themselves.

I · The claim

The valuable Indian AI work over the next three years will not be building frontier models. It will be configuring the models that exist for the institutions and principals who cannot configure them themselves.

This is an unpopular claim in a country that has committed ₹10,000 crore to a sovereign AI mission with foundation models as one of its pillars. It runs against the grain of national-champion rhetoric and the Krutrim–Sarvam–TCS storyline that dominates industry panels. It also happens to be what the economics of the stack say.

II · The model is becoming a utility

Look at the price of intelligence. A frontier token cost about a hundredth of a cent in 2023, roughly a thousandth of a cent in 2024, and something like a ten-thousandth of a cent in 2025. Anthropic, OpenAI, Google, and the Chinese open-weights labs are all converging on similar quality for most tasks. The gap between the frontier and the near-frontier has narrowed from generations to months.

What this means operationally is that the raw capability of a language model is increasingly commoditised. An Indian startup in 2026 can pick any of five frontier labs, pay less per token than it paid for electricity last year, and get a model that passes most of the benchmark tests that mattered two years ago.

The remaining scarcity is not capability. It is integration.

III · What is actually scarce

The frontier model, out of the box, does not know your voice. It does not know your rules. It does not remember what you worked on yesterday. It does not know which of your colleagues you trust for which kind of information. It does not know the bylaw your ministry wrote about deepfakes, or the three sentences your minister will never use under any circumstances, or the fact that your tea vendor's name is spelled Rameshwar with an H.

Every one of those is small on its own. Every one of those is what separates a useful assistant from an expensive party trick. None of them get fixed by a bigger model. They get fixed by the wrapper.

The configuration layer is the software between the user and the model that holds all of this institutional and personal context. Voice rules. Operating protocols. Memory of prior interactions. Integration with the user's actual tools. The layer that turns an excellent Claude into an excellent Claude-for-me, or Claude-for-our-ministry, or Claude-for-Delhi-High-Court.
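To make the idea concrete, here is a minimal sketch of what that layer might hold, assuming a hypothetical `InstitutionConfig` structure (the names and fields are illustrative, not any real product's schema): voice rules, operating protocols, and memory, composed into the system prompt that precedes every model call.

```python
from dataclasses import dataclass, field

@dataclass
class InstitutionConfig:
    """Illustrative sketch of the context a wrapper holds per institution."""
    voice_rules: list[str] = field(default_factory=list)   # how drafts must sound
    protocols: list[str] = field(default_factory=list)     # operating constraints
    memory: list[str] = field(default_factory=list)        # notes from prior sessions

    def system_prompt(self) -> str:
        """Compose the institutional context into one system prompt for the model."""
        sections = [
            ("Voice rules", self.voice_rules),
            ("Operating protocols", self.protocols),
            ("Prior context", self.memory),
        ]
        parts = []
        for title, items in sections:
            if items:  # skip empty sections so the prompt stays tight
                parts.append(title + ":\n" + "\n".join(f"- {item}" for item in items))
        return "\n\n".join(parts)

ministry = InstitutionConfig(
    voice_rules=["Formal register; no contractions",
                 "The tea vendor's name is spelled 'Rameshwar'"],
    protocols=["Never reproduce draft Cabinet notes in outputs"],
)
print(ministry.system_prompt())
```

The point of the sketch is that none of this touches model weights: the same frontier model behind two different configs becomes two different assistants.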

IV · Why this matters more in India

Three reasons, in increasing order of importance.

First, the Indic-language quality gap. Anthropic, OpenAI, and Google optimise for English and code. They do a creditable job on Hindi, a poorer job on Bengali, Tamil, Marathi, Gujarati, Telugu, Kannada, Malayalam, Punjabi, Odia, Assamese. The gap is wide enough that institutional users who actually work in these languages end up unhappy in ways English-primary users do not. A wrapper that post-trains on the institution's own Indic-language corpus closes the gap in weeks, not years.

Second, the sectoral depth gap. Ministerial drafting has conventions that no commercial chat model has ingested. Nationalised-bank compliance language has conventions that no commercial chat model has ingested. Indian legal drafting, the particular rhythm of a Supreme Court rejoinder and the expected structure of a writ petition, has conventions that no commercial chat model has ingested. These are learnable. They are not being learned by the frontier labs because the frontier labs do not have the access, the interest, or the commercial incentive.

Third, and this is the one that matters most, the institutional access gap. An Indian ministerial office will not paste sensitive draft notes into ChatGPT. A PSU corporate-communications team will not run its chairman's speech through Claude.ai without a legal review that takes six weeks. A Supreme Court chamber will not route client correspondence through an American-hosted model. These institutions need the capability but need it wrapped in an operating model they control. That wrapper is configuration, not capability.

V · Who buys this

Ministerial offices doing policy drafting and speechwriting. There are 54 Union ministers. Each has a private office of five to ten people drafting prodigious amounts of text that has to sound consistent and accurate and in-voice. This is a solvable problem. It is not being solved by Anthropic.

Public-sector enterprises doing corporate communications. There are around 300 central PSUs in India, and a dozen or so of them, including NTPC, IOC, SAIL, ONGC, and the major public-sector banks, run communications teams producing thousands of pages a year under tight institutional voice constraints. Each of them would benefit from a wrapper on a frontier model that enforces house style. None of them are being sold one.

Institutions doing drafting at scale. Supreme Court chambers and senior advocates' firms. Tier-one hospitals running discharge summaries and clinical notes. Consulting firms doing deliverables. Rating agencies, asset managers, media organisations, academic institutions producing press releases. Each has a voice and a process. Each could get a tenfold speedup on drafting from a well-configured wrapper. None of them are being sold one.

Professionals doing drafting at individual scale. Two million lawyers, a million-plus doctors, several million teachers. A fraction of them are heavy writers. For that fraction, the wrapper is not enterprise software; it is a personalised assistant configured for their practice. Different product, different price point, different sales motion, same underlying technology.

VI · The Shopify move

Shopify did not beat Amazon. Shopify gave every merchant a way to sell their own thing on their own terms using the same underlying capabilities Amazon had. The equivalent for AI is: every principal, every institution, every practitioner gets their own configured Claude that speaks in their voice and follows their rules and uses their tokens.

The model is a utility. The configuration is the product.

This is not a metaphor for investors to enjoy. It is a specific bet about where value accrues as the underlying technology commoditises. The model layer gets cheaper and more abundant. The integration layer gets more valuable because every institution's requirements are different, and because the cost of getting integration wrong is higher than the cost of getting the model wrong.

VII · Why the incumbents are not coming

Anthropic is not going to fine-tune Computer Use for Swiggy's dispatch flow. OpenAI is not going to build voice-matched drafting for the Ministry of External Affairs. Google is not going to customise Gemini for the way a bench at the Delhi High Court reads a rejoinder. Those labs are American companies optimising for American and European enterprise customers, with scale metrics that require one-size-fits-many products. Indian institutional configuration work is long-tail, low-margin, and requires on-the-ground sales and drafting expertise that none of these labs have in depth in India.

This is the opening. The work will be done locally or not at all.

VIII · What it takes to build

The technical bar is not high. A configuration layer is, roughly: prompts plus retrieval plus memory plus tool integration plus maybe some supervised fine-tuning on the institution's own corpus. Every one of those is a well-documented pattern in 2026. An engineer with a year of LLM experience can build the first version in a week. The post-training explainer walks through the technical pieces in detail.
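The "prompts plus retrieval" half of that recipe fits in a few lines. The sketch below uses naive keyword-overlap retrieval over the institution's own documents purely to stay self-contained; a real wrapper would use embeddings, and all the names here (`retrieve`, `build_prompt`, the sample corpus) are hypothetical.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank the institution's documents by word overlap with the query.
    Stand-in for embedding search; keeps the sketch dependency-free."""
    query_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(query_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus: list[str], voice_rules: str) -> str:
    """Prompts + retrieval: the two cheapest pieces of a configuration layer."""
    context = "\n".join(retrieve(query, corpus))
    return f"{voice_rules}\n\nRelevant house documents:\n{context}\n\nTask: {query}"

corpus = [
    "House style: headings in sentence case, dates as 22 April 2026.",
    "Template: every press note opens with the ministry's full name.",
    "Archive: 2024 annual report summary.",
]
prompt = build_prompt("draft a press note on the annual report",
                      corpus, "Write in the ministry's formal voice.")
```

Memory and tool integration add plumbing but no new ideas; supervised fine-tuning on the institution's corpus is the only step that touches training at all.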

The specialist skill is voice capture. Turning the way a minister speaks, or a PSU chairman writes, or a High Court judge structures a paragraph into a set of rules that a model can follow is not an engineering problem. It is a communications problem. It is the kind of work communications professionals, speech writers, and editorial-voice specialists do. It is not taught at any Indian engineering college.
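Once the communications work has produced the rules, mechanising them is straightforward. A hedged sketch: the rules below are invented examples of what a voice specialist might extract, encoded as patterns that flag a draft for review before it leaves the wrapper.

```python
import re

# Hypothetical voice rules captured from a principal's past speeches:
# each is a (description, regex that detects a violation) pair.
VOICE_RULES = [
    ("no contractions", re.compile(r"\b\w+'(t|re|ll|ve)\b", re.IGNORECASE)),
    ("never uses 'game-changer'", re.compile(r"game.?changer", re.IGNORECASE)),
    ("never opens with 'I am pleased'", re.compile(r"\AI am pleased")),
]

def check_voice(draft: str) -> list[str]:
    """Return the descriptions of every voice rule the draft violates."""
    return [desc for desc, pattern in VOICE_RULES if pattern.search(draft)]

check_voice("I am pleased to announce this game-changer; we can't wait.")
```

The hard part is upstream: knowing that these three rules, and not thirty others, are what make the voice recognisable. That is the communications judgment the paragraph above describes.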

The advantage here is structural. India has more communications talent with ministerial-level drafting experience than any country outside the United States. Most of them are not in AI companies. Most of the AI companies in India do not have communications talent on the team. The two are passing each other in the corridor.

The distribution channel is institutional networks. This is not a market to win through product-led growth or influencer marketing. It is won through the senior communications adviser at one ministry introducing the tool to her counterpart at another, through the chairman of one PSU mentioning it to another at the annual meet, through the senior partner at one law firm telling the senior partner at another that the thing works. Those networks are open and under-exploited.

IX · What happens if nobody builds this

The same thing that happens when any generally useful technology arrives in India with no local configuration layer. Some institutions ignore it. Some paste their sensitive drafts into foreign-hosted web chatbots in defiance of policy. The ministries and regulators slowly accumulate the wrong kinds of stories about AI doing harm, because the AI they see is the unconfigured kind doing unconfigured things in uncontrolled settings.

The configuration layer is also the compliance layer. Institutions that cannot deploy the technology safely through a trusted local wrapper will deploy it unsafely through ad-hoc personal use. The policy failure and the commercial failure are the same failure.

The argument is not that the government should build this. It is that someone should build it, and that someone is almost certainly a small set of Indian operators with communications depth, technical depth, and institutional access. Three people with the right skills and the right rolodex can cover more institutional ground in six months than a hundred engineers can.

X · What comes next

The rest of 2026 on this site maps the emerging shape of this market. The explainers go deeper on the technical pieces: what a configuration layer actually contains, how voice capture works in practice, what the post-training pipeline looks like for an institution-specific model. The briefings track who is building what, and what is breaking. The essays argue the policy and commercial cases as they move.

The model is a utility. The configuration is the product. Someone is going to build this. If that someone is you, get in touch.
