Rethinking AI: is the future of artificial intelligence frugal?

13 March 2026

The article at a glance

What if the future of artificial intelligence wasn’t about building ever-bigger systems, but about building smarter ones?

Professor Jaideep Prabhu

That question sits at the heart of the second series of the Cambridge Executive Business Insights podcast, hosted by Jaideep Prabhu, Professor of Marketing at Cambridge Judge Business School. The new series, titled Rethinking AI, explores the emerging concept of frugal AI, an approach to artificial intelligence that focuses on doing more with less. It looks not only at how AI models are built, but also at how organisations are using existing tools in smart, targeted and cost‑effective ways across sectors and geographies.

This series will bring together academics, industry leaders and innovators to explore how artificial intelligence can be developed and deployed in ways that are more efficient, accessible and sustainable. Episodes range from deep dives into the development of large language models (LLMs), to practical case studies of firms that are creatively repurposing off‑the‑shelf AI to solve real problems with limited budgets, data and technical capacity.

The assumption that progress in AI simply means building ever larger models is starting to be questioned. There's enormous opportunity in finding smarter, more efficient ways to create value with AI.

Professor Jaideep Prabhu

Why frugal AI matters now

This conversation could not be more timely. In recent years, AI development has been dominated by large-scale models requiring vast computing power, huge datasets and enormous energy consumption. While these systems have achieved impressive results, they also raise pressing questions about cost, sustainability and accessibility.

Frugal AI offers an alternative perspective. Instead of assuming abundant resources, it starts from the idea of constraints, such as limited compute power, energy, data or infrastructure, and asks how AI systems can still deliver meaningful value under those conditions.

The goal is not to build weaker systems, but smarter and more purposeful ones. Frugal AI aims to design solutions that are efficient by design, scalable across different environments and accessible to organisations and communities around the world. For many companies, this means learning how to plug existing LLMs and AI services into their operations in lean, carefully scoped ways, rather than embarking on expensive, open‑ended transformation projects.

As the series explores, this approach could help address some of the biggest challenges facing AI today: the environmental footprint of large models, the high costs of implementation for businesses, and the risk that billions of people are left out of the AI revolution. It also highlights how organisations – from startups to multinationals – can experiment with AI incrementally, finding ‘good enough’ solutions that create value quickly without requiring massive new infrastructure or headcount.

Frugal AI in practice

The first episode of the series, which launched last week, is entitled Frugal AI in practice: smarter, leaner and more responsible intelligence, and explores how this concept is already being applied in real-world settings.

Joining Professor Prabhu on the podcast are Serish Gandikota, Elizabeth Osta, and Arjuna Sathiaseelan, all closely involved in the work of the Frugal AI Hub at Cambridge Judge.

Together, they discuss how traditional AI development often assumes an ‘innovation of abundance’ with plentiful computing resources, energy and data. But much of the world operates under very different conditions. The frugal AI approach aims to bridge that gap by designing AI systems that can function effectively within real-world constraints such as cost, energy use, infrastructure and skills availability.

The conversation highlights why this matters across multiple sectors. Businesses are grappling with the rising costs and complexities of adopting AI. Governments are increasingly concerned about national AI strategies and technological sovereignty. And in many parts of the world, limitations in infrastructure and language support continue to restrict access to AI technologies.

Frugal AI seeks to address these challenges by building systems that are efficient and capable of delivering value at scale. Later episodes build on this by showcasing frugal AI stories from organisations that are, for example, using LLMs to streamline customer service, augment expert decision‑making or open up new services in emerging markets – all while keeping costs, energy use and organisational disruption under control.

From efficiency to inclusion

The episode also explores how frugal approaches could help ensure that AI benefits a much wider range of users. While today’s AI systems support only a fraction of the world’s more than 7,000 languages, decentralised and smaller-scale models could allow communities to develop AI tools tailored to their own languages and contexts. Similarly, smart use of existing AI platforms can help smaller organisations – including SMEs, NGOs and public‑sector bodies – access capabilities that were previously only available to the largest tech players.

AI shouldn't only work for organisations with vast computing power or budgets. One of the most exciting possibilities is designing systems that can deliver value in many different contexts, from large corporations to small organisations and communities around the world.

Professor Jaideep Prabhu

In that sense, frugal AI is not just about reducing costs or energy use. It is about rethinking how AI is designed so that it becomes more inclusive, responsible and impactful. Rethinking AI: Frugal AI shines a light on both halves of this story – the evolution of AI technologies themselves, and the creative, disciplined ways organisations are putting them to work in practice. Through conversations like these, the new series of Cambridge Executive Business Insights aims to spark a broader discussion about the future of AI.
