
Opinion polls show that people overwhelmingly believe in recycling and practise it regularly, so the goals of circularity are deeply embedded in our lives and societies. Yet the relationship between artificial intelligence (AI) and circular economy goals is much more complex, according to research co-authored by Shahzad (Shaz) Ansari, Professor of Strategy and Innovation at Cambridge Judge Business School.
The research by Shaz builds on his prior studies of framing – the building of shared understanding and the mobilisation of support for an issue or way forward – and argues that circularity as it relates to AI is not only a technical matter but also an interpretive challenge. Different actors disagree not just on who should do what, but on what AI itself is: a solution to environmental problems or a contributor to them.
These competing interpretations shape how circular economy goals are defined, pursued and governed in AI ecosystems.
“While the public believes strongly in circularity, looking at circularity in relation to AI shows that in fact circularity is a contested process owing to several tensions, and that’s what we explore in our research,” says Shaz. “The findings have implications for ecosystem governance, but also for managers and policymakers involved in the interaction of AI with circularity.”
AI’s environmental impact: OpenAI CEO and computer scientists clash on climate costs
While differing views and interests are common in many ecosystems, the research argues that AI's openness to many differing interpretations "complicates the achievement of circularity objectives", because framing is "used by different actors to influence how AI is interpreted and what it implies for circularity".
For example, according to the 2024 essay ‘The Intelligence Age’ by Sam Altman, CEO of ChatGPT owner OpenAI: “AI will … deliver astounding triumphs, including breakthroughs that can help fix the climate.” A very different view was outlined by prominent computer scientist Dr Timnit Gebru (founder of the Distributed Artificial Intelligence Research Institute) and colleagues, who wrote that the development of large language models (LLMs) “has devastating environmental costs, especially when these models cost millions of dollars to train, emitting tons of greenhouse gases into a world already in the throes of climate catastrophe.”
So the research by Shaz and academic colleagues from VU Amsterdam and Amsterdam Business School asks: “What are the key AI circularity framing tensions and interactions that shape how circular economy goals are defined and pursued in an AI ecosystem?” The study then develops a conceptual framework grounded in the tensions reflected in the contrasting remarks by Altman and Gebru.
3 key tensions defining AI’s role in the circular economy
1. Purpose: Is AI solving climate change, or fuelling it? In this tension, AI is framed either as a source of solutions to climate change or as a clear present contributor to it, owing to the huge amounts of energy consumed by LLMs.
2. Strategy: Incremental efficiency vs systemic transformation. Pursuing incremental improvements in current business and technology processes conflicts with demands for systemic transformation of hardware and business models.
3. Governance: Who controls AI sustainability outcomes? This manifests in a battle between internal control by a few powerful tech firms and broader sovereignty involving public actors and external oversight.
“In some ways these types of tensions exist in many public policy issues,” says Shaz. “But such tensions are far more intense when it comes to AI and circularity because AI deployment currently rests with a very small number of dominant private firms compared to more complex inter-organisational arrangements that apply in many other areas such as, for example, healthcare and property development.”
Why circularity in AI depends on ecosystem co-ordination
Circularity, according to the research published in the journal Long Range Planning, involves an ongoing framing struggle rather than a discrete technical task, and the study positions the 3 tensions as the mechanisms through which co-ordination around circularity occurs. While other recent AI research has looked at the dominance of LLM hyper-scalers and the role of complementors, “we extend this line of research on AI ecosystems by theorising how, within a business ecosystem, orchestrators and actors in different framing positions interact to influence the achievement of circularity goals”. The study shows how different actors try to shape what circular AI should mean, and whose definition ultimately prevails.
Business ecosystems, as outlined in previous studies, comprise loosely related yet interdependent actors who create value through both co-ordinated and complementary roles – in contrast to centralised hierarchical control. “In the context of the circular economy, this shifts attention away from firm-level greening or compliance and toward the collective outcomes of interorganisational configurations, namely ecosystems that sustain and regenerate resources rather than deplete them over time,” say Shaz and his co-authors.
How framing struggles shape debates about AI sustainability
Framing theory cited in the research by Shaz dates back to the early 1990s, and was advanced by 2021 research co-authored by Shaz that examines complex framing involving multiple stakeholders. That 2021 study examined how social movements use collective action frames to confront targets and how targets respond to such movements.
The study focused on the Occupy London movement (which grew out of the Occupy Wall Street movement in the US) that protested against growing inequality between the top 1% and the 99%, using the mobilising frame of “capitalism is crisis”. Yet the London protest ended up occupying and targeting not the London Stock Exchange but, just next door, the Church of England’s St Paul’s Cathedral, and the slogan shifted to “What would Jesus do?” – in effect transferring the protest focus from the financial establishment to the Church of England as the key target.
“For scholars, the St Paul’s study threw up unexpected twists in which a potential ally, the famous cathedral, became a target that replaced the initial target,” says Shaz. “This allowed us to study interactions over the course of this year-long protest, and this provided some really important insights – including how frames involve active negotiations over meaning, resulting in what we call ‘meaning-making on the ground’, in which such meanings are subject to spontaneous updates and revisions, large and small, during such interactions.”
Another 2021 study co-authored by Shaz was cited in the circularity research’s focus on the idea of orchestration, examining how AI orchestrators (like big AI firms) and other actors selectively emphasise certain aspects around AI circularity while downplaying others, thus influencing debate and mobilising support for a certain position.
That earlier study, which focused on how an incumbent insurance firm dealt with the emergence of online insurance aggregator platforms in the early 2000s, looked at the dynamics of how shared meaning is constructed across diverse stakeholder groups. The study introduced the idea of “‘multiplexed framing’ – comprised of multiple, non-binary frames” in a way that allows members of the same group to embrace conflicting frames, and quotes US novelist F. Scott Fitzgerald, who said: “The test of a first-rate intelligence is the ability to hold 2 opposed ideas in mind at the same time and still retain the ability to function.”
Tracking AI sustainability debates across policy, academia and industry
The methodology for the circularity research by Shaz and his colleagues involves drawing on policy documents, interviews, industry reports and other publicly available statements issued by circularity-AI ecosystem actors between 2023 and 2025, as collectively these statements “serve as representative articulations of the divergent framings that animate circularity debates”.
“These tensions entail 2 opposing sides, represented by techno-solutionist and techno-scepticist framings of AI,” says the research, which examines reconciliatory framing that is “achieved either through the orchestrator’s adaptation of its original framing or through new entrants and peripheral actors who attempt to reconcile otherwise divergent perspectives.”
Framing positions by technology companies suggesting that tech provides the ultimate fix for climate change and related issues include another statement by Sam Altman of OpenAI: “Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace.” At the other end of this spectrum, the framing position of what the research terms a techno-scepticist was expressed by James Temple in MIT Technology Review: “Altman’s argument that AI will ‘fix the climate’ fundamentally misunderstands the nature of the problem. It casually waves away growing concerns about a technology that is already accelerating proposals for natural-gas plants.”
Framing that seeks to reconcile these positions, what the authors call ‘green AI reconciliatory framing’, is represented by a quote from telecoms firm Ericsson, which says: “Ericsson’s vision is driven by its commitment to enabling Communication Service Providers (CSPs) to break the energy curve (reducing energy use despite data growth).”
The authors suggest, however, that without some substantial meeting of minds between the 2 poles, such framing could “drift into symbolic action” rather than result in tangible progress on climate change. “When circularity arguments overlook the practical benefits AI can offer for environmental improvement, they invite pushback and slow adoption in areas where AI could contribute meaningfully,” the authors say. “When orchestrators rely on circularity framing without implementing substantive changes, alignment becomes superficial and accountability weakens. In such cases, the ecosystem may appear co-ordinated while delivering limited tangible environmental impact.”
Implications for managers and policymakers
For managers, Shaz and colleagues say the circularity research underlines the importance of dual framing that presents circularity and innovation “not as trade-offs but as interdependent imperatives”. This may require managers to develop more granular key performance indicators, such as energy efficiency per AI model, along with a communications strategy that appeals to both technical and non-technical audiences.
The implication for policymakers includes the need for policies and structures that support co-ordinated initiatives while encouraging innovation, and this might entail setting minimum standards for circularity performance and convening forums that allow different stakeholders to engage in constructive dialogue.
Turning AI’s sustainability ambiguity into new value propositions
AI technologies introduce new forms of ambiguity into business ecosystems, and this reshapes how framing can impact alignment of views, the research concludes. “Under these conditions, circularity becomes a site of contestation rather than a straightforward technical goal.
“Our analysis positions circularity not as an external constraint but as an endogenous force whose meaning is continually shaped through framing contests within the ecosystem. We demonstrate how framing dynamics operate as mechanisms of both constraint and enablement, influencing the emergence of new roles and value propositions.”
Featured research
Khanagha, S., Ansari, S. and Ahmadi, S. (2025) “Framing circularity in an AI ecosystem: aligning purpose, strategy, and governance.” Long Range Planning, 59(1): 102601 (DOI: 10.1016/j.lrp.2025.102601)
Fraser, J. and Ansari, S. (2021) “Pluralist perspectives and diverse responses: exploring multiplexed framing in incumbent responses to digital disruption.” Long Range Planning, 54(5): 102016 (DOI: 10.1016/j.lrp.2020.102016)
Reinecke, J. and Ansari, S. (2021) “Microfoundations of framing: the interactional production of collective action frames in the Occupy movement.” Academy of Management Journal, 64(2): 378-408 (DOI: 10.5465/amj.2018.1063)




