Organisations and individuals increasingly turn to artificial intelligence (AI) to answer questions large and small. So what is the role of experts who have traditionally analysed detailed datasets and other information to provide insight and clarity? And how should those experts interact with what AI has produced to add value and win over stakeholders?
Research co-authored by Virginia Leavell, Assistant Professor in Organisational Theory and Information Systems at Cambridge Judge, examines this interplay of AI and experts in making key decisions – with a particular focus on what the research terms “artificial certainty”: the illusion that complex scenarios will play out in a knowable way even though they are inherently uncertain.
Practical implications in balancing AI speed with expert experience

“The research focuses not on whether AI will replace experts, but on the representation strategies that experts can and arguably should adopt in order to maintain their authority as interpreters and mediators between what is knowable and unknowable,” says Virginia. “This holds practical implications for all organisations in balancing the speed and efficiency of AI with the experience that experts bring to complex situations.”
Virginia’s research centres on a comparative study of 2 urban planning organisations in the western US – one named in the study as Ocean and the other as Mountain. Both use the same AI tool, but the experts’ approaches differ widely between the two organisations.
The key finding: when process experts focus on enhancing the technology’s capabilities, audiences mistake its representations for reality, and this undermines the experts’ authority. Yet when experts modulate the AI outputs and integrate this modulation into decision-making, they preserve their authority in the eyes of lay audiences and keep healthy uncertainty alive.
“Based on these insights, we develop a critical distinction between representations of the future versus representations for the future, offering new ways to theorise decision-making under uncertainty as organisations increasingly deploy sophisticated AI systems,” says the research published in the leading journal Organization Science.
AI widens gap between what looks certain and what is truly unknowable
The authors outline how each technological advance over many decades has made representations look more certain – with greater detail, better graphics and more precise numerical outputs – thus creating a “growing gap between the apparent certainty of representations and the inherent uncertainty of what they represent”.
Yet AI has “dramatically accelerated this trend” by placing complex and uncertain future dynamics at the fingertips of organisational decision makers, and AI outputs can be perceived not as approximations but as definitive descriptions of what will occur in the future. This “creates dangerous illusions of expertise for laypeople who can believe they fully comprehend complex phenomena simply because they have access to authoritative-looking representations,” the study says. “The traditional safeguard against such illusions has been the mediating role of process experts – professionals whose authority rests in their interpretive capacities.”
Key practices that influence perception of certainty
The research thus identifies 3 practices that most influence how people perceive the certainty of a representation:
- controlling the level of detail
- shaping stakeholder engagement
- constructing the model’s meaning
The study then focuses on the approach of these process experts, and finds that experts who take AI results and then present their related advice too stridently may do themselves no favours.
“We found that when process experts fully embraced these technologies and presented their representations of the future directly to stakeholders, they undermined their own authority, making the future appear so certain that laypeople no longer perceived a need for expert guidance. Conversely, when process experts were more strategic in how they deployed these tools – selectively using their representations to preserve appropriate levels of uncertainty – they maintained their essential role in helping laypeople make informed decisions.”
How predictive AI and philosophical ideas prompted early curiosity about this topic
Virginia says she became interested in this topic while a PhD student at University of California Santa Barbara. Her supervisor, Professor Paul Leonardi, Chair of the Department of Technology Management at UC Santa Barbara, who is lead author of the study in Organization Science, had a dataset relating to the topic that caught Virginia’s eye.
“The technology was based on AI simulation to predict the future, and I said to myself ‘this is interesting’. Then I was in a bookstore in Los Angeles, and bought a book called Simulacra and Simulation (1981) by French philosopher Jean Baudrillard, whose argument is based on how representation of reality becomes the reality – and that argument unlocked what was cool about the dataset.”
What research reveals about how experts communicate clarity and uncertainty
The research traces earlier scholarship on expert authority, including a 1988 study that emphasises the importance of using concrete terminology to talk about those things that are knowable – because “too much abstraction spells weakness and renders a profession unable to defend its jurisdiction from interlopers”. Simplification alone is not sufficient, however: this body of work outlines how experts should “calibrate the level of certainty their representations convey”.
As an example of process experts who often strike the right balance, the research looks at the work of transport nurses who make decisions about referring critically ill children from hospitals to specialised paediatric facilities. While these nurses do not make diagnoses and are not responsible for making the transfer decision, their role in assembling a wide range of information and advising doctors makes them an invaluable part of this decision-making process.
“To do so, transport nurses gather a wide range of physiological data, observations and contextual cues, and then format that information into a succinct numerical representation of the patient’s condition. In doing so, they calibrate how much uncertainty to communicate. Too much certainty might lead to premature acceptance of a transfer; too little might delay lifesaving care. Their representational work must construct a picture that is technically accurate, institutionally credible, and attuned to the practical stakes of clinical decision-making. This example underscores how process expertise is interpretive at its core but is distinct in being materially enacted through processes and artifacts that configure what counts as actionable knowledge.”
A delicate balancing act for process experts in the AI era
The research says that process experts face a tricky situation: while embracing AI findings too emphatically can undermine the uncertainty that justifies their interpretive role, resisting AI risks making them appear technologically backward or resistant to new tech capabilities.
“We show that process experts deploy strategies of enhancement (amplifying or sharpening particular features of AI outputs) and modulation (tempering or qualifying what those outputs appear to claim),” say the authors. “These strategies shape how representations are built, interpreted and acted upon, and explain how artificial certainty arises and creates challenges for experts’ authority.”
US planning law provides backdrop for study
The research looks at planning because US urban areas with a population over 50,000 have been required since passage of the 1962 Federal-Aid Highway Act to establish a Metropolitan Planning Organization (MPO) that does long-range planning for goals ranging from transport to zoning.
The study selected 2 geographical sites, called Ocean and Mountain. Both adopted an AI-powered simulation technology, UrbanSim, which is designed to predict the effects of policy changes 20 years hence by examining such factors as household location choice, job and family transitions, real estate development, land prices, travel demand and macroeconomic conditions.
“By drawing on machine learning, integrating multiple types of urban data and generating outputs at the parcel level, UrbanSim offered dynamic, interactive visualisations that connected traffic, land use, and economic data into a single system,” the research says. “This integration allowed planners not only to predict potential futures in far greater detail but also to present stakeholders with immersive visual scenarios, fundamentally changing the representational landscape of urban planning.”
2 approaches to AI: enhancing its power or modulating its influence
Through 110 interviews and more than 72 hours of observations, including watching planners and modellers at work, the authors developed the conceptualisation of enhancement versus modulation approaches at, respectively, Ocean and Mountain.
Enhancing AI’s power
The enhancement approach at Ocean involved “amplifying the technology’s capabilities by maximising detail, visibility and authority”, which transformed sometimes abstract planning principles into “increasingly concrete, specific representations that appeared to speak for themselves”.
For example: in presenting models to the public, Ocean planners highlighted the granularity of data and the sophistication of computational techniques.
Modulating AI’s influence
In contrast, the modulation approach of Mountain involved using UrbanSim alongside other exploratory tools and “calibrating the technology’s impact by limiting detail, maintaining abstraction and positioning technology as subordinate to expert judgment” – thus preserving uncertainty.
Mountain planners set up a system to judge models by their “utility and restraint rather than their comprehensive coverage”, thus providing “enough detail to support meaningful conversations without becoming so complex they created misleading impressions of certainty”.
Too much AI detail can undermine expert judgment: the importance of strategic modulation
As one Mountain planner told the authors: “As a regional planner, I’m thinking about big-picture policy. So we have to work on that level with the modelling, too. Sure, we talk about specific performance measures and measurable outcomes and all that, but balancing detail and the big picture is important.”
Added a Mountain modeller: “With modelling, it is not all about technical details. If you give them too many details, no policy makers are going to want it. They will skim it and trash it. They have many things to read and do! I learned that when I was working directly with city commissioners, county commissioners and city councillors. I give them what they want.”
“By maintaining a clear separation between visioning and modeling, Mountain planners created representations that stakeholders viewed as more abstract and conceptual rather than personally immersive,” said the study. “Unlike at Ocean, where stakeholders became emotionally immersed in model representations to the point where the boundary between representation and reality blurred, Mountain stakeholders maintained a separation between the models and the real-world outcomes they represented.
“Enabling this separation required restraint by planners because UrbanSim’s algorithms were designed to produce outputs that presented even highly speculative scenarios with the same visual polish and apparent precision as more reliable predictions.” Ocean “found themselves increasingly marginalised, presenting plans to audiences that neither understood nor cared about the role of the planners as process experts” and this resulted in erosion of authority that impeded effective decision-making.
“By contrast, Mountain’s modulation approach to controlling the level of detail in the model, shaping stakeholder engagement, and constructing the model’s meaning, preserved uncertainty and maintained planners’ authority. By limiting lay participation in modelling and maintaining boundaries between visioning and modeling, Mountain planners retained their position as essential intermediaries whose expertise was necessary for translating abstract aspirations into concrete plans.”
Key takeaway: how AI can enhance, not replace, expert decision-making
The research by Virginia shows how the role of human experts can be transformed rather than diminished in the AI era. Portraying AI outputs as seemingly definitive predictions risks weakening the interpretive role of experts, while strategically calibrating AI results and guiding stakeholder engagement can reinforce experts’ value in the eyes of stakeholders. A key lesson: experts who cede too much authority to AI may threaten their own positions while leaving audiences with misplaced certainty.
“In our study, Ocean treated a prediction as reality, but actually such a prediction is only one of many possible futures given all the moving parts behind regional planning,” says Virginia. “The language used by the experts for Ocean was that ‘this will happen’ and they used 3D graphics in a very definite predictive way, while Mountain’s experts used more traditional charts and graphs in a way that conveyed a believable level of uncertainty – as it’s healthy to retain some scepticism when it comes to AI and predictions when dealing with complex matters.”
Featured faculty
Virginia Leavell
Assistant Professor in Organisational Theory and Information Systems
Featured research
Leonardi, P. and Leavell, V. (2026) “Knowing enough to be dangerous: AI-produced representations, artificial certainty, and the work of process experts.” Organization Science