Like elite professional athletes, top-class academics have long regarded themselves as a rare breed. Just as only a small percentage of football players will make it to the English Premier League, the assumption has long been that only the crème-de-la-crème of scholars have the experience, breadth of knowledge, and empirical or theoretical insight to research and write papers to the standards demanded by the best peer-reviewed research journals.
While artificial intelligence is a long way from replicating Lionel Messi on the football pitch, the rapid improvement of AI has dramatically changed the game for researchers in the social sciences in a very short period of time – and this has become a key area of focus for Matthew Grimes, Professor of Entrepreneurship and Sustainable Futures at Cambridge Judge Business School.
Reawakening a childhood IT passion through AI and scholarship

“I’ve always been interested in information systems, and considered going into IT as a profession but then went into research, so these AI tools are re-awakening a latent passion of mine,” says Matthew, who in 2023 co-authored an editorial in the prestigious Academy of Management Journal (AMJ) that, while highlighting real dangers from AI such as hallucinations, punctured the assumption of elite scarcity in academia in an age of AI.
“While academics often experience efficiencies in knowledge synthesis work within and across literatures over time, such efficiencies can take years to realise. Conversely, emerging generative AI tools now increasingly offer capabilities aimed to increase those efficiencies and the pace at which those efficiencies are realised by scholars,” says the editorial entitled ‘From scarcity to abundance’, which was written by Matthew and other editors of the AMJ.
Beyond efficiencies in research execution, a debate has raged over whether AI can also creatively generate the key questions whose answers push the boundaries of academic research – in sporting parlance, to move the ball – in the way that elite peer-reviewed journals demand in order to publish a research paper.
Identifying a gap between AI capabilities and how scholars were using these tools
“After this AMJ editorial was published, I could see that despite the huge potential of AI in scholarship there was in fact a slowness of adoption – I saw a growing gap between the capability of the tools and the way people were using them, and I was confused by this,” Matthew says. “I was hearing people saying that AI is not good at scholarly idea-generation, that we senior academics are way better, and I thought: ‘You’re missing some of the foundational capabilities of these AI tools, which can be used to augment our own efforts to come up with great ideas.’”
So beginning in mid-2025, Matthew turned his attention to developing AI tools to help scholars, and especially junior scholars, tap the power of AI to publish in top journals and expand the horizons of their fields of expertise.
In August 2025 Matthew shared on LinkedIn a link to a new tool he had developed – the AI-Powered Management Research Feedback Tool – initially designed to help PhD students and their supervisors (Matthew is one such supervisor at Cambridge Judge) improve their draft papers for publication. Given his role at AMJ, where he became Deputy Editor in July 2025, Matthew briefly took down the tool and re-issued it to clarify that it was not endorsed by any journal, including those published by the Academy of Management.
I was hearing people saying that AI is not good at scholarly idea-generation, that we senior academics are way better, and I thought: ‘You’re missing some of the foundational capabilities of these AI tools, which can be used to augment our own efforts to come up with great ideas.’
A new tool offering structured manuscript feedback
The tool, which is free to use with an account on AI provider Claude and can be customised with a paid Claude account, allows authors to upload a manuscript, which the tool then critiques through a so-called canvas broken down into 8 areas:
- Theoretical motivation, or what fundamental management phenomenon is investigated and what specific question will be answered by the research.
- Primary and secondary literature, assessing what foundations of knowledge already exist.
- Theoretical constructs and relationships, or the key concepts driving the analysis.
- The empirical research context examined, and why it is well-suited to the particular study.
- Research design and analysis.
- Findings, including patterns from the data analysis.
- How the findings advance understanding and practice.
- Key constraints of the study.
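The canvas described above can be pictured as a structured checklist that a reviewing model walks through. The sketch below is purely illustrative – it is not the actual tool's code, and `build_review_prompt` is a hypothetical helper – but it shows how the 8 areas might be assembled into a single review prompt for any chat model:

```python
# Illustrative sketch only (not the actual Feedback Tool): the eight
# canvas areas modelled as a checklist a reviewer prompt walks through.

FEEDBACK_AREAS = [
    "Theoretical motivation",
    "Primary and secondary literature",
    "Theoretical constructs and relationships",
    "Empirical research context",
    "Research design and analysis",
    "Findings",
    "Advancement of understanding and practice",
    "Key constraints of the study",
]

def build_review_prompt(manuscript_text: str) -> str:
    """Assemble one prompt asking a model to critique the manuscript
    area by area, mirroring the canvas layout described above."""
    sections = "\n".join(
        f"{i}. {area}" for i, area in enumerate(FEEDBACK_AREAS, 1)
    )
    return (
        "You are an academic peer reviewer. For each area below, list "
        "the manuscript's strengths and weaknesses:\n"
        f"{sections}\n\n--- MANUSCRIPT ---\n{manuscript_text}"
    )
```

A design like this keeps the feedback structured rather than free-form, which is what lets the tool present a topline view alongside detailed, issue-by-issue commentary.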
In announcing the tool, Matthew urged users to be “mindful of the risks of adopting these technologies when the institutional environment is evolving at a much slower pace than the technology itself. For instance, please make note of journal policies, some of which require disclosure of or significantly restrict the use of generative AI”.
Since its announcement, this first tool has proved popular far beyond the PhD community and has been accessed nearly 3,000 times.
AI feedback that mirrors – and sometimes exceeds – peer review
“When you click on the tool’s ‘Feedback’ button it generates a topline feedback view of the manuscript: where there are strengths and weaknesses, then a detailed view of extensive feedback on the various issues raised in the manuscript,” says Matthew. “I built the tool based on my understanding of journals’ peer-review and editorial processes, and how AI should be critiquing manuscripts based on that understanding. The tool does not predict whether a paper will or will not pass a peer-review process, but it does tell you that ‘here are some issues with the paper’.
“Some users have found it overly aggressive – not in tone, but in identifying more issues than a human peer-reviewer would. That might be true, but I don’t think that’s bad,” because it is preferable for the tool to raise more rather than fewer issues ahead of a formal peer-review process conducted by human reviewers.
For example, one senior scholar who explored the tool with her own already published manuscripts conveyed to Matthew that the tool had flagged several major concerns with those manuscripts, despite the fact that this work had been thoroughly vetted by the scholarly community. Indeed, it may be that as scholars begin to benefit from the augmentation of AI in their research, so too will AI enhance the process of evaluating the rigour of that research.
Matthew’s AI toolmaking didn’t end there.
Helping scholars move beyond niches into compelling puzzle-solving
In January 2026, Matthew turned again to the question of how AI can devise the compelling and boundary-moving questions and answers that drive human knowledge forward in truly meaningful ways. The answer, Matthew says, lies in identifying and solving puzzles.
“Too many research projects start from gaps in the literature rather than genuinely important puzzles,” Matthew observed in a LinkedIn post, as they are launched under such premises as:
- “literature has overlooked X”
- “let’s open the black box of Y”
- “no one has studied Z in this context”
Yet the fact that something was overlooked or not studied in a particular way does not in itself make that something compelling, Matthew says, because framing projects in this way often focuses on niche areas that lead to “incremental work that struggles to get published in top journals”.
“There’s usually a reason that such a gap in the literature exists, and it’s because it’s not interesting to fill in that gap,” he says. “I observed that junior scholars were too often spending 4 years in an effort to get published in a top journal only to answer a very niche question that could yield just a tiny bit of knowledge. The right way to do research in the social sciences is to start with something empirically or theoretically puzzling – a tension in the literature or in practice. It’s fascinating how infrequently people are asking such questions of themselves and the work they’re doing – they’re doing it just because no one’s ever done it before, but peer reviewers will spot this.”
The right way to do research in the social sciences is to start with something empirically or theoretically puzzling – a tension in the literature or in practice.
Standalone Socratic tool works across all AI models to surface research puzzles
So Matthew in January 2026 unveiled a new AI tool, Scholarly Ideas, that seeks not just to search for niches but to surface compelling puzzles.
“What this tool tries to do is not just find a gap in the literature but to ask interesting questions – that’s a better place to start,” Matthew says. “What the tool does is help solve real-world puzzles by surfacing empirical or theoretical patterns that contradict or cannot be explained by existing theories.”
Released under an open-source Massachusetts Institute of Technology (MIT) licence, the Scholarly Ideas tool works through Socratic dialogue (question, answer, question) and is designed to work with any AI model, including ChatGPT, Claude, Gemini and even free local and open-source models.
Whereas the AI-Powered Management Research Feedback Tool sits on Claude’s servers and runs on Claude models (so it remains within Claude’s platform), the Scholarly Ideas tool is a fully functional, standalone app. Although built using Claude Code, it is hosted on GitHub under Matthew’s account, so users can download the tool and run it on any machine with any AI model.
For instance, if a researcher expresses interest in international entrepreneurship, the tool might raise the growing challenge of navigating increasing geopolitical fragmentation amid expansion. As the tool presses the user to surface examples of ventures that have survived or even thrived under such conditions, they are also challenged to ground their curiosities in more precise conceptualisations of both fragmentation and entrepreneurial success. The resulting output from this exercise is a strong foundation for contributing to our understanding of an important contemporary phenomenon.
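The Socratic question-answer-question loop described above can be sketched in a model-agnostic way. The code below is a hypothetical illustration, not the Scholarly Ideas source: `ask_model` stands in for any chat API (ChatGPT, Claude, Gemini or a local model) and `get_user_reply` for the researcher's input, which is what makes the design portable across providers:

```python
# Hypothetical sketch of a Socratic dialogue loop; `ask_model` and
# `get_user_reply` are stand-ins, not part of the actual tool.
from typing import Callable

def socratic_session(
    ask_model: Callable[[list], str],
    opening_interest: str,
    get_user_reply: Callable[[str], str],
    rounds: int = 3,
) -> list:
    """Alternate model questions and user answers, pressing the user to
    sharpen a broad interest into a concrete research puzzle."""
    history = [
        {"role": "system", "content": (
            "You are a Socratic research mentor. Never propose topics; "
            "ask one probing question per turn that pushes the user "
            "toward an empirical or theoretical puzzle - a pattern that "
            "existing theory cannot explain."
        )},
        {"role": "user", "content": opening_interest},
    ]
    for _ in range(rounds):
        question = ask_model(history)          # any provider's chat call
        history.append({"role": "assistant", "content": question})
        answer = get_user_reply(question)      # researcher responds
        history.append({"role": "user", "content": answer})
    return history
```

Because the loop only depends on a generic messages list, swapping providers means swapping the single `ask_model` callable rather than rewriting the tool.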
Change need not be frightening: if research is a public good, then so are AI tools to further such research
In the 2 years since his AMJ editorial, Matthew has spoken to many people about the potential risks and rewards of AI in scholarship, including the professional peril that many academics may no longer be needed because AI is so effective in various ways.
“I realise that in academia, as in other professions, our sense of well-being can feel threatened by the pace of change” brought by AI, he says. “But I thought to myself: ‘If the research we scholars are doing is a public good, which I believe it is in expanding human knowledge, why shouldn’t the tools to create that research also be a public good?’”
Asked if he feels less professionally threatened by the power of AI, Matthew replied: “It’s not that I’m less afraid of the lack of control that AI might bring, but as the world picks up pace around us, my sense is that in order to feel some control you have to lean into the uncertainty. That’s also where the opportunities are.”
I thought to myself: ‘If the research we scholars are doing is a public good, which I believe it is in expanding human knowledge, why shouldn’t the tools to create that research also be a public good?’
Featured research
Grimes, M., von Krogh, G., Feuerriegel, S., Rink, F. and Gruber, M. (2023) “From scarcity to abundance: scholars and scholarship in an age of generative artificial intelligence.” Academy of Management Journal, 66(6): 1617-1624 (DOI: 10.5465/amj.2023.4006)
Related content
Matthew has posted on LinkedIn about both his AI-Powered Management Research Feedback Tool and his Scholarly Ideas tool.
Access the Scholarly Ideas tool on GitHub
Access the AI-Powered Management Research Feedback Tool on Claude