As AI becomes capable of producing review-like text, the integrity of peer review faces a new kind of threat. Here is how the publishing community — and this journal — are responding.
Peer review has long been built on a simple premise: that qualified human experts read submitted manuscripts, apply their professional judgment, and provide honest, constructive feedback. This premise is now under pressure. The same generative AI tools that help authors write manuscripts can also produce text that resembles a peer review — complete with summary paragraphs, methodological critiques, and recommendations.
The question facing every journal today is not whether AI-generated reviews exist, but how to detect them, what policies should govern AI use by reviewers, and how to maintain the trust that makes peer review meaningful.
The Scale of the Problem
Data presented at the 2025 International Congress on Peer Review and Scientific Publication in Chicago revealed that AI use among reviewers has increased steeply since generative AI tools became widely available. A study of over 46,000 abstracts and nearly 30,000 reviews in oncology journals found that detectable signs of AI-assisted language in reviewer reports rose sharply after 2023. When journals introduced formal prohibitions, detections dropped briefly — then climbed again. Bans, it appears, do not eliminate AI use; they simply make it less visible.
Separately, an analysis across thirteen journals in a major medical publisher network found that reviewers who disclosed AI use had turnaround times and editorial quality ratings comparable to those of reviewers who did not. In other words, speed and apparent quality alone do not give editors a reliable way to distinguish AI-assisted reviews from fully human ones.
Why AI-Generated Reviews Are Problematic
A peer review is not simply a summary of a manuscript’s strengths and weaknesses. It is an exercise in expert judgment — one that requires deep familiarity with a field, the ability to assess whether a methodology is appropriate for a specific research question, and the contextual awareness to identify what is genuinely novel versus what has been done before. AI systems, however sophisticated, do not possess this kind of expertise.
AI-generated reviews tend to exhibit characteristic patterns: they are often generically positive, lack specificity about the manuscript’s actual content, fail to identify genuine methodological weaknesses, and produce comments that could apply equally to any manuscript in the field. They may appear polished, but they are rarely useful — and in some cases, they are actively misleading, giving authors and editors a false sense that the work has been rigorously evaluated.
There is also a more fundamental concern. When a reviewer accepts an invitation to evaluate a manuscript, they are entering into a relationship of trust with the journal and the authors. Delegating that responsibility to an AI system — without disclosure — violates the confidentiality of the process and undermines the social contract on which peer review depends. As COPE and Wiley have stated clearly, peer review is a human responsibility that should not be delegated to a generative AI tool.
The Confidentiality Risk
Uploading an unpublished manuscript to a commercial AI platform raises serious confidentiality concerns. Most generative AI providers’ terms of service permit the use of input data for model training or improvement, meaning that confidential manuscript content could, in principle, be incorporated into the AI system’s knowledge base and influence future outputs. Even where providers offer data-handling assurances, the risk remains that unpublished research is exposed to a system not designed to safeguard academic confidentiality.
For this reason, Springer Nature, Elsevier, PLOS, and virtually every other major publisher explicitly prohibit reviewers from uploading manuscripts into generative AI tools. This prohibition is not about hostility toward technology — it is about protecting the confidentiality that authors are promised when they submit their work.
What Reviewers May and May Not Do
The current consensus, reflected in the policies of COPE, Wiley, and other leading bodies, distinguishes between acceptable and unacceptable AI use by reviewers.
Acceptable: Using AI tools to refine the language or grammar of a review report that you have already written, provided this use is disclosed to the editor. The intellectual content and judgments in the review must be your own.
Unacceptable: Uploading a manuscript to an AI tool, or using AI-generated output as the basis for your review. Doing so breaches the confidentiality of the process and delegates a professional responsibility that must remain your own. Using AI to generate review content is unacceptable whether or not it is disclosed.
How Journals Are Detecting AI-Generated Reviews
Detection is an evolving field, but several approaches are already in use. Journals and publishers now deploy integrity screening tools that analyze review reports for linguistic patterns characteristic of AI-generated text — including unusual uniformity of tone, absence of field-specific terminology, generic praise, and lack of specific references to manuscript content. Some publishers use multiple detection signals in combination, cross-referencing review language with the reviewer’s known expertise and publication record.
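As a rough illustration of the kind of heuristic signal such screening tools combine, consider the sketch below. It is a toy example, not any publisher's actual detector: the phrase list, tokenization, and signals are assumptions made for illustration, and production systems rely on trained models rather than hand-written rules.

```python
import re

# Hypothetical examples of generic-praise phrases often associated with
# AI-generated review text; a real screening tool would use trained
# classifiers, not a hand-written list like this one.
GENERIC_PHRASES = [
    "well-written and well-organized",
    "addresses an important topic",
    "the methodology is sound",
    "minor revisions are suggested",
]

def screen_review(review_text: str) -> dict:
    """Compute two simple heuristic signals for a review report."""
    text = review_text.lower()
    words = re.findall(r"[a-z']+", text)
    # Signal 1: how many stock phrases appear verbatim.
    generic_hits = sum(phrase in text for phrase in GENERIC_PHRASES)
    # Signal 2: lexical diversity; unusually uniform text scores low.
    diversity = len(set(words)) / len(words) if words else 0.0
    return {"generic_hits": generic_hits,
            "lexical_diversity": round(diversity, 3)}

print(screen_review("The manuscript is well-written and well-organized."))
```

No single signal is decisive; in practice such scores are combined with other evidence, such as the reviewer's known expertise and publication record, before any conclusion is drawn.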
At Research and Science Today, our editorial workflow incorporates systematic review quality assessment. As part of the Ethical Shield platform integrated into our editorial process, every review report is evaluated for whether its content is anchored in the actual manuscript — that is, whether the reviewer’s comments refer to specific data, methods, and claims in the submitted work. Generic or unanchored reviews are flagged for editorial attention, regardless of whether AI involvement is suspected.
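To make the notion of anchoring concrete, here is a minimal sketch of one way it might be approximated: checking what fraction of a review's substantive vocabulary actually appears in the manuscript under review. This is an illustrative toy under stated assumptions (the stop-word list and example texts are invented), not the Ethical Shield implementation.

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "in", "is", "to", "this",
              "that", "it", "for", "with", "are", "on", "be", "as"}

def content_terms(text: str) -> set:
    """Lowercase word tokens, minus common stop words and short words."""
    return {w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOP_WORDS and len(w) > 2}

def anchoring_score(review: str, manuscript: str) -> float:
    """Fraction of the review's substantive terms that also occur in
    the manuscript; low values suggest a generic, unanchored review."""
    review_terms = content_terms(review)
    if not review_terms:
        return 0.0
    return len(review_terms & content_terms(manuscript)) / len(review_terms)

review = "Figure 2 omits confidence intervals, and the regression in Table 3 lacks controls."
manuscript = "Table 3 reports the regression estimates; Figure 2 shows confidence intervals for each model."
print(round(anchoring_score(review, manuscript), 2))
```

A review that cites specific figures, variables, and methods from the submission scores high on such a measure; boilerplate praise that could attach to any paper scores low.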
Our Policy
Research and Science Today requires all peer reviewers to adhere to the following principles:
- Do not upload manuscripts to AI tools. Submitted manuscripts are confidential. Uploading them to any third-party AI platform, including ChatGPT, Gemini, Claude, or any other generative system, is prohibited.
- Do not delegate your review to AI. The intellectual evaluation of the manuscript must be your own work. AI may not be used to generate review content.
- Disclose any AI assistance. If you use AI tools to improve the language of your review report, you must disclose this to the editor.
- Provide specific, evidence-based feedback. Reviews should reference specific sections, figures, tables, and claims in the manuscript. Generic comments that could apply to any paper are insufficient.
These expectations are aligned with the COPE Ethical Guidelines for Peer Reviewers and the editorial policies of all major international publishers.
— — —
This article draws on data presented at the 2025 International Congress on Peer Review and Scientific Publication (Chicago) and on editorial policies from COPE, Wiley, and other major international publishers. Research and Science Today is committed to maintaining the integrity of its peer review process.