Generative AI is transforming how researchers write. Here is what the global publishing community — and this journal — expect from authors who use these tools.
The rapid adoption of generative AI tools — large language models such as ChatGPT, Gemini, Claude, and others — has fundamentally altered the landscape of academic writing. Researchers across every discipline are now using these tools for tasks ranging from language polishing and literature summarization to data analysis and code generation. The question is no longer whether AI is being used in manuscript preparation, but how it should be used responsibly.
This article summarizes the current consensus among leading publishers and ethics bodies, and outlines the specific expectations of Research and Science Today regarding AI use in submitted manuscripts.
The Global Consensus: Transparency, Not Prohibition
The scholarly publishing community has converged on a clear set of principles. The Committee on Publication Ethics (COPE), the world’s leading authority on publication ethics, issued its position statement establishing that AI tools cannot be listed as authors of academic papers. The rationale is straightforward: authorship requires the capacity for accountability — the ability to stand behind claims, respond to questions about integrity, and take responsibility for errors. AI systems, regardless of their sophistication, cannot fulfill these obligations.
This position has been adopted universally. Elsevier, Springer Nature, Wiley, Taylor & Francis, SAGE, Cambridge University Press, Oxford University Press, the American Psychological Association, and every other major publisher explicitly prohibits AI authorship. There are no exceptions.
However — and this is the critical distinction — none of these organizations prohibit the use of AI as a tool. The consensus is not about banning AI; it is about ensuring transparency. Authors may use generative AI to support their work, provided they disclose how and where it was used and maintain full intellectual responsibility for the final manuscript.
What Must Be Disclosed
Disclosure requirements vary slightly between publishers, but the core principle is consistent: if AI contributed to the creation of content in your manuscript, you must say so. Specifically, authors should declare the following.
- Which tool was used. Identify the specific AI system (e.g., ChatGPT-4, Claude 3.5 Sonnet, Gemini) and, where possible, the version and date of access.
- What it was used for. Describe the specific tasks the AI performed — drafting sections, translating text, generating code, summarizing literature, analyzing data, or improving language.
- Where the disclosure appears. At Research and Science Today, AI use should be declared in the Methods section if the AI was part of the research methodology, or in the Acknowledgements section if it was used in manuscript preparation. A separate disclosure statement before the references is also acceptable.
What Does Not Require Disclosure
Most publishers — including Springer Nature and SAGE — have drawn a useful distinction between assistive AI use and generative AI use. Basic grammar and spelling checkers, standard reference managers, and conventional software tools do not require disclosure, even if they incorporate AI components. These are considered routine assistive tools analogous to spell-check or autocorrect.
The threshold for disclosure is crossed when AI generates new content, substantially rewrites existing text, summarizes literature, produces analytical outputs, or creates visual elements. If the AI’s contribution goes beyond correcting surface-level errors and begins to shape the substance of the manuscript, disclosure is required.
The Non-Negotiable: Human Accountability
Across all publishers and ethics bodies, one principle is absolute. Human authors bear full responsibility for every element of the manuscript — including any content produced with AI assistance. This means that if an AI tool generates a factual error, fabricates a reference, introduces bias, or produces text that constitutes plagiarism, the human authors are held accountable.
This is not a theoretical concern. Large language models are known to produce plausible but incorrect information, to fabricate citations that do not exist, and to reproduce copyrighted material without attribution. Authors who use these tools must verify every claim, check every reference, and critically evaluate every output before incorporating it into their work.
AI-Generated Images: A Restricted Domain
The use of AI to generate or manipulate images in manuscripts is subject to stricter rules. Springer Nature, Elsevier, and Taylor & Francis have near-total prohibitions on AI-generated images in publications, citing unresolved legal issues around copyright and significant concerns about scientific integrity. The only exception is when the use of AI image generation is itself the subject of the research and is fully described in the methodology.
Authors should not use AI tools to create figures, diagrams, or photographic content for their manuscripts unless the generation process is part of the study being reported.
Our Policy at Research and Science Today
Consistent with COPE guidelines and the practices of leading international publishers, Research and Science Today requires the following from all submitting authors:
- No AI authorship. AI tools may not be listed as authors or co-authors under any circumstances.
- Mandatory disclosure. Any use of generative AI in research or manuscript preparation must be transparently declared.
- Full accountability. All named authors must take responsibility for the accuracy and integrity of the entire manuscript, including any AI-assisted content.
- No AI-generated images. AI-generated or AI-manipulated figures are not permitted unless the generation of such images is part of the reported research methodology.
We recognize that AI tools can be genuinely helpful — particularly for researchers working in a second language, for accelerating literature searches, or for improving the clarity of complex technical writing. We do not penalize authors for using these tools. We do, however, require honesty about their use. Undisclosed AI assistance, if detected, will be treated as a breach of publication ethics.
— — —
This article is based on the COPE Position Statement on Authorship and AI Tools, editorial policies published by Springer Nature, Elsevier, Wiley, Taylor & Francis, SAGE, and guidance from the International Committee of Medical Journal Editors (ICMJE). Research and Science Today operates in full compliance with COPE principles.