Using AI tools for writing the “Related Works” or “Literature Review” section of a research paper is fraught with significant problems that can undermine the quality, integrity, and credibility of your work. While AI can be a useful assistant for other tasks, its application here is particularly risky.
Here are some major problems I encountered, categorized for clarity.
1. Accuracy and Factual Integrity
This is the most critical area of failure, encompassing everything from outright fabrication to subtle but serious scholarly errors.
- Factual Inaccuracies and “Hallucinations”: AI language models can generate text that sounds plausible but is factually incorrect. This can manifest as:
- Inventing Papers: Creating citations for papers that do not exist.
- Misrepresenting Findings: Incorrectly summarizing a study’s methodology, results, or conclusions.
- Incorrect Citations: Generating citations with the wrong authors, year, journal, or DOI.
- Incorrect Attribution of Foundational Work: A proper literature review traces ideas to their origin. AI models, often biased towards newer data in their training set, frequently fail at this.
- The Problem: The AI may cite a recent paper that refines or applies an idea, instead of the original, seminal paper that introduced it. This is a major error in scholarly attribution.
- Example: Let’s say “Algorithm X” was introduced in a foundational paper, Paper Y, in 2015. An AI tasked with discussing “Algorithm X” might prioritize a 2025 paper, Paper Z, which presents a minor modification. The AI-generated text would then incorrectly credit Paper Z with the core concept, obscuring the true origin and failing to give credit to the pioneering work of Paper Y.
- Transitive Citation Errors and Concept Conflation: This is a subtle but pervasive problem in which an AI fails to understand the citation structure within a paper.
- The Problem: An AI reads Paper A. In Paper A’s “Related Works” section, the authors discuss an idea from Paper B. The AI lacks the ability to trace this and incorrectly attributes the idea from Paper B to the authors of Paper A. It can also go a step further and conflate this misattributed idea with the actual methodology of Paper A, creating a description of work that doesn’t exist in any paper.
- Example: An AI is summarizing Paper A. In its literature review, Paper A states, “Our work builds on the ‘cyclic learning rate’ proposed in Paper B.” The AI might wrongly report that Paper A developed the ‘cyclic learning rate’. Worse, it could then merge this with Paper A’s own methods section to produce a sentence like, “Paper A introduced the ‘cyclic learning rate’ and applied it to satellite image analysis,” a specific claim that is entirely fabricated and misrepresents both papers.
- Duplicate and Unprofessional Referencing: The academic publishing ecosystem has multiple versions of the same work (e.g., arXiv pre-print, OpenReview version for a conference, the final accepted version, a copy on ResearchGate).
- The Problem: AI tools often fail to disambiguate these versions. They may treat them as separate works, citing two or more copies of the same paper. This bloats the bibliography, looks unprofessional, and signals that the author has not engaged with the sources directly.
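One practical guard against this duplication is a quick programmatic pass over your bibliography. The sketch below is a minimal illustration (the entry keys and titles are hypothetical): it normalizes titles so that minor formatting differences between an arXiv pre-print and a camera-ready version do not hide a duplicate, then groups entries whose titles collide.

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase and strip punctuation/whitespace so that minor
    formatting differences between versions of the same paper
    (e.g. arXiv vs. camera-ready) do not hide a duplicate."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def find_duplicate_titles(entries):
    """Return groups of bibliography entries whose titles normalize
    to the same string -- likely duplicate versions of one work."""
    groups = {}
    for entry in entries:
        groups.setdefault(normalize_title(entry["title"]), []).append(entry)
    return [group for group in groups.values() if len(group) > 1]

# Hypothetical bibliography with two versions of the same paper:
bibliography = [
    {"key": "smith2015arxiv", "title": "Cyclic Learning Rates for Training Neural Networks"},
    {"key": "smith2017wacv",  "title": "Cyclic learning rates for training neural networks."},
    {"key": "other2020",      "title": "A Different Paper Entirely"},
]

duplicates = find_duplicate_titles(bibliography)
# duplicates contains one group holding the two "smith" entries
```

Title normalization alone will not catch every duplicate (titles sometimes change between versions), but it reliably flags the most common arXiv/venue pairs for manual review.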
2. Academic Integrity and Plagiarism
- “Patchwriting” and Subtle Plagiarism: AI models often “patchwrite”—they take sentences and phrases from their training data and stitch them together with minor modifications. This is still a form of plagiarism, as you are presenting ideas and phrasing from other sources as your own.
- Lack of Transparency and Authorship Issues: Failing to disclose the use of an AI tool for writing can be considered academic misconduct by most journals and institutions.
3. Bias and Limited Scope
- Knowledge Cut-off Date: Many AI models have a knowledge cut-off date and are unaware of the most recent research, which is essential for a timely and relevant literature review.
4. The “Deskilling” of the Researcher
- Atrophy of Core Research Skills: The process of conducting a literature review is how you learn your field, understand its history, and master its core concepts. Outsourcing this task to an AI prevents you from developing this fundamental expertise.
Best Practices for Responsible Use (As an Assistant, NOT an Author)
- Use it for Idea Generation: Ask for potential keywords or prominent authors to start.
- Use it for Summarizing Papers You Have Read: After you have read a paper yourself, ask an AI to summarize it and critically compare its output with your own understanding. Never trust a summary of a paper you haven’t read.
- Use it for Language Polishing: After you have written your own draft, use AI to check grammar or improve sentence structure.
- The Golden Rule: Read Every Source. Verify Every Claim. You are responsible for every word and citation. You must personally read and verify every source to trace ideas to their origin and ensure accuracy.
- Disclose Your Use: Be transparent and follow the policies of your institution and publisher regarding AI usage.
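Part of the Golden Rule's verification step can be mechanized: every DOI in a draft should at least resolve, and the metadata registered for it should match what the citation claims. The sketch below uses the public CrossRef REST API (`https://api.crossref.org/works/{doi}`); the comparison covers only title and year, and the sample records are hand-written assumptions so the demonstration runs without network access. It is a sanity check, not a substitute for reading the paper.

```python
import json
import urllib.request

def fetch_crossref_metadata(doi: str) -> dict:
    """Look up a DOI on the public CrossRef REST API. A DOI that
    fails to resolve at all is a common symptom of a hallucinated
    citation."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]

def citation_mismatches(claimed: dict, registered: dict) -> list:
    """Return the fields on which a claimed citation disagrees with
    the metadata registered for its DOI (title and year only, here)."""
    problems = []
    if claimed["title"].strip().lower() != registered["title"][0].strip().lower():
        problems.append("title")
    if claimed["year"] != registered["issued"]["date-parts"][0][0]:
        problems.append("year")
    return problems

# Offline demonstration with a hand-written "registered" record, shaped
# like a CrossRef response, instead of a live fetch_crossref_metadata call:
registered = {
    "title": ["Cyclical Learning Rates for Training Neural Networks"],
    "issued": {"date-parts": [[2017]]},
}
claimed = {"title": "Cyclical Learning Rates for Training Neural Networks", "year": 2015}
mismatches = citation_mismatches(claimed, registered)  # flags the wrong year
```

A check like this catches invented DOIs and wrong years or titles (the "Incorrect Citations" failure mode above), but it cannot detect misrepresented findings or misattributed ideas; those still require reading the source.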