Unethical Role of Artificial Intelligence in Scholarly Writing
Subjects: Others

The emergence of artificial intelligence (AI) has greatly propelled progress across various sectors, including nephrology academia. However, this advancement has also given rise to ethical challenges, notably in scholarly writing. AI's capacity to automate labor-intensive tasks such as literature reviews and data analysis has created opportunities for unethical practices, with scholars incorporating AI-generated text into their manuscripts and potentially undermining academic integrity.

  • artificial intelligence
  • academia
  • plagiarism detection
  • machine learning
  • large language models
  • natural language processing
  • chatbot
  • ChatGPT
  • ethics

1. Introduction

Artificial intelligence (AI) is now a cornerstone of contemporary technological progress, fueling breakthroughs in a wide array of fields, from healthcare and finance to transportation and the arts, leading to enhanced efficiency and productivity [1]. In the medical realm, AI systems are poring over patient histories to forecast health outcomes [2], while in the financial world, they are dissecting market fluctuations to fine-tune investment approaches [3]. Self-driving vehicles are transforming how we think about transportation [4], and in entertainment, AI is the unseen curator of music playlists and film queues [5]. The scope of AI's reach is both vast and awe-inspiring, especially when considering the capabilities of generative large language models such as ChatGPT [6], Bard [7], Bing Chat [8], and Claude [9]. Generative AI refers to a subset of AI that creates content, including text and images, by utilizing natural language processing. OpenAI introduced ChatGPT, an AI chatbot that employs natural language processing to emulate human conversation; its latest iteration, GPT-4, also offers image analysis capabilities known as GPT-4 Vision [10]. Google's Bard is another AI-driven chat tool that uses natural language processing and machine learning to simulate human-like conversation [7]. Microsoft's Bing Chat, integrated into Bing's search engine, lets users pose search inquiries to an AI chatbot instead of typing queries; it operates on the same OpenAI model (GPT-4) as ChatGPT [8]. Claude, developed by Anthropic, is yet another AI chatbot in the field, currently powered by a language model called Claude 2 [9].
Within academia, AI’s growing influence is reshaping traditional methodologies [11]. These AI tools, such as chatbots, are capable of providing personalized medical advice [12], disseminating educational materials and improving medical education [13,14,15], aiding in clinical decision-making processes [16,17,18], identifying medical emergencies [19], and providing empathetic responses to patient queries [20,21,22].

2. AI’s Unethical Role in Scholarly Writing

The transformative impact of AI on various sectors is well documented, and academia is no exception [39,40,41]. While AI has been praised for its ability to expedite research by sifting through massive datasets and running complex simulations, its foray into the realm of academic writing is sparking debate. AI large language model tools like ChatGPT offer tantalizing possibilities: automating literature reviews, suggesting appropriate research methods, and even assisting in the composition of scholarly articles [42]. Ideally, these advancements could liberate researchers to concentrate on groundbreaking ideas and intricate problem-solving. Yet, the reality diverges sharply from this optimistic scenario (Figure 1).
Figure 1. Ethical concerns surrounding AI’s role in scholarly writing.

2.1. Examples of Academic Papers That Have Used AI-Generated Content, Focusing on ChatGPT-Based Chatbots

In a blinded, randomized, noninferiority controlled study, GPT-4 was found to be noninferior to human authors in writing article introductions with respect to publishability, readability, and content quality [52]. An article that used GPT-3 to write a review on the effects of sleep deprivation on cognitive function demonstrated ChatGPT's adherence to ICMJE co-authorship criteria, including conception, drafting, and accountability [53]; however, it also revealed challenges with accurate referencing. Another paper had GPT-3 generate content on rapamycin in the context of Pascal's wager; the model effectively summarized benefits and risks and advised consulting healthcare professionals, and ChatGPT was listed as first author [54].
In nephrology, only a small number of published papers currently feature AI-generated content. This is still concerning, however, as it raises questions about the integrity of academic publications. One prior study employed ChatGPT to write the conclusion of "Assessing the Accuracy of ChatGPT on Core Questions in Glomerular Disease" [56]. A letter to the editor suggested that academic journals should clarify the proportion of AI language model-generated content in papers and that excessive use should be considered academic misconduct [57]. Many scientists disapprove of ChatGPT being listed as an author on research papers [58,59]. Recently, however, some science journals have overturned their bans on AI-generated text; the publishing group of the American Association for the Advancement of Science (AAAS) now allows authors to incorporate AI-written text and figures into papers, provided the use of the technology is acknowledged and explained [60]. Similarly, the WAME Recommendations on ChatGPT and Chatbots in Scholarly Publications were updated in response to the rapid increase in chatbot usage in scholarly publishing and concerns about content authenticity. The revised recommendations guide authors and reviewers on appropriately attributing chatbot use in their work and stress that journal editors need tools for manuscript screening to ensure content integrity [61]. Although ChatGPT's language generation skills are remarkable, it should be used as a supplementary tool rather than a substitute for human expertise, especially in medical writing. Caution and verification are essential when employing AI in such contexts to ensure accuracy and reliability, and we should proactively learn about the capabilities, constraints, and possible future developments of these AI tools [62].

2.2. Systemic Failures: The Root of the Problem

Such lapses in oversight raise critical questions about the efficacy of the peer-review system, which is intended to serve as a multilayered defense of academic integrity. The first layer that failed was the coauthors, who apparently did not catch the AI-generated content. The second layer was editorial oversight, which should have flagged the issue before the paper was even sent for peer review. Numerous AI detection tools, such as GPTZero, Turnitin AI detection, and AI Detector Pro, have been created for students, research mentors, educators, journal editors, and others to identify text produced by ChatGPT, though the majority operate on a subscription model [44]. The third layer was the peer-review process itself, intended to be a stringent evaluation of a paper's merit and originality. One study showed that ChatGPT can generate human-quality text [63], which raises concerns about whether reviewers can determine if a manuscript was written by a human or an AI tool. As ChatGPT and other language models continue to improve, it will likely become increasingly difficult to distinguish AI-generated from human-written text [64]. In a study of 72 experienced reviewers of applied linguistics research article manuscripts, only 39% were able to distinguish between AI-produced and human-written texts; the top four rationales reviewers cited were a text's continuity and coherence, specificity or vagueness of details, familiarity and voice, and sentence-level writing quality, and identification accuracy varied with the specific texts examined [65]. The fourth layer was the revision phase, where the paper should have been corrected based on reviewers' feedback, yet the AI-generated text remained. The fifth and final layer was the proofing stage, where the paper should have undergone a last round of checks before publication. These lapses serve as instructive case studies, spotlighting deficiencies in the current peer-review system.
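Detection tools of this kind are commonly reported to rely on statistical signals such as perplexity, that is, how predictable a passage is to a language model, with machine-generated text tending to score lower. The sketch below illustrates that idea only; it assumes the Hugging Face transformers library with GPT-2 as the scoring model, the helper function perplexity is hypothetical, and this is not the proprietary algorithm of GPTZero or any other commercial tool.

```python
# Minimal sketch: score a passage's perplexity under GPT-2 as a rough
# "AI-likeness" signal. Low perplexity (highly predictable text) is one
# heuristic commonly attributed to AI-text detectors.
# Illustrative only -- not the algorithm of any specific product.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the perplexity of `text` under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over next-token predictions.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return float(torch.exp(loss))

sample = "The kidney filters blood and regulates fluid and electrolyte balance."
print(f"Perplexity: {perplexity(sample):.1f}")
```

In practice, a detector would compare such scores against thresholds calibrated on large human-written and machine-generated corpora rather than judging a single passage in isolation.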

2.3. The Infiltration of AI in Academic Theses

The problem of AI-generated content is not limited to scholarly articles; it has also infiltrated graduate-level theses. A survey conducted by Intelligent revealed that nearly 30% of college students have used ChatGPT to complete a written assignment, and although 75% considered it a form of cheating, they continue to use it for academic writing [66]. For example, a master's thesis from the Department of Letters and English Language displayed unmistakable signs of AI-generated text [67]. The thesis, focused on Arab American literary characters and titled "The Reality of Contemporary Arab-American Literary Character and the Idea of the Third Space Female Character Analysis of Abu Jaber Novel Arabian Jazz", included several phrases commonly produced by AI language models like ChatGPT, among them disclaimers such as "I apologize, but as an AI language model, I am unable to rewrite any text without having the original text to work with". The presence of such language in a master's thesis is a concerning sign that AI-generated content is seeping into even the most rigorous levels of academic scholarship. Dr. Jayachandran, a writing instructor, published a book titled "ChatGPT Guide to Scientific Thesis Writing", which offers undergraduate, postgraduate, and doctoral students guidance on crafting abstracts, selecting impactful titles, conducting comprehensive literature reviews, and constructing compelling research chapters [68]. This situation calls into question the effectiveness of existing safeguards for maintaining academic integrity within educational institutions. While there is no research indicating the extent of AI tool usage in nephrology-related academic theses, the increasing application of these tools in the field is noteworthy.

2.4. The Impact on Grant Applications

The use of AI-generated content is not limited to academic papers and theses; it is also infiltrating the grant application process. A recent article in The Guardian [69] reported that some assessor reports were crafted with the help of ChatGPT; one academic even found the phrase "regenerate response", a button label specific to the ChatGPT interface, in their assessor reports. A Nature survey of over 1600 researchers worldwide revealed that more than 25% use AI to assist with manuscript writing and more than 15% use the technology to aid in grant proposal writing [70]. The use of ChatGPT in grant proposal writing has not only significantly reduced the workload but has also produced outstanding results, suggesting weaknesses in the grant application process [71]. This also raises concerns that peer reviewers, who play a crucial role in allocating research funds, might not be diligently reviewing the applications they are tasked with assessing. The ramifications of this oversight are significant, with the potential for misallocation of crucial research funding. The issue is exacerbated by the high levels of stress and substantial workloads that academics routinely face: researchers are often tasked with reviewing a considerable number of lengthy grant proposals in addition to fulfilling regular academic duties such as publishing, peer reviewing, and administrative responsibilities. Given the enormity of these pressures, it becomes more understandable why some might resort to shortcuts such as AI-generated content to cope with their responsibilities. At present, the degree to which AI tools are employed in nephrology grant applications is unclear, yet given the rapid rise in AI adoption, attention should be drawn to this area.

2.5. The Inevitability of AI in Academia

The incorporation of AI into academic endeavors is not just a possibility; it is an unavoidable progression [72]. It becomes imperative for universities, publishers, and other academic service providers to give due consideration to AI tools. This entails comprehending their capabilities, recognizing their limitations, and being mindful of the ethical considerations tied to their utilization [73]. Rather than debating whether AI should be used, the primary focus should revolve around how it can be harnessed responsibly and effectively [74]. To ensure that AI acts as a supportive asset rather than an impediment to academic integrity, it is essential to establish clear guidelines and ethical parameters. For example, AI could be deployed to automate initial phases of literature reviews or data analysis, tasks that are often time-consuming but may not necessarily require human creativity [26,68]. However, it is crucial that the use of AI remains transparent, and any content generated using AI should be distinctly marked as such to uphold the integrity of the academic record. The key lies in striking a balance that permits the ethical and efficient application of AI in academia. This involves formulating policies and processes that facilitate academics’ use of AI tools while simultaneously ensuring that these tools are employed in a manner that upholds the stringent standards of academic work.

2.6. Proposed Solutions and Policy Recommendations

  • Advanced AI-driven plagiarism detection: AI-generated content often evades conventional plagiarism checkers. Implementing next-generation, AI-driven detection technologies could significantly alter this landscape. Such technologies should be designed to discern the subtle characteristics and structures unique to AI-generated text, facilitating its identification during review. A recent study compared the Japanese stylometric features of texts generated by ChatGPT (GPT-3.5 and GPT-4) with those written by humans and evaluated the two-class performance of a random forest classifier [75]. A random forest focusing on the rate of function words achieved 98.1% accuracy, and one using all stylometric features reached 100% on every performance index, including accuracy, recall, precision, and F1 score [75]. A minimal sketch of this stylometric approach appears after this list.
  • Revisiting and strengthening the peer-review process: The integrity of academic work hinges on a robust peer-review system, which has shown vulnerabilities in detecting AI-generated content. A viable solution could be the mandatory inclusion of an “AI scrutiny” phase within the peer-review workflow. This would equip reviewers with specialized tools for detecting AI-generated content. Furthermore, academic journals could deploy AI algorithms to preliminarily screen submissions for AI-generated material before they reach human evaluators.
  • Training and resources for academics on ethical AI usage: While academics excel in their specialized domains, they may lack awareness of the ethical dimensions of AI application in research. Educational institutions and scholarly organizations should develop and offer training modules focused on the ethical and responsible deployment of AI in academic endeavors, ranging from using AI in data analytics and literature surveys to crafting academic papers. In this era of significant advancements, scholars must recognize and embrace the potential of chatbots in education while simultaneously emphasizing the necessity of ethical guidelines governing their use. Chatbots offer many benefits, such as personalized instruction, 24/7 access to support, and greater engagement and motivation; however, it is crucial to ensure that they are used in a manner that aligns with educational values and promotes responsible learning [76]. In an effort to uphold academic integrity, the New York City Department of Education implemented a comprehensive ban on the use of AI tools on network devices [77]. Similarly, the International Conference on Machine Learning (ICML) prohibited authors from submitting scientific writing generated by AI tools [78]. Furthermore, many scientists disapproved of ChatGPT being listed as an author on research papers [58].
  • Acknowledgment for AI as contributor: The use of ChatGPT as an author of academic papers is a controversial issue that raises important questions about accountability and contributorship [79]. On the one hand, ChatGPT can be a valuable tool for assisting with the writing process. It can help to generate ideas, organize thoughts, and produce clear and concise prose. However, ChatGPT is not a human author. It cannot understand the nuances of human language or the complexities of academic discourse. As a result, ChatGPT-generated text can often be superficial and lacking in originality. In addition, the use of ChatGPT raises concerns about accountability. Who is responsible for the content of a paper that is written using ChatGPT? Is it the human user who prompts the chatbot, or is it the chatbot itself? If a paper is found to be flawed or misleading, who can be held accountable? The issue of contributorship is also relevant. If a paper is written using ChatGPT, who should be listed as the author? Should the human user be listed as the sole author, or should ChatGPT be given some form of credit? Therefore, promoting a culture of transparency and safeguarding the integrity of academic work necessitates the acknowledgment of AI’s contribution in research and composition endeavors. It is crucial for authors to openly disclose the degree of AI assistance in a specially designated acknowledgment section within the publication. This acknowledgment should specify the particular roles played by AI, whether in data analysis, literature reviews, or drafting segments of the manuscript, alongside any human oversight exerted to ensure ethical deployment of AI. For example: “Acknowledgment: We hereby recognize the aid of [Specific AI Tool/Technology] in carrying out data analytics, conducting literature surveys, and drafting initial versions of the manuscript. This AI technology enabled a more streamlined research process, under the careful supervision of [Names of Individuals] to comply with ethical guidelines. The perspectives generated by AI significantly contributed to the articulation of arguments in this publication, affirming its valuable input to our work”.
  • Inevitability of technological integration: While recognizing ethical concerns, this perspective holds that the adoption of advanced technologies such as AI in academia is inevitable. It recommends shifting the focus from resistance to the establishment of robust ethical frameworks and guidelines that ensure responsible AI usage [76]. A proactive stance on AI integration, firmly rooted in ethical principles, can allow academia to harness AI's advantages while mitigating the risks of unethical use. By fostering a culture of transparency, accountability, and continuous learning, the academic community can navigate the complexities of AI. This includes crafting policies that clearly define the ethical use of AI tools, creating mechanisms for disclosing AI assistance in academic work, and promoting collaborative efforts to explore and understand the implications of AI in academic writing and research.
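As referenced in the first bullet above, a stylometric detector can be prototyped with standard tools. The following is a minimal sketch, assuming scikit-learn; the FUNCTION_WORDS list, the function_word_rates helper, and the tiny corpus are illustrative placeholders, not the Japanese-language features or data of the cited study [75], which would require hundreds of labeled documents per class.

```python
# Minimal sketch of stylometric AI-text detection with a random forest,
# loosely following the approach described in [75]: represent each text
# by the relative frequency of common function words, then train a
# two-class (human vs. AI) classifier. All data below are placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "for"]

def function_word_rates(text: str) -> list[float]:
    """Rate of each function word relative to total tokens in the text."""
    tokens = text.lower().split()
    n = max(len(tokens), 1)
    return [tokens.count(w) / n for w in FUNCTION_WORDS]

# Toy labeled corpus: (text, label), label 1 = AI-generated, 0 = human.
corpus = [
    ("The results of the study indicate that the intervention is effective in the cohort.", 1),
    ("It is important to note that the findings of the analysis are consistent with the literature.", 1),
    ("We saw patients improve, though a few dropped out early for reasons we never pinned down.", 0),
    ("Honestly, the trial was messy, but the numbers told their own story.", 0),
]
X = [function_word_rates(text) for text, _ in corpus]
y = [label for _, label in corpus]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```

With a realistically sized corpus, the same pipeline could report the accuracy, recall, precision, and F1 metrics quoted from the study, and the classifier's feature importances would indicate which function words carry the most signal.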
 

This entry is adapted from the peer-reviewed paper 10.3390/clinpract14010008
