Unethical Role of Artificial Intelligence in Scholarly Writing: History

The emergence of artificial intelligence (AI) has greatly propelled progress across various sectors, including academic nephrology. However, this advancement has also given rise to ethical challenges, notably in scholarly writing. AI’s capacity to automate labor-intensive tasks such as literature reviews and data analysis has created opportunities for unethical practices, with scholars incorporating AI-generated text into their manuscripts and potentially undermining academic integrity.

  • artificial intelligence
  • academia
  • plagiarism detection
  • machine learning
  • large language models
  • natural language processing
  • chatbot
  • ChatGPT
  • ethics

1. Introduction

Artificial intelligence (AI) is now a cornerstone of contemporary technological progress, fueling breakthroughs in a wide array of fields—from healthcare and finance to transportation and the arts—leading to enhanced efficiency and productivity [1]. In the medical realm, AI systems are poring over patient histories to forecast health outcomes [2], while in the financial world, they are dissecting market fluctuations to fine-tune investment approaches [3]. Self-driving vehicles are transforming how we think about transportation [4], and in entertainment, AI is the unseen curator of music playlists and film queues [5]. The scope of AI’s reach is both vast and awe-inspiring, especially when considering the capabilities of generative large language models such as ChatGPT [6], Bard [7], Bing Chat [8], and Claude [9]. Generative AI refers to a subset of AI that generates content, including text and images, by utilizing natural language processing. OpenAI introduced ChatGPT, an AI chatbot employing natural language processing to emulate human conversation. Its latest iteration, GPT-4, possesses image analysis capabilities known as GPT-4 Vision [10]. Google’s Bard is another AI-driven chat tool utilizing natural language processing and machine learning to simulate human-like conversations [7]. Microsoft’s Bing Chat, integrated into Bing’s search engine, enables users to engage with an AI chatbot for search inquiries rather than typing conventional search queries; it operates on the same model as ChatGPT (GPT-4) from OpenAI [8]. Claude, developed by Anthropic, is yet another AI chatbot in the field, currently powered by a language model called Claude 2 [9].
Within academia, AI’s growing influence is reshaping traditional methodologies [11]. These AI tools, such as chatbots, are capable of providing personalized medical advice [12], disseminating educational materials and improving medical education [13][14][15], aiding in clinical decision-making processes [16][17][18], identifying medical emergencies [19], and providing empathetic responses to patient queries [20][21][22].

2. AI’s Unethical Role in Scholarly Writing

The transformative impact of AI on various sectors is well documented, and academia is no exception [23][24][25]. While AI has been praised for its ability to expedite research by sifting through massive datasets and running complex simulations, its foray into the realm of academic writing is sparking debate. AI large language model tools like ChatGPT offer tantalizing possibilities: automating literature reviews, suggesting appropriate research methods, and even assisting in the composition of scholarly articles [26]. Ideally, these advancements could liberate researchers to concentrate on groundbreaking ideas and intricate problem-solving. Yet, the reality diverges sharply from this optimistic scenario (Figure 1).
Figure 1. Ethical concerns surrounding AI’s role in scholarly writing.

2.1. Examples of Academic Papers That Have Used AI-Generated Content, Focusing on ChatGPT-Based Chatbots

In a blinded, randomized, noninferiority controlled study, GPT-4 was found to be equal to human researchers in writing introduction sections with respect to publishability, readability, and content quality [27]. A case study that used GPT-3 to write a review on the effects of sleep deprivation on cognitive function examined whether the model met ICMJE co-authorship criteria, including conception, drafting, and accountability [28]; it also revealed challenges with accurate referencing. Another paper used GPT-3 to generate content on rapamycin in the context of Pascal’s wager; the model effectively summarized benefits and risks, advised consulting a healthcare provider, and was listed as first author [29].
In nephrology, only a small number of published papers currently feature AI-generated content. Even so, this is concerning, as it raises questions about the integrity of academic publications. A prior study employed ChatGPT to write the conclusion of “Assessing the Accuracy of ChatGPT on Core Questions in Glomerular Disease” [30]. A letter to the editor suggests that academic journals should clarify the proportion of AI language model-generated content in papers and that excessive use should be considered academic misconduct [31]. Many scientists disapprove of ChatGPT being listed as an author on research papers [32][33]. Recently, however, science journals have overturned their bans on ChatGPT-authored papers; the publishing group of the American Association for the Advancement of Science (AAAS) now allows authors to incorporate AI-written text and figures into papers if the use of the technology is acknowledged and explained [34]. Similarly, the WAME Recommendations on ChatGPT and Chatbots in Scholarly Publications were updated in response to the rapid increase in chatbot usage in scholarly publishing and concerns about content authenticity. The revised recommendations guide authors and reviewers on appropriately attributing chatbot use in their work and stress that journal editors need tools for manuscript screening to ensure content integrity [35]. Although ChatGPT’s language generation skills are remarkable, it should be used as a supplementary tool rather than a substitute for human expertise, especially in medical writing. Caution and verification are essential when employing AI in such contexts to ensure accuracy and reliability. We should proactively learn about the capabilities, constraints, and possible future developments of these AI tools [36].

2.2. Systemic Failures: The Root of the Problem

Such lapses in oversight raise critical questions about the efficacy of the peer-review system, which is intended to serve as a multilayered defense for maintaining academic integrity. The first layer that failed was the co-authors, who apparently did not catch the AI-generated content. The second layer was editorial oversight, which should have flagged the issue before the paper was even sent for peer review. Numerous AI solutions, such as GPTZero, Turnitin AI detection, and AI Detector Pro, have been created to help students, research mentors, educators, journal editors, and others identify text produced by ChatGPT, though the majority of these tools operate on a subscription model [37]. The third layer was the peer-review process itself, intended to be a stringent evaluation of a paper’s merit and originality. A study showed that ChatGPT has the potential to generate human-quality text [38], which raises concerns about reviewers’ ability to determine whether research was written by a human or an AI tool. As ChatGPT and other language models continue to improve, distinguishing between AI-generated and human-written text is likely to become increasingly difficult [39]. A study of 72 experienced reviewers of applied linguistics research article manuscripts showed that only 39% were able to distinguish between AI-produced and human-written texts; the top four rationales reviewers used were a text’s continuity and coherence, specificity or vagueness of details, familiarity and voice, and writing quality at the sentence level [40]. Additionally, the accuracy of identification varied depending on the specific texts examined [40]. The fourth layer was the revision phase, where the paper should have been corrected based on reviewers’ feedback, yet the AI-generated text remained. The fifth and final layer was the proofing stage, where the paper should have undergone a last round of checks before being published.
These lapses serve as instructive case studies, spotlighting the deficiencies in the current peer-review system.

2.3. The Infiltration of AI in Academic Theses

The problem of AI-generated content is not limited to scholarly articles; it has also infiltrated graduate-level theses. A survey conducted by Intelligent revealed that nearly 30% of college students have used ChatGPT to complete a written assignment, and although 75% considered it a form of cheating, they continue to use it for academic writing [41]. For example, a master’s thesis from the Department of Letters and English Language displayed unmistakable signs of AI-generated text [42]. The thesis, focused on Arab American literary characters and titled “The Reality of Contemporary Arab-American Literary Character and the Idea of the Third Space Female Character Analysis of Abu Jaber Novel Arabian Jazz”, included several phrases commonly produced by AI language models like ChatGPT, among them disclaimers such as “I apologize, but as an AI language model, I am unable to rewrite any text without having the original text to work with”. The presence of such language in a master’s thesis is a concerning sign that AI-generated content is seeping into even the most rigorous levels of academic scholarship. Meanwhile, Dr. Jayachandran, a writing instructor, has published a book titled “ChatGPT: Guide to Scientific Thesis Writing”, which offers guidance on crafting an abstract, selecting a title, conducting literature reviews, and constructing research chapters for undergraduate, postgraduate, and doctoral students [43]. This situation calls into question the effectiveness of existing safeguards for maintaining academic integrity within educational institutions. While there is no research indicating the extent of AI tool usage in nephrology-related academic theses, the increasing application of these tools in the field is noteworthy.

2.4. The Impact on Grant Applications

The use of AI-generated content is not limited to academic papers and theses; it is also infiltrating the grant application process. A recent article in The Guardian highlighted that some Australian Research Council assessor reports appeared to have been crafted with the help of ChatGPT [44]. One academic even found the phrase “regenerate response”, a feature specific to the ChatGPT interface, in their assessor reports. A Nature survey of over 1600 researchers worldwide revealed that more than 25% use AI to assist with manuscript writing and more than 15% use the technology to aid in grant proposal writing [45]. Reports that ChatGPT can significantly reduce the workload of grant proposal writing while producing strong results suggest that the grant application system itself is flawed [46]. This also raises concerns that peer reviewers, who play a crucial role in allocating research funds, might not be diligently reviewing the applications they are tasked with assessing. The ramifications of this oversight are significant, with the potential for misallocation of crucial research funding. The issue is exacerbated by the high levels of stress and substantial workloads that academics routinely face: researchers are often tasked with reviewing a considerable number of lengthy grant proposals in addition to fulfilling regular academic duties such as publishing, peer reviewing, and administration. Given these pressures, it becomes more understandable why some might resort to shortcuts such as AI-generated content. At present, the degree to which AI tools are employed in nephrology grant applications is unclear, yet given the rapid rise in AI adoption, attention should be drawn to this area.

2.5. The Inevitability of AI in Academia

The incorporation of AI into academic endeavors is not just a possibility; it is an unavoidable progression [47]. It becomes imperative for universities, publishers, and other academic service providers to give due consideration to AI tools. This entails comprehending their capabilities, recognizing their limitations, and being mindful of the ethical considerations tied to their utilization [48]. Rather than debating whether AI should be used, the primary focus should revolve around how it can be harnessed responsibly and effectively [49]. To ensure that AI acts as a supportive asset rather than an impediment to academic integrity, it is essential to establish clear guidelines and ethical parameters. For example, AI could be deployed to automate initial phases of literature reviews or data analysis, tasks that are often time-consuming but may not necessarily require human creativity [43][50]. However, it is crucial that the use of AI remains transparent, and any content generated using AI should be distinctly marked as such to uphold the integrity of the academic record. The key lies in striking a balance that permits the ethical and efficient application of AI in academia. This involves formulating policies and processes that facilitate academics’ use of AI tools while simultaneously ensuring that these tools are employed in a manner that upholds the stringent standards of academic work.

2.6. Proposed Solutions and Policy Recommendations

  • Advanced AI-driven plagiarism detection: AI-generated content often evades conventional plagiarism checkers. Implementing next-generation, AI-driven detection technologies could significantly alter this landscape. Such technologies should be designed to discern the subtle characteristics and structures unique to AI-generated text, facilitating its identification during review. A recent study compared Japanese stylometric features of texts generated using ChatGPT (GPT-3.5 and GPT-4) with those written by humans and verified the classification performance of a random forest classifier on the two classes [51]. A random forest using only the rate of function words achieved 98.1% accuracy, and one using all stylometric features reached 100% on all performance indices, including accuracy, recall, precision, and F1 score [51].
  • Revisiting and strengthening the peer-review process: The integrity of academic work hinges on a robust peer-review system, which has shown vulnerabilities in detecting AI-generated content. A viable solution could be the mandatory inclusion of an “AI scrutiny” phase within the peer-review workflow. This would equip reviewers with specialized tools for detecting AI-generated content. Furthermore, academic journals could deploy AI algorithms to preliminarily screen submissions for AI-generated material before they reach human evaluators.
  • Training and resources for academics on ethical AI usage: While academics excel in their specialized domains, they may lack awareness of the ethical dimensions of AI application in research. Educational institutions and scholarly organizations should develop and offer training modules that focus on the ethical and responsible deployment of AI in academic endeavors, ranging from using AI in data analytics and literature surveys to crafting academic papers. In this era of significant advancements, scholars must recognize and embrace the potential of chatbots in education while emphasizing the necessity of ethical guidelines governing their use. Chatbots offer many benefits, such as personalized instruction, 24/7 access to support, and greater engagement and motivation; however, it is crucial to ensure that they are used in a manner that aligns with educational values and promotes responsible learning [52]. In an effort to uphold academic integrity, the New York City Department of Education implemented a comprehensive ban on the use of AI tools on network devices [53]. Similarly, the International Conference on Machine Learning (ICML) prohibited authors from submitting scientific writing generated by AI tools [54]. Furthermore, many scientists have disapproved of ChatGPT being listed as an author on research papers [32].
  • Acknowledgment for AI as contributor: The use of ChatGPT as an author of academic papers is a controversial issue that raises important questions about accountability and contributorship [55]. On the one hand, ChatGPT can be a valuable tool for assisting with the writing process. It can help to generate ideas, organize thoughts, and produce clear and concise prose. However, ChatGPT is not a human author. It cannot understand the nuances of human language or the complexities of academic discourse. As a result, ChatGPT-generated text can often be superficial and lacking in originality. In addition, the use of ChatGPT raises concerns about accountability. Who is responsible for the content of a paper that is written using ChatGPT? Is it the human user who prompts the chatbot, or is it the chatbot itself? If a paper is found to be flawed or misleading, who can be held accountable? The issue of contributorship is also relevant. If a paper is written using ChatGPT, who should be listed as the author? Should the human user be listed as the sole author, or should ChatGPT be given some form of credit? Therefore, promoting a culture of transparency and safeguarding the integrity of academic work necessitates the acknowledgment of AI’s contribution in research and composition endeavors. It is crucial for authors to openly disclose the degree of AI assistance in a specially designated acknowledgment section within the publication. This acknowledgment should specify the particular roles played by AI, whether in data analysis, literature reviews, or drafting segments of the manuscript, alongside any human oversight exerted to ensure ethical deployment of AI. For example: “Acknowledgment: We hereby recognize the aid of [Specific AI Tool/Technology] in carrying out data analytics, conducting literature surveys, and drafting initial versions of the manuscript. 
This AI technology enabled a more streamlined research process, under the careful supervision of [Names of Individuals] to comply with ethical guidelines. The perspectives generated by AI significantly contributed to the articulation of arguments in this publication, affirming its valuable input to our work”.
  • Inevitability of technological integration: While recognizing the ethical concerns, this argument holds that the adoption of advanced technologies such as AI in academia is inevitable. It recommends shifting the focus from resistance to the establishment of robust ethical frameworks and guidelines that ensure responsible AI usage [52]. From this perspective, a proactive stance on AI integration, firmly rooted in ethical principles, can harness AI’s advantages in academia while mitigating the risks of unethical use. By fostering a culture of transparency, accountability, and continuous learning, the academic community can navigate the complexities of AI. This includes crafting policies that clearly define the ethical use of AI tools, creating mechanisms for disclosing AI assistance in academic work, and promoting collaborative efforts to explore and comprehend the implications of AI in academic writing and research.
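The stylometric detection idea behind the first recommendation above can be illustrated with a minimal sketch: represent each text by its function-word rates and feed those vectors to an off-the-shelf classifier such as a random forest. The function-word list below is a small, illustrative English set (the cited study used Japanese function words such as particles and auxiliaries), and the tokenization is deliberately simplistic; this is a sketch of the feature-extraction idea, not a working detector.

```python
from collections import Counter

# Illustrative English function words; the cited study used Japanese
# function words (particles, auxiliary verbs), so this set is a stand-in.
FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "and", "but", "to", "is", "that"}

def _tokens(text: str) -> list[str]:
    """Lowercase word tokens with surrounding punctuation stripped."""
    raw = (t.strip(".,;:!?\"'()").lower() for t in text.split())
    return [t for t in raw if t]

def function_word_rate(text: str) -> float:
    """Overall fraction of tokens that are function words."""
    toks = _tokens(text)
    if not toks:
        return 0.0
    return sum(1 for t in toks if t in FUNCTION_WORDS) / len(toks)

def feature_vector(text: str) -> list[float]:
    """Per-word rates for each function word, in a fixed order, suitable
    as one row of input to a classifier such as a random forest."""
    toks = _tokens(text)
    total = max(len(toks), 1)
    counts = Counter(toks)
    return [counts[w] / total for w in sorted(FUNCTION_WORDS)]
```

In practice, `feature_vector` outputs would be computed for a labeled corpus of human- and AI-written texts and passed to a trained classifier (e.g., `sklearn.ensemble.RandomForestClassifier`), which is the setup the stylometric study evaluated.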

This entry is adapted from the peer-reviewed paper 10.3390/clinpract14010008

References

  1. Moshawrab, M.; Adda, M.; Bouzouane, A.; Ibrahim, H.; Raad, A. Reviewing Federated Machine Learning and Its Use in Diseases Prediction. Sensors 2023, 23, 2112.
  2. Rojas, J.C.; Teran, M.; Umscheid, C.A. Clinician Trust in Artificial Intelligence: What is Known and How Trust Can Be Facilitated. Crit. Care Clin. 2023, 39, 769–782.
  3. Boukherouaa, E.B.; Shabsigh, M.G.; AlAjmi, K.; Deodoro, J.; Farias, A.; Iskender, E.S.; Mirestean, M.A.T.; Ravikumar, R. Powering the Digital Economy: Opportunities and Risks of Artificial Intelligence in Finance; International Monetary Fund (IMF eLIBRARY): Washington, DC, USA, 2021; Volume 2021, pp. 5–20.
  4. Gülen, K. A Match Made in Transportation Heaven: AI and Self-Driving Cars. Available online: https://dataconomy.com/2022/12/28/artificial-intelligence-and-self-driving/ (accessed on 29 December 2022).
  5. Frąckiewicz, M. The Future of AI in Entertainment. Available online: https://ts2.space/en/the-future-of-ai-in-entertainment/ (accessed on 24 June 2023).
  6. Introducing ChatGPT. Available online: https://openai.com/blog/chatgpt (accessed on 18 April 2023).
  7. Bard. Available online: https://bard.google.com/chat (accessed on 21 March 2023).
  8. Bing Chat with GPT-4. Available online: https://www.microsoft.com/en-us/bing?form=MA13FV (accessed on 14 October 2023).
  9. Meet Claude. Available online: https://claude.ai/chats (accessed on 7 February 2023).
  10. OpenAI. GPT-4V(ision) System Card. Available online: https://cdn.openai.com/papers/GPTV_System_Card.pdf (accessed on 25 September 2023).
  11. Majnaric, L.T.; Babic, F.; O’Sullivan, S.; Holzinger, A. AI and Big Data in Healthcare: Towards a More Comprehensive Research Framework for Multimorbidity. J. Clin. Med. 2021, 10, 766.
  12. Joshi, G.; Jain, A.; Araveeti, S.R.; Adhikari, S.; Garg, H.; Bhandari, M. FDA Approved Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: An Updated Landscape. Available online: https://www.medrxiv.org/content/10.1101/2022.12.07.22283216v3 (accessed on 12 December 2022).
  13. Oh, N.; Choi, G.S.; Lee, W.Y. ChatGPT goes to the operating room: Evaluating GPT-4 performance and its potential in surgical education and training in the era of large language models. Ann. Surg. Treat. Res. 2023, 104, 269–273.
  14. Eysenbach, G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med. Educ. 2023, 9, e46885.
  15. Sallam, M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare 2023, 11, 887.
  16. Reese, J.T.; Danis, D.; Caulfied, J.H.; Casiraghi, E.; Valentini, G.; Mungall, C.J.; Robinson, P.N. On the limitations of large language models in clinical diagnosis. medRxiv 2023.
  17. Eriksen, A.V.; Möller, S.; Ryg, J. Use of GPT-4 to Diagnose Complex Clinical Cases. NEJM AI 2023, 1–3.
  18. Kanjee, Z.; Crowe, B.; Rodman, A. Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge. JAMA 2023, 330, 78–80.
  19. Zuniga Salazar, G.; Zuniga, D.; Vindel, C.L.; Yoong, A.M.; Hincapie, S.; Zuniga, A.B.; Zuniga, P.; Salazar, E.; Zuniga, B. Efficacy of AI Chats to Determine an Emergency: A Comparison Between OpenAI’s ChatGPT, Google Bard, and Microsoft Bing AI Chat. Cureus 2023, 15, e45473.
  20. Ayers, J.W.; Poliak, A.; Dredze, M.; Leas, E.C.; Zhu, Z.; Kelley, J.B.; Faix, D.J.; Goodman, A.M.; Longhurst, C.A.; Hogarth, M.; et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern. Med. 2023, 183, 589–596.
  21. Lee, P.; Bubeck, S.; Petro, J. Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine. N. Engl. J. Med. 2023, 388, 1233–1239.
  22. Mello, M.M.; Guha, N. ChatGPT and Physicians’ Malpractice Risk. JAMA Health Forum 2023, 4, e231938.
  23. Kurian, N.; Cherian, J.M.; Sudharson, N.A.; Varghese, K.G.; Wadhwa, S. AI is now everywhere. Br. Dent. J. 2023, 234, 72.
  24. Gomes, W.J.; Evora, P.R.B.; Guizilini, S. Artificial Intelligence is Irreversibly Bound to Academic Publishing—ChatGPT is Cleared for Scientific Writing and Peer Review. Braz. J. Cardiovasc. Surg. 2023, 38, e20230963.
  25. Kitamura, F.C. ChatGPT Is Shaping the Future of Medical Writing But Still Requires Human Judgment. Radiology 2023, 307, e230171.
  26. Huang, J.; Tan, M. The role of ChatGPT in scientific communication: Writing better scientific review articles. Am. J. Cancer Res. 2023, 13, 1148–1154.
  27. Sikander, B.; Baker, J.J.; Deveci, C.D.; Lund, L.; Rosenberg, J. ChatGPT-4 and Human Researchers Are Equal in Writing Scientific Introduction Sections: A Blinded, Randomized, Non-inferiority Controlled Study. Cureus 2023, 15, e49019.
  28. Osmanovic-Thunström, A.; Steingrimsson, S. Does GPT-3 qualify as a co-author of a scientific paper publishable in peer-review journals according to the ICMJE criteria? A case study. Discov. Artif. Intell. 2023, 3, 12.
  29. ChatGPT Generative Pre-trained Transformer; Zhavoronkov, A. Rapamycin in the context of Pascal’s Wager: Generative pre-trained transformer perspective. Oncoscience 2022, 9, 82–84.
  30. Miao, J.; Thongprayoon, C.; Cheungpasitporn, W. Assessing the Accuracy of ChatGPT on Core Questions in Glomerular Disease. Kidney Int. Rep. 2023, 8, 1657–1659.
  31. Tang, G. Letter to editor: Academic journals should clarify the proportion of NLP-generated content in papers. Account. Res. 2023, 1–2.
  32. Stokel-Walker, C. ChatGPT listed as author on research papers: Many scientists disapprove. Nature 2023, 613, 620–621.
  33. Bahsi, I.; Balat, A. The Role of AI in Writing an Article and Whether it Can Be a Co-author: What if it Gets Support From 2 Different AIs Like ChatGPT and Google Bard for the Same Theme? J. Craniofac. Surg. 2023.
  34. Grove, J. Science Journals Overturn Ban on ChatGPT-Authored Papers. Available online: https://www.timeshighereducation.com/news/science-journals-overturn-ban-chatgpt-authored-papers#:~:text=The%20prestigious%20Science%20family%20of,intelligence%20tools%20in%20submitted%20papers (accessed on 16 November 2023).
  35. Zielinski, C.; Winker, M.A.; Aggarwal, R.; Ferris, L.E.; Heinemann, M.; Lapena, J.F., Jr.; Pai, S.A.; Ing, E.; Citrome, L.; Alam, M.; et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. Colomb. Médica 2023, 54, e1015868.
  36. Daugirdas, J.T. OpenAI’s ChatGPT and Its Potential Impact on Narrative and Scientific Writing in Nephrology. Am. J. Kidney Dis. 2023, 82, A13–A14.
  37. Liu, H.; Azam, M.; Bin Naeem, S.; Faiola, A. An overview of the capabilities of ChatGPT for medical writing and its implications for academic integrity. Health Inf. Libr. J. 2023, 40, 440–446.
  38. Dönmez, I.; Idil, S.; Gulen, S. Conducting Academic Research with the AI Interface ChatGPT: Challenges and Opportunities. J. STEAM Educ. 2023, 6, 101–118.
  39. Else, H. Abstracts written by ChatGPT fool scientists. Nature 2023, 613, 423.
  40. Casal, J.E.; Kessler, M. Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Res. Methods Appl. Linguist. 2023, 2, 100068.
  41. Nearly 1 in 3 College Students Have Used Chatgpt on Written Assignments. Available online: https://www.intelligent.com/nearly-1-in-3-college-students-have-used-chatgpt-on-written-assignments/ (accessed on 23 January 2023).
  42. Kamilia, B. The Reality of Contemporary Arab-American Literary Character and the Idea of the Third Space Female Character Analysis of Abu Jaber Novel Arabian Jazz. Ph.D. Thesis, Kasdi Merbah Ouargla University, Ouargla, Algeria, 2023.
  43. Jayachandran, M. ChatGPT: Guide to Scientific Thesis Writing. Independently Published. 2023. Available online: https://www.barnesandnoble.com/w/chatgpt-guide-to-scientific-thesis-writing-jayachandran-m/1144451253 (accessed on 5 December 2023).
  44. Lu, D. Are Australian Research Council Reports Being Written by ChatGPT? Available online: https://www.theguardian.com/technology/2023/jul/08/australian-research-council-scrutiny-allegations-chatgpt-artifical-intelligence (accessed on 7 July 2023).
  45. Van Noorden, R.; Perkel, J.M. AI and science: What 1,600 researchers think. Nature 2023, 621, 672–675.
  46. Parrilla, J.M. ChatGPT use shows that the grant-application system is broken. Nature 2023, 623, 443.
  47. Khan, S.H. AI at Doorstep: ChatGPT and Academia. J. Coll. Physicians Surg. Pak. 2023, 33, 1085–1086.
  48. Jeyaraman, M.; Ramasubramanian, S.; Balaji, S.; Jeyaraman, N.; Nallakumarasamy, A.; Sharma, S. ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World J. Methodol. 2023, 13, 170–178.
  49. Meyer, J.G.; Urbanowicz, R.J.; Martin, P.C.N.; O’Connor, K.; Li, R.; Peng, P.C.; Bright, T.J.; Tatonetti, N.; Won, K.J.; Gonzalez-Hernandez, G.; et al. ChatGPT and large language models in academia: Opportunities and challenges. BioData Min. 2023, 16, 20.
  50. Suppadungsuk, S.; Thongprayoon, C.; Krisanapan, P.; Tangpanithandee, S.; Garcia Valencia, O.; Miao, J.; Mekraksakit, P.; Kashani, K.; Cheungpasitporn, W. Examining the Validity of ChatGPT in Identifying Relevant Nephrology Literature: Findings and Implications. J. Clin. Med. 2023, 12, 5550.
  51. Zaitsu, W.; Jin, M. Distinguishing ChatGPT(-3.5, -4)-generated and human-written papers through Japanese stylometric analysis. PLoS ONE 2023, 18, e0288453.
  52. Koo, M. Harnessing the potential of chatbots in education: The need for guidelines to their ethical use. Nurse Educ. Pract. 2023, 68, 103590.
  53. Yang, M. New York City Schools Ban AI Chatbot That Writes Essays and Answers Prompts. Available online: https://www.theguardian.com/us-news/2023/jan/06/new-york-city-schools-ban-ai-chatbot-chatgpt (accessed on 6 January 2023).
  54. Vincent, J. Top AI Conference Bans Use of ChatGPT and AI Language Tools to Write Academic Papers. Available online: https://www.theverge.com/2023/1/5/23540291/chatgpt-ai-writing-tool-banned-writing-academic-icml-paper (accessed on 5 January 2023).
  55. Siegerink, B.; Pet, L.A.; Rosendaal, F.R.; Schoones, J.W. ChatGPT as an author of academic papers is wrong and highlights the concepts of accountability and contributorship. Nurse Educ. Pract. 2023, 68, 103599.