ChatGPT is a conversational variant of OpenAI's large language model (LLM) GPT-3, trained on a dedicated dataset with a specific learning algorithm. Because it can be used for brainstorming in a conversational way, it has the potential to transform working environments across a range of professions.
1. Aspects of Large Language Models (LLMs) in Scientific Works
One significant issue is that ChatGPT might pose a danger from a scientific publishing point of view, as it can rewrite content in a way that makes plagiarism nearly impossible to detect. Tools exist to mitigate this problem by detecting AI-generated text, but although such applications are under active development, they do not yet function reliably, and a simple rewording by the user can fool the detection systems
[1]. Bhattacharya et al. highlighted that ChatGPT works from existing text yet can produce output that passes a plagiarism tracker undetected
[2]. Aljanabi et al. reported that the output from ChatGPT is not always accurate but emphasized that it can be helpful in academic writing (finding references, rewriting or generating text, proposing style, etc.) and that it can function as a search engine
[3]. It was also noted that it cannot handle mathematical calculations and other specific types of queries, so the widely cited help that ChatGPT can provide needs further assessment.
According to these scholars, the tool could also be useful for detecting security vulnerabilities. While the AI is supposed to be able to generate a literature review (or even complete papers and arguments
[4]), it actually lacks an understanding of the implicit ideas in a text
[5], which are incredibly important in human–human communication. This phenomenon can introduce errors into its answers. Also, intensive “Socratic” questioning can make ChatGPT change its answers, suggesting that they were not logically coherent or lacked sufficient proof and grounding
[6]. The tool often produces convincing text whose details may nevertheless contain falsehoods
[2], and due to the black-box setup of the AI tools
[7], transparent mitigation is difficult. Another issue with using an LLM such as ChatGPT is that it is trained on a very large dataset. While this data is supposed to be diverse, there is no guarantee of a diversity of opinions; the dataset is therefore likely to contain biases, which could propagate through the AI and appear in the generated text
[8], without the users' knowledge. Bias, commercialization and technical precision should therefore be investigated across a wide array of scientific and technical fields and professions.
2. Application in Medical Sciences
From a medical application point of view, ChatGPT could be useful in treating surgical patients (analyzing vital signs, pain tolerance and medical history, facilitating communication, etc.). It could also help doctors reach a diagnosis without having to do an extensive manual literature review
[9][10]. Macdonald et al. used the tool for a socio-medical experiment
[11]. They simulated the spread of a virus among a population and asked ChatGPT to help determine the effectiveness of a fictive vaccine and to draft a research paper. The abstracts ChatGPT generated passed plagiarism checks. Once the dataset was described, ChatGPT could explain it, propose potential study steps and generate code for analyzing the dataset. The code contained some faults, but ChatGPT could correct them after feedback. As for manuscript writing, ChatGPT generated a coherent abstract for the paper. It was also possible to use ChatGPT for a literature review, but in this example it provided faulty information. This important aspect needs to be assessed in other fields as well.
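As a minimal sketch (not the study's actual code) of the kind of analysis ChatGPT was asked to generate, the effectiveness of a fictive vaccine can be estimated from simulated infection counts; the function names and all numbers below are illustrative assumptions, not taken from the experiment:

```python
# Illustrative sketch of estimating the effectiveness of a fictive vaccine
# from simulated infection counts. All names and numbers are made up.

def attack_rate(infected: int, total: int) -> float:
    """Fraction of a group that became infected."""
    return infected / total

def vaccine_effectiveness(inf_vax: int, n_vax: int,
                          inf_unvax: int, n_unvax: int) -> float:
    """VE = 1 - (attack rate, vaccinated) / (attack rate, unvaccinated)."""
    return 1.0 - attack_rate(inf_vax, n_vax) / attack_rate(inf_unvax, n_unvax)

# Hypothetical simulated counts: 100 of 10,000 vaccinated and
# 500 of 10,000 unvaccinated individuals became infected.
ve = vaccine_effectiveness(100, 10_000, 500, 10_000)
print(f"Estimated vaccine effectiveness: {ve:.0%}")  # prints "Estimated vaccine effectiveness: 80%"
```

The sketch uses the standard risk-ratio definition of vaccine effectiveness; a real analysis would add confidence intervals and guard against empty groups.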
Macdonald et al. found that ChatGPT could become a resource enabling researchers to develop papers and studies faster, provided the generated answers are carefully assessed. Khan et al. found ChatGPT useful for checking grammar in written text and as a teaching tool for generating tasks in a medical environment
[9]. The authors of
[1] reached a similar conclusion. The tool can provide research assistance by summarizing text, answering questions and generating bibliographies. It can also translate, which could be useful for researchers who need to write in a second language and do not have a subscription translation tool. In clinical management, it could improve documentation, support decision-making and facilitate communication with patients. Deficiencies were noted, though: a lack of real understanding was reported, originating from the fixed, closed database; no data after 2021 was available; and it could at times generate unoriginal content or incorrect answers. This was also emphasized by Gordijn and ten Have
[12]. Liebrenz et al. showed that ChatGPT can even write an article for The Lancet Digital Health about AI and the ethics of medical publishing
[13]. They highlighted that the monetization of AI could create a divide between researchers of differing financial means. However, further monetized tools (such as GPT-4) were not yet available at the time, so the authors could not perform a critical comparison.
3. Application in Finance and Education
Moving on to a different profession, Dowling and Lucey used the tool to assess the research process in finance
[14]. A literature review was performed on both public data (already included in the training set) and private data (fed to ChatGPT). Idea generation, data identification, data preparation and the proposal of a testing framework were carried out. A board of referees judged the generated studies to be of acceptable quality for a peer-reviewed finance journal. However, data analysis itself was beyond ChatGPT's actual capabilities.
As for educational use, Rillig et al.
[15] discussed its application in environmental research and education. Owing to the working principle of the algorithm, biases in the training data can produce bias in ChatGPT's output. The LLM output can easily be mistaken for an expert's answer even though the model has no real understanding, and Rillig et al. highlighted these risks for applications as well. Research could be accelerated by outsourcing tasks to ChatGPT, improving workflow. Furthermore, it can help non-native English speakers write papers, develop ideas, etc. Nevertheless, ChatGPT could also raise issues of cheating in education. The papers usually do not propose in-depth solutions to these problems; they only imply the need for them.
4. Application in IT and in Engineering Sciences
From the perspective of IT sciences and electrical engineering, Wang et al. noted
[5] that Stack Overflow introduced a temporary ban on ChatGPT-generated code because only a low percentage of it was completely correct; the bot gave plausible but incorrect answers in the discussions presented. Surameery and Shakor highlighted the bug-fixing capabilities of the tool and suggested using ChatGPT as part of a more comprehensive debugging toolkit rather than as a sole solution for developers
[16]. Biswas presented mostly the positive aspects of the tool in programming, such as serving as a technical query-answering machine for explanations, guides and problem diagnosis
[17]. Vemprala et al. presented a possible application paradigm for robotics, in which ChatGPT partially substitutes for the engineer in the loop and a user can eventually employ the LLM as a tool for connecting to further robots that solve tasks
[18]. The literature on electronics, software and electrical engineering applications is very limited, and further investigation through various documented use cases is needed.
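The engineer-in-the-loop pattern described for robotics can be sketched roughly as follows; the LLM call is mocked, and every name here (the command whitelist, `validate_plan`, `mock_llm_plan`) is a hypothetical illustration rather than the paper's actual API:

```python
# Sketch of an LLM-in-the-loop robot control pattern: a user states a goal,
# the language model proposes a plan restricted to a whitelisted command set,
# and the plan is validated before execution. All names are hypothetical;
# the LLM call is replaced by a mock function.

ALLOWED_COMMANDS = {"move_to", "grasp", "release"}

def mock_llm_plan(goal: str) -> list[dict]:
    """Stand-in for a real LLM call that turns a natural-language goal
    into a sequence of robot commands."""
    return [
        {"command": "move_to", "args": {"x": 1.0, "y": 2.0}},
        {"command": "grasp", "args": {"object": goal}},
    ]

def validate_plan(plan: list[dict]) -> bool:
    """Reject any plan containing a command outside the whitelisted API."""
    return all(step["command"] in ALLOWED_COMMANDS for step in plan)

plan = mock_llm_plan("cup")
if validate_plan(plan):
    for step in plan:
        # In a real system, this would dispatch to the robot's control API.
        print(f"execute {step['command']} {step['args']}")
else:
    print("plan rejected: contains non-whitelisted commands")
```

Restricting the model's output to a fixed command set and validating each plan before execution keeps a human, or at least a verifier, in the loop, which matches the cautionary findings reported throughout this section.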