While OpenAI's models are the most prominent LLMs, they are not the only ones available. The popularity of ChatGPT led other companies to release their own competitors. For example, Meta released LLaMA (Large Language Model Meta AI) in February 2023
[10]. Google, who laid the groundwork for LLMs with the development of the transformer architecture, also developed their own LLMs. One of those is LaMDA (Language Model for Dialogue Applications)
[11], which was first announced in 2021. Another model called PaLM (Pathways Language Model)
[12] was first made available in March 2023. An updated version called PaLM 2 was announced later that year. The PaLM model is trained on a collection of web documents, books, Wikipedia, conversations, and GitHub code. In response to the release of ChatGPT, Google launched its own LLM-powered chatbot, named Bard, in March 2023. It was originally based on LaMDA models but now uses PaLM
[12].
2. Potential Problems in Conducting Qualitative Research
In qualitative research, several challenges need to be addressed. Like “quantitative research,” the term “qualitative research” does not have a singular definition. Different qualitative methods are required to cater to specific research objectives
[13]. However, qualitative research methods are often grouped together, leading to the application of inconsistent standards that fail to capture the essence of qualitative research.
As interpretation and analysis play a crucial role in qualitative research, there is the potential for subjective biases to influence the findings
[14]. Researchers’ personal beliefs, experiences, and preconceptions can inadvertently affect participant selection, data collection methods, and analysis, leading to biased results
[15]. For instance, even the formulation of interview questions can be affected by judgment and subjectivity, leading to suggestive questions or limiting participants’ freedom of expression. All these potential issues require careful consideration from the researchers’ side throughout the process.
Moreover, potential conflicts of interest may arise concerning the stakeholders involved in the study. Evaluating and including critical responses, as well as acknowledging possible conflicts of interest, constitute challenges for researchers. These challenges are not only related to the interview method, but similar complications can occur with other forms of qualitative research.
Purposive or convenience sampling is often used in qualitative studies, which may not provide representative results for a broader population. Limited sample sizes also raise concerns about the generalizability of findings
[16]. While sample size is less critical in qualitative studies than in quantitative ones, the randomization of participants still poses a potential problem. Efforts are often made to enhance the reproducibility of qualitative studies; however, significant challenges remain. For example, if a study is conducted using randomization in a school setting, different results may be obtained depending on which school and city are chosen. Online surveys can also present challenges, as responses tend to come primarily from ambitious participants, which may bias the study’s results. Consequently, researchers must carefully consider their sampling strategy and acknowledge the limitations of their study.
Establishing the trustworthiness, validity, and reliability of qualitative findings is another persistent challenge
[17]. Unlike quantitative research, which can rely on statistical measures for objectivity, qualitative research heavily depends on the researcher’s interpretation
[18]. Strategies such as triangulation, member checking, and inter-rater reliability can enhance validity and reliability but are not foolproof. Qualitative content analysis (e.g.,
[19]) tries to address the issue of objectivity in particular by providing verifiable guidelines for the analysis of a text (e.g., transcripts). Additionally, qualitative research often involves engaging in personal and sensitive discussions with participants (e.g.,
[20]). Researchers must ensure informed consent, protect confidentiality and privacy, and navigate ethical dilemmas, including power imbalances and the potential for harm. These ethical considerations add another layer of complexity to qualitative research.
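As an illustration of one of the strategies mentioned above, inter-rater reliability between two coders is commonly quantified with Cohen’s kappa, which corrects raw agreement for the agreement expected by chance. The following sketch (not drawn from the cited works; the theme labels are hypothetical) shows the computation on a small set of coded interview segments:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement between two coders,
    corrected for the agreement expected by chance."""
    assert len(coder_a) == len(coder_b) and len(coder_a) > 0
    n = len(coder_a)
    # Proportion of segments on which the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes assigned independently by two researchers
a = ["trust", "bias", "trust", "privacy", "bias",
     "trust", "privacy", "bias", "trust", "bias"]
b = ["trust", "bias", "trust", "privacy", "trust",
     "trust", "privacy", "bias", "trust", "privacy"]
print(round(cohens_kappa(a, b), 3))  # → 0.697
```

A kappa near 0.7, as here, is conventionally read as substantial agreement; such a figure can accompany the documentation of the coding process, though, as noted above, it does not make the analysis foolproof.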
In conclusion, qualitative research poses various challenges that researchers need to address. From inconsistent standards and subjective biases to sampling limitations, reproducibility challenges, and the establishment of validity and reliability, conducting qualitative research requires careful consideration and strategic planning. Furthermore, ethical concerns must be taken into account to ensure the well-being and privacy of participants. As large language models are trained on large collections of existing documents, and thus (most of the time) reflect real opinions and perspectives on certain topics, AI could offer solutions to some of these challenges. However, its introduction into qualitative research also raises questions about bias, transparency, and data privacy.
3. Existing Work on Qualitative Research Methods with AI and LLMs
Artificial intelligence and LLMs have not only been a topic of research since the release of ChatGPT (based on GPT-3.5) and the associated breakthrough in public perception; they already support numerous disciplines in practice, such as medicine
[21], healthcare
[22], and economics
[23]. All of these studies primarily examine artificial intelligence by means of qualitative research methods; they do not employ qualitative research methods with artificial intelligence. In education, on the other hand, the use of AI and LLMs is largely unexplored. For example, one of the few research papers in this area addressed how education, teaching, and learning could be improved by using AI for qualitative data analysis
[24].
Christou
[25] examined the widespread impact of artificial intelligence in research and academia, particularly in qualitative research through literature and systematic reviews, addressing its strengths, limitations, ethical dilemmas, and potential biases. He proposed five key considerations for its appropriate and reliable use, including understanding AI-generated data, addressing biases and ethical concerns, cross-referencing information, controlling the analysis process, and demonstrating the cognitive input and skills of the researcher throughout the study. In Christou’s discussion on the role of AI in qualitative research, the example of
InfraNodus was given: this AI system was designed to perform various tasks related to textual data analysis, such as categorizing information, creating clusters, and generating visual graphs. It can, for example, be applied to texts created from interviews
[25]. Furthermore, the researcher should still engage in some degree of manual coding or categorization, since analytical software and AI systems often rely on predefined rules or algorithms to identify patterns, themes, or keywords in text data. Additionally, the researcher is responsible for providing comprehensive documentation of the analysis methodology, its justification, and the precise execution procedure. The researcher must be able to explain the rationale and algorithms employed by the AI system in conducting the analysis, and the researcher’s cognitive and evaluative skills play a valuable role in the analytical process and the formulation of conclusions
[25]. Drawing such conclusions from an AI’s analyses to answer practical research questions requires the expertise and contextual knowledge of the researcher
[26,27]. It can be concluded that AI can be used in qualitative research (e.g., for systematic reviews, qualitative empirical studies, and conceptual studies), but only if the researcher adheres to certain key considerations and guidelines
[25].