Designing for Hybrid Intelligence: Comparison
Please note this is a comparison between Version 3 by Antonio Correia and Version 2 by Jason Zhu.

This entry summarizes the taxonomic framework proposed in the article entitled "Designing for Hybrid Intelligence: A Taxonomy and Survey of Crowd-Machine Interaction". With the widespread availability and pervasiveness of artificial intelligence (AI) in many application areas across the globe, the role of crowdsourcing has seen an upsurge in importance for scaling up data-driven algorithms in rapid cycles through a relatively low-cost distributed workforce or even on a volunteer basis. However, there is a lack of systematic and empirical examination of the interplay among the processes and activities that make up crowd-machine hybrid interaction. To uncover the unique characteristics, uses, limitations, and prospects of AI-powered crowdsourcing, and to characterize the human-centered AI design space involving ensembles of crowds and algorithms together with their symbiotic relations and requirements, a Computer-Supported Cooperative Work (CSCW) lens strongly rooted in the taxonomic tradition of conceptual scheme development is taken, with the aim of aggregating and characterizing some of the main component entities in this burgeoning domain from a socio-technical perspective grounded in hybrid machine-crowd interaction. To this end, a scoping review of the existing literature was performed in order to frame the relevant aspects of this particular form of hybrid crowd-AI centered systems. A theoretically grounded and empirically validated analytical framework is proposed for the study of crowd-machine interaction and its environment.
Based on a review and several cross-sectional analyses of research studies comprising hybrid forms of human interaction with AI systems and applications at a crowd scale, the available literature was distilled and incorporated into a unifying framework comprised of taxonomic units distributed across integration dimensions that range from the original time and space axes in which every collaborative activity takes place to the main attributes that constitute a hybrid intelligence architecture. From understanding the role of crowd-AI ethicality to the analysis of the spatio-temporal characteristics of crowd activity and the behavioral traces left by crowd workers as a way of improving performance outcomes and user experience (UX) design, this entry unveils some important properties and component entities that must be taken into account in the design and development of intelligent systems combining crowds and algorithms, in scenarios that go beyond the single experience of a human interacting with technology to comprise a vast set of massive machine-crowd interactive relationships.

 
  • conceptual framework
  • crowd-machine hybrid interaction
  • design implications

1. Introduction and Context

Crowd-centered design is far from a trivial undertaking, and this is even more challenging when trying to implement hybrid intelligence models incorporating human cognition into algorithmic-crowdsourcing workflows [1]. In fact, crowd-algorithm interaction has recently reached a certain level of maturity, and a vast range of crowd-powered algorithms have successfully been applied in areas like medical image segmentation [2] and games with a purpose (GWAP) [3]. In these instances, crowds of untrained (non-expert) online workers have proved to provide similar results in terms of detection accuracy when compared to other groups such as domain knowledge experts, medical students, and experienced crowd workers. Further investigations in this burgeoning domain have also shown that the use of crowd-algorithm hybrids can outperform crowd-only techniques in accomplishing tasks like examining protein interactions and chemical reactions that are very common in the field of network biology [4]. Nonetheless, the taxonomic rationale behind the mass interaction efforts between crowds and machines as an integrated and complex socio-technical system is not completely understood, and there is a need to find novel ways of characterizing this body of work in its whole range. To mitigate this brittleness, a review of the main activities and contexts in which such crowd-AI ensembles have been investigated was carried out to develop a taxonomic scheme as comprehensive as possible to capture the nuances that are unique in comparison with other types of interactions between humans and computational systems.
For more than three decades, taxonomy development has been seen as a crucial part of socio-technical research within the field of CSCW [5]. To some extent, taxonomies provide a useful guide and theoretical foundation for assessing technological developments due to their capability to organize complex concepts and knowledge structures into understandable formats [6]. By going back in the course of time, one may find several taxonomic approaches that formed the basis for the understanding of the task types that are currently present in many crowdsourcing systems. For a review of prior taxonomic proposals, the reader is referred to Harris and co-authors [7]. In retrospect, McGrath [8] proposed a circumplex model of group tasks intended to characterize their nature (e.g., decision-making) into four quadrants that reflect the processes involved in their execution (i.e., generate, choose, negotiate, and execute). When moving even further back in history, Shaw [9] asserted the importance of aspects like task difficulty and intrinsic interest, which are seen as foundational in several conceptual frameworks proposed to characterize the broader crowdsourcing phenomena (e.g., [10][11]). According to some authors, Johansen’s [12] time-space matrix is a landmark in the field of CSCW and inspired the development of descriptive models such as the Model of Coordinated Action (MoCA) [13], which frames each collaborative work arrangement on a continuum of synchronicity (synchronous vs. asynchronous), physical distribution, scale (i.e., number of participants), number of communities of practice involved, nascence and planned permanence of coordinated actions, and turnover. 
More recently, Renyi and colleagues [14] executed a set of data collection and processing procedures involving structured interviews in order to create a taxonomic scheme covering the components related to collaboration technology support in home care work, while other authors have devoted most of their efforts to the design of innovative taxonomic interfaces [15]. In addition, there is now an emerging body of research documenting the different levels of hybrid intelligence in human-algorithm interactions.
From a more generic view, the concept of hybrid intelligence has been defined as the “combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them and achieving goals that were unreachable by either humans or machines” [16]. Stemming from this definition, experiments have shown that the time is now appropriate to develop a new taxonomic proposal that can be used for planning and assessing activities among humans (crowds) and algorithms in a hybrid mode. To the best of the authors’ knowledge, no other previous work has specifically focused on crowd-AI interaction, although there are some research works addressing the particularities of hybrid human-AI intelligence at a taxonomic level. For example, Pescetelli [17] stressed the role of algorithms as assistants, peers, facilitators, and system-level operators. On the other hand, Dellermann and associates [18] characterized the design space of hybrid intelligence systems and recalled the importance of the task itself and its characteristics as a central aspect of collaboration among humans and machines. In the same vein, Dubey et al. [19] proposed a taxonomy of human-AI teaming comprised of task properties, trust-related aspects, teaming characteristics (e.g., shared awareness), and the learning paradigm involved. However, these taxonomies have hitherto not fully explored the particularities of hybrid crowd-AI systems and their use cases in real-world applications. 
Through a qualitative inspection of conceptual frameworks, artifacts, case studies, and empirical results comprising some type of human-AI hybrid interaction at a massive scale, the contribution of this entry lies in systematically structuring a set of attributes and characteristics into an integrated taxonomy that arises as a continuum of co-evolving crowd-algorithmic partnerships intended to solve complex problems that neither humans nor machines can solve separately.
The entry is set out as follows. After a discussion of background work in Section 2, a description of the methodological steps toward the development of a taxonomy for hybrid crowd-AI systems is provided in Section 3. The resulting taxonomic framework is then presented and discussed in detail in Section 4, while Section 5 is concerned with the validation of the proposed taxonomy. Finally, possible extensions of this work are suggested in Section 6 by looking toward the future of hybrid systems from a socio-technical view of human-centered systems design.

2. Background and Scope

The point of departure for building the taxonomy presented in this entry was the existing work found at the intersectional spaces between human-computer interaction (HCI) and AI from a crowdsourcing perspective. Although the coining of the term ‘crowdsourcing’ took place in the mid-2000s, some may argue that its origin is rooted in the seminal work of the physicist and astronomer Denison Olmsted, who used news media as a crowdsourcing strategy for obtaining accurate observations on the Leonid meteor shower that was witnessed across the United States in 1833 [20]. What is interesting to note is that the sequential steps and general techniques used by Olmsted about nineteen decades ago constitute the basis for most of the current crowdsourcing applications. Aligned with this goal, a variety of taxonomies and conceptual frameworks have been developed to better characterize the way in which information technology (IT)-enabled crowdsourcing operates. Among the known classifications of crowdsourcing activities, Corney and co-authors [21] were some of the first to frame this phenomenon from a taxonomic point of view by incorporating the nature of the crowd, the payment mechanisms or lack thereof, and the type of task into an integrated framework. In line with this, Rouse [22] proposed a taxonomy that comprises the different forms of intrinsic and extrinsic motivation that can lead to a successful crowdsourcing experience (e.g., social status, altruistic behavior, and personal achievement). This taxonomic proposal also addresses a set of aspects that are specific to the nature of the crowdsourcing task being undertaken by encompassing the expertise and complexity that are directly or indirectly involved in such initiatives. 
On the basis of insights from the history of group support systems, one would notice similar points to McGrath’s [8] task circumplex taxons, taking into consideration the different task types that can be executed by individuals in a group structure, which may include decision-making, idea generation, and information gathering, to name just a few examples.
To an extent, this research strand led to the proliferation of several taxonomies incorporating task-related elements (e.g., [23][24][25][26][27][28][29][30]). Consistent with the task properties discussed in most of these studies, a cursory look at the literature reveals certain commonalities related to crowd attributes (e.g., reputation), requester features (e.g., incentivization), and platform facilities such as aggregation and payment mechanisms [29]. Other research works have focused specifically on internal forms of crowdsourcing [31] or even on the use of crowdsourcing as a taxonomy development strategy by itself [32]. On a more generic level, Modaresnezhad and colleagues [10] made a clear distinction between the IT-enabled crowdsourcing requirements in business and non-business contexts by basing their proposal on the four collective intelligence “genes” proposed by Malone et al. [23]. However, these taxonomies fail to fully account for the hybrid nature of crowd-AI interaction and thus are unable to capture the variety of interactions and relations that occur when using a hybrid intelligence system.
During the last few years, the advances in the development of AI technologies have been silently leveraging the capacity of a large pool of crowd workers worldwide who provide data on a daily basis and thus contribute to the improvement of several models on a scale that had never been seen before. In fact, this intertwinement of algorithms with crowdsourcing workflows brought important advantages in a multiplicity of settings. Prior work has employed these principles and proved to be effective in detecting accessibility problems on public surfaces (e.g., sidewalks) through the use of street-level imagery [33]. In the same vein, Zhang and associates [34] proposed a system for identifying urban infrastructure damages, such as fallen street signs, when AI-based solutions fail to recognize them. These architectures have also been applied in the context of video object segmentation [35], cultural heritage damage identification [36], endoscopic image annotation [37], and historical portrait identification [38]. In addition, weaving together crowd- and AI-powered techniques has also resulted in positive outcomes in real-time and remote on-demand assistance [39]. In the literature, there are also examples of sensing systems embedded in real-world environments (e.g., domestic spaces) that resort to built-in cameras and crowdsourcing interfaces for dynamic image labeling [40]. That is, crowd-AI hybrid systems are now able to engage humans and machines through a massively collaborative joint action that spans research fields and temporal and geographical boundaries [41]. 
Drawing from previous studies on the characterization of hybrid intelligence systems from a taxonomic viewpoint [18], the work conducted herein expands upon what has been previously investigated by examining the many facets of crowd-machine hybrid systems and thus identifying key thematic elements derived from the literature.

3. Methodological Approach

Drawing on a literature review of extant studies on human-AI interaction with a crowd-in-the-loop, this entry outlines a particular set of arrangements in which the research on this burgeoning area can inform the development of future hybrid intelligence systems while contributing to understanding the socio-technical practices that require humans and machines working together towards a common goal. To this end, this work takes a human-centered AI approach [42] guided by the evidence-based taxonomy development method proposed by Nickerson and colleagues [6], as depicted in Figure 1. Synoptically, the practice of taxonomic classification can be described as a full-fledged endeavor in fields like astrophysics [43] and genetics [44] that usually consists of a formal semantic model with empirically or conceptually derived dimensions and characteristics that are exhaustive and mutually exclusive by nature [6]. At their structural level, taxonomies may have hierarchical or non-hierarchical configurations [45] and be constantly subjected to updating revisions [15]. Building on these methodological elements, the present study draws on the HCI body of literature to create a taxonomy of crowd-AI hybrids and thus aid researchers, practitioners, and anyone concerned with the understanding and development of these technologies. With this in mind, a step forward is made by distilling a variety and breadth of conceptual units from studies that seek to address the complementary way in which human crowds interact with AI systems. Essentially, this sheds light on the socio-technical dimensions of crowd-AI integration by acknowledging that both social and technical aspects must be taken into account to understand the functioning of a hybrid system as a whole.
Figure 1. Iterative taxonomy development process flow (a) and methodological details underlying the work undertaken (b). Adapted from Nickerson and co-authors [6].
A novel set of heuristics and theoretical aspects are proposed as a foundational structure for future research based on a scoping review that follows the guidelines of evidence-based practice [46]. From a methodological perspective, this approach seeks to systematically categorize research into a classification scheme that is then used as a foundation for taxonomy construction and validation. To operationalize the taxonomic process, a phenetic approach [47] was used throughout a set of iterative cycles until the ending conditions were met. To this end, this entry explores the vast space covered by the literature on hybrid crowd-AI systems grounded in case studies, ethnographic fieldwork, conceptual frameworks, surveys, semi-structured interviews, experimental work, mixed methods, and technical artifacts (e.g., algorithms). The taxonomy-building process followed the formal definition of Nickerson et al. [6] to create a taxonomy T with “a set of n dimensions Di (i = 1,…, n), each consisting of ki (ki ≥ 2) mutually exclusive and collectively exhaustive characteristics Cij (j = 1, …, ki) such that each object under consideration has one and only one Cij for each Di, or T = {Di, i = 1,…,n|Di = {Cij, j = 1,…, ki; ki ≥ 2}}”. It is worth noting that the guidelines provided by Nickerson and associates [6] represent one of the most well-established methodological approaches for taxonomy development in the field of information systems (IS), as reported in a recent literature review [48]. In this vein, these guidelines were systematically applied in an effort to make the proposed taxonomy clear, concise, robust, comprehensive, explanatory, and extendible as nearly as possible to attend to the conditions advocated by Gerber [49] when addressing the creation of classification artifacts.
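Nickerson et al.'s formal definition lends itself to a compact data-structure rendering. The sketch below is illustrative only (the dimension and characteristic names are hypothetical, not taken from the article); it checks the two formal constraints: every dimension Di offers at least ki ≥ 2 mutually exclusive characteristics, and every classified object carries exactly one characteristic per dimension.

```python
# Illustrative sketch of Nickerson et al.'s formal taxonomy definition:
# T = {Di | Di = {Cij, j = 1..ki; ki >= 2}}, where each object under
# consideration has one and only one characteristic Cij per dimension Di.

# Hypothetical dimensions and characteristics, for illustration only.
taxonomy = {
    "synchronicity": {"synchronous", "asynchronous"},
    "distribution": {"co-located", "remote"},
}

def is_well_formed(taxonomy: dict) -> bool:
    """Every dimension must offer ki >= 2 characteristics."""
    return all(len(chars) >= 2 for chars in taxonomy.values())

def classify_ok(obj: dict, taxonomy: dict) -> bool:
    """An object is validly classified iff it has one and only one
    characteristic, drawn from the allowed set, for each dimension."""
    return set(obj) == set(taxonomy) and all(
        obj[dim] in chars for dim, chars in taxonomy.items()
    )
```

Mutual exclusivity is enforced here by construction: a dict key can hold only one value per dimension, mirroring the "one and only one Cij" requirement.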
The first phase of taxonomy development consisted of a descriptive literature analysis [50] to identify rationales for the use of crowd-AI hybrids. This was followed by a systematic examination of the insights extracted and further categorized into a literature classification scheme. In fact, this empirical-to-conceptual methodological approach has been a common procedure for data collection in the taxonomy-building activity (e.g., [51][52][53]), involving a set of systematic processes that range from a literature search to data filtering and classification. For taxonomic validation, a conventional approach for corpus construction was used as previously described in [54]. Essentially, the sample used is an expanded version of that used in [41]. This was achieved by following a living systematic review protocol [55], where the search strategy is maintained and updated in a continuous manner as new studies become available. A simple Boolean query formulation was applied using the following sequence of terms:
((“crowd*-AI” OR “AI-crowd*” OR “crowd*-machine” OR “machine-crowd*” OR “crowd*-computing”) AND (“interact*”))
This expanded upon a previous corpus to accommodate a new set of possible settings in which crowd-AI interaction occurs. This was done for two main reasons. First, a more recent picture of the state-of-the-art in this domain was needed. To this end, only papers published in the last five years (2018–2022) as of 17 December 2022 were inspected. Second, most of the studies considered for taxonomy validation in [41] comprised human-AI interaction at an individual level, while here, the focus is on evaluating arrangements involving crowds mixed with AI. The present work is also more restrictive in terms of peer-reviewed studies since this contribution only considered journal articles and conference papers. From a systematic search for publications indexed by the Dimensions database, which contains records from diverse digital libraries such as ACM Digital Library, IEEE Xplore, SpringerLink, and Science Direct with large coverage when compared to Web of Science and Scopus [56], content types such as adjunct/companion proceedings, panels, tutorials, book reviews, correspondence articles, introductions to special issues, doctoral colloquiums and student research competitions, keynote talks, commentaries, and course summaries were disregarded to ensure high-quality results. The search returned 593 publication records. After initial scrutiny of the titles and abstracts, along with the removal of papers that did not meet the inclusion criteria, a total of 138 studies were selected for further appraisal. To be eligible for inclusion, studies had to describe original research from primary or secondary literature addressing the broader domain of human-centered AI with a focus on crowd-AI interaction. As can be perceived from Figure 1, this selection resulted in 25 research studies published in English-written, peer-reviewed manuscripts. 
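The screening pipeline can be sketched as a simple filter over publication records. The snippet below is a rough approximation, not the authors' actual tooling: it translates the wildcard terms of the Boolean query into regular expressions and assumes each record is a dict with `title`, `abstract`, `year`, and `type` keys (the Dimensions database has its own query syntax and export format).

```python
import re

# Wildcard terms from the Boolean query, approximated as regular expressions.
# "crowd*" matches "crowd", "crowds", "crowdsourcing", and so on.
CROWD_AI = re.compile(
    r"crowd\w*[\s-]+(AI|machine|computing)"
    r"|(AI|machine)[\s-]+crowd\w*",
    re.IGNORECASE,
)
INTERACT = re.compile(r"interact\w*", re.IGNORECASE)

def matches_query(text: str) -> bool:
    """A record matches when both AND-ed clauses of the query are found."""
    return bool(CROWD_AI.search(text)) and bool(INTERACT.search(text))

def screen(records, year_range=(2018, 2022),
           excluded_types=("panel", "tutorial", "keynote", "book review")):
    """Keep records in the time window, of an admissible content type,
    whose title or abstract matches the Boolean query."""
    kept = []
    for rec in records:
        if not (year_range[0] <= rec["year"] <= year_range[1]):
            continue
        if rec.get("type") in excluded_types:
            continue
        if matches_query(rec["title"] + " " + rec.get("abstract", "")):
            kept.append(rec)
    return kept
```

The excluded content types here are only a subset of the list given in the text, chosen to keep the sketch short.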
The final set of papers chosen provided a reliable source of information for testing the taxonomic proposal since they presented a diverse set of scenarios.
As an integral part of the iterative taxonomy development process proposed in [6], the meta-characteristic of the taxonomy was determined to be its focus on functional properties and attributes of hybrid crowd-AI systems. Through a socio-technical lens grounded on the foundational aspects of crowd computation [57] and its embodiment in hybrid human-AI systems [58], the definition of this meta-characteristic made it possible to frame and guide the taxonomy development process until the subjective ending conditions previously mentioned at the level of robustness, comprehensiveness, conciseness, extendibility, and explanatory nature of the taxonomy were fulfilled. Following the taxonomic work of Landolt and co-authors [59] on the use of deep neural networks in natural language processing (NLP) applications, this contribution also tried to meet objective ending conditions to ensure that each dimension and characteristic within the dimension were exclusive and that no new characteristics or dimensions were added in the final iteration. Therefore, the original dimensions of the taxonomic proposal were validated within a literature matrix in order to verify whether these dimensions and characteristics are present in the final sample of studies addressing crowd-machine hybrid interaction. To some degree, the empirical validation of the taxonomy proposed here is inspired by the work of Straus [60], who took McGrath’s [8] group task circumplex as the object of evaluation.
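The literature-matrix validation step can be pictured as a binary studies-by-dimensions matrix. The sketch below uses made-up dimension names and values (it is not the authors' actual matrix) and checks the objective ending condition that every dimension is evidenced by at least one study in the final sample.

```python
# Hypothetical literature matrix: rows are sampled studies, columns are
# taxonomy dimensions; a 1 means the dimension is present in that study.
dimensions = ["spatio-temporal", "task assignment", "context awareness"]
matrix = [
    [1, 0, 1],  # study 1 (illustrative values)
    [1, 1, 0],  # study 2
    [0, 1, 1],  # study 3
]

def covered_dimensions(matrix, dimensions):
    """Return the dimensions evidenced by at least one study in the sample."""
    return [dim for j, dim in enumerate(dimensions)
            if any(row[j] for row in matrix)]

def taxonomy_validated(matrix, dimensions) -> bool:
    """Objective ending condition: all dimensions appear in the sample."""
    return covered_dimensions(matrix, dimensions) == list(dimensions)
```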

4. ‘Inside the Matrix’: In Pursuit of a Taxonomy for Hybrid Crowd-AI Interaction

The availability of crowdsourcing platforms has led many organizations to adopt them as continuous and highly available sources of data upon which the paradigm of open innovation [61] is founded and continues to develop. At its most generic level, these solutions are leveraged by a 24/7 digital workforce and represent a problem-solving and innovation-driven approach able to shorten the entire product lifecycle [62]. As novel AI-infused products and features become more and more prevalent and integral to many everyday life pursuits, the need to incorporate hybrid intelligence in highly complex and volatile scenarios (e.g., early warning and prompt response) becomes even more evident since the complementarity [63] and adaptability [64] of human and AI-based systems co-evolving over time “as coequal partners” [65] can be of particular value to suppress each other’s failures. In this vein, crowdsourcing has been applied to executing tasks such as obtaining ground-truth human labels [66], gathering ratings for data to be used in supervised machine learning [67], or even managing portfolio information [68]. In general terms, Kittur and associates [57] reported that crowd intelligence could be particularly useful in supervising, training, or even supplementing automation, while AI techniques can make the crowd more accurate while augmenting human capabilities and interactions through machine intelligence. This constitutes the point of departure for the proposal of a taxonomic framework for crowd-AI interaction, whose dimensions are shown in Figure 2 and briefly described in the following subsections.
Figure 2. Taxonomy of hybrid crowd-AI systems. This taxonomic proposal integrates key conceptual dimensions of the human-centered AI framework introduced in [41] to characterize the configurations in which crowd-AI interaction occurs within the interplay between human and machine intelligence.
From a taxonomy-building methodological standpoint, the taxonomic design approach was largely inspired by the Work System Theory proposed by Alter [69] and further developed by Venumuddala and Kamath [70], who conducted ethnographic fieldwork grounded on a set of observations retrieved in an AI research laboratory. In addition, some elements from the Activity Theory [71]-inspired model for assessing computer-supported cooperative work (CSCW) in distributed settings [72] were also introduced. As a result, a previous human-centered AI framework [41] was revised and extended to highlight the importance of agency and control, explainability, fairness, common ground, and situational awareness in the design space of hybrid crowd-AI systems.

4.1. Spatio-Temporal Aspects of Crowd-Machine Interaction

Crowdsourcing can be seen as a gateway to obtaining reliable solutions to problems of varying levels of difficulty when there is an urgent need for quick and prompt action or even when the development of a game, big-scale application, software module, sketch, etc., is required without the strict rigidity of being situated physically close [73]. At the interaction level, hybrid crowd-AI systems can support real-time crowdsourcing activities involving chatting and live tracking services, and also those occurring asynchronously, such as post-match soccer video analysis. In framing this discussion within the time-space matrix originally described in the context of groupware applications [12], this subsection concentrates on the spatio-temporal patterns of human-AI partnerships at a crowd scale. Thus, one can argue that the notion of space has been reshaped to incorporate the provision of localization and navigation information into crowdsourcing settings as a way of exploring the full potential of local-and-remote on-demand real-time response in tasks like road data acquisition [74] and local news reporting [75]. That is, crowd workers can be physically or virtually distributed in a dispersed or co-located manner or even “synchronize in both time and physical space” [76]. As some scholars noted, the level of engagement in both paid and non-profit crowdsourcing communities can also be evaluated, taking into account the daily-devoted time of participants, periodicity of interactions, and activity duration [77]. In this regard, the contribution time and availability of the crowd constitute key information sources in crowd-AI hybrid settings.
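The time-space framing above can be made concrete with a toy encoding of the two axes of the groupware matrix; the example activities placed in the quadrants are taken from the text, but the encoding itself is only an illustration.

```python
from enum import Enum

class Time(Enum):
    SYNCHRONOUS = "same time"
    ASYNCHRONOUS = "different time"

class Space(Enum):
    CO_LOCATED = "same place"
    DISTRIBUTED = "different place"

def quadrant(time: Time, space: Space) -> str:
    """Label of the time-space matrix quadrant for a crowd activity."""
    return f"{time.value} / {space.value}"

# Activities mentioned in the text, placed on the two axes.
activities = {
    "live tracking with a remote crowd": (Time.SYNCHRONOUS, Space.DISTRIBUTED),
    "post-match soccer video analysis": (Time.ASYNCHRONOUS, Space.DISTRIBUTED),
}
```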

4.2. Crowd-Machine Hybrid Task Assignment, Execution, and Delegation

The rapid progress of AI-based technology has led to novel ways of motivating humans to delegate tasks to AI for further fulfillment. Bouwer [78] proposed a four-quadrant taxonomic model for AI-based task delegation and stressed the importance of emotional/affective states as key deterministic factors for task delegation. In line with this, Lubars and Tan [79] mentioned the relevance of trust, motivation, difficulty, and risk as influential determinants of human-AI delegation decisions. In particular, trust and reliance assume a special significance in terms of delegation preferences. The strategic line behind most of the tasks that are commonly crowdsourced in current digital labor platforms is still grounded in microtask design settings [80], although some recent attention has been given to macrotasking activities (e.g., creative work), which involve crowd-powered tools designed to support computer-hard tasks that need specialized expertise and thus cannot be executed by AI algorithms in an effective manner [81]. By focusing on the task properties and attributes in crowdsourcing, Nakatsu and co-workers [27] introduced a taxonomy that classifies the structure (well-structured vs. unstructured) and level of interdependence (independent vs. interdependent) together with a third binary dimension involving the degree of commitment (low vs. high) required to accomplish a task.
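Nakatsu and co-workers' three binary dimensions yield a 2×2×2 classification space. A minimal sketch of that encoding follows (the field names and the label format are ours, not the original taxonomy's notation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """Illustrative encoding of Nakatsu et al.'s three binary task dimensions."""
    well_structured: bool   # well-structured vs. unstructured
    interdependent: bool    # independent vs. interdependent
    high_commitment: bool   # low vs. high degree of commitment

def task_class(t: Task) -> str:
    """Human-readable cell of the 2x2x2 classification space."""
    structure = "well-structured" if t.well_structured else "unstructured"
    coupling = "interdependent" if t.interdependent else "independent"
    commitment = "high-commitment" if t.high_commitment else "low-commitment"
    return f"{structure}, {coupling}, {commitment}"
```

A simple microtask such as image labeling would plausibly fall in the well-structured, independent, low-commitment cell, while collaborative macrotasks occupy the opposite corner.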
Going back to the levels of complexity that may be present in crowdsourcing tasks, Hosseini et al. [29] briefly divided them into two main categories: simple and complex. Using this rationale, microtasks have been largely described as being simple for crowd workers to perform well and easily in the sense that they involve a lesser degree of context dependence [82]. Furthermore, these self-contained tasks are usually short by nature and take little time to finish. Zulfiqar and co-authors [83] go even further by underlining that microtasks do not require specialized skills, which enables any worker to contribute in a rapid and cognitively effortless manner. Extrapolating to more complex crowdsourcing processes, many forms of advanced crowd work have emerged throughout the years, and there is now a renewed focus on task assignment optimization involving algorithmically-supported teams of crowd workers acting collaboratively [84][85]. While the possibilities for optimization are manifold across a number of different task scenarios, robust forms of hybrid crowd-machine task allocation and delegation are needed to yield accurate results and reliable outcomes not only for crowd workers acting at the individual level but also in terms of team composition and related performance.

4.3. The Role of Contextual Factors and Situational Awareness in Crowd-Computing Hybrid Scenarios

Any crowd-machine hybrid interaction has its own contextual characteristics and specificities. Crowdsourcing settings are highly context-dependent, and situational information is particularly critical to achieving successful interactions in a crowd-AI working environment since a crowd can be affected by contextual factors such as geo-location, temporal availability, and surrounding devices [86]. Considering the context from which a crowd worker is interacting with an intelligent system can help to personalize the way actions are developed and thus improve processes such as task assignment [87], while providing resources and contextually relevant information tailored to the needs of each individual based on content usage behaviors [42] and other forms of context extraction. This involves a set of environmental, social, and cultural contexts [88] that come with fundamental challenges for hybrid algorithmic-crowdsourcing applications in terms of infrastructural support for achieving efficient and accurate context detection and interpretation. When designing a crowd-AI hybrid system, user-generated inputs must be handled adequately in order to filter the relevant information and better adapt the interaction elements and styles to each particular case [89]. This is also somewhat related to the notions of explainability and trust in AI systems [90] since the trustworthy nature of these interactions will be affected by the quality of the contextual information provided and the degree to which a user perceives the AI system they are interacting with as useful for aiding their activities. In such scenarios, aspects like satisfaction shape the internal states of the actors [72] and can constrain the general performance of crowd-AI partnerships if the system does not meet the expectations of the users.
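One way to picture context-sensitive task assignment as described above is a simple matching score between a task's contextual requirements (e.g., geo-location, availability, device type) and a worker's current context. The sketch below is purely illustrative; the function names and the dictionary-based context representation are our own assumptions, not an API from the surveyed systems:

```python
def context_match_score(worker_ctx, task_ctx, weights=None):
    """Fraction of the task's context requirements that the worker's
    current context satisfies, optionally weighted per factor."""
    weights = weights or {k: 1.0 for k in task_ctx}
    total = sum(weights.get(k, 1.0) for k in task_ctx)
    if total == 0:
        return 0.0
    matched = sum(weights.get(k, 1.0)
                  for k, v in task_ctx.items()
                  if worker_ctx.get(k) == v)
    return matched / total

def assign_to_best_worker(task_ctx, workers):
    """Pick the worker whose context best matches the task's requirements."""
    return max(workers, key=lambda w: context_match_score(w["context"], task_ctx))
```

In a real platform, the context factors would be extracted automatically (e.g., from device signals or usage behavior) rather than supplied as literal dictionaries.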

4.4. Behavioral Traces of Crowd Activity in Human-Algorithmic Ensembles

To some extent, both paid and non-paid forms of crowdsourcing have served as “Petri dishes” for many behavioral studies involving experimental work [91]. A crowd can differ in terms of attention level, size, emotional state, motivation and preferences, and expertise/skills, among many other characteristics [86]. In this vein, Robert and Romero [92] found a considerable impact of diversity and crowd size on performance outcomes while testing the registered users of a WikiProject Film community. As such, online crowd behaviors are volatile by nature and vary with the contextual factors and situational complexity of the work, along with the surrounding environment of its members. Neale and co-authors [72] briefly explained the importance of context for creating common ground, which can be understood as the shared awareness among actors in their joint activities, including their mutual knowledge. That is, sustaining an appropriate shared understanding can constitute a critical success factor for achieving a successful interaction when designing intelligent systems [93]. This also applies to the range of crowd work activities that involve self-organized behaviors and transient identities [94], which imply a reinforced need for effective quality control mechanisms (e.g., gold standard questions) in crowd-AI settings [40]. Furthermore, some crowds are arbitrary, while others are socially networked or organized into teams that coalesce and dissolve in response to an open call for solutions where the nature of the task being crowdsourced is largely dependent on collective actions instead of individual effort only. In some specific cases, these tasks are non-decomposable and involve a shared context, mutual dependencies, changing requirements, and expert skills [95][96].
In this vein, some prior research has revealed the presence of “a rich network of collaboration” [97] through which the crowd constituents are connected and interact in a social manner, although there are many concerns about the bias introduced by these social ties. Seen from a human-machine teaming perspective, imbalanced crowd engagement [98], conflict management [99], and lack of common ground [100] are also key aspects that must be taken into account in such arrangements.
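The gold-standard questions mentioned above are a common quality control device: known-answer items are embedded among real tasks and a worker's contributions are kept only if their accuracy on those items clears a threshold. A minimal sketch of such a check, with hypothetical names and an assumed 80% default threshold:

```python
def passes_gold_check(answers, gold, min_accuracy=0.8):
    """Return True if the worker's answers to the embedded gold-standard
    questions meet the accuracy threshold.
    Both arguments map question-id -> answer; only gold questions the
    worker actually answered are scored."""
    scored = [qid for qid in gold if qid in answers]
    if not scored:
        return False  # no evidence either way: fail closed
    correct = sum(answers[qid] == gold[qid] for qid in scored)
    return correct / len(scored) >= min_accuracy
```

Failing closed on workers who answered no gold questions is one possible design choice; a platform could equally defer judgment until enough gold items have been seen.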

4.5. Infrastructural Elements as Facilitators of Hybrid Intelligence

As AI-infused systems thrive and expand, crowdsourcing platforms continue to play an active role in aggregating inputs that are used by companies and other requesters around the globe toward the ultimate goal of enabling algorithms to cope with complex problems that neither humans nor machines can solve alone [101]. However, designing for AI with a crowd-in-the-loop includes a set of infrastructure-level elements such as data objects, software elements, and functions that together must provide effective support for actions like assigning tasks, stating rewards, setting time periods, providing feedback, evaluating crowd workers, selecting the best submissions, and aggregating results [102]. To realize the full potential of these systems, online algorithms can be incorporated into task assignment optimization processes for different types of problems involving simple (decomposable), complex (non-decomposable), and well-structured tasks [85]. By showing reasonable results in terms of effectiveness, some algorithms have been proposed to organize teams of crowd workers as cooperative units able to perform joint activities and accomplish tasks of varying complexity [95][96][103]. From an infrastructural perspective fitted into the taxonomy proposed in this entry, the contribution of Kamar's [104] work is to stress the importance of combining both human and machine capabilities in a co-evolving, synergistic way.
Taken together, crowd and machine intelligence can offer many opportunities for predicting future events while improving large-scale decision-making since online algorithms can learn from crowd behavior using different integration and coupling levels. In many settings, hybrid intelligence systems can help to draw novel conclusions by interpreting complex patterns in highly dynamic scenarios. In line with this, many have studied novel forms of incorporating explainable AI approaches, such as gamification [105], for enhancing human perceptions and interpretations of algorithmic decisions in a more transparent and understandable manner. Due to their scalability, crowd-AI architectures can constitute an effective instrument for handling complexity, and thus more research is needed to explore how to best develop hybrid crowd-AI-centered systems taking into account the requirements and personal needs of each crowd worker. In particular, this domain raises some questions about the use of AI to enhance the quality of crowdsourcing outputs through high-quality training data [67] and related interaction experiences, as seen from a human-centered design perspective [106]. To summarize, crowd-powered systems can present a wide variety of opportunities to train algorithms “in situ” [107] while providing learning mechanisms and configuration features for customizing the levels of automation over time.
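The online task assignment mentioned above can be illustrated with a deliberately simple greedy policy: each arriving task goes to the currently least-loaded worker who holds the required skill. This is a toy sketch under our own assumptions (names and data shapes are hypothetical), not any specific algorithm from the surveyed literature:

```python
def online_assign(tasks, workers):
    """Greedy online assignment: each arriving (task_id, required_skill)
    pair is routed to the least-loaded eligible worker.
    `workers` maps worker-id -> set of skills."""
    load = {w: 0 for w in workers}
    assignment = {}
    for task_id, required_skill in tasks:
        eligible = [w for w, skills in workers.items() if required_skill in skills]
        if not eligible:
            assignment[task_id] = None  # no qualified worker available
            continue
        chosen = min(eligible, key=lambda w: load[w])  # ties: first listed
        load[chosen] += 1
        assignment[task_id] = chosen
    return assignment
```

Real platforms replace the load counter with richer signals (reputation, context match, expected quality), but the online, one-task-at-a-time structure is the same.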

4.6. Social-Ethical Caveats in Hybrid Crowd-Artificial Intelligence Arrangements

There is a clarion call for investigation of the ethical, privacy, and trust aspects of human-AI interaction, stemming from several causes. For instance, Amershi and colleagues [88] raised a set of concerns related to the need to avoid social biases and detrimental behaviors. To tackle those issues, it is necessary to dive deep into the harms produced by AI decisions in a contextualized way to ensure fairness, transparency, and accountability in such interactions [108]. This can be realized by materializing human agency and other strategies that can provide more control over machine behaviors [109][110][111]. From diversity to inclusiveness—and subsequently justice—there is still a long way to go until these goals are accomplished within the dynamic frame of human-AI interaction and hybrid intelligence augmentation. To address these shortcomings, system developers can play a critical role by considering the potential effects of AI-infused tools on user experiences.
Extrapolating to crowdsourcing settings, Daniel and co-workers [112] reported a concern with the ethical conditions, terms, and standards aligned with compliance towards regulations and laws that are sometimes overlooked in such arrangements. When considering crowd work regulation, aspects of intellectual property, privacy, and confidentiality in terms of participant identities constitute pivotal points [113]. A look into previous works (e.g., [114]) shows multiple concerns regarding worker rights, ambiguous task descriptions, acknowledgment of crowd contributions, licensing and consent, low wages, and unjustified rejection of work. Such ethical and legal issues are even more pronounced in the context of hybrid crowd-AI systems, where there are not only online experiments and other human intelligence tasks (HITs) running on crowdsourcing platforms but also machine-in-the-loop processes within the entire hybrid workflow. In a particular setting, strategies like shared decision-making and informed consent can be particularly helpful to mitigate the threats of bad conduct and malicious work if based on a governance strategy where the guidelines, rules, actions, and policies are socially organized by the crowd itself [115]. In this vein, the potential impacts of the aforementioned socio-ethical concerns surrounding crowd-powered hybrid intelligence systems must be further elucidated and investigated from several lenses to draw a realistic picture of the current situation.

5. Validation and Assessment of the Proposed Taxonomy

This entry proposes a taxonomic framework aimed at accommodating a diverse set of infrastructurally supported crowd-algorithm interactions that occur in a certain time and space within two separate orders of intelligence, which can therefore be combined in a hybrid model architecture. The interactions occurring in this hybrid space have a set of unique contextual and situational aspects and must be guided by ethical guidelines, rules, and principles in order to combine crowd and machine workflows effectively and transparently. To validate the proposed taxonomy and demonstrate its utility, this contribution examined the applicability of the taxonomy in a total of twenty-five studies presenting some type of crowd-machine interaction. This is in line with the need for a methodologically rigorous inspection of the possible effects of hybrid intelligence in practical settings. For instance, substantial literature on human-AI interaction has developed quickly across different areas [116], but few attempts have been made to gather evidence about this intersectional space at a crowd scale and thus understand the uses and limitations of hybrid crowd-AI systems from a socio-technical design viewpoint. The results of the taxonomy-based review are provided in Figure 3, accompanied by an example of a scheme used to explain the rationale behind the taxonomic classification (Figure 4). In order to determine whether each category of the taxonomy was present or absent, the following levels were considered:
Figure 3. Synthesis of the literature analysis based on the taxonomy proposed.
Figure 4. Example of a taxonomic scheme used to classify a crowd-AI interaction scenario [39].
Fully addressed: The manuscript clearly emphasizes the specific elements underlying the taxonomic category by addressing one or more of its unique attributes, with a potential experiment, solution, or case study demonstrating applicability. For instance, Mohanty and co-authors [38] make explicit reference to the contextual information (e.g., biographical details) provided to the user about each portrait in Photo Sleuth, a crowd-AI-enabled face recognition platform where a crowd of both expert and non-expert volunteers can tag a picture using this supplementary piece of contextual data to aid the decision process.
Not addressed: The work does not directly address any of the aspects that are inherent to the category under consideration.
Partially addressed: The study provides details that can be used to address the particular taxonomic category, even if not explicitly mentioned in the manuscript. By way of example, Kobayashi et al. [117] do not directly provide details about the contextual information required in the natural disaster response setting used for demonstrating the proposed method, but the situational awareness and subsequent timely information required to manage the rapidly evolving scenarios toward well-informed and up-to-date decision-making are implicitly stated.
On the basis of insights from previous analytical work, this taxonomically grounded literature review process has been adopted in areas like business intelligence and analytics [118] as a way of iteratively developing and refining taxonomic dimensions and characteristics while pinpointing areas requiring further investigation.
As can be seen from Figure 3, the taxonomy presented in this entry is far from comprehensive enough to accommodate all types of possible scenarios involving crowd-AI interaction. Instead, the goal is to facilitate a cohesive understanding as a basis for further scrutiny of crowd-computing hybrids in real-world applicative contexts. Note that some categories can co-exist, taking into account the specificity of each situation or use case. The first taxonomic unit contains the spatio-temporal elements (T1) that frame crowd-AI interaction in relation to the original time-space matrix proposed by Johansen [12]. In brief terms, this classification model categorizes interactions as follows: same place/time, different places/same time, same place/different times, or different places/different times. To a broad extent, crowd-AI interactions can occur in asynchronous or real-time settings where the individuals that constitute the crowd can be physically or virtually co-located or geographically dispersed (remote). In addition, the worker location and task duration time [11] were also considered, as the latter is intimately connected to the time frame or limit that is set to complete a task. In the example provided in Figure 4, a nearly real-time on-demand crowd-powered system is proposed to collect responses from crowd workers who can be at any location but need to be available to contribute in real-time due to the quickly changing contextual requirements underlying the type of tasks performed. Looking at the results of the taxonomy-based literature review in detail, a total of 84% (n = 21) of included papers reported temporal and/or spatial aspects of crowd activity. As a brief example, Chan and colleagues [119] introduced a mixed-initiative system with an annotation time of 1 min per paper on average in analogy matching tasks.
In terms of real-time crowd-AI settings, some primary studies (e.g., [36][39][40][98][120]) presented synchronous interactions between crowd members, although most of the crowdsourcing systems relied on an asynchronous model.
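Johansen's time-space matrix used for the T1 unit reduces to two binary axes, which makes it easy to express programmatically. The following sketch (the function name and quadrant labels are our own phrasing) maps an interaction to one of the four classic quadrants:

```python
def time_space_quadrant(same_time, same_place):
    """Classify an interaction into Johansen's 2x2 time-space matrix."""
    if same_time and same_place:
        return "same time / same place (face-to-face)"
    if same_time:
        return "same time / different places (synchronous distributed)"
    if same_place:
        return "different times / same place (asynchronous co-located)"
    return "different times / different places (asynchronous distributed)"
```

A real-time on-demand crowd system with geographically dispersed workers, such as the Figure 4 example, would fall into the synchronous distributed quadrant, while most crowdsourcing platforms operate in the asynchronous distributed one.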
Consistent with the previous literature, the most addressed taxonomic unit is related to task design, assignment, and execution (T2), with a total of 25 primary studies. In crowdsourcing experiments, task design is seen as a cornerstone to achieving the goals of a project or campaign since the characteristics and configuration of crowdsourced tasks influence the general outcomes obtained from the crowd [91]. In general, different types of tasks were found in the selected sample. As mentioned before, tasks differ in terms of attributes, complexity, and granularity [11]. For instance, Scalpel-CD [121] generates label inspection microtasks in a dynamic way, while Evorus [39] focuses on classification tasks in the form of voting. A slightly different task specification is employed in Photo Sleuth [38], where crowd workers are invited to perform person identification/recognition tasks that are then augmented with visual tags to allow portrait seeking. Moreover, CollabLearn [36] is based on crowd query tasks where human processing is needed to highlight damaged areas in cultural heritage imagery. A somewhat related body of work (e.g., [34]) has sought to support the execution of crowd-in-the-loop interactive image labeling tasks with the ultimate goal of enhancing AI-powered damage scene assessment algorithms. All in all, the task-related aspects discussed in the growing literature on the interplay between crowdsourcing and AI systems play an indispensable role in explaining complex relationships among crowd inputs and their further integration into hybrid workflows.
Extrapolating to the ethical principles and standards in crowd-AI settings (T3), the review only identified nine papers (36%) that explicitly discuss ethical behaviors from a requester-, crowd-, or even AI-centered standpoint. Despite the recognized need for fair payment and long-term career building in online crowd work platforms [122], this shows that the ethical concerns underlying interaction-centric crowd-AI activity are often overlooked from a practical perspective, despite some examples of strategies presented in the crowdsourcing literature, such as ensuring fair compensation by paying crowd workers in conformity with the complexity of the task being performed [123]. Based on the findings from the chosen sample, Palmer and co-authors [124] provide one of the few examples of studies calling attention to possible unethical actions associated with the disclosure of sensitive information from images and videos. In a similar way, only 20% of primary studies (n = 5) fully describe machine and human (crowd) agency, governance practices, or control (oversight) (T4), although extensive research has been conducted on the potential risks and unintentional harms associated with the lack of an effective governance strategy able to regulate algorithmic actions [125]. In this regard, trust building [126][127] appears among the most critical factors affecting technology acceptance when considering human-AI interaction at a massive scale.
One enduring taxonomic unit that has been largely addressed since the very beginning of the field of CSCW is concerned with the contextual and situational information (T5) that is used to support awareness about the environment in which the interaction takes place. This includes what goes on in the environment, who is available, who leaves, and how individuals “remain sensitive to the conduct of others so that an event or action, which may have some passing significance, can be displayed to each other without it necessarily gaining interactional or sequential import” [128]. If the entire sample is considered, 48% of studies (n = 12) mentioned some kind of contextual or situational issues. For instance, Huang et al. [39] proposed a crowd-machine hybrid system where the conversation context is used to provide response candidates using recorded facets and previous chat conversation logs. In particular, the task-specific contextual data is captured with the help of the crowd (by using chat logs) to improve the quality of responses based on current and past conversations. Moreover, Park and associates [129] used self-adapting mechanisms based on reinforcement learning (RL) and extracted contextual features to increase crowdsourcing participation over time, while Guo and co-workers [40] considered the lack of context as a determining factor for failure in smart environments.
Turning to the role of infrastructural support (T6) in interactive human-AI practices at a crowd level, the review disclosed a total of 20 studies (80%) where infrastructure or the characteristics of a crowd-computing platform are reported. In CSCW, the concept of ‘infrastructure’ and its ecological nature [130] has developed over the years to characterize socio-technical assemblages “that underpins and enables action, engagement, and awareness” [131]. On the basis of their research review, Hosseini and colleagues [29] gave a detailed description of the features that are commonly found in crowdsourcing platforms. In line with this, Santos and co-authors [102] stressed that a crowdsourcing system must provide functions and components able to support workflows involving actions such as task assignment, pre-selecting crowd workers, stating rewards, and selecting contributions. From payment mechanisms to result aggregation, a crowd-computing platform must combine crowd-, requester-, task-, and platform-related information and facilities (i.e., infrastructural elements) that act in unison to carry out tasks in accordance with the different requirements. From an infrastructural perspective, Huang and associates [39] described the conversational worker interface used for chatting and real-time response modeling, along with the automatic response voting and generating algorithms deployed to operate continuously as the conversation unfolds. Using a crowd-AI hybrid intelligence lens, the results showed a total of 14 studies addressing algorithmic reasoning, inference, explainability, and interpretability (T7). For instance, human-AI decision-making processes are complex by nature, and AI-infused systems require a certain level of explainability [132] and interpretability [133] to provide insights about the algorithmic actions taken during the AI-enabled experience.
However, several studies agree that these explanations must manifestly be comprehensible, transparent, and actionable (i.e., how humans use or find the explanations useful) to ensure traceability and trust in AI-advised crowd decision-making [134]. Moreover, incorporating reasoning capabilities into hybrid intelligence systems at a massive scale can provide support for better decisions since RL and related algorithms can learn from crowd behavior [104] while offering a lot of possibilities to improve decision-making at a large scale.
This points to the notions of scalability and adaptability (T8) and their importance in highly dynamic and unpredictable environments. Due to their flexibility, hybrid crowd-algorithm methods represent a means of handling complexity and gathering high-quality training data. From the entire sample, 17 studies (68%) addressed scalability and/or crowd-AI adaptability. As an example, Anjum et al. [135] stressed the value of scalable image annotation, while Trouille and co-authors [136] have drawn attention to scalable application programming interfaces with the ability to quickly configure a citizen science campaign. A further focus of the taxonomy-based review presented here is on the learning and training processes (T9) behind current AI models. In crowd-machine settings, humans may “feed” the algorithm to act in situ in an automatic fashion based on data inputs that can work as training samples [137]. On this point, 96% of included studies (n = 24) addressed aspects related to this taxonomic unit. For instance, Kaspar and colleagues [35] proposed a crowd-AI hybrid workflow in which the training data is generated through video segmentation. Further expanding the scope, a related important question is how to train the crowd itself when an AI output is used [117]. Accordingly, Zhang and associates [36][120] call for more research into aspects like AI bias mitigation and the detection of imperfect or biased inputs from the crowd as factors that may compromise the system’s reliability. A look at the work conducted by Huang et al. [39] denotes that the machine learning model behind the proposed conversational assistant is fed with training data from past up/down votes given by crowd workers. This continuous learning approach allows optimization of the entire automatic voting process based on the assessment of the quality of the human responses.
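The idea of continuously learning from crowd up/down votes can be sketched with a toy update rule: nudge a response candidate's acceptance score toward the observed fraction of upvotes. This is our own minimal illustration of the general signal flow, not the actual model used in Evorus or any surveyed system:

```python
def update_candidate_score(score, upvotes, downvotes, lr=0.1):
    """Move `score` (in [0, 1]) a step of size `lr` toward the
    observed upvote fraction; unchanged if no votes were cast."""
    total = upvotes + downvotes
    if total == 0:
        return score
    observed = upvotes / total
    return score + lr * (observed - score)
```

Repeated over many conversations, such updates let the automatic voting component gradually absorb the crowd's quality judgments.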
Stemming from the literature of the social and behavioral sciences, the extraction of behavioral features from crowd activity (T10) has been particularly relevant for unraveling the complexities of crowdsourcing practice and improving the synergistic interaction between humans (crowds) and algorithms. However, the results from this scoping review show that only 40 percent of the literature sample (n = 10) focused on aspects of crowd activity from a behavioral standpoint. Building on the collective intelligence genome [23], the understanding of what, why, who, how, and the circumstances under which such interaction takes place can be enhanced through the behavioral analysis of traces of past activity [138][139]. In hybrid crowd-algorithm interactive settings, user activity tracking involving keystroke logging, eye tracking, time duration, and mouse click recording (e.g., window resizing) can contribute to the cognitive, physical, and perceptual augmentation of the crowd, with practical implications for improving task assignment, performance estimation, and worker pre-selection and/or recommendation based on reliability measures [140][141][142][143]. From a behavioral point of view, identifying active workers can play a critical role in systems such as Evorus [39] since the model strongly depends on human inputs, while capturing crowd members’ meta-information is important to personalize the experience to the user in more intelligent ways. Although the development of AI systems supported by online interfaces able to log user actions has a great capacity for behavior analysis [144], recent research works (e.g., [145]) have shown that considerable resources are required to effectively capture these behavioral traces from an infrastructural standpoint.
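The behavioral-trace extraction discussed above typically starts from a raw event log. As a minimal sketch (the event schema and feature names are our own assumptions), the function below turns a list of timestamped interaction events into a few simple features that could feed performance estimation or worker recommendation:

```python
def behavioral_features(events):
    """Extract simple features from a worker's interaction trace.
    `events` is a list of (timestamp_seconds, event_type) tuples,
    e.g., 'click', 'keystroke', 'resize'."""
    if not events:
        return {"duration": 0, "n_events": 0, "clicks": 0, "keystrokes": 0}
    times = [t for t, _ in events]
    kinds = [k for _, k in events]
    return {
        "duration": max(times) - min(times),  # time on task
        "n_events": len(events),
        "clicks": kinds.count("click"),
        "keystrokes": kinds.count("keystroke"),
    }
```

Production systems would add richer signals (dwell times between events, scroll and eye-tracking traces), but the pipeline shape — log, featurize, estimate reliability — is the same.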
A closely related line of investigation involves the quality control mechanisms (T11) that are used in crowdsourcing systems to reduce the occurrence of inaccuracies and biased inputs provided by malicious (or poorly motivated) crowd workers. Empirically, this work shows that only five papers (20%) did not explicitly report strategies for ensuring quality control and modeling crowd bias. In general terms, quality control strategies for detecting low-quality work can vary from input and output agreement to majority voting/consensus, ground truth (e.g., gold standard questions), contributor evaluation, expert review, real-time support, or even fine-grained behavioral traces [146]. Yet, as pointed out by Daniel and co-authors [112] and further developed by Jin et al. [86], a quality assessment process can be performed computationally (e.g., task execution log analysis), collaboratively (e.g., peer review), or even individually (e.g., qualification test). Regarding the latter, worker pre-selection has been used by requesters as a common approach to filter out unqualified workers by taking into consideration factors like reputation and credentials. In the example of the scenario shown in Figure 4, the system has a high error tolerance for imperfect automated actions from voting algorithms and chatbots since the oversight is done by the (human) crowd.
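Among the quality control strategies listed above, majority voting is the simplest to state precisely: collect redundant labels for the same item and keep the most frequent one, using the agreement ratio as a rough confidence signal. A minimal sketch (function name ours):

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate redundant crowd labels for one item.
    Returns (winning_label, agreement_ratio); on a tie, the label
    that first reached the winning count wins."""
    counts = Counter(labels)
    label, count = counts.most_common(1)[0]
    return label, count / len(labels)
```

Low agreement ratios can then trigger the heavier mechanisms mentioned in the text, such as expert review or additional gold-standard checks.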
Throughout the last decades, several scholars have stressed the importance of motivational factors (T12) as a quality assurance determinant and also as a catalyst for sustained participation in crowdsourcing [147]. Briefly, the taxonomy-based review identified 20 primary studies (80%) addressing motivation and incentive mechanisms regarding the use of algorithmic systems powered by crowdsourcing techniques. This includes extrinsic incentives (e.g., immediate payoffs) and also intrinsic (hedonic) motives like inherent satisfaction and entertainment [112]. For example, Evorus [39] provides a continuously updated scoreboard that displays the reward points given to each crowd worker according to his/her performance on a particular task, where the value is automatically converted into a monetary bonus. As Truong et al. [148] have noted, crowdsourcing contests are also considered intuitive ways of incentivizing crowd workers and are frequently used in macrotask crowdsourcing for solving problems with an elevated degree of complexity [81][149]. In general terms, the incentives reported in the literature range from monetary rewards to gifts and gamification strategies [112]. Concerning the former, the review presented here also provides a summary of the primary studies from the sample that presented experimental work based on monetary rewards. 60% of the papers included in the taxonomy-based literature review (n = 15) reported paid experiments in remote settings. For paid crowdsourcing experiments where the crowd had to execute the whole experiment remotely, this part of the analysis considered the time allotted, pre-selection mechanism(s), crowd size, platform(s) used, and reward in terms of cost per HIT in US Dollars ($). This is in line with previous studies (e.g., [91]) reporting aspects related to the several stages of experimental design in crowdsourcing settings.
Regarding the filtering mechanisms used for early pre-selection of crowd workers, the review of the literature showed five studies where the HIT acceptance rate was set to more than 95%. Moreover, this contribution also identified four studies where the number of tasks completed by a potential crowd worker had to be at least 1000. From this scoping review, a total of five experiments involved some type of ground truth in the form of a gold standard or test question. The selected sample also contained cases in which no pre-selection strategies were applied, while one of the experiments disregarded crowd workers with more than 15 percent of incorrect answers. It is also worth noting that one of the primary studies contained workers located in the United States only. Taken together, these pre-selection techniques can be useful to specify the characteristics of potential contributors and improve the likelihood that only skilled, high-performing, and/or trustworthy crowd workers are allowed to participate. When considering the platforms used to recruit participants, the results show a clear preference for MTurk (n = 14). Although some tasks were paid up to $0.20, some workers only received $0.05 per task performed. Going back to the payment imbalances and unfair compensation that challenge ethical norms in crowdsourcing marketplaces [150][151], a lens into the literature has revealed that there is an increasing awareness of crowd workers’ conditions and that monetary compensation must be set in a fair manner when adopting crowdsourcing for tasks such as data collection and analysis. Overall, the analysis also revealed different average times of HIT completion in accordance with the complexity and requirements of each task, while a remarkable number of primary studies (n = 10) did not mention the total number of crowd workers involved in the experiment.
Nonetheless, some studies involve both crowd workers and experts in their experimental settings, with a crowd size ranging from 2 to 7 crowd workers per task and a maximum size of 147 paid online workers in a single experiment.

6. Concluding Discussion and Challenges Ahead

Owing to the difficulty in handling problems of increasing complexity involving noisy and complex data streams, hybrid crowd-machine interactive workflows have been implemented to efficiently scale training data and parameter models in order to produce insights and support decision-making processes in a way that was not possible using conventional methods. In various problem domains, new patterns can be identified from complex decision rules for further verification on a human-in-the-loop basis, encapsulated in crowd-AI systems and architectures able to support tasks like content regulation and medical diagnosis. Considering the latter, machine learning skills are now increasingly crowdsourced in the form of contests or competitions running on predictive modeling and analytics services where both monetary and non-monetary incentives are used to aggregate crowd knowledge and thus help to streamline the early detection and treatment processes that are critical in healthcare settings. However, building trust in crowd-machine interaction while making AI more efficient and adaptable are among the prevalent challenges in crowdsourcing and are usually seen as hindering factors for the successful adoption and use of these systems in practice.
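The human-in-the-loop verification pattern mentioned above can be sketched minimally: model predictions below a confidence threshold are escalated to crowd workers, whose aggregated judgment replaces the uncertain output. All names, the threshold, and the majority-vote aggregation are illustrative assumptions, not a description of any specific system from the review.

```python
# Minimal human-in-the-loop sketch: confident model predictions are
# accepted; uncertain ones are routed to crowd workers for verification.
# Threshold and aggregation strategy are hypothetical choices.

THRESHOLD = 0.8

def majority_vote(votes: list) -> str:
    """Aggregate crowd judgments by simple majority."""
    return max(set(votes), key=votes.count)

def route(prediction: str, confidence: float, crowd_verify) -> str:
    """Accept confident model output; escalate uncertain cases to the crowd."""
    if confidence >= THRESHOLD:
        return prediction
    return crowd_verify(prediction)

# An uncertain prediction escalated to three (simulated) crowd judgments.
label = route("damage", 0.55,
              lambda p: majority_vote(["no-damage", "damage", "no-damage"]))
print(label)  # the crowd overrides the low-confidence model output
```

In practice the verified labels would also be fed back as training data, closing the loop between the crowd and the model.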
An initial taxonomy of crowd-AI hybrid interaction was proposed as a guiding framework for system developers, public and private health professionals, scientists, and other stakeholders worldwide interested in this emerging area. Despite the contribution towards a comprehensive scheme to explain how crowd-machine hybrid interaction has been addressed in various scenarios presented in the literature, the work presented here constitutes only one piece of a much larger puzzle. In other words, the information obtained from the work presented here is considered a basis for further expansion and testing in real-world contexts, in the form of continuous observation of the co-evolving relations between humans and algorithms, with the goal of informing the design of intelligent systems adequately and cohesively. Framing a territory in constant expansion like crowd-AI hybrids is a challenging task. Overall, the taxonomy-based review found a gap in terms of understanding, both empirically and conceptually, the role of ethical principles and perceived fairness in building and deploying AI responsibly and with adequate governance strategies. It also shows that more experimentation and additional investigative steps will be needed to cope with inconsistent records from crowd workers. Moreover, there are a number of directions for future work worth pursuing in the near future for new types of research practices involving crowd-computing hybrids, so that scientific institutions, companies, and the general public can all benefit from the knowledge generated from this convergence and therefore better respond to the volatile nature and changing demands of current environments.

References

  1. Lofi, C.; El Maarry, K. Design patterns for hybrid algorithmic-crowdsourcing workflows. In Proceedings of the IEEE 16th Conference on Business Informatics, Geneva, Switzerland, 14–17 July 2014; pp. 1–8.
  2. Heim, E.; Roß, T.; Seitel, A.; März, K.; Stieltjes, B.; Eisenmann, M.; Lebert, J.; Metzger, J.; Sommer, G.; Sauter, A.W.; et al. Large-scale medical image annotation with crowd-powered algorithms. J. Med. Imaging 2018, 5, 034002.
  3. Vargas-Santiago, M.; Monroy, R.; Ramirez-Marquez, J.E.; Zhang, C.; Leon-Velasco, D.A.; Zhu, H. Complementing solutions to optimization problems via crowdsourcing on video game plays. Appl. Sci. 2020, 10, 8410.
  4. Bharadwaj, A.; Gwizdala, D.; Kim, Y.; Luther, K.; Murali, T.M. Flud: A hybrid crowd–algorithm approach for visualizing biological networks. ACM Trans. Comput. Interact. 2022, 29, 1–53.
  5. Grudin, J.; Poltrock, S. Taxonomy and theory in computer supported cooperative work. Oxf. Handb. Organ. Psychol. 2012, 2, 1323–1348.
  6. Nickerson, R.C.; Varshney, U.; Muntermann, J. A method for taxonomy development and its application in information systems. Eur. J. Inf. Syst. 2013, 22, 336–359.
  7. Harris, A.M.; Gómez-Zará, D.; DeChurch, L.A.; Contractor, N.S. Joining together online: The trajectory of CSCW scholarship on group formation. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–27.
  8. McGrath, J.E. Groups: Interaction and Performance; Prentice-Hall: Englewood Cliffs, NJ, USA, 1984.
  9. Shaw, M.E. Scaling group tasks: A method for dimensional analysis. JSAS Cat. Sel. Doc. Psychol. 1973, 3, 8.
  10. Modaresnezhad, M.; Iyer, L.; Palvia, P.; Taras, V. Information technology (IT) enabled crowdsourcing: A conceptual framework. Inf. Process. Manag. 2020, 57, 102135.
  11. Bhatti, S.S.; Gao, X.; Chen, G. General framework, opportunities and challenges for crowdsourcing techniques: A comprehensive survey. J. Syst. Softw. 2020, 167, 110611.
  12. Johansen, R. Groupware: Computer Support for Business Teams; The Free Press: New York, NY, USA, 1988.
  13. Lee, C.P.; Paine, D. From the matrix to a model of coordinated action (MoCA): A conceptual framework of and for CSCW. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015; pp. 179–194.
  14. Renyi, M.; Gaugisch, P.; Hunck, A.; Strunck, S.; Kunze, C.; Teuteberg, F. Uncovering the complexity of care networks—Towards a taxonomy of collaboration complexity in homecare. Comput. Support. Coop. Work (CSCW) 2022, 31, 517–554.
  15. Thomer, A.K.; Twidale, M.B.; Yoder, M.J. Transforming taxonomic interfaces: “Arm’s length” cooperative work and the maintenance of a long-lived classification system. Proc. ACM Hum.-Comput. Interact. 2018, 2, 1–23.
  16. Akata, Z.; Balliet, D.; de Rijke, M.; Dignum, F.; Dignum, V.; Eiben, G.; Fokkens, A.; Grossi, D.; Hindriks, K.V.; Hoos, H.H.; et al. A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer 2020, 53, 18–28.
  17. Pescetelli, N. A brief taxonomy of hybrid intelligence. Forecasting 2021, 3, 633–643.
  18. Dellermann, D.; Calma, A.; Lipusch, N.; Weber, T.; Weigel, S.; Ebel, P. The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, HI, USA, 8–11 January 2019; pp. 274–283.
  19. Dubey, A.; Abhinav, K.; Jain, S.; Arora, V.; Puttaveerana, A. HACO: A framework for developing human-AI teaming. In Proceedings of the 13th Innovations in Software Engineering Conference, Jabalpur, India, 27–29 February 2020; pp. 1–9.
  20. Littmann, M.; Suomela, T. Crowdsourcing, the great meteor storm of 1833, and the founding of meteor science. Endeavour 2014, 38, 130–138.
  21. Corney, J.R.; Torres-Sánchez, C.; Jagadeesan, A.P.; Regli, W.C. Outsourcing labour to the cloud. Int. J. Innovation Sustain. Dev. 2009, 4, 294–313.
  22. Rouse, A.C. A preliminary taxonomy of crowdsourcing. In Proceedings of the Australasian Conference on Information Systems, Brisbane, Australia, 1–3 December 2010; Volume 76.
  23. Malone, T.W.; Laubacher, R.; Dellarocas, C. The collective intelligence genome. IEEE Eng. Manag. Rev. 2010, 38, 38–52.
  24. Zwass, V. Co-creation: Toward a taxonomy and an integrated research perspective. Int. J. Electron. Commer. 2010, 15, 11–48.
  25. Doan, A.; Ramakrishnan, R.; Halevy, A.Y. Crowdsourcing systems on the world-wide web. Commun. ACM 2011, 54, 86–96.
  26. Saxton, G.D.; Oh, O.; Kishore, R. Rules of crowdsourcing: Models, issues, and systems of control. Inf. Syst. Manag. 2013, 30, 2–20.
  27. Nakatsu, R.T.; Grossman, E.B.; Iacovou, C.L. A taxonomy of crowdsourcing based on task complexity. J. Inf. Sci. 2014, 40, 823–834.
  28. Gadiraju, U.; Kawase, R.; Dietze, S. A taxonomy of microtasks on the web. In Proceedings of the 25th ACM Conference on Hypertext and Social Media, Santiago, Chile, 1–4 September 2014; pp. 218–223.
  29. Hosseini, M.; Shahri, A.; Phalp, K.; Taylor, J.; Ali, R. Crowdsourcing: A taxonomy and systematic mapping study. Comput. Sci. Rev. 2015, 17, 43–69.
  30. Alabduljabbar, R.; Al-Dossari, H. Towards a classification model for tasks in crowdsourcing. In Proceedings of the Second International Conference on Internet of Things and Cloud Computing, Cambridge, UK, 22–23 March 2017; pp. 1–7.
  31. Chen, Q.; Magnusson, M.; Björk, J. Exploring the effects of problem- and solution-related knowledge sharing in internal crowdsourcing. J. Knowl. Manag. 2022, 26, 324–347.
  32. Chilton, L.B.; Little, G.; Edge, D.; Weld, D.S.; Landay, J.A. Cascade: Crowdsourcing taxonomy creation. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 1999–2008.
  33. Sharif, A.; Gopal, P.; Saugstad, M.; Bhatt, S.; Fok, R.; Weld, G.; Dey, K.A.M.; Froehlich, J.E. Experimental crowd+AI approaches to track accessibility features in sidewalk intersections over time. In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, 18–22 October 2021; pp. 1–5.
  34. Zhang, D.Y.; Huang, Y.; Zhang, Y.; Wang, D. Crowd-assisted disaster scene assessment with human-AI interactive attention. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 2717–2724.
  35. Kaspar, A.; Patterson, G.; Kim, C.; Aksoy, Y.; Matusik, W.; Elgharib, M. Crowd-guided ensembles: How can we choreograph crowd workers for video segmentation? In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018.
  36. Zhang, Y.; Zong, R.; Kou, Z.; Shang, L.; Wang, D. CollabLearn: An uncertainty-aware crowd-AI collaboration system for cultural heritage damage assessment. IEEE Trans. Comput. Soc. Syst. 2021, 9, 1515–1529.
  37. Maier-Hein, L.; Ross, T.; Gröhl, J.; Glocker, B.; Bodenstedt, S.; Stock, C.; Heim, E.; Götz, M.; Wirkert, S.J.; Kenngott, H.; et al. Crowd-algorithm collaboration for large-scale endoscopic image annotation with confidence. In Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 616–623.
  38. Mohanty, V.; Thames, D.; Mehta, S.; Luther, K. Photo Sleuth: Combining human expertise and face recognition to identify historical portraits. In Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 17–20 March 2019; pp. 547–557.
  39. Huang, T.H.; Chang, J.C.; Bigham, J.P. Evorus: A crowd-powered conversational assistant built to automate itself over time. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; p. 295.
  40. Guo, A.; Jain, A.; Ghose, S.; Laput, G.; Harrison, C.; Bigham, J.P. Crowd-AI camera sensing in the real world. Proc. ACM Interactive, Mobile, Wearable Ubiquitous Technol. 2018, 2, 1–20.
  41. Correia, A.; Paredes, H.; Schneider, D.; Jameel, S.; Fonseca, B. Towards hybrid crowd-AI centered systems: Developing an integrated framework from an empirical perspective. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Bari, Italy, 6–9 October 2019; pp. 4013–4018.
  42. Xu, W.; Dainoff, M.J.; Ge, L.; Gao, Z. Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. Int. J. Human–Computer Interact. 2022, 39, 494–518.
  43. Colazo, M.; Alvarez-Candal, A.; Duffard, R. Zero-phase angle asteroid taxonomy classification using unsupervised machine learning algorithms. Astron. Astrophys. 2022, 666, A77.
  44. Mock, F.; Kretschmer, F.; Kriese, A.; Böcker, S.; Marz, M. Taxonomic classification of DNA sequences beyond sequence similarity using deep neural networks. Proc. Natl. Acad. Sci. USA 2022, 119, e2122636119.
  45. Rasch, R.F. The nature of taxonomy. Image J. Nurs. Scholarsh. 1987, 19, 147–149.
  46. Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.; Colquhoun, H.; Kastner, M.; Levac, D.; Ng, C.; Sharpe, J.P.; Wilson, K.; et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med. Res. Methodol. 2016, 16, 15.
  47. Sokal, R.R. Phenetic taxonomy: Theory and methods. Annu. Rev. Ecol. Syst. 1986, 17, 423–442.
  48. Oberländer, A.M.; Lösser, B.; Rau, D. Taxonomy research in information systems: A systematic assessment. In Proceedings of the 27th European Conference on Information Systems, Stockholm and Uppsala, Sweden, 8–14 June 2019.
  49. Gerber, A. Computational ontologies as classification artifacts in IS research. In Proceedings of the 24th Americas Conference on Information Systems, New Orleans, LA, USA, 16–18 August 2018.
  50. Webster, J.; Watson, R.T. Analyzing the past to prepare for the future: Writing a literature review. MIS Q. 2002, 26, 2.
  51. Schmidt-Kraepelin, M.; Thiebes, S.; Tran, M.C.; Sunyaev, A. What’s in the game? Developing a taxonomy of gamification concepts for health apps. In Proceedings of the 51st Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 3–6 January 2018; pp. 1–10.
  52. Sai, A.R.; Buckley, J.; Fitzgerald, B.; Le Gear, A. Taxonomy of centralization in public blockchain systems: A systematic literature review. Inf. Process. Manag. 2021, 58, 102584.
  53. Andraschko, L.; Wunderlich, P.; Veit, D.; Sarker, S. Towards a taxonomy of smart home technology: A preliminary understanding. In Proceedings of the 42nd International Conference on Information Systems, Austin, TX, USA, 12–15 December 2021.
  54. Larsen, K.R.; Hovorka, D.; Dennis, A.; West, J.D. Understanding the elephant: The discourse approach to boundary identification and corpus construction for theory review articles. J. Assoc. Inf. Syst. 2019, 20, 15.
  55. Elliott, J.H.; Turner, T.; Clavisi, O.; Thomas, J.; Higgins, J.P.T.; Mavergames, C.; Gruen, R.L. Living systematic reviews: An emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014, 11, e1001603.
  56. Singh, V.K.; Singh, P.; Karmakar, M.; Leta, J.; Mayr, P. The journal coverage of Web of Science, Scopus and Dimensions: A comparative analysis. Scientometrics 2021, 126, 5113–5142.
  57. Kittur, A.; Nickerson, J.V.; Bernstein, M.; Gerber, E.; Shaw, A.; Zimmerman, J.; Lease, M.; Horton, J.J. The future of crowd work. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, San Antonio, TX, USA, 23–27 February 2013; pp. 1301–1318.
  58. Zhang, D.; Zhang, Y.; Li, Q.; Plummer, T.; Wang, D. CrowdLearn: A crowd-AI hybrid system for deep learning-based damage assessment applications. In Proceedings of the 39th IEEE International Conference on Distributed Computing Systems, Dallas, TX, USA, 7–10 July 2019; pp. 1221–1232.
  59. Landolt, S.; Wambsganss, T.; Söllner, M. A taxonomy for deep learning in natural language processing. In Proceedings of the 54th Hawaii International Conference on System Sciences, Kauai, HI, USA, 5 January 2021; pp. 1061–1070.
  60. Straus, S.G. Testing a typology of tasks: An empirical validation of McGrath’s (1984) group task circumplex. Small Group Res. 1999, 30, 166–187.
  61. Chesbrough, H.W. Open Innovation: The New Imperative for Creating and Profiting from Technology; Harvard Business Press: Boston, MA, USA, 2003.
  62. Karachiwalla, R.; Pinkow, F. Understanding crowdsourcing projects: A review on the key design elements of a crowdsourcing initiative. Creativity Innov. Manag. 2021, 30, 563–584.
  63. Hemmer, P.; Schemmer, M.; Vössing, M.; Kühl, N. Human-AI complementarity in hybrid intelligence systems: A structured literature review. In Proceedings of the 25th Pacific Asia Conference on Information Systems, Virtual Event, Dubai, United Arab Emirates, 12–14 July 2021; p. 78.
  64. Holstein, K.; Aleven, V.; Rummel, N. A conceptual framework for human-AI hybrid adaptivity in education. In Proceedings of the 21st International Conference on Artificial Intelligence in Education, Ifrane, Morocco, 6–10 July 2020; pp. 240–254.
  65. Siemon, D. Elaborating team roles for artificial intelligence-based teammates in human-AI collaboration. Group Decis. Negot. 2022, 31, 871–912.
  66. Weber, E.; Marzo, N.; Papadopoulos, D.P.; Biswas, A.; Lapedriza, A.; Ofli, F.; Imran, M.; Torralba, A. Detecting natural disasters, damage, and incidents in the wild. In Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 331–350.
  67. Vaughan, J.W. Making better use of the crowd: How crowdsourcing can advance machine learning research. J. Mach. Learn. Res. 2017, 18, 7026–7071.
  68. Hamadi, R.; Ghazzai, H.; Massoud, Y. A generative adversarial network for financial advisor recruitment in smart crowdsourcing platforms. Appl. Sci. 2022, 12, 9830.
  69. Alter, S. Work system theory: Overview of core concepts, extensions, and challenges for the future. J. Assoc. Inf. Syst. 2013, 14, 2.
  70. Venumuddala, V.R.; Kamath, R. Work systems in the Indian information technology (IT) industry delivering artificial intelligence (AI) solutions and the challenges of work from home. Inf. Syst. Front. 2022, 1–25.
  71. Nardi, B. Context and Consciousness: Activity Theory and Human-Computer Interaction; MIT Press: Cambridge, MA, USA, 1996.
  72. Neale, D.C.; Carroll, J.M.; Rosson, M.B. Evaluating computer-supported cooperative work: Models and frameworks. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, Chicago, IL, USA, 6–10 November 2004; pp. 112–121.
  73. Lee, S.W.; Krosnick, R.; Park, S.Y.; Keelean, B.; Vaidya, S.; O’Keefe, S.D.; Lasecki, W.S. Exploring real-time collaboration in crowd-powered systems through a UI design tool. Proc. ACM Human-Computer Interact. 2018, 2, 1–23.
  74. Wang, X.; Ding, L.; Wang, Q.; Xie, J.; Wang, T.; Tian, X.; Guan, Y.; Wang, X. A picture is worth a thousand words: Share your real-time view on the road. IEEE Trans. Veh. Technol. 2016, 66, 2902–2914.
  75. Agapie, E.; Teevan, J.; Monroy-Hernández, A. Crowdsourcing in the field: A case study using local crowds for event reporting. In Proceedings of the Third AAAI Conference on Human Computation and Crowdsourcing, San Diego, CA, USA, 8–11 November 2015; pp. 2–11.
  76. Lafreniere, B.J.; Grossman, T.; Anderson, F.; Matejka, J.; Kerrick, H.; Nagy, D.; Vasey, L.; Atherton, E.; Beirne, N.; Coelho, M.H.; et al. Crowdsourced fabrication. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 15–28.
  77. Aristeidou, M.; Scanlon, E.; Sharples, M. Profiles of engagement in online communities of citizen science participation. Comput. Hum. Behav. 2017, 74, 246–256.
  78. Bouwer, A. Under which conditions are humans motivated to delegate tasks to AI? A taxonomy on the human emotional state driving the motivation for AI delegation. In Marketing and Smart Technologies; Springer: Singapore, 2022; pp. 37–53.
  79. Lubars, B.; Tan, C. Ask not what AI can do, but what AI should do: Towards a framework of task delegability. In Proceedings of the Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 57–67.
  80. Sun, Y.; Ma, X.; Ye, K.; He, L. Investigating crowdworkers’ identify, perception and practices in micro-task crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–20.
  81. Khan, V.J.; Papangelis, K.; Lykourentzou, I.; Markopoulos, P. Macrotask Crowdsourcing—Engaging the Crowds to Address Complex Problems; Human-Computer Interaction Series; Springer: Cham, Switzerland, 2019.
  82. Teevan, J. The future of microwork. XRDS Crossroads ACM Mag. Stud. 2016, 23, 26–29.
  83. Zulfiqar, M.; Malik, M.N.; Khan, H.H. Microtasking activities in crowdsourced software development: A systematic literature review. IEEE Access 2022, 10, 24721–24737.
  84. Rahman, H.; Roy, S.B.; Thirumuruganathan, S.; Amer-Yahia, S.; Das, G. Optimized group formation for solving collaborative tasks. VLDB J. 2018, 28, 1–23.
  85. Schmitz, H.; Lykourentzou, I. Online sequencing of non-decomposable macrotasks in expert crowdsourcing. ACM Trans. Soc. Comput. 2018, 1, 1–33.
  86. Jin, Y.; Carman, M.; Zhu, Y.; Xiang, Y. A technical survey on statistical modelling and design methods for crowdsourcing quality control. Artif. Intell. 2020, 287, 103351.
  87. Moayedikia, A.; Ghaderi, H.; Yeoh, W. Optimizing microtask assignment on crowdsourcing platforms using Markov chain Monte Carlo. Decis. Support Syst. 2020, 139, 113404.
  88. Amershi, S.; Weld, D.; Vorvoreanu, M.; Fourney, A.; Nushi, B.; Collisson, P.; Suh, J.; Iqbal, S.T.; Bennett, P.N.; Inkpen, K.; et al. Guidelines for human-AI interaction. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019.
  89. Rafner, J.; Gajdacz, M.; Kragh, G.; Hjorth, A.; Gander, A.; Palfi, B.; Berditchevskiaia, A.; Grey, F.; Gal, K.; Segal, A.; et al. Mapping citizen science through the lens of human-centered AI. Hum. Comput. 2022, 9, 66–95.
  90. Shneiderman, B. Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. 2020, 10, 1–31.
  91. Ramírez, J.; Sayin, B.; Baez, M.; Casati, F.; Cernuzzi, L.; Benatallah, B.; Demartini, G. On the state of reporting in crowdsourcing experiments and a checklist to aid current practices. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–34.
  92. Robert, L.; Romero, D.M. Crowd size, diversity and performance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 1379–1382.
  93. Blandford, A. Intelligent interaction design: The role of human-computer interaction research in the design of intelligent systems. Expert Syst. 2001, 18, 3–18.
  94. Huang, K.; Zhou, J.; Chen, S. Being a solo endeavor or team worker in crowdsourcing contests? It is a long-term decision you need to make. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–32.
  95. Venkatagiri, S.; Thebault-Spieker, J.; Kohler, R.; Purviance, J.; Mansur, R.S.; Luther, K. GroundTruth: Augmenting expert image geolocation with crowdsourcing and shared representations. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–30.
  96. Zhou, S.; Valentine, M.; Bernstein, M.S. In search of the dream team: Temporally constrained multi-armed bandits for identifying effective team structures. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018.
  97. Gray, M.L.; Suri, S.; Ali, S.S.; Kulkarni, D. The crowd is a collaborative network. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, San Francisco, CA, USA, 27 February–2 March 2016; pp. 134–147.
  98. Zhang, X.; Zhang, W.; Zhao, Y.; Zhu, Q. Imbalanced volunteer engagement in cultural heritage crowdsourcing: A task-related exploration based on causal inference. Inf. Process. Manag. 2022, 59, 103027.
  99. McNeese, N.J.; Demir, M.; Cooke, N.J.; She, M. Team situation awareness and conflict: A study of human–machine teaming. J. Cogn. Eng. Decis. Mak. 2021, 15, 83–96.
  100. Dafoe, A.; Bachrach, Y.; Hadfield, G.; Horvitz, E.; Larson, K.; Graepel, T. Cooperative AI: Machines must learn to find common ground. Nature 2021, 593, 33–36.
  101. Alorwu, A.; Savage, S.; van Berkel, N.; Ustalov, D.; Drutsa, A.; Oppenlaender, J.; Bates, O.; Hettiachchi, D.; Gadiraju, U.; Gonçalves, J.; et al. REGROW: Reimagining global crowdsourcing for better human-AI collaboration. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–7.
  102. Santos, C.A.; Baldi, A.M.; de Assis Neto, F.R.; Barcellos, M.P. Essential elements, conceptual foundations and workflow design in crowd-powered projects. J. Inf. Sci. 2022.
  103. Valentine, M.A.; Retelny, D.; To, A.; Rahmati, N.; Doshi, T.; Bernstein, M.S. Flash organizations: Crowdsourcing complex work by structuring crowds as organizations. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 3523–3537.
  104. Kamar, E. Directions in hybrid intelligence: Complementing AI systems with human intelligence. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 4070–4073.
  105. Tocchetti, A.; Corti, L.; Brambilla, M.; Celino, I. EXP-Crowd: A gamified crowdsourcing framework for explainability. Front. Artif. Intell. 2022, 5, 826499.
  106. Barbosa, N.M.; Chen, M. Rehumanized crowdsourcing: A labeling framework addressing bias and ethics in machine learning. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–12.
  107. Basker, T.; Tottler, D.; Sanguet, R.; Muffbur, J. Artificial intelligence and human learning: Improving analytic reasoning via crowdsourcing and structured analytic techniques. Comput. Educ. 2022, 3, 1003056.
  108. Mirbabaie, M.; Brendel, A.B.; Hofeditz, L. Ethics and AI in information systems research. Commun. Assoc. Inf. Syst. 2022, 50, 38.
  109. Sundar, S.S. Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). J. Comput. Commun. 2020, 25, 74–88.
  110. Liu, B. In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction. J. Comput. Commun. 2021, 26, 384–402.
  111. Kang, H.; Lou, C. AI agency vs. human agency: Understanding human–AI interactions on TikTok and their implications for user engagement. J. Comput. Commun. 2022, 27, zmac014.
  112. Daniel, F.; Kucherbaev, P.; Cappiello, C.; Benatallah, B.; Allahbakhsh, M. Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions. ACM Comput. Surv. 2018, 51, 1–40.
  113. Pedersen, J.; Kocsis, D.; Tripathi, A.; Tarrell, A.; Weerakoon, A.; Tahmasbi, N.; Xiong, J.; Deng, W.; Oh, O.; de Vreede, G.-J. Conceptual foundations of crowdsourcing: A review of IS research. In Proceedings of the 46th Hawaii International Conference on System Sciences, Wailea, HI, USA, 7–10 January 2013; pp. 579–588.
  114. Hansson, K.; Ludwig, T. Crowd dynamics: Conflicts, contradictions, and community in crowdsourcing. Comput. Support. Coop. Work. 2019, 28, 791–794.
  115. Gimpel, H.; Graf-Seyfried, V.; Laubacher, R.; Meindl, O. Towards artificial intelligence augmenting facilitation: AI affordances in macro-task crowdsourcing. Group Decis. Negot. 2023, 1–50.
  116. Wu, T.; Terry, M.; Cai, C.J. AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022.
  117. Kobayashi, M.; Wakabayashi, K.; Morishima, A. Human+AI crowd task assignment considering result quality requirements. In Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing, Virtual, 14–18 November 2021; pp. 97–107.
  118. Eggert, M.; Alberts, J. Frontiers of business intelligence and analytics 3.0: A taxonomy-based literature review and research agenda. Bus. Res. 2020, 13, 685–739.
  119. Chan, J.; Chang, J.C.; Hope, T.; Shahaf, D.; Kittur, A. SOLVENT: A mixed initiative system for finding analogies between research papers. Proc. ACM Hum.-Comput. Interact. 2018, 2, 1–21.
  120. Zhang, Y.; Shang, L.; Zong, R.; Wang, Z.; Kou, Z.; Wang, D. StreamCollab: A streaming crowd-AI collaborative system to smart urban infrastructure monitoring in social sensing. In Proceedings of the Ninth AAAI Conference on Human Computation and Crowdsourcing, Virtual, 14–18 November 2021; pp. 179–190.
  121. Yang, J.; Smirnova, A.; Yang, D.; Demartini, G.; Lu, Y.; Cudré-Mauroux, P. Scalpel-CD: Leveraging crowdsourcing and deep probabilistic modeling for debugging noisy training data. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2158–2168.
  122. Schlagwein, D.; Cecez-Kecmanovic, D.; Hanckel, B. Ethical norms and issues in crowdsourcing practices: A Habermasian analysis. Inf. Syst. J. 2018, 29, 811–837.
  123. Gadiraju, U.; Demartini, G.; Kawase, R.; Dietze, S. Crowd anatomy beyond the good and bad: Behavioral traces for crowd worker modeling and pre-selection. Comput. Support. Coop. Work 2018, 28, 815–841.
  124. Palmer, M.S.; Huebner, S.E.; Willi, M.; Fortson, L.; Packer, C. Citizen science, computing, and conservation: How can “crowd AI” change the way we tackle large-scale ecological challenges? Hum. Comput. 2021, 8, 54–75.
  125. Mannes, A. Governance, risk, and artificial intelligence. AI Mag. 2020, 41, 61–69.
  126. Choung, H.; David, P.; Ross, A. Trust and ethics in AI. AI Soc. 2022, 1–13.
  127. Zheng, Q.; Tang, Y.; Liu, Y.; Liu, W.; Huang, Y. UX research on conversational human-AI interaction: A literature review of the ACM Digital Library. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April–5 May 2022.
  128. Heath, C.; Svensson, M.S.; Hindmarsh, J.; Luff, P.; Vom Lehn, D. Configuring awareness. Comput. Support. Coop. Work. 2002, 11, 317–347.
  129. Park, J.; Krishna, R.; Khadpe, P.; Fei-Fei, L.; Bernstein, M. AI-based request augmentation to increase crowdsourcing participation. In Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing, Stevenson, WA, USA, 28–30 October 2019; pp. 115–124.
  130. Star, S.L.; Ruhleder, K. Steps towards an ecology of infrastructure: Complex problems in design and access for large-scale collaborative systems. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, Chapel Hill, NC, USA, 22–26 October 1994; pp. 253–264.
  131. Mosconi, G.; Korn, M.; Reuter, C.; Tolmie, P.; Teli, M.; Pipek, V. From Facebook to the neighbourhood: Infrastructuring of hybrid community engagement. Comput. Support. Coop. Work 2017, 26, 959–1003.
  132. Ehsan, U.; Liao, Q.V.; Muller, M.; Riedl, M.O.; Weisz, J.D. Expanding explainability: Towards social transparency in AI systems. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–19.
  133. Thieme, A.; Cutrell, E.; Morrison, C.; Taylor, A.; Sellen, A. Interpretability as a dynamic of human-AI interaction. Interactions 2020, 27, 40–45.
  134. Walzner, D.D.; Fuegener, A.; Gupta, A. Managing AI advice in crowd decision-making. In Proceedings of the International Conference on Information Systems, Copenhagen, Denmark, 9–14 December 2022; p. 1315.
  135. Anjum, S.; Verma, A.; Dang, B.; Gurari, D. Exploring the use of deep learning with crowdsourcing to annotate images. Hum. Comput. 2021, 8, 76–106.
  136. Trouille, L.; Lintott, C.J.; Fortson, L.F. Citizen science frontiers: Efficiency, engagement, and serendipitous discovery with human-machine systems. Proc. Natl. Acad. Sci. USA 2019, 116, 1902–1909.
  137. Zhou, Z.; Yatani, K. Gesture-aware interactive machine teaching with in-situ object annotations. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, Bend, OR, USA, 29 October–2 November 2022; pp. 1–14.
  138. Avdic, M.; Bødker, S.; Larsen-Ledet, I. Two cases for traces: A theoretical framing of mediated joint activity. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–28.
  139. Tchernavskij, P.; Bødker, S. Entangled artifacts: The meeting between a volunteer-run citizen science project and a biodiversity data platform. In Proceedings of the Nordic Human-Computer Interaction Conference, Aarhus, Denmark, 8–12 October 2022; pp. 1–13.
  140. Rzeszotarski, J.M.; Kittur, A. Instrumenting the crowd: Using implicit behavioral measures to predict task performance. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011; pp. 13–22.
  141. Newman, A.; McNamara, B.; Fosco, C.; Zhang, Y.B.; Sukhum, P.; Tancik, M.; Kim, N.W.; Bylinskii, Z. TurkEyes: A web-based toolbox for crowdsourcing attention data. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13.
  142. Goyal, T.; McDonnell, T.; Kutlu, M.; Elsayed, T.; Lease, M. Your behavior signals your reliability: Modeling crowd behavioral traces to ensure quality relevance annotations. In Proceedings of the Sixth AAAI Conference on Human Computation and Crowdsourcing, Zürich, Switzerland, 5–8 July 2018; pp. 41–49.
  143. Hettiachchi, D.; Van Berkel, N.; Kostakos, V.; Goncalves, J. CrowdCog: A cognitive skill based system for heterogeneous task assignment and recommendation in crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2020, 4, 1–22.
  144. Zimmerman, J.; Oh, C.; Yildirim, N.; Kass, A.; Tung, T.; Forlizzi, J. UX designers pushing AI in the enterprise: A case for adaptive UIs. Interactions 2020, 28, 72–77.
  145. Hettiachchi, D.; Kostakos, V.; Goncalves, J. A survey on task assignment in crowdsourcing. ACM Comput. Surv. 2022, 55, 1–35.
  146. Pei, W.; Yang, Z.; Chen, M.; Yue, C. Quality control in crowdsourcing based on fine-grained behavioral features. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–28.
  147. Bakici, T. Comparison of crowdsourcing platforms from social-psychological and motivational perspectives. Int. J. Inf. Manag. 2020, 54, 102121.
  148. Truong, N.V.-Q.; Dinh, L.C.; Stein, S.; Tran-Thanh, L.; Jennings, N.R. Efficient and adaptive incentive selection for crowdsourcing contests. Appl. Intell. 2022, 1–31.
  149. Correia, A.; Jameel, S.; Paredes, H.; Fonseca, B.; Schneider, D. Hybrid machine-crowd interaction for handling complexity: Steps toward a scaffolding design framework. In Macrotask Crowdsourcing—Engaging the Crowds to Address Complex Problems; Human-Computer Interaction Series; Springer: Cham, Switzerland, 2019; pp. 149–161.
  150. Sutherland, W.; Jarrahi, M.H.; Dunn, M.; Nelson, S.B. Work precarity and gig literacies in online freelancing. Work Employ. Soc. 2019, 34, 457–475.
  151. Salminen, J.; Kamel, A.M.S.; Jung, S.-G.; Mustak, M.; Jansen, B.J. Fair compensation of crowdsourcing work: The problem of flat rates. Behav. Inf. Technol. 2022, 1–22.