Correia, A.; Grover, A.; Schneider, D.; Pimentel, A.P.; Chaves, R.; De Almeida, M.A.; Fonseca, B. Designing for Hybrid Intelligence. Encyclopedia. Available online: https://encyclopedia.pub/entry/41557 (accessed on 27 July 2024).
Designing for Hybrid Intelligence

This entry proposes a taxonomy and survey of crowd-machine interaction. Specifically, it aims to provide a glimpse into the unique characteristics of artificial intelligence (AI)-powered crowdsourcing by characterizing its uses, limitations, and prospects from a socio-technical perspective grounded in hybrid machine-crowd interaction. To this end, a scoping review of the existing literature was performed in order to frame the relevant aspects of this particular form of hybrid intelligence in light of the progress reported in prior research on human-algorithmic arrangements at a massive scale. The topics covered range from the role of crowd-AI ethicality to the analysis of the spatio-temporal characteristics of crowd activity and the behavioral traces left by crowd workers as a way of improving performance outcomes and user experience (UX) design.

Keywords: conceptual framework; crowd-machine hybrid interaction; design implications

1. Introductory Remarks

The functional structure of intelligent systems has been augmented with new properties in recent years, principally owing to advancements in the field of artificial intelligence (AI). From a conceptual point of view, hybrid intelligence can be understood as the “combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them and achieving goals that were unreachable by either humans or machines” [1]. In line with this definition, Dellermann and co-authors [2] characterized the design space of hybrid intelligence systems taking into account the structural co-evolvability of their constituent parts. By proposing a conceptual framework to describe interactions and dynamics between AI applications, human-agent teams, and society, Peeters and associates [3] identified a set of core design principles for implementing hybrid collective intelligence effectively. Nonetheless, most conceptualizations and taxonomic frameworks of human-algorithmic activity in a hybrid mode are still in their infancy and have not yet fully mapped out the characteristics of hybrid machine-crowd systems and their use cases in real-world applications. In general terms, crowd intelligence can be particularly useful for supervising, training, or even supplementing automation, while AI techniques can make the crowd more accurate while augmenting human capabilities and interactions by means of machine intelligence [4]. Through a taxonomy-based review of empirical studies involving some type of crowd-AI hybrid interaction, this entry summarizes the main points addressed in our article [5] by describing the taxonomic properties in an integrated fashion.

2. Definitional Issues and Taxonomy Proposal for Crowd-AI Hybrid Intelligent Systems and their Applications

To some extent, taxonomies provide a useful guide and theoretical foundation for assessing technological developments due to their capability to organize complex concepts and knowledge structures into understandable formats [6]. The point of departure for proposing the taxonomy summarized in this entry was the crowdsourcing literature found in the intersectional design space of human-AI interaction. From a taxonomy-building methodological viewpoint, the taxonomic design approach was largely inspired by Work System Theory as proposed by Alter [7] and further developed by Venumuddala and Kamath [8], who conducted ethnographic fieldwork in an AI research laboratory. Furthermore, some elements from an Activity Theory [9]-inspired model for assessing computer-supported cooperative work (CSCW) in distributed settings [10] were also introduced. As a result, a previous human-centered AI framework [11] was revised and extended to highlight the role of agency and control, explainability, fairness, common ground, and situational awareness in the design space of hybrid crowd-AI systems.

2.1. Spatio-Temporal Aspects of Crowd-Machine Interaction

Crowdsourcing can be seen as a gateway to obtaining reliable solutions to problems of varying levels of difficulty when there is an urgent need for quick and prompt action, or even when a game with a purpose (GWAP) [12] or a medical image segmentation application [13] must be developed without the strict requirement that participants be physically close [14]. At the interaction level, hybrid crowd-AI systems can support real-time crowdsourcing activities involving chatting and live tracking services, as well as those occurring asynchronously, such as post-match soccer video analysis. Framing this discussion within the time-space matrix originally described in the context of groupware applications [15], this section concentrates on the spatio-temporal patterns of human-AI partnerships at a crowd scale. Thus, one can argue that the notion of space has been reshaped to incorporate the provision of localization and navigation information into crowdsourcing settings as a way of exploring the full potential of local-and-remote on-demand real-time response in tasks like road data acquisition [16] and local news reporting [17]. That is, crowd workers can be physically or virtually distributed in a dispersed or co-located manner, or even “synchronize in both time and physical space” [18]. As some scholars have noted, the level of engagement in both paid and non-profit crowdsourcing communities can also be evaluated taking into account the daily devoted time of participants, the periodicity of interactions, and activity duration [19]. In this regard, the contribution time and availability of the crowd constitute key information sources in crowd-AI hybrid settings.
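The time-space matrix from the groupware literature [15] can be sketched as a simple classifier. The fragment below is illustrative only: the `CrowdTask` type and the example activities are our own framing of the synchronous/asynchronous and co-located/distributed distinctions discussed above, not an artifact of the survey.

```python
from dataclasses import dataclass

@dataclass
class CrowdTask:
    """A crowdsourcing activity placed on the groupware time-space matrix."""
    name: str
    same_time: bool   # synchronous vs. asynchronous participation
    same_place: bool  # co-located vs. geographically distributed crowd

def quadrant(task: CrowdTask) -> str:
    """Map a task onto one of the four time-space quadrants."""
    time = "synchronous" if task.same_time else "asynchronous"
    place = "co-located" if task.same_place else "distributed"
    return f"{time}/{place}"

# Examples drawn from the text: live tracking is synchronous and distributed,
# while post-match soccer video analysis is asynchronous and distributed.
print(quadrant(CrowdTask("live tracking", same_time=True, same_place=False)))
print(quadrant(CrowdTask("post-match soccer video analysis", same_time=False, same_place=False)))
```

Placing each activity in a quadrant makes the spatio-temporal design choices explicit before infrastructure decisions (e.g., real-time channels vs. batch queues) are made.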

2.2. Intelligent Task Assignment and Execution in Crowd-AI Hybrid Settings

The rapid progress of AI-based technology has led to novel ways of motivating humans to delegate tasks to AI. Bouwer [20] proposed a four-quadrant taxonomic model for AI-based task delegation and stressed the importance of emotional/affective states as key determining factors for task delegation. In line with this, Lubars and Tan [21] mentioned the relevance of trust, motivation, difficulty, and risk as influential determinants of human-AI delegation decisions. In particular, trust and reliance assume special significance in terms of delegation preferences. The strategic line behind most of the tasks commonly crowdsourced on current digital labor platforms is still grounded in microtask design settings [22], although some recent attention has been given to macrotasking activities (e.g., creative work), which involve crowd-powered tools designed to support computer-hard tasks that require specialized expertise and thus cannot be executed effectively by AI algorithms [23]. Focusing on task properties and attributes in crowdsourcing, Nakatsu and co-workers [24] introduced a taxonomy that classifies tasks by structure (well-structured vs. unstructured) and level of interdependence (independent vs. interdependent), together with a third binary dimension involving the degree of commitment (low vs. high) required to accomplish a task.
Returning to the levels of complexity that may be present in crowdsourcing tasks, Hosseini et al. [25] divided them into two main categories: simple and complex. Following this rationale, microtasks have largely been described as simple for crowd workers to perform well and easily in the sense that they involve a lesser degree of context dependence [26]. Furthermore, these self-contained tasks are usually short by nature and take little time to finish. Zulfiqar and co-authors [27] go even further by underlining that microtasks do not require specialized skills, which enables any worker to contribute in a rapid and cognitively effortless manner. Extrapolating to more complex crowdsourcing processes, many forms of advanced crowd work have emerged throughout the years, and there is now a renewed focus on task assignment optimization involving algorithmically supported teams of crowd workers acting collaboratively [28][29]. While the possibilities for optimization are manifold across a number of different task scenarios, robust forms of hybrid crowd-machine task allocation and delegation are needed to yield accurate results and reliable outcomes, not only for crowd workers acting at the individual level but also in terms of team composition and related performance.
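As a minimal sketch, the three binary dimensions of Nakatsu et al. [24] can be encoded directly, with a routing rule layered on top. Note that the microtask/macrotask split in `route` is our own illustrative assumption for this example, not part of the cited taxonomy.

```python
from dataclasses import dataclass

@dataclass
class Task:
    # The three binary dimensions from Nakatsu et al. [24].
    well_structured: bool   # well-structured vs. unstructured
    interdependent: bool    # independent vs. interdependent
    high_commitment: bool   # low vs. high degree of commitment

def route(task: Task) -> str:
    """Hypothetical routing rule (our assumption): structured, independent,
    low-commitment work reads like a microtask for an open crowd; anything
    else is treated as a macrotask needing a coordinated expert team."""
    if task.well_structured and not task.interdependent and not task.high_commitment:
        return "microtask: open crowd"
    return "macrotask: expert team"

print(route(Task(well_structured=True, interdependent=False, high_commitment=False)))
print(route(Task(well_structured=False, interdependent=True, high_commitment=True)))
```

Even this toy rule shows how a taxonomy can become operational: classifying a task along the three dimensions immediately suggests which worker pool and coordination style to use.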

2.3. The Role of Context and Situational Awareness in Crowd-Computing Hybrid Scenarios

Any crowd-machine hybrid interaction has its own contextual characteristics and specificities. Dwelling on this issue, one may argue that crowdsourcing settings are highly context-dependent and that situational information is particularly critical to achieving successful interactions in a crowd-AI working environment, since a crowd can be affected by contextual factors such as geo-location, temporal availability, and surrounding devices [30]. Considering the context from which a crowd worker interacts with an intelligent system can help to personalize the way actions unfold and thus improve processes such as task assignment [31], while providing resources and contextually relevant information tailored to the needs of each individual based on content usage behaviors [32] and other forms of context extraction. This involves a set of environmental, social, and cultural contexts [33] that pose fundamental challenges for hybrid algorithmic-crowdsourcing applications in terms of infrastructural support for efficient and accurate context detection and interpretation. When designing a crowd-AI hybrid system, user-generated inputs must be handled adequately in order to filter the relevant information and better adapt the interaction elements and styles to each particular case [34]. This is also related to the notions of explainability and trust in AI systems [35], since the trustworthy nature of these interactions will be affected by the quality of the contextual information provided and the degree to which users perceive the AI system they are interacting with as useful for aiding their activities. In such scenarios, aspects like satisfaction shape the internal states of the actors [10] and can constrain the general performance of crowd-AI partnerships if the system does not meet the users' expectations.
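A context-aware eligibility check along the factors listed in [30] (geo-location, temporal availability, surrounding devices) might look like the sketch below. The field names and filtering rules are illustrative assumptions of ours, not a scheme prescribed by the literature.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkerContext:
    location: str            # coarse geo-location label
    available_hours: range   # hours of day the worker is typically active
    device: str              # e.g., "smartphone", "desktop"

@dataclass
class ContextualTask:
    required_location: Optional[str]  # None = location-independent
    hour: int                         # hour of day the task must run
    needs_camera: bool

def eligible(ctx: WorkerContext, task: ContextualTask) -> bool:
    """Filter a task against the worker's contextual factors before assignment."""
    if task.required_location is not None and task.required_location != ctx.location:
        return False
    if task.hour not in ctx.available_hours:
        return False
    if task.needs_camera and ctx.device != "smartphone":
        return False
    return True

worker = WorkerContext(location="Porto", available_hours=range(9, 18), device="smartphone")
print(eligible(worker, ContextualTask("Porto", 10, True)))    # True
print(eligible(worker, ContextualTask("Lisbon", 10, False)))  # False
```

In a real assignment pipeline such a predicate would act as a pre-filter before any optimization step, ensuring that only contextually feasible worker-task pairs are scored.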

2.4. Behavioral Traces of Crowd Activity in Human-Algorithmic Ensembles

To some extent, both paid and non-paid forms of crowdsourcing have served as “Petri dishes” for many behavioral studies involving experimental work [36]. A crowd can differ in terms of attention level, size, emotional state, motivation and preferences, and expertise/skills, among many other characteristics [30]. In this vein, Robert and Romero [37] found a considerable impact of diversity and crowd size on performance outcomes when studying the registered users of a WikiProject Film community. As such, online crowd behaviors are volatile by nature and vary with the contextual factors and situational complexity of the work, along with the surrounding environment of its members. Neale and co-authors [10] explained the importance of context for creating common ground, which can be understood as the shared awareness among actors in their joint activities, including their mutual knowledge. That is, sustaining an appropriate shared understanding can constitute a critical success factor for achieving a successful interaction when designing intelligent systems [38]. This also applies to the range of crowd work activities that involve self-organized behaviors and transient identities [39], which imply a reinforced need for effective quality control mechanisms (e.g., gold standard questions) in crowd-AI settings [40]. Furthermore, some crowds are arbitrary, while others are socially networked or organized into teams that coalesce and dissolve in response to an open call for solutions, where the nature of the task being crowdsourced depends largely on collective action instead of individual effort alone. In some specific cases, these tasks are non-decomposable and involve a shared context, mutual dependencies, changing requirements, and expert skills [41][42].
In this vein, some prior research has revealed the presence of “a rich network of collaboration” [43] through which the crowd constituents are connected and interact in a social manner, although there are many concerns about the bias introduced by these social ties. Seen from a human-machine teaming perspective, imbalanced crowd engagement [44], conflict management [45], and lack of common ground [46] are also key aspects that must be taken into account in such arrangements.
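The gold-standard quality control mechanism mentioned above can be sketched in a few lines. The data shapes (per-worker answer dictionaries and a fixed accuracy threshold) are our own illustrative assumptions.

```python
def gold_accuracy(answers, gold):
    """Accuracy of one worker on the embedded gold-standard questions.

    `answers` maps question id -> the worker's label; `gold` maps the gold
    question ids -> their known correct labels. Returns None if the worker
    answered no gold questions."""
    seen = [q for q in gold if q in answers]
    if not seen:
        return None
    return sum(answers[q] == gold[q] for q in seen) / len(seen)

def trusted_workers(worker_answers, gold, threshold=0.75):
    """Keep only workers whose gold accuracy meets the threshold."""
    kept = {}
    for worker, answers in worker_answers.items():
        acc = gold_accuracy(answers, gold)
        if acc is not None and acc >= threshold:
            kept[worker] = answers
    return kept

gold = {"g1": "cat", "g2": "dog"}
crowd = {
    "w1": {"g1": "cat", "g2": "dog", "t1": "cat"},  # perfect on gold
    "w2": {"g1": "dog", "g2": "cat", "t1": "dog"},  # fails both gold checks
}
print(sorted(trusted_workers(crowd, gold)))  # ['w1']
```

Filtering on gold accuracy before aggregation is one pragmatic defense against the volatile, transient-identity behaviors discussed above, though it does not address more subtle issues such as coordinated bias among socially connected workers.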

2.5. Infrastructural Elements as Facilitators of Hybrid Intelligence

As AI-infused systems thrive and expand, crowdsourcing platforms continue to play an active role in aggregating inputs that are used by companies and other requesters around the globe toward the ultimate goal of enabling algorithms to cope with complex problems that neither humans nor machines can solve alone [47]. However, designing for AI with a crowd-in-the-loop involves a set of infrastructure-level elements, such as data objects, software elements, and functions, that together must provide effective support for actions like assigning tasks, stating rewards, setting time periods, providing feedback, evaluating crowd workers, selecting the best submissions, and aggregating results [48]. To realize the full potential of these systems, online algorithms can be incorporated into task assignment optimization processes for different types of problems involving simple (decomposable), complex (non-decomposable), and well-structured tasks [29]. Showing reasonable results in terms of effectiveness, some algorithms have been proposed to organize teams of crowd workers as cooperative units able to perform joint activities and accomplish tasks of varying complexity [41][42][49]. From an infrastructural perspective fitted into the taxonomy proposed in this entry, the contribution of Kamar's [50] work is to stress the importance of combining both human and machine capabilities in a co-evolving, synergistic way.
Taken together, crowd and machine intelligence offer many opportunities for predicting future events while improving large-scale decision-making, since online algorithms can learn from crowd behavior at different integration and coupling levels. In many settings, hybrid intelligence systems can help draw novel conclusions by interpreting complex patterns in highly dynamic scenarios. In line with this, many researchers have studied novel ways of incorporating explainable AI approaches, such as gamification [51], to make human perceptions and interpretations of algorithmic decisions more transparent and understandable. Due to their scalability, crowd-AI architectures can constitute an effective instrument for handling complexity, and thus more research is needed to explore how best to develop hybrid crowd-AI-centered systems taking into account the requirements and personal needs of each crowd worker. In particular, this domain raises questions about the use of AI to enhance the quality of crowdsourcing outputs through high-quality training data [52] and related interaction experiences, as seen from a human-centered design perspective [53]. To summarize, crowd-powered systems present a wide variety of opportunities to train algorithms “in situ” [54] while providing learning mechanisms and configuration features for customizing the levels of automation over time.
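One simple coupling level between online algorithms and crowd input is label aggregation. The sketch below is our illustrative assumption (including the made-up confidence threshold): a confident model answers alone, otherwise a crowd majority vote decides, with the model breaking exact ties.

```python
from collections import Counter

def hybrid_label(crowd_votes, model_label, model_confidence, defer_threshold=0.9):
    """Combine crowd votes with a machine prediction.

    A highly confident model answers alone; otherwise the crowd majority
    wins, and the model breaks exact ties between the top crowd labels.
    Assumes at least one crowd vote when the model is uncertain."""
    if model_confidence >= defer_threshold:
        return model_label
    counts = Counter(crowd_votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return model_label  # tie among the crowd: defer to the machine
    return counts[0][0]

print(hybrid_label(["cat", "cat", "dog"], "dog", 0.40))   # cat  (crowd majority)
print(hybrid_label(["cat", "dog"], "dog", 0.40))          # dog  (tie-break)
print(hybrid_label(["cat", "cat", "cat"], "bird", 0.95))  # bird (confident model)
```

Varying `defer_threshold` tunes the coupling level: a high threshold keeps humans in the loop for most items, while a low one shifts the workload toward the machine as its reliability grows.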

2.6. Social-Ethical Caveats in Hybrid Crowd-Artificial Intelligence Arrangements

There is a clarion call for investigation of the ethical, privacy, and trust aspects of human-AI interaction for several reasons. For instance, Amershi and colleagues [33] raised a set of concerns related to the need to avoid social biases and detrimental behaviors. To tackle these issues, it is necessary to examine the harms produced by AI decisions in a contextualized way so as to ensure fairness, transparency, and accountability in such interactions [55]. This can be realized by materializing human agency and other strategies that provide more control over machine behaviors [56][57][58]. From diversity to inclusiveness, and subsequently justice, there is still a long way to go before these goals are accomplished within the dynamic frame of human-AI interaction and hybrid intelligence augmentation. To address these shortcomings, system developers can play a critical role by considering the potential effects of AI-infused tools on user experiences.
Extrapolating to crowdsourcing settings, Daniel and co-workers [59] reported concerns with the ethical conditions, terms, and standards aligned with compliance with regulations and laws that are sometimes overlooked in such arrangements. When considering crowd work regulation, aspects of intellectual property, privacy, and confidentiality of participant identities constitute pivotal points [60]. A look into previous works (e.g., [61]) shows multiple concerns regarding worker rights, ambiguous task descriptions, acknowledgment of crowd contributions, licensing and consent, low wages, and unjustified rejection of work. Such ethical and legal issues are even more pronounced in the context of hybrid crowd-AI systems, where there are not only online experiments and other human intelligence tasks (HITs) running on crowdsourcing platforms but also machine-in-the-loop processes within the entire hybrid workflow. In particular settings, strategies like shared decision-making and informed consent can be especially helpful for mitigating the threats of bad conduct and malicious work if based on a governance strategy where the guidelines, rules, actions, and policies are socially organized by the crowd itself [62]. In this vein, the potential impacts of the aforementioned socio-ethical concerns surrounding crowd-powered hybrid intelligence systems must be further elucidated and investigated through several lenses to draw a realistic picture of the current situation.

3. Final Considerations

As AI-infused tools continue to thrive and expand to become pervasive and ubiquitous in many everyday life and work-related activities, the need to incorporate hybrid intelligence in complex settings becomes even more evident. In this entry, we briefly summarized the main takeaway messages from a taxonomy-based review of hybrid crowd-AI interaction. Overall, our study identified a gap related to the role of ethical principles and perceived fairness in building and deploying AI responsibly and with adequate governance strategies. Further investigations in this domain are required to characterize emerging issues like algorithm aversion and the alignment of crowd perspectives and feedback outcomes, while improving the ability to learn from crowd activity and further improve decision-making processes, with implications for crowd-algorithmic system design. Nonetheless, building trust in crowd-machine interaction while making AI more efficient and adaptable remains among the prevalent challenges in crowdsourcing and is usually seen as a hindering factor for the successful adoption and use of these systems in practice.

References

  1. Akata, Z.; Balliet, D.; de Rijke, M.; Dignum, F.; Dignum, V.; Eiben, G.; Fokkens, A.; Grossi, D.; Hindriks, K.V.; Hoos, H.H.; et al. A research agenda for hybrid intelligence: Augmenting human intellect with collaborative, adaptive, responsible, and explainable artificial intelligence. Computer 2020, 53, 18–28.
  2. Dellermann, D.; Calma, A.; Lipusch, N.; Weber, T.; Weigel, S.; Ebel, P. The future of human-AI collaboration: A taxonomy of design knowledge for hybrid intelligence systems. In Proceedings of the 52nd Hawaii International Conference on System Sciences, Maui, HI, USA, 8–11 January 2019; pp. 274–283.
  3. Peeters, M.M.M.; van Diggelen, J.; van den Bosch, K.; et al. Hybrid collective intelligence in a human–AI society. AI Soc. 2021, 36, 217–238, 10.1007/s00146-020-01005-y.
  4. Kittur, A.; Nickerson, J.V.; Bernstein, M.; Gerber, E.; Shaw, A.; Zimmerman, J.; Lease, M.; Horton, J.J. The future of crowd work. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, San Antonio, TX, USA, 23–27 February 2013; pp. 1301–1318.
  5. Correia, A.; Grover, A.; Schneider, D.; Pimentel, A.P.; Chaves, R.; de Almeida, M.A.; Fonseca, B. Designing for hybrid intelligence: A taxonomy and survey of crowd-machine interaction. Appl. Sci. 2023, 13, 2198, 10.3390/app13042198.
  6. Nickerson, R.C.; Varshney, U.; Muntermann, J. A method for taxonomy development and its application in information systems. Eur. J. Inf. Syst. 2013, 22, 336–359.
  7. Alter, S. Work system theory: Overview of core concepts, extensions, and challenges for the future. J. Assoc. Inf. Syst. 2013, 14, 2.
  8. Venumuddala, V.R.; Kamath, R. Work systems in the Indian information technology (IT) industry delivering artificial intelligence (AI) solutions and the challenges of work from home. Inf. Syst. Front. 2022, 1–25.
  9. Nardi, B. Context and Consciousness: Activity Theory and Human-Computer Interaction; MIT Press: Cambridge, MA, USA, 1996.
  10. Neale, D.C.; Carroll, J.M.; Rosson, M.B. Evaluating computer-supported cooperative work: Models and frameworks. In Proceedings of the ACM Conference on Computer Supported Cooperative Work, Chicago, IL, USA, 6–10 November 2004; pp. 112–121.
  11. Correia, A.; Paredes, H.; Schneider, D.; Jameel, S.; Fonseca, B. Towards hybrid crowd-AI centered systems: Developing an integrated framework from an empirical perspective. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Bari, Italy, 6–9 October 2019; pp. 4013–4018.
  12. Vargas-Santiago, M.; Monroy, R.; Ramirez-Marquez, J.E.; Zhang, C.; Leon-Velasco, D.A.; Zhu, H. Complementing solutions to optimization problems via crowdsourcing on video game plays. Appl. Sci. 2020, 10, 8410.
  13. Heim, E.; Roß, T.; Seitel, A.; März, K.; Stieltjes, B.; Eisenmann, M.; Lebert, J.; Metzger, J.; Sommer, G.; Sauter, A.W.; et al. Large-scale medical image annotation with crowd-powered algorithms. J. Med. Imaging 2018, 5, 034002.
  14. Lee, S.W.; Krosnick, R.; Park, S.Y.; Keelean, B.; Vaidya, S.; O’Keefe, S.D.; Lasecki, W.S. Exploring real-time collaboration in crowd-powered systems through a UI design tool. Proc. ACM Human-Computer Interact. 2018, 2, 1–23.
  15. Johansen, R. Groupware: Computer Support for Business Teams; The Free Press: New York, NY, USA, 1988.
  16. Wang, X.; Ding, L.; Wang, Q.; Xie, J.; Wang, T.; Tian, X.; Guan, Y.; Wang, X. A picture is worth a thousand words: Share your real-time view on the road. IEEE Trans. Veh. Technol. 2016, 66, 2902–2914.
  17. Agapie, E.; Teevan, J.; Monroy-Hernández, A. Crowdsourcing in the field: A case study using local crowds for event reporting. In Proceedings of the Third AAAI Conference on Human Computation and Crowdsourcing, San Diego, CA, USA, 8–11 November 2015; pp. 2–11.
  18. Lafreniere, B.J.; Grossman, T.; Anderson, F.; Matejka, J.; Kerrick, H.; Nagy, D.; Vasey, L.; Atherton, E.; Beirne, N.; Coelho, M.H.; et al. Crowdsourced fabrication. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, Tokyo, Japan, 16–19 October 2016; pp. 15–28.
  19. Aristeidou, M.; Scanlon, E.; Sharples, M. Profiles of engagement in online communities of citizen science participation. Comput. Hum. Behav. 2017, 74, 246–256.
  20. Bouwer, A. Under which conditions are humans motivated to delegate tasks to AI? A taxonomy on the human emotional state driving the motivation for AI delegation. In Marketing and Smart Technologies; Springer: Singapore, 2022; pp. 37–53.
  21. Lubars, B.; Tan, C. Ask not what AI can do, but what AI should do: Towards a framework of task delegability. In Proceedings of the Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 57–67.
  22. Sun, Y.; Ma, X.; Ye, K.; He, L. Investigating crowdworkers’ identity, perception and practices in micro-task crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–20.
  23. Khan, V.J.; Papangelis, K.; Lykourentzou, I.; Markopoulos, P. Macrotask Crowdsourcing—Engaging the Crowds to Address Complex Problems; Human-Computer Interaction Series; Springer: Cham, Switzerland, 2019.
  24. Nakatsu, R.T.; Grossman, E.B.; Iacovou, C.L. A taxonomy of crowdsourcing based on task complexity. J. Inf. Sci. 2014, 40, 823–834.
  25. Hosseini, M.; Shahri, A.; Phalp, K.; Taylor, J.; Ali, R. Crowdsourcing: A taxonomy and systematic mapping study. Comput. Sci. Rev. 2015, 17, 43–69.
  26. Teevan, J. The future of microwork. XRDS Crossroads ACM Mag. Stud. 2016, 23, 26–29.
  27. Zulfiqar, M.; Malik, M.N.; Khan, H.H. Microtasking activities in crowdsourced software development: A systematic literature review. IEEE Access 2022, 10, 24721–24737.
  28. Rahman, H.; Roy, S.B.; Thirumuruganathan, S.; Amer-Yahia, S.; Das, G. Optimized group formation for solving collaborative tasks. VLDB J. 2018, 28, 1–23.
  29. Schmitz, H.; Lykourentzou, I. Online sequencing of non-decomposable macrotasks in expert crowdsourcing. ACM Trans. Soc. Comput. 2018, 1, 1–33.
  30. Jin, Y.; Carman, M.; Zhu, Y.; Xiang, Y. A technical survey on statistical modelling and design methods for crowdsourcing quality control. Artif. Intell. 2020, 287, 103351.
  31. Moayedikia, A.; Ghaderi, H.; Yeoh, W. Optimizing microtask assignment on crowdsourcing platforms using Markov chain Monte Carlo. Decis. Support Syst. 2020, 139, 113404.
  32. Xu, W.; Dainoff, M.J.; Ge, L.; Gao, Z. Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. Int. J. Human–Computer Interact. 2022, 39, 494–518.
  33. Amershi, S.; Weld, D.; Vorvoreanu, M.; Fourney, A.; Nushi, B.; Collisson, P.; Suh, J.; Iqbal, S.T.; Bennett, P.N.; Inkpen, K.; et al. Guidelines for human-AI interaction. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019.
  34. Rafner, J.; Gajdacz, M.; Kragh, G.; Hjorth, A.; Gander, A.; Palfi, B.; Berditchevskiaia, A.; Grey, F.; Gal, K.; Segal, A.; et al. Mapping citizen science through the lens of human-centered AI. Hum. Comput. 2022, 9, 66–95.
  35. Shneiderman, B. Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Trans. Interact. Intell. Syst. 2020, 10, 1–31.
  36. Ramírez, J.; Sayin, B.; Baez, M.; Casati, F.; Cernuzzi, L.; Benatallah, B.; Demartini, G. On the state of reporting in crowdsourcing experiments and a checklist to aid current practices. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–34.
  37. Robert, L.; Romero, D.M. Crowd size, diversity and performance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 1379–1382.
  38. Blandford, A. Intelligent interaction design: The role of human-computer interaction research in the design of intelligent systems. Expert Syst. 2001, 18, 3–18.
  39. Huang, K.; Zhou, J.; Chen, S. Being a solo endeavor or team worker in crowdsourcing contests? It is a long-term decision you need to make. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–32.
  40. Guo, A.; Jain, A.; Ghose, S.; Laput, G.; Harrison, C.; Bigham, J.P. Crowd-AI camera sensing in the real world. Proc. ACM Interactive, Mobile, Wearable Ubiquitous Technol. 2018, 2, 1–20.
  41. Venkatagiri, S.; Thebault-Spieker, J.; Kohler, R.; Purviance, J.; Mansur, R.S.; Luther, K. GroundTruth: Augmenting expert image geolocation with crowdsourcing and shared representations. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–30.
  42. Zhou, S.; Valentine, M.; Bernstein, M.S. In search of the dream team: Temporally constrained multi-armed bandits for identifying effective team structures. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018.
  43. Gray, M.L.; Suri, S.; Ali, S.S.; Kulkarni, D. The crowd is a collaborative network. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, San Francisco, CA, USA, 27 February–2 March 2016; pp. 134–147.
  44. Zhang, X.; Zhang, W.; Zhao, Y.; Zhu, Q. Imbalanced volunteer engagement in cultural heritage crowdsourcing: A task-related exploration based on causal inference. Inf. Process. Manag. 2022, 59, 103027.
  45. McNeese, N.J.; Demir, M.; Cooke, N.J.; She, M. Team situation awareness and conflict: A study of human–machine teaming. J. Cogn. Eng. Decis. Mak. 2021, 15, 83–96.
  46. Dafoe, A.; Bachrach, Y.; Hadfield, G.; Horvitz, E.; Larson, K.; Graepel, T. Cooperative AI: Machines must learn to find common ground. Nature 2021, 593, 33–36.
  47. Alorwu, A.; Savage, S.; van Berkel, N.; Ustalov, D.; Drutsa, A.; Oppenlaender, J.; Bates, O.; Hettiachchi, D.; Gadiraju, U.; Gonçalves, J.; et al. REGROW: Reimagining global crowdsourcing for better human-AI collaboration. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Extended Abstracts, New Orleans, LA, USA, 29 April–5 May 2022; pp. 1–7.
  48. Santos, C.A.; Baldi, A.M.; de Assis Neto, F.R.; Barcellos, M.P. Essential elements, conceptual foundations and workflow design in crowd-powered projects. J. Inf. Sci. 2022.
  49. Valentine, M.A.; Retelny, D.; To, A.; Rahmati, N.; Doshi, T.; Bernstein, M.S. Flash organizations: Crowdsourcing complex work by structuring crowds as organizations. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 3523–3537.
  50. Kamar, E. Directions in hybrid intelligence: Complementing AI systems with human intelligence. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 4070–4073.
  51. Tocchetti, A.; Corti, L.; Brambilla, M.; Celino, I. EXP-Crowd: A gamified crowdsourcing framework for explainability. Front. Artif. Intell. 2022, 5, 826499.
  52. Vaughan, J.W. Making better use of the crowd: How crowdsourcing can advance machine learning research. J. Mach. Learn. Res. 2017, 18, 7026–7071.
  53. Barbosa, N.M.; Chen, M. Rehumanized crowdsourcing: A labeling framework addressing bias and ethics in machine learning. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. 1–12.
  54. Basker, T.; Tottler, D.; Sanguet, R.; Muffbur, J. Artificial intelligence and human learning: Improving analytic reasoning via crowdsourcing and structured analytic techniques. Comput. Educ. 2022, 3, 1003056.
  55. Mirbabaie, M.; Brendel, A.B.; Hofeditz, L. Ethics and AI in information systems research. Commun. Assoc. Inf. Syst. 2022, 50, 38.
  56. Sundar, S.S. Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). J. Comput. Commun. 2020, 25, 74–88.
  57. Liu, B. In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction. J. Comput. Commun. 2021, 26, 384–402.
  58. Kang, H.; Lou, C. AI agency vs. human agency: Understanding human–AI interactions on TikTok and their implications for user engagement. J. Comput. Commun. 2022, 27, zmac014.
  59. Daniel, F.; Kucherbaev, P.; Cappiello, C.; Benatallah, B.; Allahbakhsh, M. Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions. ACM Comput. Surv. 2018, 51, 1–40.
  60. Pedersen, J.; Kocsis, D.; Tripathi, A.; Tarrell, A.; Weerakoon, A.; Tahmasbi, N.; Xiong, J.; Deng, W.; Oh, O.; de Vreede, G.-J. Conceptual foundations of crowdsourcing: A review of IS research. In Proceedings of the 46th Hawaii International Conference on System Sciences, Wailea, HI, USA, 7–10 January 2013; pp. 579–588.
  61. Hansson, K.; Ludwig, T. Crowd dynamics: Conflicts, contradictions, and community in crowdsourcing. Comput. Support. Coop. Work. 2019, 28, 791–794.
  62. Gimpel, H.; Graf-Seyfried, V.; Laubacher, R.; Meindl, O. Towards artificial intelligence augmenting facilitation: AI affordances in macro-task crowdsourcing. Group Decis. Negot. 2023, 1–50.