Cognitive Personalization of Microtask Design

The study of data quality in crowdsourcing campaigns is a prominent research topic, given the diverse range of participants involved. A promising way to enhance data quality processes in crowdsourcing is cognitive personalization, which involves adapting or assigning tasks according to a crowd worker’s cognitive profile. There are two common methods for assessing a crowd worker’s cognitive profile: administering online cognitive tests, and inferring behavior through task fingerprinting based on user interaction log events.

  • crowdsourcing
  • cognitive abilities
  • human-computer interaction
  • microtask design
  • personalization
  • task fingerprinting

1. Introduction

The International Classification of Functioning, Disability and Health (ICF) is the World Health Organization’s framework for classifying health and health-related information [1]. Studies have been conducted to improve the transparency and reliability of the process of linking health information to the ICF [2]. In recent years, the ICF has been employed to categorize cognition-related information, including cognitive-communication disorders, which entail several challenges in terms of terminology, assessment, and sociocultural context. In this regard, the use of the ICF can lead to significant therapeutic interventions for individuals suffering from this kind of disorder [3]. Gauthier and colleagues [4] described mild cognitive impairment (MCI), providing a conceptual background on the impairment, its pathophysiology, the tools typically used for diagnosis, procedures and statistics regarding per-patient management, and possible preventive measures. MCI is commonly regarded as a stage of cognitive decline that can precede dementia, and the worldwide costs of dementia are enormous, unevenly distributed, and increasing: by 2015, they had reached USD 818 billion globally, a 35% increase since 2010 [5]. Technology benefits people with MCI by providing a means of support [6]. Nowadays, technology can be used to support cognitive rehabilitation by maintaining or even improving an individual’s mental state. Braley and colleagues [7] conducted a study to evaluate the feasibility of building smart home systems that help people with dementia in everyday activities through automatic prompts. Feasibility was confirmed, and factors such as positive reinforcement, training, and research on human interaction were identified as necessary for developing these types of systems. Another article proposed a model for assistive technology that joins physical and cognitive rehabilitation [8]. This approach is notable for merging the two strands and prescribing rehabilitation, giving therapists the opportunity to customize the best rehabilitation exercises for their patients.
Furthermore, this approach also includes a social component, with collaborative exercises. Another factor that must be considered in technology supporting cognitive rehabilitation is the user’s level of dependency. In a systematic review of technology-based cognitive rehabilitation for individuals with cognitive impairment, most of the identified studies involved a therapist directly helping the participant [9]. This direct help could bias the results, or it may indicate that designing in some form of personal assistance is essential when developing cognition-aware technology [10]. However, if technology is neither well designed nor cognition-aware, individuals with MCI can commit accuracy errors and miss default time windows in tasks, with the latter being problematic, for example, with cash machines [11]. Drawing together the findings from the prior literature, numerous scholars have argued that the proper design of technology must take the user’s cognitive abilities into consideration.
Cognitive abilities are broader than specific skills and fall under the umbrella construct of general mental ability (GMA). GMA correlates strongly with occupational level and performance in job training; even prior job performance shows a weaker correlation by comparison. Within this framing, workers with higher GMA acquire more job knowledge, and acquire it faster, which translates into higher levels of job performance [12]. Cognitive abilities, which include but are not restricted to working memory (managing and storing information at the same time [13]) and executive functions (the cognitive processes involved in goal-directed behavior [14]), can predict performance in most jobs and situations. In Web-based unsupervised environments, the potential for faking both non-cognitive and cognitive ability measures underlines the need for caution, and there is an ongoing discussion about the harmful effects of such faking [15]. In [16], the authors provided an updated taxonomy of cognitive abilities and personality domains. Cognitive abilities can be measured remotely; however, self-report tools for assessing them have been found to reflect personality more than actual cognitive function [17], which calls their utility into question. This reinforces the use of online tasks (e.g., microtasks in the context of crowdsourcing), including cognitive tests to assess cognitive abilities, instead of self-report instruments that are biased by nature. Moreover, IQ can be used to predict a worker’s job satisfaction as well as expected job performance, while personality type can moderate this relationship [18]. It is also worth noting that cognitive abilities can improve when emotional intelligence is promoted [19]. In this sense, besides cognitive abilities, social-cognitive factors contribute to notable differences in technology task performance. In older adults, computer proficiency can be predicted from scores on cognitive ability tests, specifically sense of control, psychomotor speed, and inductive reasoning [20]. Among the most promising techniques for enabling a more personalized experience on online digital labor platforms, this line of work opens a new perspective: using crowdsourcing to assess each worker’s cognitive abilities via cognitive tests, with the ultimate goal of matching tasks to crowd workers’ individual capabilities.
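To make the idea of administering cognitive tests as online microtasks concrete, the following minimal sketch scores a hypothetical digit span task, a common working-memory measure. All names and the scoring rule are chosen for illustration and are not taken from the cited studies.

```python
# Minimal sketch (illustrative, not from the cited papers) of scoring
# a digit span working-memory test delivered as an online microtask.
from dataclasses import dataclass

@dataclass
class DigitSpanTrial:
    presented: str   # digit sequence shown to the worker, e.g. "58273"
    recalled: str    # sequence the worker typed back

def digit_span_score(trials: list[DigitSpanTrial]) -> int:
    """Return the length of the longest sequence recalled exactly.

    A common convention: the span is the length of the longest
    presented sequence that the worker reproduced without error.
    """
    correct_lengths = [
        len(t.presented) for t in trials if t.recalled == t.presented
    ]
    return max(correct_lengths, default=0)

trials = [
    DigitSpanTrial("472", "472"),
    DigitSpanTrial("6158", "6158"),
    DigitSpanTrial("93825", "93285"),  # transposition error
]
print(digit_span_score(trials))  # -> 4
```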
Over the years, crowdsourcing has evolved both technologically and as a research field, with several literature reviews and surveys published in the last decade (e.g., [21][22]). Since the term’s inception in the mid-2000s [23], crowdsourcing has become increasingly prevalent in local and remote settings where there is a need to obtain timely information for solving simple to moderately complex problems of varying nature and length. In fact, crowd-powered systems have gradually matured across the world, and renowned pioneer companies (e.g., [23]) have adopted crowdsourcing as part of their innovation strategy and business model. Considered an important form of on-demand digital labor, crowdsourcing tasks typically require fewer skills and less time to complete than traditional jobs, offering a flexible work opportunity. In line with this, the evolution of digital work has created conditions in which people can overcome social and geographical barriers in an inclusive setting, especially in microtask crowdsourcing, owing to the ease of performing decomposed tasks. As a result, more and more opportunities arise for supporting people with cognitive or physical impairments in performing remote work [24]. With microtasks, people can perform tasks of varying complexity via Web or mobile applications, often by doing something they are interested in (e.g., playing games, transcribing language, or labeling images) [25].
Task fingerprinting is a technique for identifying behavioral traces from crowd worker activity in order to improve quality control techniques. The pioneering work of Rzeszotarski and Kittur [26] inferred behavior patterns in a crowd work context by analyzing user interaction log events, such as click details and key presses. This method of microtask fingerprinting builds machine learning (ML) prediction models to identify workers’ behavioral traits. Aligned with this goal, various research works have developed this line of inquiry. For instance, a study on quality control mechanisms proposed a set of indicators and a general framework covering more types of microtasks, including open-ended answers [27]. This work obtained better outcomes than state-of-the-art methods, such as the traditional analysis of historical performance in crowd work [28]. Furthermore, a supervised ML model was proposed that detected more types of crowd worker profiles at a higher granularity [29]; in this work, a model was also created to characterize behavior, motivation, and performance from a general perspective. Fine-grained features were analyzed in another study, with the results corroborating their benefits for quality control mechanisms [30].
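To illustrate the task-fingerprinting pipeline described above, the following sketch aggregates raw interaction log events into per-task features and fits an off-the-shelf classifier. The event schema, feature set, and toy data are assumptions made for illustration; none of the cited works’ exact features or models are reproduced here.

```python
# Illustrative task-fingerprinting sketch: turn raw interaction log
# events (clicks, key presses, scrolls) into per-task features and
# train an ML model to flag low-quality work. All names are assumed.
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def fingerprint(events: list[dict]) -> list[float]:
    """Turn one task's event log into a fixed-length feature vector."""
    counts = Counter(e["type"] for e in events)
    timestamps = [e["t_ms"] for e in events]
    duration = (max(timestamps) - min(timestamps)) if timestamps else 0
    return [
        counts["click"],
        counts["keypress"],
        counts["scroll"],
        duration / 1000.0,  # total time on task, in seconds
    ]

# Toy training data: one event log per completed task, labeled
# 1 = work accepted by the requester, 0 = rejected.
logs = [
    [{"type": "click", "t_ms": 0}, {"type": "keypress", "t_ms": 900},
     {"type": "keypress", "t_ms": 1800}, {"type": "click", "t_ms": 9000}],
    [{"type": "click", "t_ms": 0}, {"type": "click", "t_ms": 300}],
]
labels = [1, 0]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit([fingerprint(log) for log in logs], labels)
print(model.predict([fingerprint(logs[0])]))
```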
In addition to traditional quality control mechanisms and task fingerprinting, cognitive personalization can be applied to microtask assignment arrangements. Goncalves and colleagues [31] proposed a method for performing task routing based on cognitive ability tests. However, while the crowdsourcing tasks were performed on a computer, the cognitive tests were administered using pencil and paper. The positive results obtained in this study support the assessment of cognitive abilities for routing microtasks. In follow-up work, Hettiachchi and colleagues [32] transformed pencil-and-paper cognitive ability tests into microtasks suitable for short crowdsourcing scenarios; however, the cognition-based task routing was not performed in real time. A thorough study was subsequently conducted with 574 crowd workers, this time involving real-time task assignment. This research concluded that short cognitive tasks supported better task-routing outcomes than conventional methods, including validated state-of-the-art microtask assignment methods [33]. While these studies of cognition-based task assignment obtained excellent results, two pertinent questions arise: can cognitive tests support personalization in the design of microtasks? Task assignment is important for providing microtasks suited to each crowd worker, but can any microtask be adapted such that each crowd worker is sufficiently motivated to perform it? Answering these questions would help democratize crowdsourcing settings, by improving the microtask itself and taking the crowd worker’s adaptation requirements into consideration.
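As an illustration of cognition-based task routing, the sketch below matches a worker’s cognitive test scores against assumed per-task ability demands. The ability names, demand weights, and dot-product matching rule are hypothetical; the published assignment methods [31][32][33] are more sophisticated than this.

```python
# Hypothetical cognition-based task routing: each microtask type is
# annotated with the cognitive abilities it is assumed to demand, and
# a worker is routed to the best-matching type. All values illustrative.

TASK_DEMANDS = {
    # task type -> assumed weight of each ability for that task
    "text_transcription": {"working_memory": 0.7, "attention": 0.3},
    "image_labeling":     {"working_memory": 0.2, "attention": 0.8},
}

def route(worker_scores: dict[str, float]) -> str:
    """Pick the task type with the highest demand-weighted score."""
    def match(demands: dict[str, float]) -> float:
        return sum(w * worker_scores.get(ability, 0.0)
                   for ability, w in demands.items())
    return max(TASK_DEMANDS, key=lambda t: match(TASK_DEMANDS[t]))

# A worker with strong working memory is routed to transcription.
print(route({"working_memory": 0.9, "attention": 0.4}))
# -> "text_transcription"
```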

2. Cognitive Personalization of Microtask Design

The ground-breaking works of Hettiachchi and colleagues [32][33] revealed the underlying potential of cognition-based microtask assignment. However, beyond microtask assignment, there have been other approaches to the cognitive personalization of microtask design. Eickhoff [34] conducted a study to identify the cognitive biases (i.e., systematic errors in thinking processes) of crowd workers and indicated that microtask design should take these biases into consideration, in order to avoid a decrease in work performance. In another work based on collaboration scenarios, a model was developed using the random forest algorithm to identify relevant collaboration skills [35]. Moreover, Sampath et al. [36] found that performance in text-transcription tasks can be improved significantly if the microtask design takes working memory or visual saliency into account. Paulino and colleagues [37][38] suggested that cognitive styles could be used to infer information-processing preferences in a crowd work setting; these preferences can then be used to personalize the microtask interface. However, these studies did not combine cognitive assessment with the analysis of user interaction logs to enhance cognitive personalization.
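The following hypothetical sketch shows how such findings could translate into interface personalization: normalized cognitive test scores drive simple design parameters, such as how many items are shown at once or whether the input field is visually highlighted. The thresholds and parameters are assumptions made for illustration, not settings reported in the cited studies.

```python
# Hypothetical mapping from a cognitive profile to microtask UI
# settings, in the spirit of the studies above. Thresholds assumed.
from dataclasses import dataclass

@dataclass
class MicrotaskUI:
    items_per_page: int      # fewer items -> lower working-memory load
    highlight_target: bool   # visual saliency cue for the input field

def personalize(working_memory: float, attention: float) -> MicrotaskUI:
    """Map normalized test scores in [0, 1] to interface settings."""
    return MicrotaskUI(
        items_per_page=10 if working_memory >= 0.6 else 4,
        highlight_target=attention < 0.5,
    )

print(personalize(working_memory=0.4, attention=0.3))
# -> MicrotaskUI(items_per_page=4, highlight_target=True)
```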
Besides the seminal work of Rzeszotarski and Kittur [26], other similar techniques have been developed for identifying the behavioral traces of crowd workers. A system was proposed to integrate different quality control techniques in the context of crowdsourcing [39]. This system allowed the integration of outcomes related to gold standards, majority voting, and behavioral traces, with the generation of graphics and other forms of data visualization. Another study, based on behavioral data captured by logging mouse interactions and eye-tracking data, showed that such data are beneficial and can complement the task duration metric for quality control purposes [40]. Additional behavioral traces from crowd workers can be identified to develop models for predicting label effectiveness as well as worker performance. A previous work focused on classification microtasks generated a model for optimizing label aggregation processes [41]. Furthermore, it was found that classification accuracy can also be improved significantly when using gold judges based on behavioral data [42]. Regarding the data collected, most of the studies identified behavioral traces based on raw data from clicks or key presses. In the pioneering work of Rzeszotarski and Kittur [26], the identified features were simple interactions with the microtask interface, such as page scrolls, special key presses (e.g., tab or enter), and changes of focus, either among input fields or across browser tabs. To complement these indicators, a millisecond-precision timestamp was recorded for each interaction. This basic approach to processing raw data was used in other studies [27][41]. Additionally, Han and associates [27] created a browser extension to be used by crowd workers when performing microtasks. The authors extended the temporal and behavioral indicators from the work of Rzeszotarski and Kittur [26] and added new contextual and compound features. Contextual features refer to the study of two or more behaviors simultaneously, while compound features relate to the analysis of a sequence of interactions that enriches behavioral identification (e.g., if a crowd worker scrolls the page frequently, it may indicate hesitancy). In another study, Gadiraju and colleagues [29] complemented behavioral trace identification by analyzing, at a finer granularity, the time before, during, and after interactions.
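The compound- and contextual-feature idea can be sketched as follows: instead of raw event counts, the code inspects sequences and timing, flagging scroll bursts (echoing the hesitancy example above) and measuring the time before the first and after the last input interaction. The burst window and feature definitions are illustrative assumptions, not the exact features of the cited works.

```python
# Sketch of compound/contextual features computed from a time-ordered
# interaction event log. All definitions and thresholds are assumed.

def compound_features(events: list[dict]) -> dict[str, float]:
    """Compute sequence-based features from a time-ordered event log."""
    scroll_times = [e["t_ms"] for e in events if e["type"] == "scroll"]
    input_times = [e["t_ms"] for e in events
                   if e["type"] in ("keypress", "click")]
    # Compound feature: scroll bursts (3+ scrolls within 2 seconds)
    # may indicate hesitancy, as in the example above.
    bursts = sum(
        1 for i in range(len(scroll_times) - 2)
        if scroll_times[i + 2] - scroll_times[i] <= 2000
    )
    # Contextual features: idle time before the first input and after
    # the last input, relative to the start and end of the task.
    start, end = events[0]["t_ms"], events[-1]["t_ms"]
    before = (input_times[0] - start) if input_times else 0
    after = (end - input_times[-1]) if input_times else 0
    return {"scroll_bursts": bursts,
            "ms_before_first_input": before,
            "ms_after_last_input": after}

log = [{"type": "scroll", "t_ms": t} for t in (0, 400, 900, 1500)]
log += [{"type": "click", "t_ms": 3000}, {"type": "keypress", "t_ms": 5200}]
print(compound_features(log))
# -> {'scroll_bursts': 2, 'ms_before_first_input': 3000,
#     'ms_after_last_input': 0}
```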
While there has been significant progress in task fingerprinting in crowd work, especially by Rzeszotarski and Kittur [26] and later by Gadiraju and co-authors [29], a research gap exists in combining crowd workers’ interaction logs with cognitive tests. This combination has the potential to enhance the performance of crowd workers and improve the quality of work delivered to the requesters.
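A straightforward way to probe this gap, sketched below under assumed feature names, is to concatenate per-task interaction-log features with a worker’s cognitive test scores and feed both to a single quality predictor; no published model is reproduced here.

```python
# Hypothetical combination of behavioral (log-based) and cognitive
# (test-based) signals in one quality predictor. All data are toy
# values and all names are assumptions.
from sklearn.linear_model import LogisticRegression

def combined_vector(log_features: list[float],
                    cognitive_scores: dict[str, float]) -> list[float]:
    """Join behavioral and cognitive signals into one feature vector."""
    return log_features + [cognitive_scores["working_memory"],
                           cognitive_scores["attention"]]

X = [
    combined_vector([12, 45, 3, 60.0],
                    {"working_memory": 0.8, "attention": 0.7}),
    combined_vector([2, 1, 0, 4.0],
                    {"working_memory": 0.3, "attention": 0.2}),
]
y = [1, 0]  # 1 = accepted work, 0 = rejected

clf = LogisticRegression().fit(X, y)
print(clf.predict([combined_vector([10, 40, 2, 55.0],
                                   {"working_memory": 0.7,
                                    "attention": 0.6})]))
```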

This entry is adapted from the peer-reviewed paper https://doi.org/10.3390/s23073571

References

  1. World Health Organization. International Classification of Functioning, Disability and Health (ICF); World Health Organization: Geneva, Switzerland, 2001.
  2. Cieza, A.; Fayed, N.; Bickenbach, J.; Prodinger, B. Refinements of the ICF Linking Rules to strengthen their potential for establishing comparability of health information. Disabil. Rehabil. 2019, 41, 574–583.
  3. Larkins, B. The Application of the ICF in Cognitive-Communication Disorders following Traumatic Brain Injury. Semin. Speech Lang. 2007, 28, 334–342.
  4. Gauthier, S.; Reisberg, B.; Zaudig, M.; Petersen, R.C.; Ritchie, K.; Broich, K.; Belleville, S.; Brodaty, H.; Bennett, D.; Chertkow, H.; et al. Mild cognitive impairment. Lancet 2006, 367, 1262–1270.
  5. Wimo, A.; Guerchet, M.; Ali, G.-C.; Wu, Y.-T.; Prina, A.M.; Winblad, B.; Jönsson, L.; Liu, Z.; Prince, M. The worldwide costs of dementia 2015 and comparisons with 2010. Alzheimer’s Dement. 2017, 13, 1–7.
  6. Holthe, T.; Halvorsrud, L.; Karterud, D.; Hoel, K.-A.; Lund, A. Usability and acceptability of technology for community-dwelling older adults with mild cognitive impairment and dementia: A systematic literature review. Clin. Interv. Aging 2018, 13, 863.
  7. Braley, R.; Fritz, R.; Van Son, C.R.; Schmitter-Edgecombe, M. Prompting Technology and Persons With Dementia: The Significance of Context and Communication. Gerontology 2018, 59, 101–111.
  8. Oliver, M.; Molina, J.P.; Fernandez-Caballero, A.; Gonzalez, P. Collaborative computer-assisted cognitive rehabilitation system. ADCAIJ Adv. Distrib. Comput. Artif. Intell. J. 2017, 6, 57–74.
  9. Ge, S.; Zhu, Z.; Wu, B.; McConnell, E.S. Technology-based cognitive training and rehabilitation interventions for individuals with mild cognitive impairment: A systematic review. BMC Geriatr. 2018, 18, 213.
  10. Shraga, R.; Scharf, C.; Ackerman, R.; Gal, A. Incognitomatch: Cognitive-aware matching via crowdsourcing. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, Portland, OR, USA, 14–19 June 2020.
  11. Schmidt, L.I.; Wahl, H.-W. Predictors of Performance in Everyday Technology Tasks in Older Adults With and Without Mild Cognitive Impairment. Gerontology 2018, 59, 90–100.
  12. Schmidt, F.L.; Hunter, J. General mental ability in the world of work: Occupational attainment and job performance. J. Personal. Soc. Psychol. 2004, 86, 162.
  13. Peng, P.; Kievit, R.A. The development of academic achievement and cognitive abilities: A bidirectional perspective. Child Dev. Perspect. 2020, 14, 15–20.
  14. Miller, E.; Wallis, J. Executive function and higher-order cognition: Definition and neural substrates. Encycl. Neurosci. 2009, 4, 99–104.
  15. Schmitt, N. Personality and cognitive ability as predictors of effective performance at work. Annu. Rev. Organ. Psychol. Organ. Behav. 2014, 1, 45–65.
  16. Stanek, K.C.; Ones, D.S. Taxonomies and compendia of cognitive ability and personality constructs and measures relevant to industrial, work and organizational psychology. In Handbook of Industrial, Work & Organizational Psychology: Personnel Psychology and Employee Performance; Ones, D.S., Anderson, N., Viswesvaran, C., Sinangil, H.K., Eds.; Sage Publications: Thousand Oaks, CA, USA, 2018; pp. 366–407.
  17. Herreen, D.; Zajac, I.T. The reliability and validity of a self-report measure of cognitive abilities in older adults: More personality than cognitive function. J. Intell. 2018, 6, 1.
  18. Murtza, M.H.; Gill, S.A.; Aslam, H.D.; Noor, A. Intelligence quotient, job satisfaction, and job performance: The moderating role of personality type. J. Public Aff. 2020, 21, e2318.
  19. Nguyen, N.N.; Nham, P.T.; Takahashi, Y. Relationship between Ability-Based Emotional Intelligence, Cognitive Intelligence, and Job Performance. Sustainability 2019, 11, 2299.
  20. Zhang, S.; Grenhart, W.C.M.; McLaughlin, A.C.; Allaire, J.C. Predicting computer proficiency in older adults. Comput. Hum. Behav. 2017, 67, 106–112.
  21. Hosseini, M.; Shahri, A.; Phalp, K.; Taylor, J.; Ali, R. Crowdsourcing: A taxonomy and systematic mapping study. Comput. Sci. Rev. 2015, 17, 43–69.
  22. Bhatti, S.S.; Gao, X.; Chen, G. General framework, opportunities and challenges for crowdsourcing techniques: A Comprehensive survey. J. Syst. Softw. 2020, 167, 110611.
  23. Füller, J.; Bartl, M.; Ernst, H.; Mühlbacher, H. Community based innovation: How to integrate members of virtual communities into new product development. Electron. Commer. Res. 2006, 6, 57–73.
  24. Zyskowski, K.; Morris, M.R.; Bigham, J.P.; Gray, M.L.; Kane, S.K. Accessible crowdwork? Understanding the value in and challenge of microtask employment for people with disabilities. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada, 14–18 March 2015.
  25. Cheng, J.; Teevan, J.; Iqbal, S.T.; Bernstein, M.S. Break it down: A comparison of macro-and microtasks. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015.
  26. Rzeszotarski, J.M.; Kittur, A. Instrumenting the crowd: Using implicit behavioral measures to predict task performance. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011.
  27. Han, S.; Dai, P.; Paritosh, P.; Huynh, D. Crowdsourcing Human Annotation on Web Page Structure: Infrastructure Design and Behavior-Based Quality Control. ACM Trans. Intell. Syst. Technol. 2016, 7, 56.
  28. Zheng, Y.; Wang, J.; Li, G.; Cheng, R.; Feng, J. QASCA: A quality-aware task assignment system for crowdsourcing applications. In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, Melbourne, Australia, 31 May–4 June 2015.
  29. Gadiraju, U.; Demartini, G.; Kawase, R.; Dietze, S. Crowd Anatomy Beyond the Good and Bad: Behavioral Traces for Crowd Worker Modeling and Pre-selection. Comput. Support. Coop. Work (CSCW) 2019, 28, 815–841.
  30. Pei, W.; Yang, Z.; Chen, M.; Yue, C. Quality Control in Crowdsourcing based on Fine-Grained Behavioral Features. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–28.
  31. Goncalves, J.; Feldman, M.; Hu, S.; Kostakos, V.; Bernstein, A. Task routing and assignment in crowdsourcing based on cognitive abilities. In Proceedings of the 26th International Conference on World Wide Web Companion, Perth, Australia, 3–7 April 2017.
  32. Hettiachchi, D.; van Berkel, N.; Hosio, S.; Kostakos, V.; Goncalves, J. Effect of Cognitive Abilities on Crowdsourcing Task Performance. In Human-Computer Interaction–INTERACT 2019; Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2019; pp. 442–464.
  33. Hettiachchi, D.; van Berkel, N.; Kostakos, V.; Goncalves, J. CrowdCog: A Cognitive Skill based System for Heterogeneous Task Assignment and Recommendation in Crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2020, 4, 1–22.
  34. Eickhoff, C. Cognitive biases in crowdsourcing. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, Marina Del Rey, CA, USA, 5–9 February 2018.
  35. Stewart, A.E.B.; Vrzakova, H.; Sun, C.; Yonehiro, J.; Stone, C.A.; Duran, N.D.; Shute, V.; D’Mello, S.K. I Say, You Say, We Say: Using Spoken Language to Model Socio-Cognitive Processes during Computer-Supported Collaborative Problem Solving. Proc. ACM Hum.-Comput. Interact. 2019, 3, 194.
  36. Alagarai Sampath, H.; Rajeshuni, R.; Indurkhya, B. Cognitively inspired task design to improve user performance on crowdsourcing platforms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014.
  37. Paulino, D.; Correia, A.; Reis, A.; Guimarães, D.; Rudenko, R.; Nunes, C.; Silva, T.; Barroso, J.; Paredes, H. Cognitive Personalization in Microtask Design. In Universal Access in Human-Computer Interaction: Novel Design Approaches and Technologies; Springer International Publishing: Cham, Switzerland, 2022.
  38. Paulino, D.; Correia, A.; Guimarães, D.; Barroso, J.; Paredes, H. Uncovering the Potential of Cognitive Personalization for UI Adaptation in Crowd Work. In Proceedings of the 2022 IEEE 25th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Hangzhou, China, 4–6 May 2022.
  39. Rzeszotarski, J.; Kittur, A. CrowdScape: Interactively visualizing user behavior and output. In Proceedings of the 25th annual ACM Symposium on User Interface Software and Technology, Cambridge, MA, USA, 7–10 October 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 55–62.
  40. Yuasa, S.; Nakai, T.; Maruichi, T.; Landsmann, M.; Kise, K.; Matsubara, M.; Morishima, A. Towards quality assessment of crowdworker output based on behavioral data. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019.
  41. Goyal, T.; McDonnell, T.; Kutlu, M.; Elsayed, T.; Lease, M. Your behavior signals your reliability: Modeling crowd behavioral traces to ensure quality relevance annotations. In Proceedings of the Sixth AAAI Conference on Human Computation and Crowdsourcing, Zurich, Switzerland, 5–8 July 2018.
  42. Kazai, G.; Zitouni, I. Quality Management in Crowdsourcing using Gold Judges Behavior. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, San Francisco, CA, USA, 22–25 February 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 267–276.