Trustworthiness Facets

Trustworthiness facets are considered essential characteristics for assessing the trustworthiness of parties with whom one interacts, such as other people, technologies, or organizations. It is assumed that the better a social media application supports its users in their trustworthiness assessment, the safer and more promising user interactions will be.

Keywords: trustworthiness; trust; computer-mediated introductions

1. Trustworthiness Facets

“Trustworthiness facets” is a term coined in the context of computer-mediated introductions (CMIs). Trustworthiness facets are attributes possessed by the parties involved in CMI, namely (i) organizations that present themselves on the CMI platform, such as advertisers or the CMI service provider, (ii) the corresponding CMI software and (iii) CMI users [1]. They are desirable characteristics from which an individual can infer whether the parties involved are able and willing to act as desired and expected, and are thus trustworthy [1][2]. They may be personality traits or descriptive qualities. Because CMI users assess the other parties with respect to their trustworthiness, the three parties take the role of so-called trustees, while CMI users are simultaneously trustors who may place their trust in them [3].
Since the three trustees differ in their nature, namely institution, technology and individual, different streams of research have identified different trustworthiness facets. Social and organizational psychology, sociology, economics and computer science have in part identified facets that can be traced back to the factors of trustworthiness: ability, benevolence and integrity [1]. These factors are considered the attributes primarily associated with trustworthiness [1]. Originally determined for the interpersonal context, they have been widely applied to other trustee types, such as organizations and technologies [4][5][6]. Most often, researchers have adapted their definitions to the respective context and either kept the terminology or renamed the factors for their purposes, such as competence for ability [7] or fairness for integrity [8]. As another example, Caldwell and Clapham propose competence, quality assurance and financial balance for organizational trustworthiness as counterparts to ability in interpersonal trustworthiness [7]. Through these adaptations, some facets may share the same terminology but differ in definition, or carry different terminologies despite describing a similar phenomenon.
Furthermore, prior research has identified facets for the various trustee types that are not based on the factors of trustworthiness. As an example of trustworthiness facets of technology, Mohammadi et al. related software qualities to the trustworthiness of software [9]. Software qualities describe characteristics that enhance software [10]. They encompass characteristics necessary for the functioning of a system on the back-end as well as front-end characteristics that are partly directly perceivable by the user [10].
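To make the distinction between trustee types, factors of trustworthiness and further facets more concrete, the following Python sketch models a minimal facet catalogue. It is an illustration only: the class names and the example facets are assumptions made for this sketch and are not taken from the cited works.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class TrusteeType(Enum):
    """The three parties of CMI that act as trustees [1]."""
    ORGANIZATION = "organization"  # e.g., the CMI service provider or advertisers
    SOFTWARE = "software"          # the CMI application itself
    USER = "user"                  # other CMI end-users


class Factor(Enum):
    """Factors of trustworthiness according to Mayer et al. [1]."""
    ABILITY = "ability"
    BENEVOLENCE = "benevolence"
    INTEGRITY = "integrity"


@dataclass
class Facet:
    """A single trustworthiness facet attributed to one trustee type."""
    name: str
    trustee_type: TrusteeType
    factor: Optional[Factor] = None  # None for facets not derived from the three factors


# Illustrative entries only; not an exhaustive or authoritative catalogue.
CATALOGUE = [
    Facet("competence", TrusteeType.ORGANIZATION, Factor.ABILITY),   # cf. [7]
    Facet("fairness", TrusteeType.ORGANIZATION, Factor.INTEGRITY),   # cf. [8]
    Facet("reliability", TrusteeType.SOFTWARE),                      # a software quality, cf. [9][10]
    Facet("honesty", TrusteeType.USER, Factor.INTEGRITY),
]
```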

2. Placing Trustworthiness Facets in the Context of Trust

Trustworthiness facets are in line with the trust definition of McKnight et al. [2], who define trust as a reflection of an individual’s beliefs about a trustee’s possession of suitable attributes necessary to perform as expected in a given situation [1][2]. This reflection of beliefs is irrespective of the trustor’s ability to monitor or control the trustee [1]. It rather depends on the trustor’s personal characteristics and on how she subjectively perceives the trustee’s trustworthiness and related facets. In interpersonal interactions, this trustworthiness assessment is usually a bilateral exchange between trustor and trustee that results in an effective engagement: the trustee is interested in demonstrating her trustworthiness by presenting her facets, while the trustor perceives what the trustee reveals [11]. As trustor and trustee simultaneously take both roles in an interaction, it is a mutual process [12].
A trustworthiness assessment takes place when an interaction is initiated or continued [13]. It involves evaluating which facets are present and to what extent. An unsatisfactory trustworthiness assessment may mean that trustworthiness facets are insufficiently present or irrelevant for the given situation. This may create the impression that the party is untrustworthy and might lead to the termination of an interaction. Furthermore, the trustor may assess other attributes besides the trustworthiness facets, such as general personality traits, values or goals. These are significant for the development of identification-based trust, which means that trust is established because an individual identifies with a trustee [14]. If the trustor does not identify with the trustee, this might likewise lead to the termination of an interaction [14].
The process of evaluating trustworthiness facets differs in terms of timing. At the beginning of the first interaction, the trustor has no experience with the trustee, so the trustworthiness assessment cannot draw on a knowledge base. The interaction is therefore primarily based on the trustor’s first judgement of cues given by the trustee, from which trustworthiness facets or other attributes can be inferred. These cues feed cognitive categorization processes concerning the trustee, on the basis of which the trustor develops initial trust [4][15]. Categorization processes can include (i) reputation categorization, (ii) stereotyping and (iii) unit grouping [4]. Reputation categorization makes use of second-hand information, on the basis of which the trustor attributes trustworthiness facets to the trustee. Stereotyping means placing the trustee into a general category that is likewise associated with certain facets. Unit grouping means that the trustor places the trustee in the same category as herself. This positively affects the trustworthiness assessment, since the trustor believes she shares a social identity with the trustee and thus similarly valued attributes and facets [16]. It can be concluded that although categorization processes and the trustworthiness assessment mutually shape each other, perceived trustworthiness facets are strongly biased by cognitive processes.
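As a purely illustrative sketch of how the three categorization processes could be combined into an initial trust estimate, consider the following Python function. The numeric ranges, the equal weighting and the fixed bonus for a shared social identity are assumptions for this sketch and do not reproduce a model from the cited literature.

```python
def initial_trust_estimate(reputation_score: float,
                           stereotype_prior: float,
                           shared_group: bool) -> float:
    """Combine the three categorization processes [4] into an initial estimate.

    reputation_score: second-hand rating of the trustee in [0, 1] (reputation categorization)
    stereotype_prior: belief attached to the trustee's category in [0, 1] (stereotyping)
    shared_group:     whether trustor and trustee share a salient social identity (unit grouping)

    The equal weighting and the fixed bonus are simplifying assumptions.
    """
    estimate = 0.5 * reputation_score + 0.5 * stereotype_prior
    if shared_group:
        # Unit grouping biases the assessment upwards [16].
        estimate = min(1.0, estimate + 0.1)
    return estimate
```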
In addition to categorization processes, calculative processes may also play a role, leading to calculus-based trust [17]. The decision to start trusting and interacting with another party depends on rationally derived costs and benefits that may accompany or result from an interaction. This derivation can draw on the trustee’s predictability and further facets that influence the trustor’s expectations concerning an interaction [17]. If the benefits exceed the costs, the trustor is more likely to interact with the trustee and extend her trust [18]. Again, a mutual influence can be observed between calculative processes and the trustworthiness assessment.
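In its simplest reading, calculus-based trust can be expressed as a comparison of expected benefits and costs. The following sketch only restates that comparison; reducing both sides to single numeric values is a simplifying assumption.

```python
def extends_calculus_based_trust(expected_benefits: float, expected_costs: float) -> bool:
    """Calculus-based trust as a cost-benefit comparison [17][18]:
    the trustor is more likely to extend trust if the benefits exceed the costs.
    Treating costs and benefits as single numbers is a simplifying assumption.
    """
    return expected_benefits > expected_costs
```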
In general, it can be argued that initial trust is based on an insufficient knowledge base and on assumptions made by the trustor [2]. Knowledge about the trustee's actual and past performance is still missing. This changes during an interaction, when initial trust develops into knowledge-based trust. The knowledge base about the trustee grows and enables insights into potentially present trustworthiness facets. On these grounds, the trustworthiness assessment is more reliable during the knowledge-based trust phase than during the initial trust phase [5]. Therefore, the facet-oriented approach can be particularly assigned to knowledge-based trust [2]. At that point, a trustor knows a trustee well enough to predict the trustee's behavior to a certain extent in a specific situation [15]. As trustor and trustee share a common history, knowledge-based trust is more persistent than initial trust. While initial trust strongly depends on rapidly changing costs, benefits and impressions, knowledge-based trust is more stable in the face of performance lapses or changing circumstances [14].
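One way to picture the transition from initial to knowledge-based trust is as a running estimate that is updated with each observed interaction. The following sketch uses an exponential moving average; the update rule and the learning rate are assumptions chosen for illustration, not a model from the cited works.

```python
def update_knowledge_based_trust(current_trust: float,
                                 observed_performance: float,
                                 learning_rate: float = 0.2) -> float:
    """Update a trust estimate from the outcome of the latest interaction.

    current_trust:        trust level in [0, 1] built from past interactions
    observed_performance: outcome of the latest interaction in [0, 1]
    learning_rate:        assumed weight of a single interaction; a small value
                          mirrors the observation that knowledge-based trust is
                          more stable than initial trust when lapses occur [14].
    """
    return (1 - learning_rate) * current_trust + learning_rate * observed_performance
```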
In the context of CMI, both initial trust and knowledge-based trust are relevant, as people get to know and interact with each other on the CMI platform. For example, in online dating, the selection of a person of interest is largely driven by assumptions from the first impression, where profile photos or profile information serve as cues [19]. These cues are used in categorization processes, for example gender stereotyping [20]. After the first interactions, online dating users gain more knowledge about trustees, which facilitates the trustworthiness assessment. Some online dating platforms, such as affiny.co.uk or neu.de, allow their users to disclose more personal information, such as photos, step by step once they feel more secure after having built knowledge-based trust in the other end-user.
Initial and knowledge-based trust are also relevant for the interaction with CMI organizations and applications. Initial trust in a service provider is most often based on customer familiarity with the organization, its reputation, the quality of the information provided about the company and the service, certifications from third parties and attractive rewards for using the technology [21]. Trust in the service provider can foster initial trust in the software, which influences why a user chooses a particular application [22]. During software usage, knowledge-based trust in both the service provider and the application can develop. According to Siau and Shen [21], trust in the service provider develops further while using its application through assessments of the quality of the software, the provider’s competence and integrity, appropriate privacy policies and security controls, the possibility of open communication and community building, and external auditing. Trust in the software develops especially when the software is perceived as reliable. Even if the trustworthiness assessment during software use is unsatisfactory with respect to the service provider but satisfactory with respect to the software itself, the user may continue to use the application [2].

3. Trust-Building through Software in the Past

The development of trust between individuals through software has been researched and considered in software engineering in the past. Jones and Marsh introduced the TRUST notation for social media in Computer Supported Cooperative Work [23]. The notation captures the elements knowledge, importance of interaction, utility of cooperation, basic trust and conceptual trust as a basis for recording and evaluating interpersonal and group activities online. TRUST thereby serves as a tool for discussing software design for human-computer-human interactions.
In the context of e-commerce, Tran proposes a framework for trust modelling to protect buying agents from dishonest selling agents [24]. The proposed algorithm is based on a trustworthiness threshold, which weighs, among other factors, the price, quality and expected value of a product as well as the cooperation and penalty factors of an interaction. Tran's approach thus rather follows the principle of calculus-based trust. With this framework, buying agents may evaluate the trustworthiness of selling agents through software features including trust ratings.
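Tran's actual algorithm is not reproduced here; the following sketch merely illustrates the general idea of a threshold-based check over the factors named above. The normalisation of the inputs, the equal weighting and the threshold value are assumptions made for this sketch.

```python
def seller_appears_trustworthy(price_rating: float,
                               quality: float,
                               expected_value: float,
                               cooperation: float,
                               penalty: float,
                               threshold: float = 0.5) -> bool:
    """Illustrative threshold check in the spirit of Tran's buying-agent model [24].

    price_rating, quality, expected_value, cooperation: benefit factors in [0, 1]
    penalty:   cost factor in [0, 1]
    threshold: assumed trustworthiness threshold

    The equal weighting of the benefit factors and the linear aggregation are
    assumptions, not Tran's formula.
    """
    benefit = (price_rating + quality + expected_value + cooperation) / 4
    return benefit - penalty >= threshold
```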
Another trust modelling approach is the method for trust-related software features, called TrustSoFt, which was created especially for CMI [25]. Based on users’ trust concerns, software goals are derived to address those concerns. In addition, trustworthiness facets are identified that would counteract the concerns if they were present in the respective context. On these grounds, requirements and features for the software to be developed are elicited. The resulting software features are intended to support users in assessing the trustworthiness of the involved parties as a strategy for coping with their concerns. The overview of trustworthiness facets presented in this work can be regarded as a useful tool for applying TrustSoFt.
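To illustrate the chain from trust concerns to software features, the following sketch records one hypothetical TrustSoFt trace. The field structure mirrors the method's steps as described above; the example concern, facets and features are invented for illustration and are not taken from [25].

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TrustSoFtTrace:
    """One trace from a user's trust concern to elicited software features [25]."""
    concern: str                                              # a user's trust concern
    software_goals: List[str] = field(default_factory=list)   # goals derived to address the concern
    facets: List[str] = field(default_factory=list)           # facets that would counteract it
    features: List[str] = field(default_factory=list)         # elicited software features


# Hypothetical example, not taken from the TrustSoFt paper.
example = TrustSoFtTrace(
    concern="the other user misrepresents their identity",
    software_goals=["support identity verification"],
    facets=["authenticity", "honesty"],
    features=["verified-profile badge", "photo verification flow"],
)
```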

References

  1. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734.
  2. McKnight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manag. Inf. Syst. (TMIS) 2011, 2, 1–25.
  3. Becerra, M.; Gupta, A.K. Perceived Trustworthiness within the Organization: The Moderating Impact of Communication Frequency on Trustor and Trustee Effects. Organ. Sci. 2003, 14, 32–44.
  4. McKnight, D.H.; Cummings, L.L.; Chervany, N.L. Initial trust formation in new organizational relationships. Acad. Manag. Rev. 1998, 23, 473–490.
  5. McKnight, D.H.; Chervany, N.L. What Trust Means in E-Commerce Customer Relationships: An Interdisciplinary Conceptual Typology. Int. J. Electron. Commer. 2001, 6, 35–59.
  6. Paravastu, N. Dimensions of Technology Trustworthiness and Technology Trust Modes. In Encyclopedia of Information Science and Technology, 3rd ed.; IGI Global: Hershey, PA, USA, 2015; pp. 4301–4309.
  7. Caldwell, C.; Clapham, S.E. Organizational Trustworthiness: An International Perspective. J. Bus. Ethics 2003, 47, 349–364.
  8. Carnevale, D.G. Trustworthy Government: Leadership and Management Strategies for Building Trust and High Performance; Jossey-Bass: San Francisco, CA, USA, 1995.
  9. Mohammadi, N.G.; Paulus, S.; Bishr, M.; Metzger, A.; Koennecke, H.; Hartenstein, S.; Pohl, K. An Analysis of Software Quality Attributes and Their Contribution to Trustworthiness. In Proceedings of the CLOSER, Aachen, Germany, 8–10 May 2013; pp. 542–552.
  10. Kan, S.H. Metrics and Models in Software Quality Engineering; Addison-Wesley Professional: San Francisco, CA, USA, 2003.
  11. Wilkins, C.H. Effective Engagement Requires Trust and Being Trustworthy. Med. Care 2018, 56, S6–S8.
  12. Ferrin, D.L.; Bligh, M.C.; Kohles, J.C. It takes two to tango: An interdependence analysis of the spiraling of perceived trustworthiness and cooperation in interpersonal and intergroup relationships. Organ. Behav. Hum. Decis. Process. 2008, 107, 161–178.
  13. Levin, D.Z.; Whitener, E.M.; Cross, R. Perceived trustworthiness of knowledge sources: The moderating impact of relationship length. J. Appl. Psychol. 2006, 91, 1163–1171.
  14. Lewicki, R.J.; Bunker, B.B. Developing and maintaining trust in work relationships. In Trust in Organizations: Frontiers of Theory and Research; Kramer, R.M., Tyler, T.R., Eds.; Sage: Thousand Oaks, CA, USA, 1996; pp. 114–139.
  15. Lewis, J.D.; Weigert, A. Trust as a social reality. Soc. Forces 1985, 63, 967–985.
  16. Shi, Y.; Sia, C.L.; Chen, H. Leveraging social grouping for trust building in foreign electronic commerce firms: An exploratory study. Int. J. Inf. Manag. 2013, 33, 419–428.
  17. Lewicki, R.J.; Bunker, B.B. Trust in Relationships: A Model of Development and Decline; Jossey-Bass: San Francisco, CA, USA; Wiley: Hoboken, NJ, USA, 1995.
  18. Coleman, J.S. Foundations of Social Theory; Harvard University Press: Cambridge, MA, USA, 1994.
  19. Fiore, A.T.; Taylor, L.S.; Mendelsohn, G.; Hearst, M. Assessing attractiveness in online dating profiles. In Proceedings of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; pp. 797–806.
  20. Chappetta, K.C.; Barth, J.M. How gender role stereotypes affect attraction in an online dating scenario. Comput. Hum. Behav. 2016, 63, 738–746.
  21. Siau, K.; Shen, Z. Building customer trust in mobile commerce. Commun. ACM 2003, 46, 91–94.
  22. Jia, L.; Cegielski, C.; Zhang, Q. The Effect of Trust on Customers’ Online Repurchase Intention in Consumer-to-Consumer Electronic Commerce. J. Organ. End User Comput. 2014, 26, 65–86.
  23. Jones, S.; Marsh, S. Human-computer-human interaction: Trust in CSCW. ACM SIGCHI Bull. 1997, 29, 36–40.
  24. Tran, T. Protecting buying agents in e-marketplaces by direct experience trust modelling. Knowl. Inf. Syst. 2010, 22, 65–100.
  25. Borchert, A.; Ferreyra, N.; Heisel, M. A Conceptual Method for Eliciting Trust-related Software Features for Computer-mediated Introduction. In Proceedings of the ENASE 2020, Prague, Czech Republic, 5–6 May 2020; pp. 269–280.