As an example, let’s assume that Alice wants to buy the original cookbook written by Julia Child, called “Mastering the Art of French Cooking,” from a used-book seller through Amazon’s marketplace. First, she realizes that there are multiple editions of this book, with different numbers of volumes, by different publishers. To disambiguate, she collects information by reading reviews of different versions and scans other online sources to understand the differences between the various editions of the book. Based on this information, she decides how desirable each version of the book is, and matches the different editions to the copies offered by different sellers. Soon she discovers that the information provided by the sellers is also somewhat vague. She uses clues she obtained from the reviews to make sure a specific book being sold is the version she wants the most. Furthermore, she needs to take into account the sales history of the sellers to decide which one seems to be the best choice at the right price point for the reported condition of the book. Finally, the purchase is made.

In fact, the seemingly simple decision of buying a book involved the consideration of many parameters collected from different sources. Along the way, Alice had to make many different trust decisions. She had to trust that the information provided by the reviews was correct, even though it came from reviewers she did not know. Sometimes, she used a simple rule for determining the reliability of the reviews: if they were quite detailed, it was likely that the reviewers were knowledgeable about the review topic. She also had to trust herself in matching the desired book to the given descriptions. She had to trust that the seller was truthful in his description of the book, and that the seller was able and ready to fulfill her purchase.

So, Alice trusted many different people (and entities) for this one decision: the search engine used to find information about the book, the review writers, the seller, the Amazon marketplace, and herself. In addition, she had different goals in mind: the correctness of information, the ability of review writers to describe the book properly, the seller’s trustworthiness in delivering the book, the ability of Amazon to collect and to present information about sellers, and the ability to resolve possible future problems. Finally, many outside factors may have impacted her decisions. For example, the first review she read praised a specific version of the book. This review established a baseline for her, and she evaluated everything else in relation to this version. She was not even aware that this was happening.

This is an example of how many decisions inherently require trust. Computing trust is not a simple matter of deciding whether someone or something is trusted or not. Decision making may involve complex goals made up of multiple subgoals considered together, each requiring a trust evaluation for a different entity. Interdependent subgoals or even outside factors may affect the evaluation of trust. Furthermore, trusting Amazon, trusting the seller, and trusting online reviews may involve different considerations. Alice may rely on Amazon’s reputation system to judge the seller, Bob, since she has not bought anything from him before. However, Alice has experience with Amazon and can draw on that experience. When judging information, her familiarity with the subject matter may play a big role in her trust evaluation. For example, if she already knows something about a specific version of the book, she can use it to judge the credibility of different reviewers. Based on the same information, she can even form judgments about whether the websites hosting the reviews are objective or not.

All in all, trust evaluation is complex because there are so many different criteria to take into consideration. These criteria are intricately connected to each other, and their evaluation is neither instantaneous nor independent. Trust computation is even more complex in networks that connect people to each other through computational tools. With the increasing integration of computerized tools into everyday life and the growing population of “digital natives” (the generation born after the digital age), computing is becoming ever more ubiquitous. Today’s computing is networked: people, information, and computing infrastructure are linked to each other. The networked world is a part of almost all types of human activity, supporting and shaping it. Social networks connect people and organizations throughout the globe in cooperative and competitive activities. Information is created and consumed at a global scale. Systems, devices, and sensors create and process data, manage physical systems, and participate in interactions with other entities, people and systems alike.

These interdependent systems are sometimes called socio-technological networks. From now on, we will refer to these systems simply as networks. Many applications in these networks aim to assist people in making decisions and to provide trustable tools and recommendations. However, these systems also rely on people for their operation and must therefore incorporate trust into that operation. They must trust that human input is correct, timely, and unbiased. They must trust that human actions are sound. As a result, two different aspects of trust, algorithmic and human trust, exist together in networks and depend on each other in many ways.

The dynamics of these socio-technological networks go beyond what has been traditionally studied by social or cognitive psychology and modeled by computer science algorithms. Leading universities are forming new academic centers dedicated to the study of such networks in all their complexity. In this brief, we introduce the vocabulary necessary to study trust in socio-technological networks in a principled way. We define trust context as an independent concept and define its main components. We claim that the trust context describes how the trust evaluation is dependent on other entities and gives us the first step towards a study of trust in complex networks.


The expected audience of this brief is network and computer scientists who study and model trust, and develop tools to compute trust. However, we are especially interested in trust computation that has a social or cognitive component. The broad review of trust offered in this brief is likely to be of use to researchers and practitioners in many other fields and help define a common vocabulary across different disciplines.

In Chap. 2, we give a definition of trust and provide the vocabulary necessary to study trust context in networks. In particular, we would like to define who is trusting whom for what, which signals are used to judge trust and how these signals are combined. The chapter introduces the readers to the notion of trust context as the encapsulation of the trust evaluation process.

We then present a survey of social psychology research in Chap. 3, with a specific emphasis on different network contexts that impact the trust evaluation. We also survey cognitive psychology research that explains how trust beliefs are formed and evaluated. We include related research in information trust and credibility to further show the difference in evaluating trust for information and for actions, two different contexts. We especially concentrate on some recent work in these fields that has not yet percolated into the computer science discussion and models. The social and cognitive psychology research that is often used as a reference in computational models seems to draw from a somewhat limited pool. Furthermore, some computational models make assumptions that are somewhat ad hoc and not verified in experimental studies. Our hope is to infuse a broad perspective into this discussion by introducing some new ways of thinking about trust, and to point researchers to the different lines of research in social and cognitive psychology.

In Chap. 4, we conduct a short survey of trust in the computing literature, where many different definitions of trust seem to co-exist. This survey does not go into details of any specific computational model of trust, but highlights how various definitions of trust can be mapped to the concepts we have laid out. The survey provides a multidisciplinary view of trust in networks, and different computational approaches to the study of trust. We also draw some parallels between social and cognitive science research and computing research.

Finally, in Chap. 5, we provide a summary based on the surveys in Chaps. 3 and 4. We describe some important contextual components of trust and how trust evaluation is impacted by these components. We make some modeling suggestions for future work.
