Speaker: Reshef Meir, Harvard University
Abstract: In multi-agent interactions, each agent often faces uncertainty over the incentives and behavior of the other agents. The traditional approach assumes that each agent maximizes its expected utility with respect to some common prior distribution. However, in most real-world scenarios agents have no way to know this distribution accurately, or even approximately. Moreover, numerous psychological experiments have demonstrated that human decision makers fail even at fairly simple tasks involving probabilistic reasoning, and are prone to cognitive biases such as risk aversion and loss aversion. I will describe an alternative, non-probabilistic model for representing players’ uncertainty in games, inspired by artificial intelligence and bounded rationality approaches. While the model is quite general, I will demonstrate how it applies to preference aggregation mechanisms (voting), overcoming many shortcomings of previous theories. My main result is that the behavior of bounded-rational agents reduces to simple and natural dynamics that are guaranteed to converge to equilibrium. Extensive simulations show that the resulting equilibria replicate known phenomena from real-world voting. Finally, I will show how key components of this approach can be extracted and applied to very different settings, including online scheduling on Doodle and routing in networks with uncertain congestion. The talk is based on published and unpublished work with Omer Lev, David Parkes, Jeffrey S. Rosenschein, and James Zou.
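The local-dominance model itself is beyond a short snippet, but the flavor of convergent voting dynamics can be conveyed with a stripped-down sketch. The following is my own illustrative simplification, not the talk's model: voters start from truthful plurality ballots, and at each step one voter makes a myopic better response (switching to a ballot that yields a winner she sincerely prefers) until no one wants to move.

```python
def plurality_winner(ballots, tie_order):
    """Plurality winner; ties broken by a fixed priority order."""
    counts = {}
    for b in ballots:
        counts[b] = counts.get(b, 0) + 1
    best = max(counts.values())
    return next(c for c in tie_order if counts.get(c, 0) == best)

def one_step(ballots, prefs, candidates, tie_order):
    """Return one profitable myopic deviation, or None if at equilibrium."""
    current = plurality_winner(ballots, tie_order)
    for i, pref in enumerate(prefs):
        for c in candidates:
            if c == ballots[i]:
                continue
            trial = list(ballots)
            trial[i] = c
            if pref.index(plurality_winner(trial, tie_order)) < pref.index(current):
                return trial
    return None

def iterate_to_equilibrium(prefs, candidates, tie_order, max_steps=1000):
    """Start from truthful votes and apply better responses until stable."""
    ballots = [p[0] for p in prefs]  # truthful: vote for your top choice
    for _ in range(max_steps):
        nxt = one_step(ballots, prefs, candidates, tie_order)
        if nxt is None:
            return ballots
        ballots = nxt
    raise RuntimeError("did not converge")
```

For plurality, such myopic dynamics terminate in simple examples; the talk's result is that this kind of convergence is guaranteed under the bounded-rationality model.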
Speaker: John Postl
Abstract: We study profit sharing games in which players select projects to participate in and share the resulting reward equally. Unlike most existing work, which assumes that player utility is monotone in the number of participants working on a project, we consider non-monotone player utilities. Such utilities could result, for example, from “threshold” or “phase transition” effects, where the total benefit from a project improves slowly until the number of participants reaches some critical mass, then improves rapidly, and then slows again due to diminishing returns. Non-monotone player utilities lead to considerable instability: strong Nash equilibrium may no longer exist, and the quality of Nash equilibria may be far from the centralized optimum. We show, however, that by adding requirements such as players needing permission to leave a project from the players currently on that project, or instead needing permission to join a project from the players on that project, we ensure that strong Nash equilibrium always exists. Moreover, the permission-to-leave requirement by itself already guarantees the existence of a strong Nash equilibrium within a factor of 2 of the social optimum. In this paper, we provide results on the existence and quality of several different coalitional solution concepts, focusing especially on permission to leave and join projects, and show that such requirements yield good stable solutions even when player utilities are non-monotone.
Speaker: Lirong Xia
Abstract: In this paper, we take a statistical decision-theoretic viewpoint on social choice, focusing on the decision to be made on behalf of a system of agents. In our framework, we are given a statistical ranking model, a decision space, and a loss function defined on (parameter, decision) pairs, and we formulate social choice mechanisms as decision rules that minimize expected loss. This suggests a general framework for the design and analysis of new social choice mechanisms. We compare the Kemeny rule with Bayesian estimators, which minimize Bayesian expected loss, for the Mallows model and for the Condorcet model. We consider various normative properties, in addition to computational complexity and asymptotic behavior. In particular, we show that the Bayesian estimator for the Condorcet model satisfies desirable properties such as anonymity, neutrality, and monotonicity, can be computed in polynomial time, and is asymptotically different from the other two rules when the data are generated from the Condorcet model for some ground truth parameter.
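As a concrete point of reference for one of the rules being compared, here is a minimal brute-force sketch of the Kemeny rule: it returns the ranking minimizing total Kendall-tau distance to the voters' rankings. The function names are my own, and exhaustive search is only feasible for a handful of candidates; computing the Kemeny ranking is NP-hard in general.

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings."""
    pos = {c: i for i, c in enumerate(r2)}
    return sum(1
               for i in range(len(r1))
               for j in range(i + 1, len(r1))
               if pos[r1[i]] > pos[r1[j]])

def kemeny(profile):
    """Ranking minimizing total Kendall-tau distance to the profile."""
    candidates = profile[0]
    return min(permutations(candidates),
               key=lambda r: sum(kendall_tau(r, v) for v in profile))
```

In the decision-theoretic framing, this is the decision rule for the loss function given by Kendall-tau distance between the chosen ranking and the reported rankings.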
Speaker: Jochen Konemann, University of Waterloo
Abstract: An undirected graph G=(V,E) is stable if its inessential vertices (those that are exposed by at least one maximum matching) form a stable set. We call a set of edges F a stabilizer if its removal from G yields a stable graph. In this talk we study the following natural edge-deletion question: given a graph G, can we find a minimum-cardinality stabilizer? Stable graphs play an important role in cooperative game theory. In the classic matching game introduced by Shapley and Shubik we are given an undirected graph G where vertices represent players, and we define the value of each subset S of vertices as the cardinality of a maximum matching in the subgraph induced by S. The core of such a game contains all fair allocations of the value of V among the players, and is well-known to be non-empty iff graph G is stable. The minimum stabilizer problem addresses the question of how to modify the graph to ensure that the core is non-empty. We show that this problem is vertex-cover hard. We then prove that there is a minimum-cardinality stabilizer that avoids some maximum matching of G. We employ this insight to give efficient approximation algorithms for sparse graphs, and for regular graphs. Joint work with A. Bock, K. Chandrasekaran, B. Peis, and L. Sanita.
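The definition of stability can be checked directly on tiny instances. The sketch below (my own naming, brute force over all matchings, so only suitable for very small graphs) marks a vertex inessential if some maximum matching exposes it, and declares the graph stable iff no edge joins two inessential vertices:

```python
from itertools import combinations

def matchings(edges):
    """Yield every matching (set of pairwise vertex-disjoint edges)."""
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            used = [v for e in sub for v in e]
            if len(used) == len(set(used)):
                yield sub

def is_stable(vertices, edges):
    """Stable iff the inessential vertices form a stable (independent) set."""
    all_matchings = list(matchings(edges))
    mu = max(len(m) for m in all_matchings)  # maximum matching size
    exposed = set()
    for m in all_matchings:
        if len(m) == mu:
            covered = {v for e in m for v in e}
            exposed |= set(vertices) - covered
    return not any(u in exposed and v in exposed for u, v in edges)
```

For example, a triangle is unstable (every maximum matching exposes one vertex, and every vertex is exposable), matching the classical fact that the 3-cycle matching game has an empty core, while a 3-vertex path is stable.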
Speaker: James Thompson
Abstract: A prevalent model of computing today is cloud-based, and a common abstraction of this model is MapReduce. Increasingly, algorithms for practical large-scale applications need to be developed and analyzed within this framework. This talk/tutorial will give an introduction to MapReduce. To illustrate the basic mechanics, we will show how two simple graph algorithms can be expressed within the MapReduce framework. The first is an iterative algorithm to improve a set density (as would be used, for example, in clustering). The second is a PageRank-like neighbourhood description algorithm (as would be useful in graph indexing for efficient search).
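The basic mechanics of a MapReduce round can be simulated in a few lines: a map phase emits (key, value) pairs, a shuffle phase groups values by key, and a reduce phase aggregates each group. This sketch (my own single-machine simulation, not a distributed implementation) runs the canonical word-count example:

```python
from collections import defaultdict

def map_phase(inputs, mapper):
    """Apply the user mapper to each input, collecting (key, value) pairs."""
    pairs = []
    for item in inputs:
        pairs.extend(mapper(item))
    return pairs

def shuffle_phase(pairs):
    """Group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user reducer to each key's list of values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: map each document to (word, 1) pairs, reduce by summing.
docs = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(
    shuffle_phase(map_phase(docs, lambda d: [(w, 1) for w in d.split()])),
    lambda key, values: sum(values),
)
# counts["the"] == 2
```

Iterative graph algorithms such as the two in the talk chain several such rounds, with the output keys of one round feeding the mappers of the next.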
Speaker: Laurence L. Leff, Western Illinois University
Speaker: John Postl
Abstract: We study assignment games in which jobs select machines, and in which certain pairs of jobs may conflict, which is to say they may incur an additional cost when they are both assigned to the same machine, beyond that associated with the increase in load. Questions regarding such interactions apply beyond allocating jobs to machines: when people in a social network choose to align themselves with a group or party, they typically do so based upon not only the inherent quality of that group, but also who amongst their friends (or enemies) choose that group as well. We show how semi-smoothness, a recently introduced generalization of smoothness, is necessary to find tight or near-tight bounds on the price of total anarchy, and thus on the quality of correlated and Nash equilibria, for several natural job-assignment games with interacting jobs. For most cases, our bounds on the price of total anarchy are either exactly 2 or approach 2. We also prove new convergence results implied by semi-smoothness for our games.
Speaker: Lirong Xia
Abstract: We discuss state-of-the-art techniques and propose promising future directions for detecting two types of manipulation using statistical methods: vote manipulation in elections and false-name manipulation in mechanism design.
Speaker: Onkar Bhardwaj
Abstract: We study stable matching problems in networks where players are embedded in a social context, and may incorporate friendship relations or altruism into their decisions. Each player is a node in a social network and strives to form a good match with a neighboring player. We consider the existence, computation, and inefficiency of stable matchings from which no pair of players wants to deviate. When the benefits from a match are the same for both players, we show that incorporating the well-being of other players into their matching decisions significantly decreases the price of stability, while the price of anarchy remains unaffected. Furthermore, a good stable matching achieving the price of stability bound always exists and can be reached in polynomial time. We extend these results to more general matching rewards, when players matched to each other may receive different utilities from the match. For this more general case, we show that incorporating social context (i.e., “caring about your friends”) can make an even larger difference, and greatly reduce the price of anarchy. We show a variety of existence results, and present upper and lower bounds on the prices of anarchy and stability for various matching utility structures.
Speaker: Onkar Bhardwaj
Abstract: How and why people form ties is a critical issue for understanding the fabric of social networks. In contrast to most existing work, we are interested in settings where agents are neither so myopic as to consider only the benefit they derive from their immediate neighbors, nor do they consider the effects on the entire network when forming connections. Instead, we consider games on networks where a node tries to maximize its utility taking into account the benefit it gets from the nodes it is directly connected to (called direct benefit), as well as the benefit it gets from the nodes it is acquainted with via a two-hop connection (called two-hop benefit). We call such games Two-Hop Games. The decision to consider only two hops stems from the observation that human agents rarely consider “contacts of a contact of a contact” (3-hop contacts) or further while forming their relationships. We consider several versions of Two-Hop Games which are extensions of well-studied games. While the addition of two-hop benefit changes the properties of these games significantly, we prove that in many important cases good equilibrium solutions still exist, and bound the change in the price of anarchy due to two-hop benefit both theoretically and in simulation.
Speaker: Adrian Vetta, McGill University
Abstract: Suppose you are a monopolist for a digital good. What pricing mechanism should you use to sell the good? This sounds straightforward but there is a catch. Specifically, such a monopolist is a form of duropolist and Richard Coase (1972) conjectured that a duropolist has no monopoly power at all! We discuss this problem and the various pricing mechanisms available to a duropolist. We then examine the algorithmic implications involved and quantify the extent to which a duropolist can, in fact, generate higher profits than an equivalent monopolist. (Joint work with Peter Sloan and Gerardo Berbeglia.)
Speaker: Bill Zwicker
Abstract: A voting rule is manipulable if it is sometimes possible for a voter to change the election’s outcome to one she prefers, by switching away from her sincere ballot – one that represents her actual preferences – to an insincere ballot. Loosely speaking, the Gibbard-Satterthwaite Theorem tells us that every voting rule suffers from this problem, when there are more than two candidates. More precisely, the theorem states that with three or more alternatives every resolute, non-imposing, and non-dictatorial social choice function (SCF) is manipulable. We prove a variant of this theorem that uses stronger hypotheses and provides additional information, revealing a fundamental dichotomy in the type of strategic vulnerability for an SCF.
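Manipulability is easy to witness computationally on small instances. The sketch below (my own illustration, using plurality with a fixed tie-breaking order as the SCF) searches for a voter and an insincere ballot that yield an outcome the voter sincerely prefers to the honest one:

```python
from itertools import permutations

def plurality(profile, tie_order):
    """Plurality winner over ranked ballots; ties broken by a fixed order."""
    counts = {}
    for ballot in profile:
        counts[ballot[0]] = counts.get(ballot[0], 0) + 1
    best = max(counts.values())
    return next(c for c in tie_order if counts.get(c, 0) == best)

def find_manipulation(profile, tie_order):
    """Search for a voter and an insincere ballot improving her outcome."""
    candidates = profile[0]
    honest = plurality(profile, tie_order)
    for i, truth in enumerate(profile):
        for lie in permutations(candidates):
            trial = list(profile)
            trial[i] = lie
            winner = plurality(trial, tie_order)
            if truth.index(winner) < truth.index(honest):
                return i, lie
    return None
```

With three candidates and a split opposition, such a search finds the familiar "compromise" manipulation: a voter abandons her sincere favorite to elect her second choice over her last.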