# Recent Seminars

## Spring 2014

Speaker: Jochen Konemann, University of Waterloo
Topic: Finding Small Stabilizers for Unstable Graphs
When and where: January 27, 4:00 pm, Ricketts 211
(CS Colloquium)

Abstract: An undirected graph G=(V,E) is stable if its inessential vertices (those that are exposed by at least one maximum matching) form a stable set. We call a set of edges F a stabilizer if its removal from G yields a stable graph. In this talk we study the following natural edge-deletion question: given a graph G, can we find a minimum-cardinality stabilizer? Stable graphs play an important role in cooperative game theory. In the classic matching game introduced by Shapley and Shubik we are given an undirected graph G where vertices represent players, and we define the value of each subset S of vertices as the cardinality of a maximum matching in the subgraph induced by S. The core of such a game contains all fair allocations of the value of V among the players, and is well known to be non-empty iff the graph G is stable. The minimum stabilizer problem addresses the question of how to modify the graph to ensure that the core is non-empty. We show that this problem is vertex-cover hard. We then prove that there is a minimum-cardinality stabilizer that avoids some maximum matching of G. We employ this insight to give efficient approximation algorithms for sparse graphs and for regular graphs. Joint work with A. Bock, K. Chandrasekaran, B. Peis and L. Sanita.
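The definitions above can be checked by brute force on a tiny example (an illustrative sketch, not from the talk; the triangle graph, the edge lists, and the helper names are mine):

```python
from itertools import combinations

def is_matching(F):
    """A set of edges is a matching if no vertex is used twice."""
    used = [v for e in F for v in e]
    return len(used) == len(set(used))

def inessential(edges, vertices):
    """Vertices exposed by at least one maximum matching (brute force)."""
    matchings = [F for r in range(len(edges) + 1)
                 for F in combinations(edges, r) if is_matching(F)]
    mu = max(len(F) for F in matchings)
    exposed = set()
    for F in matchings:
        if len(F) == mu:
            exposed |= vertices - {v for e in F for v in e}
    return exposed

def is_stable(edges, vertices):
    """Stable: the inessential vertices form a stable (independent) set."""
    ins = inessential(edges, vertices)
    return not any(u in ins and v in ins for u, v in edges)

triangle = [(0, 1), (1, 2), (0, 2)]
V = {0, 1, 2}
# The triangle is unstable (every vertex is exposed by some maximum
# matching), and deleting the single edge (0, 2) stabilizes it, so
# {(0, 2)} is a minimum-cardinality stabilizer here.
```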

Speaker: James Thompson
Topic: A Short, Illustrative MapReduce Tutorial
When and where: April 2, 11:00 am, Lally 02

Abstract: A prevalent model of computing today is cloud-based, and a common abstraction of this model is MapReduce. Increasingly, algorithms need to be developed and analyzed within this framework for practical large-scale applications. This talk/tutorial will give an introduction to MapReduce. To illustrate the basic mechanics, we will show how two simple algorithms on graphs can be brought within the MapReduce framework. The first is an iterative algorithm to improve a set's density (as would be used, for example, in clustering). The second is a PageRank-like neighbourhood description algorithm (as would be useful in graph indexing for efficient search).
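For readers unfamiliar with the framework, the basic mechanics can be sketched with the canonical word-count example (plain Python standing in for a real MapReduce runtime such as Hadoop; the example is mine, not from the talk):

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values, here by summing counts."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the quick dog"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
```

In a real deployment the map and reduce phases run in parallel across machines and the shuffle is handled by the framework; the sequential pipeline above only shows the data flow.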

Speaker: Laurence L. Leff, Western Illinois University
Topic: Participatory Budgeting and Beyond
When and where: April 16, 11:00 am, Lally 02

Speaker: John Postl
Topic: Assignment Games with Conflicts
When and where: May 14, 11:00 am, Lally 02

Abstract: We study assignment games in which jobs select machines, and in which certain pairs of jobs may conflict, which is to say they may incur an additional cost when they are both assigned to the same machine, beyond that associated with the increase in load. Questions regarding such interactions apply beyond allocating jobs to machines: when people in a social network choose to align themselves with a group or party, they typically do so based upon not only the inherent quality of that group, but also who amongst their friends (or enemies) choose that group as well. We show how semi-smoothness, a recently introduced generalization of smoothness, is necessary to find tight or near-tight bounds on the price of total anarchy, and thus on the quality of correlated and Nash equilibria, for several natural job-assignment games with interacting jobs. For most cases, our bounds on the price of total anarchy are either exactly 2 or approach 2. We also prove new convergence results implied by semi-smoothness for our games.

Speaker: Lirong Xia
Topic: Statistical methods for detecting manipulation in preference aggregation systems
When and where: May 21, 11:00 am, Lally 02

Abstract: We discuss the state-of-the-art techniques and propose promising future directions in detecting two types of manipulation using statistical methods: vote manipulation in elections and false-name manipulation in mechanism design.

## Fall 2013

Speaker: Onkar Bhardwaj
Topic: Friendship and Stable Matching
When and where: August 28, 11:00 am, Lally 02

Abstract: We study stable matching problems in networks where players are embedded in a social context, and may incorporate friendship relations or altruism into their decisions. Each player is a node in a social network and strives to form a good match with a neighboring player. We consider the existence, computation, and inefficiency of stable matchings from which no pair of players wants to deviate. When the benefits from a match are the same for both players, we show that incorporating the well-being of other players into their matching decisions significantly decreases the price of stability, while the price of anarchy remains unaffected. Furthermore, a good stable matching achieving the price of stability bound always exists and can be reached in polynomial time. We extend these results to more general matching rewards, when players matched to each other may receive different utilities from the match. For this more general case, we show that incorporating social context (i.e., "caring about your friends") can make an even larger difference, and greatly reduce the price of anarchy. We show a variety of existence results, and present upper and lower bounds on the prices of anarchy and stability for various matching utility structures.

Speaker: Onkar Bhardwaj
Topic: Friend of my Friend: Network Formation with Two-Hop Benefit
When and where: October 16, 11:00 am, Lally 02

Abstract: How and why people form ties is a critical issue for understanding the fabric of social networks. In contrast to most existing work, we are interested in settings where agents are neither so myopic as to consider only the benefit they derive from their immediate neighbors, nor do they consider the effects on the entire network when forming connections. Instead, we consider games on networks where a node tries to maximize its utility taking into account the benefit it gets from the nodes it is directly connected to (called direct benefit), as well as the benefit it gets from the nodes it is acquainted with via a two-hop connection (called two-hop benefit). We call such games Two-Hop Games. The decision to consider only two hops stems from the observation that human agents rarely consider “contacts of a contact of a contact” (3-hop contacts) or further while forming their relationships. We consider several versions of Two-Hop Games which are extensions of well-studied games. While the addition of two-hop benefit changes the properties of these games significantly, we prove that in many important cases good equilibrium solutions still exist, and bound the change in the price of anarchy due to two-hop benefit both theoretically and in simulation.

Topic: On Selling Digital Goods
When and where: November 19, 4:00 pm, CII (Low) 3051
(CS Colloquium)

Abstract: Suppose you are a monopolist for a digital good. What pricing mechanism should you use to sell the good? This sounds straightforward but there is a catch. Specifically, such a monopolist is a form of duropolist and Richard Coase (1972) conjectured that a duropolist has no monopoly power at all! We discuss this problem and the various pricing mechanisms available to a duropolist. We then examine the algorithmic implications involved and quantify the extent to which a duropolist can, in fact, generate higher profits than an equivalent monopolist. (Joint work with Peter Sloan and Gerardo Berbeglia.)

Speaker: Bill Zwicker
Topic: Condorcet VS Borda, Round n+1
When and where: December 4, 11:00 am, Lally 02

Abstract: A voting rule is manipulable if it is sometimes possible for a voter to change the election’s outcome to one she prefers, by switching away from her sincere ballot – one that represents her actual preferences – to an insincere ballot. Loosely speaking, the Gibbard-Satterthwaite Theorem tells us that every voting rule suffers from this problem, when there are more than two candidates. More precisely, the theorem states that with three or more alternatives every resolute, non-imposing, and non-dictatorial social choice function (SCF) is manipulable. We prove a variant of this theorem that uses stronger hypotheses and provides additional information, revealing a fundamental dichotomy in the type of strategic vulnerability for an SCF.
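The tension between the two rules in the title can be seen on a tiny preference profile where they disagree (an illustrative example of mine, not taken from the talk):

```python
# Each entry is (number of voters, ranking from most to least preferred)
# in a hypothetical 5-voter, 3-candidate election.
profile = [(3, ["A", "B", "C"]),
           (2, ["B", "C", "A"])]
candidates = ["A", "B", "C"]

def borda_scores(profile, candidates):
    """Borda: a candidate ranked i-th from the bottom earns i points per ballot."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for weight, ranking in profile:
        for pos, c in enumerate(ranking):
            scores[c] += weight * (m - 1 - pos)
    return scores

def condorcet_winner(profile, candidates):
    """A Condorcet winner beats every other candidate in a pairwise majority vote."""
    def beats(a, b):
        margin = sum(weight if ranking.index(a) < ranking.index(b) else -weight
                     for weight, ranking in profile)
        return margin > 0
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None  # a Condorcet winner need not exist

scores = borda_scores(profile, candidates)
# Here Borda elects B (7 points vs. A's 6), while A beats both B and C
# head-to-head and is therefore the Condorcet winner.
```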

## Spring 2012

Speaker: Paul Horn, Harvard University
When and where: February 15, 11:00 am, Lally 02

Abstract: The Kronecker graph model, originally proposed by Leskovec et al., is a graph model designed to capture real-world graphs. These graphs have been shown to have a wide variety of nice properties observed in graph data, and have been successfully fit to many real-world graphs. In this talk, we will look at a specific random variation of the original Kronecker graph and look at the rise of the giant component in this model, obtaining sharp estimates for the size of the component when it exists. An interesting aspect that the analysis must overcome is that Kronecker graphs do not expand as nicely and uniformly as Erdos-Renyi random graphs, in analogy with community clustering in real-world networks. A combination of spectral and probabilistic tools is used to establish the result.
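The basic construction behind the model can be sketched in a few lines (a sketch of the standard stochastic Kronecker formulation; the initiator values below are illustrative, not from the talk):

```python
import numpy as np

# Start from a small "initiator" matrix of edge probabilities and take
# repeated Kronecker products; entry (u, v) of the result is then the
# probability of edge (u, v) in the sampled graph.
initiator = np.array([[0.9, 0.5],
                      [0.5, 0.1]])

def kronecker_power(P0, k):
    """k-th Kronecker power of the initiator matrix."""
    P = P0
    for _ in range(k - 1):
        P = np.kron(P, P0)
    return P

P = kronecker_power(initiator, 3)           # 8x8 matrix of edge probabilities
rng = np.random.default_rng(0)
A = (rng.random(P.shape) < P).astype(int)   # one sampled adjacency matrix
```

The recursive product is what produces the heterogeneous, community-like structure mentioned in the abstract: dense blocks (e.g. probability 0.9^3 in the top-left corner) coexist with sparse ones, unlike the uniform edge probability of an Erdos-Renyi graph.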

Speaker: Jeffry Gaston
Topic: Active Learning of Gaussian Mixture Models Using Direct Estimation of Error Reduction
When and where: March 28, 11:00 am, Lally 02

Abstract: Traditional supervised learning for training a classifier involves a static set of fully-labeled data. However, labels can be costly to obtain. Active learning often provides lower misclassification rates for an equivalent number of labeled datapoints by allowing the learner to choose particularly important points for labeling. Standard paradigms for choosing points to label may be to request points near the decision boundary, or points where uncertainty is maximum. We show how to optimize the target criterion directly by estimating error reduction in a probabilistic framework (the Gaussian Mixture Model), and show that in several cases this can require significantly fewer labeled training examples than the maximum-uncertainty approach.
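The maximum-uncertainty baseline that the talk compares against can be sketched for a two-class, one-dimensional Gaussian model (a hypothetical example; the means, variances, and candidate pool below are mine, not from the talk):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posterior(x, params, priors):
    """Class posteriors under class-conditional Gaussians."""
    likes = np.array([p * gaussian_pdf(x, mu, s)
                      for (mu, s), p in zip(params, priors)])
    return likes / likes.sum(axis=0)

def most_uncertain(pool, params, priors):
    """Uncertainty sampling: query the point with maximum posterior entropy."""
    post = posterior(pool, params, priors)              # shape (classes, n)
    entropy = -(post * np.log(post + 1e-12)).sum(axis=0)
    return pool[np.argmax(entropy)]

pool = np.linspace(-3, 3, 61)                           # unlabeled candidates
x_star = most_uncertain(pool, params=[(-1.0, 1.0), (1.0, 1.0)],
                        priors=[0.5, 0.5])
# With symmetric classes the most uncertain point lies at the decision
# boundary x = 0; the talk's point is that this is not always the point
# that most reduces expected classification error.
```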

Speaker: Mithun Chakraborty
Topic: Liquidity Provision in Prediction Markets: A Survey
When and where: April 25, 11:00 am, Lally 02

Abstract: Prediction markets – betting platforms designed for aggregating information about some uncertain future event from a population of informants (traders) into a quantitative forecast – have gone from novelties to serious platforms that can be useful for policy and decision-making to scientists, the government, the military, etc. One of the key challenges to running prediction markets is to provide liquidity which, roughly speaking, refers to the open interest or ongoing activity level in the market. Traditionally this problem has been tackled by using an algorithmic agent called a market maker: an intermediary that is always willing to take one side of the trade at prices dictated by itself. The talk, which is based on my review of relevant literature, is organized in three parts. First, I will discuss the state of the art in prediction markets: inventory-based or cost function-based market making. This is exemplified by Hanson's Logarithmic Market Scoring Rule (LMSR), which is by far the most widely used prediction market mechanism in practice. I will also touch upon an interesting real-life application (the Gates Hillman Prediction Market) and other market makers influenced by LMSR. Next, I will present some important results on the operational and informational aspects of speculative markets (which subsume prediction markets) and discuss a novel information-based market making paradigm that builds on these concepts: Bayesian Market Making. Finally, I will point out how market making can be thought of as a problem of online learning from censored noisy data, and will describe a few similar problems from diverse fields such as medicine, finance, and computer vision, as this line of research has the potential to influence and be influenced by the area of information-based market making.
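Hanson's LMSR, named in the abstract, is concrete enough to state directly (a standard formulation; the liquidity parameter b and the trade sizes below are illustrative):

```python
import math

# LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)), where q[i] is the
# number of outstanding shares of outcome i and b controls liquidity.
def lmsr_cost(q, b):
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Instantaneous price of outcome i; prices always sum to 1."""
    z = sum(math.exp(qi / b) for qi in q)
    return math.exp(q[i] / b) / z

def trade_cost(q_old, q_new, b):
    """What a trader pays to move outstanding shares from q_old to q_new."""
    return lmsr_cost(q_new, b) - lmsr_cost(q_old, b)

b = 100.0
prices = [lmsr_price([0.0, 0.0], b, i) for i in range(2)]  # starts at 0.5 each
paid = trade_cost([0.0, 0.0], [10.0, 0.0], b)              # buy 10 "yes" shares
```

A larger b makes prices move less per share traded (more liquidity) but also raises the market maker's worst-case subsidy, which for two outcomes is b*ln(2).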

## Fall 2011

Speaker: David Kempe, USC
Topic: Subset Selection for Linear Regression
When and where: December 1, 4:00 pm, JEC 3117
(CS Colloquium)

Abstract: One of the central problems in many data-driven sciences is the right selection of attributes to observe in order to accurately predict a variable of interest. Applications abound in areas as diverse as medical sciences (predicting medical conditions based on observable attributes), social sciences (predicting future behaviors or outcomes) and sensor networks, among others. In many of these cases, time or cost constraints prohibit sampling more than a few attributes. Also, many application domains use linear regression as a method of prediction, and evaluate the quality of attributes in terms of the fit with the quantity to be predicted. This motivates the following formal problem definition: “Given the covariances between observable variables and a target variable Z, select a subset of the variables such that the selected set has the best possible fit with Z.” The main result presented in this talk is that so long as the covariance matrix between the variables is far from singular, greedy algorithms frequently used in practice are provably constant-factor approximations. The proof is based on extending the widely used concept of submodularity to a notion of approximate submodularity, and relating it to the spectrum of the covariance matrix. Furthermore, we will investigate various graph-theoretical properties of covariance matrices which allow for efficient exact or approximate algorithms. We conclude with several exciting open questions.
(This talk is based on joint work with Abhimanyu Das, appearing in ICML 2011 and STOC 2008.)
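The forward greedy selection analyzed in the talk can be sketched as follows (a sketch on synthetic data; the R² criterion, the data-generating coefficients, and all names are my illustrative choices):

```python
import numpy as np

def r_squared(X, z):
    """Goodness of fit of the least-squares regression of z on the columns of X."""
    coef, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ coef
    return 1.0 - (resid @ resid) / (z @ z)

def greedy_subset(X, z, k):
    """Repeatedly add the variable that most improves the fit with z."""
    selected = []
    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        best = max(remaining,
                   key=lambda j: r_squared(X[:, selected + [j]], z))
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))                       # 6 candidate variables
z = 2.0 * X[:, 3] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)
chosen = greedy_subset(X, z, 2)                         # recovers columns 3, then 1
```

The talk's result concerns when this greedy procedure is provably close to the best k-subset; the sketch only shows the procedure itself.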

Speaker: Tom Wexler, Oberlin College
Topic: The Price of Civil Society
When and where: November 17, 4:00 pm, JEC 3117
(CS Colloquium)

Abstract: Most work in algorithmic game theory assumes that players ignore the costs incurred by their fellow players. In this talk, we'll consider superimposing a social network over a game, where players are concerned with minimizing not only their own costs, but also the costs of their neighbors in the network. Our goal is to understand how properties of the underlying game are affected by this alteration to the standard model. In particular, we'll consider the ratio of the social cost of the worst Nash equilibrium when each player cares about both herself and her friends, relative to that of the worst Nash under standard selfish play. We initiate this study in the context of a simple class of games. Counterintuitively, we show that when players become less selfish (optimizing over both themselves and their friends), the resulting outcomes can in fact be worse than they would have been in the base game. We give tight bounds on this degradation in a simple class of load-balancing games, over arbitrary social networks, and present some extensions.
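The purely selfish baseline in such load-balancing games can be illustrated by brute-force Nash enumeration in a tiny instance (an example of mine to fix ideas, not from the talk; no social network is modeled here):

```python
from itertools import product

# 2 identical machines; each job picks a machine and pays that machine's
# total load. The job weights below are illustrative.
w = [2, 2, 1, 1]

def loads(profile):
    L = [0, 0]
    for job, m in enumerate(profile):
        L[m] += w[job]
    return L

def is_nash(profile):
    """No job can strictly lower its cost by switching machines."""
    L = loads(profile)
    return all(L[m] <= L[1 - m] + w[job] for job, m in enumerate(profile))

profiles = list(product([0, 1], repeat=len(w)))
nash = [p for p in profiles if is_nash(p)]
opt = min(max(loads(p)) for p in profiles)        # optimal makespan: 3
worst_nash = max(max(loads(p)) for p in nash)     # worst equilibrium: 4
poa = worst_nash / opt                            # price of anarchy: 4/3
```

The talk's question is how this kind of ratio changes once each player's objective also includes her neighbors' costs.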

An FPTAS for the Weighted Optimal Length Resolution Refutations of Difference Constraint Systems