Past Seminars

(For Fall 2009 and earlier seminars, follow this link.)

Spring 2011

Data Streams, Dyck Languages, and Detecting Dubious Data Structures
Speaker: Andrew McGregor, UMass Amherst
When and where: Feb 17, JEC 3117
(CS Colloquium)

Abstract: You are watching an interaction with a priority queue: at each step the user either inserts a new value into the queue or asks for the current minimum to be extracted. However, you are suspicious that the priority queue may not always be extracting the correct minimum. Can you help the user recognize when this happens without having to remember all the values she has inserted? The above is an example of a problem in the data stream model where the input is a long stream of data and the goal is to compute properties of the input while only using memory that is sub-linear in the length of the stream. This theoretical model has become increasingly popular over the last decade. Its original motivation stems from applications such as monitoring network traffic, processing massive data sets, and aggregation in sensor networks. In this talk, we will present recent results on recognizing formal languages. We'll start with some algorithms for memory checking and then discuss connections with other problems. (Includes joint work with Amit Chakrabarti, Graham Cormode, and Ranganath Kondapally).
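The full-memory baseline implicit in the abstract is easy to state: replay the transcript on a correct priority queue and compare. Below is a minimal Python sketch of this naive checker (the transcript format is an illustrative assumption); the streaming question is whether the same verification is possible with memory sublinear in the stream length.

    import heapq

    def check_transcript(ops):
        """Verify a priority-queue transcript by replaying it on a correct
        heap -- the naive checker that remembers every inserted value.
        ops: iterable of ("insert", value) or ("extract", claimed_min)."""
        heap = []
        for op, value in ops:
            if op == "insert":
                heapq.heappush(heap, value)
            else:  # "extract": the claimed minimum must match the true one
                if not heap or heapq.heappop(heap) != value:
                    return False
        return True

    # Dubious behavior: 2 is extracted while 1 is still in the queue.
    print(check_transcript([("insert", 2), ("insert", 1), ("extract", 2)]))  # False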

Viewing Market Price Discovery as an Algorithmic Process
Speaker: Richard Cole, Courant Institute, New York University
When and where: Feb 24, JEC 3117
(CS Colloquium)

Self-organizing behavior can often be viewed as arising from a distributed process. It is natural to ask when and why it occurs. Our thesis is that an algorithmic perspective may be helpful. One instance of such a distributed process is pricing in markets. A basic tenet of well-functioning markets is that they discover (converge toward) prices that simultaneously balance the supplies and demands of all goods; these are called equilibrium prices. Further, such markets are self-stabilizing, meaning that they converge toward new equilibria as conditions change. This talk will seek to explain why this could happen. More specifically, we describe the setting of Ongoing Markets (by contrast with the classic Fisher and Exchange markets). An Ongoing Market allows trade at non-equilibrium prices, and, as its name suggests, continues over time. The main remaining task is to specify and analyze a suitable price update rule. We consider a (tatonnement-style) rule with the following characteristics:

  1. There is a separate instance of the price update procedure for each good.

  2. The procedure is distributed: (i) the instances are independent, and (ii) each instance uses limited 'local' information.

  3. It is simple.

  4. It is asynchronous: price updates do not have to be simultaneous.

And for appropriate markets the rule enables:

  5. Fast convergence.

  6. Robustness in the face of (somewhat) inaccurate data.

(This talk is based on papers with Lisa Fleischer and Ashish Rastogi.)
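To make the tatonnement idea concrete, here is a toy sketch (an illustrative update rule and market of my own choosing, not the rule analyzed in the talk): in a small Fisher-style market with Cobb-Douglas buyers, each good independently nudges its own price in the direction of its excess demand, using only that local quantity.

    # Toy tatonnement: one independent price-update instance per good,
    # each using only its own (damped) excess demand. Illustrative only.
    budgets = [4.0, 6.0]                  # buyer budgets (Fisher-style market)
    prefs = [[0.5, 0.5], [0.2, 0.8]]      # Cobb-Douglas preference weights
    supply = [1.0, 1.0]
    prices = [1.0, 1.0]
    LAM = 0.2                             # step size of the update rule

    for _ in range(200):
        for i in range(len(prices)):      # a separate instance per good
            demand = sum(b * a[i] / prices[i] for b, a in zip(budgets, prefs))
            excess = demand - supply[i]   # the only 'local' information used
            prices[i] *= 1.0 + LAM * max(min(excess, 1.0), -1.0)

    print([round(p, 2) for p in prices])  # -> about [3.2, 6.8], the equilibrium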

  1. Learning the Demand Curve in Posted-Price Digital Goods Auctions

  2. Expert Mediated Search

Speaker: Meenal Chhabra
When and where: Apr 13, SAGE 2704

Abstract:

  1. Learning the Demand Curve in Posted-Price Digital Goods Auctions. Online digital goods auctions are settings where a seller with an unlimited supply of goods (e.g., music or movie downloads) interacts with a stream of potential buyers. In the posted-price setting, the seller makes a take-it-or-leave-it offer to each arriving buyer. We study the seller's revenue maximization problem in posted-price auctions of digital goods. We find that algorithms from the multi-armed bandit literature like UCB, which come with good regret bounds, can be slow to converge (a baseline UCB sketch appears after this list). We propose and study two alternatives:

    1. a scheme based on Gittins indices with priors that make appropriate use of domain knowledge;

    2. a new learning algorithm, LLVD, that assumes a linear demand curve, and maintains a Beta prior over the free parameter using a moment-matching approximation. LLVD is not only (approximately) optimal for linear demand, but also learns fast and performs well when the linearity assumption is violated, for example in the cases of two natural valuation distributions, exponential and log-normal.

  2. Expert Mediated Search. Increasingly, in both traditional and especially Internet-based marketplaces, knowledge is becoming a traded commodity. This paper considers the impact of the presence of knowledge-brokers, or experts, on search-based markets with noisy signals. For example, consider a consumer looking for a used car on a large Internet marketplace. She sees a noisy signal of the true value of each car whose advertisement she views, and can disambiguate this signal by paying for the services of an expert (for example, getting a Carfax report, or taking the car to a mechanic for an inspection). Both the consumer and the expert are rational, self-interested agents. We present a model for such search environments, and analyze several aspects of the model, making three main contributions:

    1. We derive the consumer's optimal search strategy in environments with noisy signals, with and without the option of consulting an expert;

    2. We find the optimal strategy for maximizing the expert's profit;

    3. We study the option of market designers to subsidize search in a way that improves overall social welfare. We illustrate our results in the context of a plausible distribution of signals and values.
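As promised above, here is a minimal sketch of the UCB baseline from the first abstract: each price on a grid is treated as a bandit arm, and the reward of an offer is the price if accepted and zero otherwise. The price grid and the buyer model are illustrative assumptions; LLVD itself is not reproduced here.

    import math, random

    PRICES = [0.1 * k for k in range(1, 10)]   # illustrative price grid (arms)
    counts = [0] * len(PRICES)
    revenue = [0.0] * len(PRICES)

    def buyer_accepts(price):
        # Hypothetical buyer: private value drawn uniformly from [0, 1].
        return random.random() >= price

    for t in range(1, 5001):
        if t <= len(PRICES):                   # play each arm once, then...
            arm = t - 1
        else:                                  # ...maximize mean + UCB bonus
            arm = max(range(len(PRICES)),
                      key=lambda a: revenue[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        if buyer_accepts(PRICES[arm]):
            revenue[arm] += PRICES[arm]        # reward = price when accepted

    best = max(range(len(PRICES)), key=lambda a: revenue[a] / counts[a])
    print("empirically best posted price:", PRICES[best])   # ~0.5 here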

Bundling in Expert-Mediated Search
Speaker: Sanmay Das
When and where: April 20, SAGE 2704

In search-based markets with noisy signals, like the market for used cars, experts can play an important role. These experts act as information brokers, revealing the true value of a good in exchange for a fee. Sometimes these experts may choose to sell only bundles of their services (e.g., three car inspections for a fixed price). We analyze bundling of services in a model of expert-mediated (one-sided) search, and derive optimal strategies for experts and buyers. Our analysis reveals some surprising results. In particular, there are situations where offering only non-unit-size bundles of services can be Pareto-improving for both the expert and the buyer. Further, in markets with low search costs, the optimal strategy for the expert may well be to offer unlimited services for a single flat fee.
(This talk is based on joint work with Meenal Chhabra (RPI) and David Sarne (Bar-Ilan Univ))
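For intuition about the search model underlying both expert-mediated talks, the classic noiseless one-sided search baseline has a simple reservation-value solution: keep paying the search cost while the expected gain from one more draw exceeds it. A small worked sketch (the uniform value distribution is an illustrative assumption; the talks' models add noisy signals and expert fees on top of this):

    import math

    # Sequential search with i.i.d. values V ~ Uniform[0, 1] and cost c per
    # inspection: the reservation value r solves E[(V - r)^+] = c.  For the
    # uniform distribution E[(V - r)^+] = (1 - r)^2 / 2, so r = 1 - sqrt(2c).
    def reservation_value_uniform(c):
        return 1.0 - math.sqrt(2.0 * c)

    r = reservation_value_uniform(0.02)
    print(round(r, 3))   # 0.8: stop at the first inspected value >= 0.8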

Firefighting on Geometric Graphs
Speaker: Amir Barghi, Dartmouth
When and where: April 27, SAGE 2704

Abstract: In the Firefighter Problem, a fire starts at a vertex of a graph, and in discrete time intervals it spreads from burned vertices to their neighbors unless they are protected by one of the f firefighters that are deployed every turn. Once burned or protected, a vertex remains in that state. This process terminates when the fire cannot spread any longer. In the case of finite graphs, firefighters wish to minimize the damage or the time that the fire rages. When a fire starts in a locally-finite infinite graph, the key question is whether the fire can be stopped. In this talk, we will discuss three different models for how the vertices of an infinite graph are distributed on the Euclidean plane. All these models have a geometric aspect: in the first two models, random and extremal geometric graphs, the geometric aspect is due to the adjacency criterion, and in the third model, plane-like graphs, it is due to the specific representation of an infinite planar graph on the Euclidean plane.
(This is joint work with Peter Winkler.)
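For the first of the three models, the geometric adjacency criterion is purely metric: vertices are points in the plane, adjacent exactly when they lie within a fixed distance. Below is a small finite construction illustrating that rule (the talk itself concerns infinite graphs; all parameters here are arbitrary):

    import random

    def random_geometric_graph(n, radius, side):
        """Drop n points uniformly in a side x side square and connect
        pairs within 'radius' -- the geometric adjacency criterion."""
        pts = [(random.uniform(0, side), random.uniform(0, side))
               for _ in range(n)]
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if (pts[i][0] - pts[j][0]) ** 2
                 + (pts[i][1] - pts[j][1]) ** 2 <= radius ** 2]
        return pts, edges

    pts, edges = random_geometric_graph(200, 1.0, 10.0)
    print(len(edges), "edges among 200 points")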

A Little Linear Algebraic Lemma with Lots of Uses
Speaker: Malik Magdon-Ismail
When and where: May 4, SAGE 2704

Abstract: We cover a spectacular deterministic sparsification result. Specifically, given a set of vectors v_i whose outer products sum to the identity, Σ_i v_i v_i^T = I, is it possible to choose a small subset of these and still approximately maintain this property? The answer is yes, and it can be done deterministically. The proof is elementary and will not be covered; some intuition will be given. We give a simple generalization of this result (one standard form of the lemma is stated after the list below) and then discuss four applications, all solvable in deterministic low-order polynomial time:

  1. It is possible to take any (dense) weighted graph and construct a sparse weighted graph such that every cut is approximately preserved in the relative sense.

  2. It is possible to take any matrix and choose a small subset of its columns such that the matrix is well approximated in the span of those columns.

  3. Linear Regression (supervised learning). It is possible to take any constrained linear regression problem and convert it into a smaller one; it is possible to reduce the dimensionality of the regression problem (sparse regression) and obtain provable performance guarantees.

  4. K-means Clustering. It is possible to reduce the dimensionality of any K-means clustering problem to O(K) dimensions and obtain provable performance guarantees. Similar methods give an elementary proof of a result due to Bourgain on restricted invertibility. This will not be covered.
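For reference, one standard form of the sparsification lemma sketched above (as formulated by Batson, Spielman, and Srivastava) reads: given v_1, ..., v_n in R^d with Σ_i v_i v_i^T = I and any ε in (0, 1), there exist weights s_i ≥ 0, at most ⌈d/ε²⌉ of them nonzero, such that

    \[
      (1-\varepsilon)^2\, I \;\preceq\; \sum_{i=1}^{n} s_i\, v_i v_i^{\mathsf{T}}
      \;\preceq\; (1+\varepsilon)^2\, I ,
    \]

where ⪯ denotes the positive semidefinite order.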

Optimization Problems in Internet Advertising
Speaker: Clifford Stein, Columbia University
When and where: May 6, JEC 3117
(CS Colloquium)

Abstract: The use of the internet has led to the creation of fundamentally new forms of advertising. In turn, this advertising provides the financial support for many on-line companies and technological breakthroughs. The development of online advertising has raised many new questions in economics, mathematics, computer science and engineering, particularly around the design of auctions and markets, and in the design of algorithms to efficiently manage them. Several problems of deciding which ads should be shown to users can be framed as online stochastic packing integer programs. In this talk, we will discuss results on solving these problems from theoretical and practical standpoints. We first present a near-optimal online algorithm for a general class of packing integer programs which model various online resource allocation problems, including online variants of routing, ad allocation, generalized assignment, and combinatorial auctions. As our main theoretical result, we prove that a simple dual training-based algorithm achieves a (1-o(1))-approximation guarantee in the random order stochastic model. We then focus on the online display ad allocation problem and study the efficiency and fairness of various training-based and online allocation algorithms on data sets collected from a real-life display ad allocation system. Our experimental evaluation confirms the effectiveness of training-based algorithms on real data sets, and also indicates an intrinsic trade-off between fairness and efficiency.
(This talk presents joint work with Jon Feldman, Nitish Korula, Vahab Mirrokni and Monika Henzinger.)
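To convey the flavor of a dual training-based algorithm, here is a deliberately simplified single-resource sketch (my own toy reduction, not the paper's algorithm): learn a dual price for capacity from the first ε fraction of the randomly ordered stream, then accept a later request only if its value covers the imputed resource cost.

    import random

    def train_and_allocate(stream, capacity, eps=0.1):
        """stream: list of (value, size) requests in random order."""
        k = max(1, int(eps * len(stream)))
        sample, rest = stream[:k], stream[k:]
        # Training phase: greedily pack the sample into the scaled capacity
        # eps * capacity; the ratio of the last item packed is the dual price.
        price, filled = 0.0, 0.0
        for v, s in sorted(sample, key=lambda x: x[0] / x[1], reverse=True):
            if filled + s > eps * capacity:
                break
            filled += s
            price = v / s
        # Online phase: accept iff the value beats the imputed cost.
        revenue, used = 0.0, 0.0
        for v, s in rest:
            if v >= price * s and used + s <= capacity:
                revenue += v
                used += s
        return revenue

    random.seed(0)
    reqs = [(random.random(), random.uniform(0.5, 1.5)) for _ in range(1000)]
    print(round(train_and_allocate(reqs, capacity=100.0), 1))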

Fall 2010

Permutation Methods for Model Validation
Speaker: Malik Magdon-Ismail
When and where: Sep 15

We give a permutation approach to validation and model selection in data mining, machine learning, and statistical inference. We define permutation complexity measures for learning, and give uniform bounds on the out-of-sample error, similar to VC-style bounds. We also show concentration of measure for the permutation complexity, which implies that these complexity measures can be estimated efficiently from a few random permutations, making the methods algorithmically efficient. The theoretical novelty is that we develop methods for analyzing dependent sampling schemes (e.g., sampling without replacement), and show how to obtain concentration of measure in such a dependent sampling setting. We demonstrate the practical value of these techniques with extensive model selection experiments on real as well as synthetic data.
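One simple instantiation of the idea (illustrative only; not the paper's exact complexity measure): quantify how much better than chance a model class fits randomly permuted labels, and charge that as a penalty during model selection.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 40)
    y = 2 * x + rng.normal(0, 0.3, 40)        # the true relation is linear

    def fit_error(x, y, degree):
        coef = np.polyfit(x, y, degree)
        return np.mean((np.polyval(coef, x) - y) ** 2)

    def permutation_penalty(x, y, degree, trials=20):
        # How much better than chance does this class fit permuted labels?
        errs = [fit_error(x, rng.permutation(y), degree) for _ in range(trials)]
        return np.var(y) - np.mean(errs)      # larger = richer model class

    for d in (1, 3, 9):
        score = fit_error(x, y, d) + permutation_penalty(x, y, d)
        print(f"degree {d}: penalized error {score:.3f}")
    # The penalty grows with degree, steering selection toward degree 1.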

Partition Equilibrium Always Exists in Resource Selection Games
Speaker: Ameya Hate
When and where: Sep 29

We consider the existence of partition equilibrium in resource selection games. Super-strong equilibrium, in which no subset of players has an incentive to change their strategies collectively, does not always exist in such games. We show, however, that partition equilibrium always exists in general resource selection games, and show how to compute it efficiently. In a partition equilibrium, the set of players has a fixed partition into coalitions, and the only deviations considered are by coalitions that are sets in this partition. Our algorithm to compute a partition equilibrium in any resource selection game (i.e., load balancing game) settles the open question of whether partition equilibrium exists in general resource selection games. Moreover, we show how to always find a partition equilibrium which is also a Nash equilibrium. This implies that in resource selection games, we do not need to sacrifice the stability of individual players when forming solutions stable against coalitional deviations. In addition, while super-strong equilibrium may not exist in resource selection games, we show that its existence can be decided efficiently, and how to find one if it exists.
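For readers new to the model: in a resource selection (load balancing) game each player picks one resource and pays its resulting load. These are congestion games, so best-response dynamics reach a Nash equilibrium; the sketch below illustrates the game model only, not the partition-equilibrium algorithm.

    from collections import Counter

    def best_response_nash(n_players, n_resources):
        """Identical resources, unit jobs: a player's cost is the load of
        its resource.  Iterate best responses until no one can improve."""
        choice = [0] * n_players            # everyone starts on resource 0
        load = Counter(choice)
        improved = True
        while improved:                     # terminates: a potential game
            improved = False
            for p in range(n_players):
                cur = choice[p]
                best = min(range(n_resources),
                           key=lambda r: load[r] + (0 if r == cur else 1))
                if load[best] + 1 < load[cur]:
                    load[cur] -= 1
                    load[best] += 1
                    choice[p] = best
                    improved = True
        return sorted(load.values())

    print(best_response_nash(10, 3))        # balanced loads: [3, 3, 4]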

Impersonation Strategies in Auctions
Speaker: Dr. Ian Kash, Harvard University
When and where: Oct 6

A common approach to analyzing repeated auctions, such as sponsored search auctions, is to treat them as complete information games, because it is assumed that, over time, players learn each other's types. This overlooks the possibility that players may impersonate another type. In this talk, I'll show that many standard auctions, including the Kelly mechanism, generalized second price auctions, and core-selecting auctions, have profitable impersonations. I'll define a notion of impersonation-proofness that applies jointly to the process by which players learn each other's types, the auction mechanism, and the associated complete information game, and I'll give several examples.
(Joint work with David Parkes.)

Contribution Games in Social Networks
Speaker: Elliot Anshelevich
When and where: Oct 27

We consider network contribution games, where each agent in a social network has a budget of effort that it can contribute to different collaborative projects or relationships. Depending on the contributions of the involved agents, a relationship will flourish or drown, and to measure success we use a reward function for each relationship. Every agent tries to maximize the reward from all relationships that it is involved in. We consider pairwise equilibria of this game, and characterize the existence, computational complexity, and quality of equilibrium based on the types of reward functions involved. For example, when all reward functions are concave, we prove that the price of anarchy is at most 2. For convex functions the same holds only under some special but very natural conditions. A special focus of the talk is minimum effort games, where the reward of a relationship depends only on the minimum effort of any of the participants. Finally, we show tight bounds for approximate equilibria and convergence of dynamics in these games.
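As a toy illustration of a minimum effort game (with loudly assumed conventions: unit budgets, both endpoints of an edge receive its reward, and the reward is twice the smaller contribution; none of this is specified in the abstract):

    # Toy minimum effort contribution game on the path u - v - w.
    edges = [("u", "v"), ("v", "w")]
    reward = lambda x, y: 2.0 * min(x, y)    # assumed reward function

    def utilities(contrib):
        """contrib maps (agent, edge) -> effort; an agent's utility is
        the sum of the rewards of its incident edges."""
        util = {a: 0.0 for a in "uvw"}
        for e in edges:
            a, b = e
            r = reward(contrib[(a, e)], contrib[(b, e)])
            util[a] += r
            util[b] += r
        return util

    # v splits its unit budget; u and w spend theirs on their only edge.
    profile = {("u", edges[0]): 1.0, ("v", edges[0]): 0.5,
               ("v", edges[1]): 0.5, ("w", edges[1]): 1.0}
    print(utilities(profile))   # {'u': 1.0, 'v': 2.0, 'w': 1.0}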

Spring 2010

Dynamic and Non-Uniform Pricing Strategies for Revenue Maximization
Speaker: Tanmoy Chakraborty
When and where: Feb 10

Abstract: We study the Item Pricing problem for revenue maximization in the limited supply setting, where a single seller with n distinct items caters to m buyers with unknown subadditive valuation functions who arrive in a sequence. The seller sets the prices on individual items. Each buyer buys a subset of the yet-unsold items that maximizes her utility. Our goal is to design pricing strategies that guarantee an expected revenue that is within a small factor of the optimal social welfare – an upper bound on the maximum revenue possible. Most earlier work has focused on the unlimited supply setting, where selling an item to a buyer does not affect the availability of the item to future buyers. Recently, Balcan et al. studied the limited supply setting, giving a randomized pricing strategy that achieves a super-poly-logarithmic approximation; their strategy assigns a single price to all items (uniform pricing) and never changes it (static pricing). They also showed that no pricing strategy that is both static and uniform can give a poly-logarithmic approximation. We design dynamic uniform pricing strategies (all items are identically priced but item prices can change over time), as well as static non-uniform pricing strategies (different items can have different prices but prices do not change over time), that give poly-logarithmic approximations for revenue maximization. Thus in the limited supply setting, our results highlight a strong separation between the power of dynamic and non-uniform pricing strategies and that of static uniform pricing strategies.
(This is joint work with Zhiyi Huang and Sanjeev Khanna.)
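To make the strategy classes concrete, here is a purely illustrative toy (not the paper's strategies, buyer model, or analysis): unit-demand buyers arrive in sequence, all unsold items carry the same (uniform) price, and we compare one fixed price against a simple decreasing schedule.

    import random

    random.seed(1)
    N_ITEMS, N_BUYERS = 5, 20
    vals = [[random.random() for _ in range(N_ITEMS)] for _ in range(N_BUYERS)]

    def revenue(schedule):
        """Uniform pricing: at step t every unsold item costs schedule(t);
        each unit-demand buyer takes the unsold item maximizing value - price."""
        sold, rev = set(), 0.0
        for t, buyer in enumerate(vals):
            p = schedule(t)
            live = [i for i in range(N_ITEMS) if i not in sold and buyer[i] > p]
            if live:
                sold.add(max(live, key=lambda i: buyer[i] - p))
                rev += p
        return rev

    static = revenue(lambda t: 0.7)                # static uniform price
    dynamic = revenue(lambda t: 0.9 * 0.95 ** t)   # dynamic uniform schedule
    print(round(static, 2), round(dynamic, 2))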

Dynamic Projection Surfaces for Architectural Daylighting
Speaker: Barbara Cutler
When and where: Feb 17

Abstract: We present a system for dynamic projection on large, human-scale, moving projection screens and demonstrate this system for immersive visualization applications in several fields. We have designed and implemented efficient, low-cost methods for robust tracking of projection surfaces, a method to provide high frame rate output for computationally-intensive, low frame rate applications, and a method for evaluating the placement of projectors for such systems. We present a distributed rendering environment which allows many projectors to work together to illuminate the projection surfaces. This physically immersive visualization environment promotes innovation and creativity in design and analysis applications and facilitates exploration of alternative visualization styles and modes. The system provides for multiple participants to interact in a shared environment in a natural manner. Our new human-scale user interface is intuitive and novice users require essentially no instruction to operate the visualization. Our driving application applies interactive global illumination and spatially augmented reality to architectural daylight modeling that allows designers to explore alternative designs and new technologies for improving the sustainability of their buildings.

Collective Intelligence: Information Aggregation in Wikis, Blogs, and Markets
Speaker: Sanmay Das
When and where: March 3

Abstract: Over the last four decades, we have seen the development of a rich mathematical theory of information aggregation in markets. As new collective information aggregation venues like wikis and blogs become prevalent, we are in need of a similar descriptive, quantitative theory. I will discuss some early steps towards such a theory: a model for information growth in collaborative media that is able to reproduce many features of the edit dynamics observed on Wikipedia and on blogs collected from LiveJournal; in particular, our model captures the observed rise in the edit rate, followed by (1/t) decay. Insights from modeling can guide better design. I will present a practical market making algorithm based on classical finance theory that can substantially improve liquidity in markets without losing money in expectation. This market maker solves an interesting optimal learning problem. I will talk about how we validate this algorithm using a novel experimental design with human subjects trading in a prediction market. The design allows us to compare our market maker with the de facto standard, Hanson's logarithmic market scoring rule. Although it is not loss-bounded like Hanson's market maker, our algorithm offers several practical advantages, primarily the fact that in expectation we do not have to subsidize the market, but also more price stability.
(Parts of this talk are joint work with Aseem Brahma and with Malik Magdon-Ismail)
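Since Hanson's logarithmic market scoring rule is the benchmark here, its standard cost-function form is worth recording (a minimal sketch; the talk's own market maker is a different, inventory-based algorithm): with liquidity parameter b, the market maker charges C(q') - C(q) to move the outstanding share vector from q to q', where C(q) = b log Σ_i exp(q_i / b).

    import math

    B = 10.0   # LMSR liquidity parameter (larger = deeper, costlier market)

    def cost(q):
        return B * math.log(sum(math.exp(qi / B) for qi in q))

    def price(q, i):
        # Instantaneous price of outcome i: the gradient of the cost function.
        z = sum(math.exp(qj / B) for qj in q)
        return math.exp(q[i] / B) / z

    q = [0.0, 0.0]                       # two-outcome market, nothing sold yet
    print(price(q, 0))                   # 0.5
    charge = cost([5.0, 0.0]) - cost(q)  # buy 5 shares of outcome 0
    print(round(charge, 2))              # ~2.81; outcome 0's price rises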

Approximation Algorithms for the Firefighter Problem
Speaker: Ameya Hate
When and where: March 17

We provide approximation algorithms for several variants of the Firefighter problem on general graphs. The Firefighter problem models the case where a diffusive process such as an infection (or an idea, a computer virus, a fire) is spreading through a network, and our goal is to contain this infection by using targeted vaccinations. Specifically, we are allowed to vaccinate at most a fixed number (called the budget) of nodes per time-step, with the goal of minimizing the effect of the infection. The difficulty of this problem comes from its temporal component, since we must choose nodes to vaccinate at every time-step while the infection is spreading through the network, leading to notions of cuts over time. We consider two versions of the Firefighter problem: a non-spreading model, where vaccinating a node means only that this node cannot be infected; and a spreading model where the vaccination itself is an infectious process, such as in the case where the infection is a harmful idea, and the vaccine to it is another infectious beneficial idea. We look at two measures: the MaxSave measure in which we want to maximize the number of nodes which are not infected given a fixed budget, and the MinBudget measure, in which we are given a set of nodes which we have to save and the goal is to minimize the budget. We study the approximability of these problems in both models.
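A minimal simulation of the non-spreading model with a simple greedy vaccination heuristic (illustrative only; the talk's approximation algorithms are more involved): in each time-step, vaccinate up to `budget` threatened nodes, then let the infection spread.

    def simulate_firefighter(adj, source, budget):
        """Greedy heuristic for the non-spreading model: each step,
        vaccinate up to `budget` threatened nodes (highest degree first),
        then spread the fire.  Returns the set of saved nodes."""
        burned, vaccinated = {source}, set()
        while True:
            frontier = {v for u in burned for v in adj[u]} - burned - vaccinated
            if not frontier:
                break
            for v in sorted(frontier, key=lambda v: -len(adj[v]))[:budget]:
                vaccinated.add(v)
            burned |= frontier - vaccinated
        return set(adj) - burned

    # 3x3 grid graph, infection starting at a corner, budget 1 per step.
    grid = {(r, c): [(r + dr, c + dc)
                     for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
                     if 0 <= r + dr < 3 and 0 <= c + dc < 3]
            for r in range(3) for c in range(3)}
    saved = simulate_firefighter(grid, (0, 0), budget=1)
    print(len(saved), "of 9 nodes saved")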

Anarchy, Stability, and Utopia: Creating Better Matchings
Speaker: Elliot Anshelevich
When and where: March 24

Abstract: Historically, the analysis of matching has centered on designing algorithms to produce stable matchings as well as on analyzing the incentive compatibility of matching mechanisms. Less attention has been paid to questions related to the social welfare of stable matchings in cardinal utility models. We examine the loss in social welfare that arises from requiring matchings to be stable, the natural equilibrium concept under individual rationality. While this loss can be arbitrarily bad under general preferences, when there is some structure to the underlying graph corresponding to natural conditions on preferences, we prove worst case bounds on the price of anarchy. Surprisingly, under simple distributions of utilities, the average case loss turns out to be significantly smaller than the worst-case analysis would suggest. Furthermore, we derive conditions for the existence of approximately stable matchings that are also close to socially optimal, demonstrating that adding small switching costs can make socially (near-)optimal matchings stable. Our analysis leads to several concomitant results of interest on the convergence of decentralized partner-switching algorithms, and on the impact of heterogeneity of tastes on social welfare.
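In the cardinal-utility setting, instability is witnessed by a blocking pair: two agents who would each strictly gain by abandoning their current partners for one another. A toy checker (the utility tables are illustrative):

    def blocking_pairs(util, matching):
        """util[a][b] = a's utility for partner b; matching maps each
        agent to its partner or None (utility 0 when unmatched)."""
        def current(a):
            p = matching[a]
            return util[a][p] if p is not None else 0.0
        agents = list(util)
        return [(a, b) for i, a in enumerate(agents) for b in agents[i + 1:]
                if matching[a] != b
                and util[a][b] > current(a) and util[b][a] > current(b)]

    util = {"a": {"b": 3, "c": 1}, "b": {"a": 3, "c": 2},
            "c": {"a": 1, "b": 2}}
    m = {"a": "c", "c": "a", "b": None}
    print(blocking_pairs(util, m))   # [('a','b'), ('b','c')]: not stable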

Low-rank Matrix-valued Chernoff Bounds and Applications
Speaker: Tasos Zouzias
When and where: April 7

In this talk we will discuss some recent results on a non-trivial generalization, due to Ahlswede and Winter, of the usual Chernoff bound to matrix-valued random variables. I am going to present this generalization and compare it with Vershynin and Rudelson's deviation inequality for rank-one symmetric operators. Then I will show how Vershynin and Rudelson's approach can be adapted to prove a sharper matrix-valued Chernoff bound when the (matrix) samples have low rank. Finally, I will show the usefulness of the latter result on the problem of approximating matrix multiplication with respect to the spectral norm.
(Joint work with Avner Magen.)
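The matrix multiplication application can be illustrated with the generic randomized sampling scheme (norm-proportional sampling of column/row outer products; the talk's contribution is the sharper low-rank analysis, not this code):

    import numpy as np

    def approx_matmul(A, B, k, rng=np.random.default_rng(0)):
        """Estimate A @ B from k sampled outer products of columns of A
        with rows of B, sampled with probability proportional to
        ||A[:, i]|| * ||B[i, :]|| and rescaled to stay unbiased."""
        norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
        p = norms / norms.sum()
        idx = rng.choice(A.shape[1], size=k, p=p)
        return sum(np.outer(A[:, i], B[i, :]) / (k * p[i]) for i in idx)

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 400))
    B = rng.standard_normal((400, 50))
    est = approx_matmul(A, B, k=200)
    err = np.linalg.norm(est - A @ B, 2) / np.linalg.norm(A @ B, 2)
    print(f"relative spectral-norm error: {err:.2f}")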