Theory at RPI CS

Spring 2009 Seminars:
| Pricing in Network Games
using Supply Functions by Nicholas Stier-Moses |
02/12/09 |
|
One of the first problems
studied in the area of Algorithmic Game Theory has been to determine
the inefficiency that arises from selfish behavior in network games. In
these games, arcs have cost functions defined a priori and players have
to choose paths in the network whose costs depend on the choices of
other players. Some common applications can be found in
telecommunication, transportation and logistic networks. From the
perspective of the system, one would like players to coordinate
to maximize the global welfare but, in most cases, players
are oblivious to this and only pay attention to their own welfare. The
mismatch between the global and personal objectives leads the system to
operate in a suboptimal regime. For this reason, this stream of
research focused on quantifying the inefficiency introduced by selfish
behavior and determined that, under some assumptions, the operating
regime---represented by an equilibrium---cannot be more than 33% worse
than optimal. In this talk, I will review some of these results and
then introduce a model that endogenizes the selection of cost
functions, which were previously considered exogenous and fixed. As
opposed to traditional network games, the owners of arcs choose how
much to charge users interested in selecting them. Owners do this by
determining the supply function that maximizes their own profits. Users
learn these functions and compete for paths as in a traditional network
game. Models of this type and their solutions have been called supply
function equilibria and have immediate application to electricity
markets. Indeed, a mechanism of this type is used daily to decide how
much electricity each generator is going to produce to inject to the
grid. I will present necessary and sufficient conditions for the
existence of equilibria in the space of supply functions, which can be
used to show that charges at equilibrium equal the cost of providing
service plus a multiplicative markup. Finally, I will analyze the
inefficiency introduced by competition and the distortion of costs that
it generates.
|
|
| Estimation of Credit and
Interest Rate Risks for Banks’ Loan Portfolios by Andrey
Sarayev |
02/25/09 |
| A bank holding a loan
portfolio needs to estimate its risks in order to prevent
potentially large losses. The two main risks of a loan portfolio are
credit risk and interest rate risk. Most existing models compute these
two risks separately. We propose a model that estimates credit and
interest rate risk either separately or combined. Combined estimation
is important because, in many real-world scenarios, credit and interest
rate risks are not independent. The model allows us to calculate the
probability distribution function (pdf) of portfolio losses for a given
time horizon. Given the pdf of portfolio losses, we can easily obtain
various risk metrics, such as Value-at-Risk or Expected Shortfall. The
model does not admit an analytical solution for the loss pdf, so we
calculate it using numerical methods. We implement two such methods:
(1) a fast Monte Carlo method and (2) a numerical integration method.
The running time of both algorithms is linear in the number of loans.
Our implementations are optimized for speed and balance the workload
among multiple threads, so the algorithms achieve significant speedup
on computers with multiple CPUs or modern multi-core CPUs. |
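The generic pipeline the abstract describes — simulate a loss distribution, then read off risk metrics — can be sketched as follows. This is a minimal illustration with hypothetical loan parameters and independent defaults, not the speakers' combined credit/interest-rate model.

```python
import numpy as np

def simulate_loss_distribution(exposures, default_probs, n_trials=100_000, seed=0):
    """Monte Carlo sketch: sample defaults and aggregate portfolio losses.
    (Toy model: independent defaults; the talk's model couples credit and
    interest-rate risk.)"""
    rng = np.random.default_rng(seed)
    # One row per trial: which loans default in that scenario.
    defaults = rng.random((n_trials, len(exposures))) < default_probs
    return defaults @ exposures  # loss per trial; linear in the number of loans

def value_at_risk(losses, alpha=0.99):
    return np.quantile(losses, alpha)

def expected_shortfall(losses, alpha=0.99):
    var = value_at_risk(losses, alpha)
    return losses[losses >= var].mean()  # mean loss in the alpha-tail

# Hypothetical portfolio: exposures and annual default probabilities.
exposures = np.array([100.0, 250.0, 50.0, 400.0])
default_probs = np.array([0.02, 0.01, 0.05, 0.005])
losses = simulate_loss_distribution(exposures, default_probs)
print(value_at_risk(losses), expected_shortfall(losses))
```

Given the empirical loss distribution, any quantile-based risk metric follows at no extra cost, which is why the pdf is the natural intermediate object.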
|
|
A
Priority-Based Model of Routing by
Neil Olver
|
03/04/09 |
|
When being held up in
traffic, it is obviously only the cars in front of you that delay you.
In situations where the traffic density changes with time, different
users may experience different delays on the same link. In the standard
model of selfish routing, however, all users experience the same delay.
In this talk, I will discuss a new network routing model which
introduces priorities in a very general way. The model is sufficiently
general to apply to some quite varied problems, and I will talk about
some of these applications. I will also discuss some 'price of anarchy'
results, which quantify the loss of efficiency that results from a lack
of centralised control. In particular, we have tight bounds on the
price of anarchy for a number of important cases.
|
|
| Bin Packing: From Theory
to Experiment and back Again by David Johnson |
03/18/09 |
| In the classical
one-dimensional bin packing problem, we are given a list of items with sizes
between 0 and 1, and asked to pack them into a minimum number of
unit-capacity bins. This was one of the first NP-hard problems to be
studied from the "approximation algorithm" point of view, and has been
the source of many surprising results about the average case behavior
of such algorithms. Many of these surprises were first revealed by
experimentation, which led to conjectures and then to proofs. In this
talk I will survey these results, and also highlight some
as-yet-unproved conjectures suggested by the experimental data. |
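As a concrete reference point for the problem statement, here is the standard First Fit Decreasing heuristic — one of the classical approximation algorithms whose average-case behavior this line of work studies (the specific items below are illustrative, not from the talk).

```python
def first_fit_decreasing(sizes):
    """First Fit Decreasing: sort items by decreasing size, then place each
    item into the first open bin with enough remaining capacity, opening a
    new unit-capacity bin only when none fits."""
    bins = []  # each bin is the list of item sizes packed into it
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= 1.0:
                b.append(s)
                break
        else:
            bins.append([s])  # no open bin fits: open a new one
    return bins

items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]
packing = first_fit_decreasing(items)
print(len(packing))  # → 4 bins (optimal here, since the sizes sum to 3.1)
```

Worst-case and average-case gaps between such heuristics and the optimal packing are exactly the quantities the surveyed results and conjectures concern.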
|
| Event Recognition in
Sensor Networks by Means of Grammatical Inference by Sahin Cem
Geyik |
03/25/09 |
| Modern military and civilian
surveillance applications should provide end users with a high-level
representation of the events observed by sensors rather than with raw
data measurements. Hence, there is a need for a system that can infer
higher-level meaning from collected sensor data. We demonstrate that
probabilistic context-free grammars (PCFGs) can be used as a basis for
such a system. To recognize events from raw sensor network
measurements, we use a PCFG inference method based on Stolcke (1994) and
Chen (1996). We present a fast algorithm for deriving a concise
probabilistic context-free grammar from given observational data.
The algorithm uses an evaluation metric based on Bayes' formula to
maximize the grammar's a posteriori probability given the training data.
We also present a real-world scenario of monitoring a parking lot and a
simulation based on this scenario, and describe the use of PCFGs to
recognize events in the results of such a simulation. We finally
demonstrate the deployment details of such an event recognition system. |
|
|
Rethinking Internet
Traffic Management using Optimization Theory by Jennifer Rexford
|
04/02/09 |
| In the Internet today,
traffic management spans congestion control (at end hosts), routing
protocols (on routers), and traffic engineering (by network operators).
Historically, this division of functionality evolved organically. This
talk presents a top-down redesign of traffic management using recent
innovations in optimization theory. First, we propose an objective
function that captures the goals of end users and network operators.
Using known optimization decomposition techniques, we generate four
distributed algorithms that divide traffic over multiple paths based on
feedback from the network links. Combining the best features of the
algorithms, we construct a traffic management protocol that is
distributed, adaptive, robust, flexible and easy to manage. Further,
our new protocol can operate based on implicit feedback about packet
loss and delay. We show that using optimization decompositions as a
foundation, simulations as a building block, and human intuition as a
guide can be a principled approach to protocol design. This is joint
work with Jiayue He, Martin Suchara, Ma'ayan Bresler, and Mung Chiang. |
|
| Incremental Construction
of Classifier and Discriminant Ensembles by Murat Semerci |
04/22/09 |
|
We discuss approaches to
incrementally construct an ensemble. Our first construction is an
ensemble of classifiers chosen as a subset from a larger set, and the
second construction is an ensemble of discriminants, where a classifier
is used for some classes only. We investigate criteria including
accuracy, significant improvement, diversity, correlation, and the role
of search direction. We find that the discriminant ensemble uses a
subset of discriminants and is simpler, interpretable, and accurate. We
also find that an incremental ensemble has higher accuracy than bagging
and the random subspace method, and accuracy comparable to AdaBoost
while using fewer classifiers.
|
| Exact Calculation of
Distributions on Integers, with Application to Sequence Alignment by
Lee Newberg |
04/29/09 |
|
Often dynamic programming algorithms are used to compute an optimal
score. In many cases, the score is naturally defined on integers, such
as a count of the number of pairing differences between two sequence
alignments, or else an integer score has been adopted for computational
reasons, such as in the test of significance of DNA motif scores. The
probability distribution of an integer score under an appropriate
probabilistic model is of interest, such as in tests of the statistical
significance of motif scores, or in the calculation of Bayesian
confidence limits around an alignment. I will present three generic
algorithms for calculating the exact distribution of a dynamic
programming algorithm integer score; and give some concrete examples
from biology. This is work from doi:10.1089/cmb.2008.0137.
|
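The core idea — tracking the exact probability of every integer score value through a dynamic program — can be illustrated with a toy convolution sketch. This is a generic example (independent integer-valued steps), not one of the three algorithms from the paper.

```python
from collections import defaultdict

def exact_score_distribution(step_pmfs):
    """Exact pmf of an integer-valued score: fold in one step at a time,
    convolving the running distribution with each step's pmf. Each pmf
    maps an integer increment to its probability."""
    dist = {0: 1.0}
    for pmf in step_pmfs:
        nxt = defaultdict(float)
        for score, p in dist.items():
            for inc, q in pmf.items():
                nxt[score + inc] += p * q
        dist = dict(nxt)
    return dist

# Hypothetical instance: three positions, each scoring +1 with probability
# 0.25 (a match) and 0 otherwise, so the total is Binomial(3, 0.25).
pmf = {1: 0.25, 0: 0.75}
dist = exact_score_distribution([pmf] * 3)
print(dist)
```

Because the score space is a set of integers rather than reals, the distribution stays exactly representable at every step — the property the talk's algorithms exploit for p-values and Bayesian confidence limits.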
|
| Time-Space Lower Bounds for Satisfiability and Related Problems on
Randomized Machines by Scott Diehl |
05/06/09 |
In this talk, we consider
the task of establishing explicit lower bounds on the computational
resources required to solve hard problems such as satisfiability and
its quantified extensions. In particular, we describe the first
time-space lower bounds for randomized algorithms to solve QSATk, the
problem of deciding the validity of quantified Boolean formulas with at
most k-1 quantifier alternations. We show that for any integer k >=
2 and any constant c < k, QSATk cannot be computed by randomized
machines with two-sided error in time n^c and subpolynomial
space (n^{o(1)}). This result also applies to other natural complete
problems in the polynomial-time hierarchy, such as maximizing the
number of literals that can be removed from a disjunctive normal form
formula without changing its meaning.
The satisfiability problem corresponds to k = 1, for which the above
bound is trivial. We outline some results towards achieving nontrivial
time-space lower bounds for two-sided error randomized machines to
solve satisfiability, such as lower bounds on one-sided error
algorithms. |
|
| Concurrent Imitation
Dynamics in Congestion Games by Martin Hoefer |
05/08/09 |
| Imitating successful
behavior is a natural and frequently applied approach when making
decisions in various scenarios. In this talk, we consider such behavior
in atomic congestion games, which are simple models for routing and
load balancing in systems without central coordination. Based on
imitation we design a protocol for distributed load balancing, in which
each agent samples another agent and possibly imitates this agents'
strategy if the anticipated latency gain is sufficiently large. We show
that the resulting dynamics converge in a monotonic fashion to stable
states, in which none of the agents can improve its latency by
imitating somebody else. As the main result, we show rapid convergence
to approximate equilibria. At an approximate equilibrium only a small
fraction of agents sustains a latency significantly above or below
average. In particular, imitation dynamics behave like an FPTAS, and
fixing all other parameters, the convergence time depends only in a
logarithmic fashion on the number of agents. Additionally, we outline a
number of extended results regarding, e.g., the total cost of stable
states, convergence to Nash equilibria, and sequential dynamics, in
which only one agent moves at a time. This is joint work with Heiner
Ackermann, Petra Berenbrink, and Simon Fischer. |
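The sampling-and-imitation protocol described above can be illustrated with a small simulation. This is a toy in the spirit of the talk, not its exact protocol: identical links, latency equal to load, and migration probability proportional to the anticipated relative latency gain (a standard damping choice to avoid oscillation under concurrent moves).

```python
import random

def imitation_step(assignment, m, rng):
    """One concurrent round: each agent samples a random agent and imitates
    that agent's link if the anticipated latency gain is at least 1,
    migrating with probability proportional to the relative gain."""
    loads = [0] * m
    for link in assignment:
        loads[link] += 1
    nxt = list(assignment)
    for i, link in enumerate(assignment):
        other = assignment[rng.randrange(len(assignment))]  # sampled agent's link
        gain = loads[link] - (loads[other] + 1)  # latency saved by moving there
        if gain >= 1 and rng.random() < gain / loads[link]:
            nxt[i] = other
    return nxt

rng = random.Random(1)
n_agents, m = 200, 4
assignment = [rng.randrange(m) for _ in range(n_agents)]
for _ in range(500):
    assignment = imitation_step(assignment, m, rng)
loads = [assignment.count(l) for l in range(m)]
print(loads)  # loads end up nearly balanced across the 4 links
```

Note that pure imitation can never discover a link no agent currently uses, which is why the simulation starts from a random (rather than degenerate) assignment.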
Fall 2008 Seminars:
|
|
| Approximating Optimal
Binary Decision Trees by Brent Heeringa |
10/08/08 |
| We give a (ln n +
1)-approximation for the decision tree (DT) problem. An instance of DT
is a set of m binary tests T = (T1 , . . . , Tm) and a set of n items X
= (X1 , . . . , Xn ). The goal is to output a binary tree where each
internal node is a test, each leaf is an item and the total external
path length of the tree is minimized. Total external path length is the
sum of the depths of all the leaves in the tree. DT has a long history
in computer science with applications ranging from medical diagnosis to
experiment design. It also generalizes the problem of finding optimal
average-case search strategies in partially ordered sets which includes
several alphabetic tree problems. Our work decreases the previous upper
bound on the approximation ratio by a constant factor. We provide a new
analysis of the greedy algorithm that uses a simple accounting scheme
to spread the cost of a tree among pairs of items split at a particular
node. We conclude by showing that our upper bound also holds for the DT
problem with weighted tests. |
|
| |
| Uniform generation and
summarization of Frequent Graph Patterns by Mohammed Al Hasan |
10/15/08 |
| The recent research
direction in pattern mining has shifted from obtaining the total set of
frequent patterns to generating a representative (summary) set. Most of
the existing mining approaches in this paradigm adopt a two-step
process: in the first step, they obtain all the frequent patterns, and
in the second step, some form of clustering algorithm is used to obtain
the summary pattern set. However, the two-step method is inefficient
and sometimes infeasible, since the first step may fail to finish in a
reasonable amount of time. In this research, we consider an alternative
viewpoint of frequent pattern representativeness, based on uniform and
identical sampling. We obtain representative patterns by sampling
uniformly from the pool of all maximal frequent patterns. The
uniformity is achieved by a variant of the Markov Chain Monte Carlo
(MCMC) algorithm, which simulates a random walk on the frequent pattern
partial order graph with a prescribed transition probability matrix
whose values are computed locally during the simulation. In the
stationary distribution of the random walk, all maximal frequent
pattern nodes in the partial order graph are sampled uniformly.
Experiments on various kinds of graph databases validate our claim. |
|
|
Probabilistic Network Formation through Coverage and Freeze-Tag by Eric Meisner
|
10/22/08 |
|
We address the problem of propagating a piece of information among robots
scattered in an environment. Initially, a single robot has the
information. This robot searches for other robots to pass it along.
When a robot is discovered, it can participate in the process by
searching for other robots. Since our motivation for studying this
problem is to form an ad-hoc network, we call it the Network Formation
Problem. In this paper, we study the case where the environment is a
rectangle and the robots’ locations are unknown but chosen uniformly at
random. We present an efficient network formation algorithm, Stripes,
and show that its expected performance is within a logarithmic factor
of the optimal performance. We also compare Stripes with an intuitive
network formation algorithm in simulations. The feasibility of Stripes
is demonstrated with a proof-of-concept implementation.
|
|
| Inference of geometry
and dependence in modeling complex disease phenotypes by Sayan
Mukherjee |
11/05/08 |
|
Tumor progression is a
complex (disease) phenotype. The challenge in modeling tumorigenesis is
heterogeneity with respect to phenotype, stages of the disease, and
genotype or gene expression variation. This is particularly challenging
in the case of high-dimensional data. I will discuss machine learning
tools or Bayesian statistical models that allow for the inference of
which genes or sets of genes vary across stages of tumors and which are
specific to certain stages. I will also discuss the inference of gene
and pathway networks. The modeling tools are (Bayesian) hierarchical or
multi-task regression models as well as a generalization of inverse
regression based on inference of gradients on manifolds. Statistical
properties of this approach will be developed in the context of
simultaneous dimension reduction and regression as well as graphical
models. Consistency of dimension reduction as well as inference of the
graphical model will be stated. An interesting observation is that the
rate of convergence of both estimates depends on the dimension of the
underlying manifold on which the marginal distribution is assumed to be
concentrated; the result does not depend on the number of non-zero
entries of the conditional independence matrix but on the rank of this
matrix. (Joint work with: Qiang Wu, Elena Edelman, and Justin Guinney.)
| Network Connection
Games: Properties and Complexity of Equilibria by Rajmohan
Rajaraman |
11/21/08 (Friday) |
|
This talk concerns two
classes of network connection games. Motivated by applications in
social networks, peer-to-peer and overlay networks, we first introduce
the Bounded Budget Connection (BBC) games. In a BBC game, we have a
collection of players each of whom has a budget for purchasing links;
each link has a cost as well as a length and each node has a set of
preference weights for each of the remaining nodes; the objective of
each node is to use its budget to buy a set of outgoing links so as to
minimize its sum of preference-weighted distances to the remaining
nodes. We first consider the special case of uniform BBC games, where
all link costs, link lengths and preference weights are equal and all
budgets are equal. We show that pure Nash equilibria always exist for
uniform BBC games, and establish near-tight bounds on the price of
anarchy. For general BBC games, however, determining the existence of a
pure Nash equilibrium is NP-hard. We counterbalance this result by
considering a natural variant, fractional BBC games -- where it is
permitted to buy fractions of links -- and show that a pure Nash
equilibrium always exists in such games. The second class of games is
based on a fractional model of the Border Gateway Protocol (BGP), the
interdomain routing protocol used in the Internet today. The selection
of routes in BGP can be modeled as a game, whose pure Nash equilibria
correspond to a set of stable routes for given route preferences. It
has been shown that pure Nash equilibria do not always exist in such
games, thus implying that for certain route preferences BGP may not
converge. In recent work, Haxell and Wilfong introduced a fractional
model for BGP and showed that when fractional routes are allowed,
equilibria always exist. We prove that the problem of computing even an
approximate equilibrium in a fractional BGP game is PPAD-hard. The
PPAD-hardness result also extends to finding approximate equilibria in
fractional BBC games. (Joint work with Nikos Laoutaris, Laura
Poplawski, Ravi Sundaram, and Shanghua Teng)
| Counting triangles in
large real-world networks: Algorithms and laws by Charalambos
Tsourakakis |
11/24/08 (Monday) |
| Triangles are important for
real-world social networks, lying at the heart of the clustering
coefficient and of the transitivity ratio. However, straightforward
and even approximate counting algorithms can be slow, trying to execute
or approximate the equivalent of a 3-way database join. In this talk we
present a new method, taking advantage of the special properties
appearing in large real-world networks, which has linear time and space
complexity with respect to the number of edges. Furthermore, we
present new laws concerning the number of triangles in networks and a
theorem providing the number of triangles in Kronecker graphs, a
realistic graph generator model.
Presentation
Slides:
http://www.cs.cmu.edu/~ctsourak/presentations.htm
|
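The spectral identity underlying this line of work — the number of triangles equals trace(A³)/6 = Σᵢλᵢ³/6, with the skewed spectra of real networks letting a few top eigenvalues dominate — can be demonstrated directly. This is a small sketch of the identity, not the speaker's linear-time implementation.

```python
import numpy as np

def triangle_count_spectral(A, k=None):
    """Count triangles from the adjacency spectrum:
    #triangles = trace(A^3) / 6 = sum_i lambda_i^3 / 6.
    Passing k keeps only the k largest-magnitude eigenvalues, the
    approximation that is fast on skewed real-world spectra."""
    eigvals = np.linalg.eigvalsh(A)
    if k is not None:
        eigvals = eigvals[np.argsort(-np.abs(eigvals))[:k]]
    return (eigvals ** 3).sum() / 6.0

# K4 (complete graph on 4 nodes) has C(4,3) = 4 triangles;
# its adjacency eigenvalues are 3, -1, -1, -1 and (27 - 3) / 6 = 4.
A = np.ones((4, 4)) - np.eye(4)
print(triangle_count_spectral(A))  # → 4.0
```

On large sparse graphs one would use an iterative eigensolver for the top k eigenvalues rather than a dense decomposition; the identity itself is the same.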
Spring 2008 Seminars:
An Improved Approximation
Algorithm for the Column Subset Selection Problem by Christos
Boutsidis
|
02/13/08
|
|
Abstract
Sampling algorithms and
Coresets for Regression and Applications by Michael W. Mahoney
|
03/26/08
|
|
Abstract: L2
regression, also known as the least squares approximation problem, and
more generally Lp regression, are fundamental problems that have
numerous applications in mathematics, computer science, and statistical
data analysis. Recall that the over-constrained Lp regression problem
takes as input an n x d matrix A, where n > d, an n-vector b, and a
number p, and it returns as output a d-vector x such that ||Ax-b||_p is
minimized. Several randomized algorithms for this problem, each of
which provides a relative-error approximation to the exact solution,
are described. The first algorithm applies novel "subspace sampling"
probabilities to randomly sample constraints and thus construct a
coreset for the L2 regression problem. The second algorithm applies a
random preprocessing before the sampling step and uses a "fast"
version of the Johnson-Lindenstrauss lemma to obtain an efficient
approximation to the L2 regression problem. The third algorithm
generalizes the concept of a "well-conditioned" basis and of
"subspace-preserving sampling" to efficiently obtain a coreset for Lp
regression. Applications of these algorithms to internet data analysis
will be briefly discussed.
|
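The idea behind the first algorithm can be sketched as follows: sample rows with probability proportional to their leverage scores (the row norms of the left singular factor), reweight, and solve the much smaller least-squares problem. A simplified illustration on synthetic data, assuming exact leverage scores computed via an SVD (the papers' fast variants avoid this cost):

```python
import numpy as np

def subsampled_least_squares(A, b, sample_size, seed=0):
    """'Subspace sampling' sketch for overconstrained L2 regression:
    sample rows proportionally to leverage scores, apply
    importance-sampling weights, and solve the reduced problem."""
    rng = np.random.default_rng(seed)
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    leverage = (U ** 2).sum(axis=1)          # row leverage scores (sum to d)
    probs = leverage / leverage.sum()
    idx = rng.choice(len(b), size=sample_size, replace=True, p=probs)
    w = 1.0 / np.sqrt(sample_size * probs[idx])  # importance-sampling weights
    x, *_ = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 5))           # n = 2000 constraints, d = 5
x_true = np.arange(1.0, 6.0)
b = A @ x_true + 0.01 * rng.standard_normal(2000)
x_hat = subsampled_least_squares(A, b, sample_size=300)
print(np.round(x_hat, 2))  # close to [1, 2, 3, 4, 5]
```

Solving on 300 sampled rows instead of 2000 recovers the coefficients to within the noise level, which is the relative-error guarantee the talk's algorithms make rigorous.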
|
| Computer Science in the
Information Age by John Hopcroft |
04/03/08 |
| The last forty years have
seen computer science evolve as a major academic discipline. Today the
field is undergoing a major change. Some of the drivers of this change
are the Internet, the World Wide Web, large sensor networks, large
quantities of information in digital form, and the widespread use of
computers for accessing information. This change is requiring
universities to revise the content of computer science programs. This
talk will cover the changes in the theoretical foundations needed to
support information access in the coming years. |
Location: Troy 2018
Time: 4:00 p.m. |
| The Role of Information
in the Cop-Robber Game by Nikhil Karnad |
04/09/08 |
| In the first part of the
talk, I present our work on the cop-robber game played in turns on a
graph. A graph G is called cop-win if a single cop can capture
(coincide with) a robber on G. It is usually assumed that the players
can "see" each other at all times. We study the effect of reducing the
visibility of the cop and present results on cop-win graphs. The second
part of the talk focuses on work motivated by monocular vision systems
used in robotics applications. We study a variant of the Lion-and-Man
game in which the pursuer (lion) is able to gather bearing-only
information on the evader (man). We present results on how this
restricted sensing model affects the game. |
|
| Disjoint Paths in
Networks by Sanjeev Khanna |
04/16/08 |
| Abstract: A fundamental
problem in combinatorial optimization is the edge-disjoint paths
problem (EDP). We are given a network and a collection of
source-destination pairs in the network. The goal is to maximize the
number of pairs that can be connected by edge-disjoint paths. Even
special cases of EDP correspond to non-trivial optimization problems,
and the problem becomes NP-hard in very restricted settings. In this
talk, we will survey some recent progress on understanding the
approximability threshold of EDP and its variants. While the recent
developments have essentially resolved the approximability of EDP and
related problems in directed graphs, the status of the undirected case
remains wide open. We will describe a promising line of work for getting much
improved algorithms for undirected EDP when some congestion is allowed.
In particular, we will highlight a conjecture whose resolution is
strongly tied to the approximability of the undirected case, and
describe some results that lend support to this conjecture.
|
Location: Sage 3510
Time: 4:00 p.m.
|
| Epidemiology and Graph
Cuts by Elliot Anshelevich |
04/30/08 |
| The connection between
epidemiology and graph cuts leads to many new problems that are both
important and interesting. Specifically, we consider the question of
immunizing a few individuals in a social network in order to stop or
slow the process of an epidemic. We give a short overview of known
results in this area, and point out fundamental problems that are still
open. For example, given a graph, a set of infected nodes, and
transmission probabilities on edges, no good algorithms currently exist
for finding the nodes to "immunize" in order to save as many lives as
possible. |
|
| The Spectral Gap of a
Graph by Paul Horn |
05/07/08 |
| The spectrum of a graph
gives great insight into many important graph properties, among them
discrepancy, expansion, and the mixing time of random walks. An
advantage of looking at the spectrum is that it can, in fact, be
computed efficiently and gives us a way of guaranteeing expansion
(which in general is NP-hard to compute). In this talk, we discuss
various graph spectra and their uses, then focus on the normalized
Laplacian, whose spectrum captures information about graphs
which have arbitrary degree sequences. We establish bounds showing that
the spectral gap of a random subgraph of a graph is close to that of
the original graph. We then turn to the question of how 'good' the
spectral gap of a graph can be, giving an answer which generalizes a
classical theorem of Alon and Boppana for regular graphs to graphs with
an arbitrary degree sequence. Some of this work is joint work with Fan
Chung. |
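The normalized Laplacian at the center of the talk is L = I − D^{−1/2} A D^{−1/2}; its smallest nonzero eigenvalue is the spectral gap. A small sketch computing this spectrum (the 4-cycle example is ours, for illustration):

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2}, sorted ascending.
    This normalization makes the spectrum meaningful for graphs with
    arbitrary degree sequences; the gap is the smallest nonzero value."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5  # guard isolated vertices
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))

# 4-cycle C4: adjacency eigenvalues 2, 0, 0, -2 and degree 2 everywhere,
# so the normalized Laplacian spectrum is 0, 1, 1, 2 (spectral gap 1).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
spec = normalized_laplacian_spectrum(A)
print(np.round(spec, 6))
```

For a d-regular graph this spectrum is just an affine rescaling of the adjacency spectrum; the point of the normalized version is that it behaves well when degrees vary.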
Fall 2007 Seminar Dates:
Terminal Backup, 3D
Matching and Covering Cubic Graphs by Elliot Anshelevich
|
09/05/07
|
|
We define a problem called
Simplex Matching, and show that it is solvable in polynomial time.
While Simplex Matching is interesting in its own right as a nontrivial
extension of non-bipartite min-cost matching, its main value lies in
many (seemingly very different) problems that can be solved using our
algorithm. For example, suppose that we are given a graph with terminal
nodes, non-terminal nodes, and edge costs. Then, the Terminal Backup
problem, which consists of finding the cheapest forest connecting every
terminal to at least one other terminal, is reducible to Simplex
Matching. Simplex Matching is also useful for various tasks that
involve forming groups of at least 2 members, such as project
assignment and variants of facility location. In an instance of Simplex
Matching, we are given a hypergraph H with edge costs, and edge size at
most 3. We show how to find the min-cost perfect matching of H
efficiently, if the edge costs obey a simple and realistic inequality
that we call the Simplex Condition. The algorithm we provide is
relatively simple to understand and implement, but difficult to prove
correct. In the process of this proof we show some powerful new results
about covering cubic graphs with simple combinatorial objects.
|
|
Optimal Oblivious Routing Using Metric Embeddings by Malik Magdon-Ismail
|
09/12/07
|
|
In oblivious routing the
packet paths must be constructed independently of each other. Such
algorithms are distributed and scalable. We give a simple oblivious
routing algorithm for geometric networks in which the nodes are
embedded in the Euclidean plane. In our algorithm, a packet path is
constructed by first choosing a random intermediate node in the space
between the source and destination, and then the packet is sent to its
destination through the intermediate node. We analyze the performance
of the algorithm in terms of the stretch and congestion of the
resulting paths. The challenge is to obtain constant stretch, and near
optimal congestion. We show that for well-embedded networks, this is
possible, giving the first result to control both stretch and
congestion in a general class of networks. We give applications of our
general result to the mesh topology and uniformly distributed disc
graphs. Previous oblivious routing algorithms with near optimal
congestion use many intermediate nodes and do not control the stretch.
|
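The routing rule itself — pick a random intermediate point between source and destination, then route through it — can be sketched geometrically. This toy works in the continuous plane and samples the intermediate point uniformly from the disk whose diameter is the source–destination segment (an assumption for illustration; the paper's distribution and region may differ, and on a real network each leg would follow a shortest path through nearby nodes).

```python
import math
import random

def oblivious_route(src, dst, rng):
    """Route src -> mid -> dst through a random intermediate point in the
    disk with diameter segment src-dst (rejection sampling)."""
    cx, cy = (src[0] + dst[0]) / 2, (src[1] + dst[1]) / 2
    radius = math.dist(src, dst) / 2
    while True:
        x = cx + rng.uniform(-radius, radius)
        y = cy + rng.uniform(-radius, radius)
        if math.hypot(x - cx, y - cy) <= radius:
            return (src, (x, y), dst)

def stretch(path):
    """Hop length of the two-leg path divided by the direct distance."""
    hops = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return hops / math.dist(path[0], path[-1])

rng = random.Random(0)
paths = [oblivious_route((0.0, 0.0), (10.0, 0.0), rng) for _ in range(1000)]
print(max(stretch(p) for p in paths))
```

For this disk the stretch is at most √2 ≈ 1.414 (by Thales' theorem the two legs meet at a right angle when the midpoint lies on the boundary), which illustrates how the choice of sampling region controls stretch while the randomness spreads congestion.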
|
|
Locality model for the
evolution of social networks by Konstantin Mertsalov
|
09/19/07
|
We propose a model of a
dynamic social network based on the principle of local
communication: participants of the network
communicate strictly with the members of their local area. We
study different ways of selecting the local area of a participant
and different rules for link creation within the area, and propose a
final model based on these. We also propose a testing
methodology for the dynamic model, and present the results of
applying the model to a simulation of communication in the
Blogosphere.
From shortest paths to
quasi-concave minimization by Evdokia Nikolova
|
10/03/07
|
In this talk I will walk
through a journey of research that started with a simple application
in mind: finding the shortest path in an uncertain environment
(or: how to get to the airport on time). In particular, our goal was to
maximize the probability that the path length does not exceed a given
threshold value (deadline). This stochastic shortest path problem is
one of the few that have considered a nonlinear objective function--due to
the difficulty of finding closed-form expressions for the objective, in
addition to the challenge of optimizing the objective, which falls in
the category of non-convex optimization. We give a surprising exact
$n^{\Theta(\log n)}$ algorithm for the case of independent normally
distributed edge lengths, which is based on quasi-concave minimization.
Motivated by the stochastic shortest path problem, we further embark on
giving new tools for quasi-concave minimization--a problem of
considerable difficulty, whose solutions are usually exponential, and
typically heuristics with unknown approximation guarantees are employed.
To that effect, we provide the first smoothed analysis of quasi-concave
minimization. The analysis is based on a smoothed bound for the
number of extreme points of the projection of the feasible polytope
onto a $k$-dimensional subspace. The bound we derive can be used
to design a polynomial-time approximation algorithm for low-rank
quasi-concave minimization, as is the case of the shortest path problem
above. We supplement the positive smoothed bound with a hardness result
that general quasi-concave minimization is hard to approximate. The
latter implies that our bound is tight, in that no polynomial bound is
possible for general quasi-concave minimization. |
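The nonlinear objective in the motivating problem is easy to state: with independent normally distributed edge lengths, a path's on-time probability is Φ((t − μ)/σ), where μ and σ² are the sums of the edge means and variances. A brute-force toy (hypothetical two-path instance, not the talk's n^{Θ(log n)} algorithm) shows why mean-minimizing is not enough:

```python
import math

def on_time_prob(mu, var, deadline):
    """P[N(mu, var) <= deadline] via the standard normal CDF."""
    z = (deadline - mu) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def best_path(paths, deadline):
    """Each path is a list of (mean, variance) edges; pick the path
    maximizing the probability of arriving by the deadline."""
    def prob(path):
        mu = sum(m for m, _ in path)
        var = sum(v for _, v in path)
        return on_time_prob(mu, var, deadline)
    return max(paths, key=prob)

highway = [(4.0, 0.5), (5.0, 0.5)]   # longer mean (9), low variance (1)
shortcut = [(4.0, 4.5), (4.0, 4.5)]  # shorter mean (8), high variance (9)
print(best_path([highway, shortcut], deadline=10.0) is highway)  # → True
```

With deadline 10 the reliable highway wins (Φ(1) ≈ 0.84 vs Φ(2/3) ≈ 0.75), but tightening the deadline to 8.5 flips the choice to the risky shortcut — the optimum is not monotone in the mean, which is exactly what pushes the problem into non-convex, quasi-concave territory.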
|
| Optimal Oblivious Routing Using Metric Embeddings by Malik
Magdon-Ismail (cont.) |
10/17/07 |
| In oblivious routing the
packet paths must be constructed independently of each other. Such
algorithms are distributed and scalable. We give a simple oblivious
routing algorithm for geometric networks in which the nodes are
embedded in the Euclidean plane. In our algorithm, a packet path is
constructed by first choosing a random intermediate node in the space
between the source and destination, and then the packet is sent to its
destination through the intermediate node. We analyze the performance
of the algorithm in terms of the stretch and congestion of the
resulting paths. The challenge is to obtain constant stretch, and near
optimal congestion. We show that for well-embedded networks, this is
possible, giving the first result to control both stretch and
congestion in a general class of networks. We give applications of our
general result to the mesh topology and uniformly distributed disc
graphs. Previous oblivious routing algorithms with near optimal
congestion use many intermediate nodes and do not control the stretch. |
|
| Condorcet Winner Probabilities and Voting Procedure by Mukkai
Krishnamoorthy |
10/31/07 |
| The Condorcet voting scheme
chooses as the winner a candidate who defeats all others under
pairwise majority rule. We provide a review that includes a rigorous
mathematical treatment for calculating the limiting probability of a
Condorcet winner for any number of candidates, any number of voters
$n$ (odd or even), and arbitrary rank-order probabilities, when the
voters are independent. We provide a compact and complete table for the limiting
probability of a Condorcet winner with three candidates and arbitrary
rank order probabilities. We present a simple proof of a result of May
to show the limiting probability of a Condorcet winner tends to zero as
the number of candidates tends to infinity. We show for the first time
that the limiting probability of a Condorcet winner for any given
number of candidates $m$ is monotone decreasing in $m$ for the equally
likely case. This, in turn, settles the conjectures of Kelly and
Buckley and Westen for the case $n \to \infty$. We prove the validity of
Gillett's conjecture on the minimum value of the probability of a
Condorcet winner for $m=3$ and any $n$. We generalize this result for
any $m$ and $n$ and obtain the minimum solution and the minimum
probability of a Condorcet winner. |
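The pairwise-majority rule at the heart of the Condorcet scheme is easy to state in code. Below is a minimal sketch (the function name is hypothetical and ties count as non-wins); it also exhibits the Condorcet paradox, where no winner exists.

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every other candidate in pairwise
    majority, or None when no such candidate exists (Condorcet paradox).
    Each ballot ranks all candidates, best first."""
    candidates = ballots[0]  # every ballot lists the same candidates

    def beats(a, b):
        # A strict majority of ballots place a above b.
        wins = sum(1 for r in ballots if r.index(a) < r.index(b))
        return 2 * wins > len(ballots)

    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None
```

For example, the cyclic profile `a>b>c`, `b>c>a`, `c>a>b` has no Condorcet winner, which is the kind of event whose limiting probability the talk quantifies.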
| Clusters in a
multigraph with elevated density by Mark Goldberg |
11/07/07 |
| We will discuss the
edge-coloring problem for multigraphs and present an old conjecture
connecting the chromatic index $\chi'$ of a
multigraph with its density $\Gamma$ and maximum vertex degree
$\Delta$. We show that
(a) two well-known lower
bounds for the chromatic index of a multigraph are identical;
(b) for multigraphs with $\Gamma > \Delta + 1$, the size of any
cluster (a subset of vertices with an integrally
maximal density) is bounded from above by a function that depends
on $\Delta$ and $\Gamma$ only;
(c) for every multigraph with $\Gamma > \Delta$, the collection of
minimal clusters is cycle-free.
|
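For small multigraphs, the density can be brute-forced directly. The sketch below assumes the common definition $\Gamma = \max \lceil 2|E(S)|/(|S|-1) \rceil$ over odd vertex subsets $S$ with $|S| \ge 3$; the abstract does not spell out its definition of density, so treat this as an illustration only.

```python
from itertools import combinations
from math import ceil

def density(vertices, edges):
    """Brute-force Gamma = max over odd subsets S, |S| >= 3, of
    ceil(2*|E(S)| / (|S| - 1)), where E(S) counts multi-edges with both
    endpoints in S. Exponential in |V|; for illustration only."""
    best = 0
    for k in range(3, len(vertices) + 1, 2):  # odd subset sizes only
        for S in combinations(vertices, k):
            inside = sum(1 for u, v in edges if u in S and v in S)
            best = max(best, ceil(2 * inside / (k - 1)))
    return best
```

On a triangle with every edge doubled, $\Delta = 4$ while the density is 6, so the density lower bound on $\chi'$ can strictly exceed the degree bound.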
|
| The Border Gateway
Protocol (BGP) and Generalizations by Gordon Wilfong |
11/15/07 |
| The Border Gateway Protocol
(BGP) is the interdomain routing protocol used by routers to exchange
routing information in the Internet. While intradomain protocols such
as RIP can be viewed as distributed algorithms for computing shortest
paths, BGP is actually trying to solve a problem we call the Stable
Paths Problem (SPP). Most of the talk will be in terms of SPP to allow
us to mask the complex details of how BGP operates. We will discuss a
number of decision questions concerning BGP convergence and show that
they are NP-hard. Then we'll consider modifications to the protocol and
configuration constraints that guarantee convergence. A fractional
multi-path version of SPP is discussed and we show that this problem
always has a stable solution. This leads to open problems such as how
to design a protocol (i.e., a distributed algorithm) to achieve such a
fractional stable solution. Some optimization questions arising from
multi-path routing will also be discussed. |
Thursday, November
15, 4:00 p.m.
Location: AE 214
|
| Price of Stability of
Survivable Connection Game by Bugra Caskurlu |
11/28/07 |
| We study the survivable
version of the game theoretic network formation model known as the
Connection Game, originally introduced in STOC'03 by Anshelevich,
Dasgupta, Tardos and Wexler. In this model, players attempt to connect
certain pairs of nodes in a network by purchasing edges, and sharing
their costs with other players. We introduce the survivable version of
this game, where each player desires two edge-disjoint
connections between her pair of nodes, instead of just a single
connecting path. For the single-source version of this game, if a
player is only allowed to deviate by changing the payments on one of
her connection paths at a time, instead of all of them at once, we
prove that the price of stability is 1, i.e., there always
exists a stable solution that is as good as the centralized optimum.
This can model the case where a player wishes to keep most of her paths
the same as before the deviation, the case where a complex deviation
involving re-routing of all of a player's paths is too costly,
or the case where each path of a single player is managed by a
different entity, which is quite plausible when a player represents
a large company. For the general case, where players are allowed to
change the payments on all their paths at once, we show that there
always exists a 2-approximate Nash equilibrium that is as good
as the centralized optimum, and provide lower bounds for the price of
stability. |
|
| The Rank
Aggregation
Problem by David Williamson |
12/05/07 |
| The rank aggregation problem
was introduced by Dwork, Kumar, Naor, and Sivakumar (WWW 2001) in the
context of finding good rankings of web pages by drawing on multiple
input rankings from various search engines. The work draws on earlier
work in the social choice literature on combining voter preferences
specified by ranked alternatives. Dwork et al. propose finding a
ranking that minimizes the sum of its Kendall distances to the input
rankings; this can be viewed as a weighted feedback arc set problem. I
will give an overview of the rank aggregation problem and some of its
applications, including an application to intranet search (WWW 2003). I
will also cover recent work done on finding good approximation
algorithms for the problem. In particular, Ailon, Charikar, and Newman
(STOC 2005) introduce a randomized ``pivoting'' algorithm for rank
aggregation and related problems; recent work has extended this to
deterministic pivoting algorithms for constrained versions of the
problem (van Zuylen, Hegde, Jain and Williamson, SODA 2007, van Zuylen
and Williamson 2007) and has yielded a polynomial-time approximation
scheme for the problem (Kenyon-Mathieu and Schudy, STOC 2007). If time
permits, I will discuss some extensions of this work to finding good
clusterings and hierarchical clusterings. |
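Both ingredients above, the Kendall distance and the randomized pivoting step, can be sketched compactly. This is a toy in the spirit of the Ailon-Charikar-Newman pivot algorithm, not their exact procedure; the function names and the tie-breaking rule (ties go after the pivot) are assumptions.

```python
import random
from itertools import combinations

def kendall(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] < pos1[b]) != (pos2[a] < pos2[b]))

def pivot_aggregate(candidates, rankings, rng=random):
    """Randomized pivoting sketch: pick a random pivot, place each other
    candidate before or after it by majority vote of the input rankings,
    then recurse on the two sides."""
    if len(candidates) <= 1:
        return list(candidates)
    pivot = rng.choice(candidates)
    pos = [{c: i for i, c in enumerate(r)} for r in rankings]
    left, right = [], []
    for c in candidates:
        if c == pivot:
            continue
        before = sum(1 for p in pos if p[c] < p[pivot])
        (left if 2 * before > len(rankings) else right).append(c)
    return (pivot_aggregate(left, rankings, rng) + [pivot]
            + pivot_aggregate(right, rankings, rng))
```

The recursion mirrors quicksort, with the majority comparison playing the role of the comparator; the expected approximation guarantees in the talk come from analyzing exactly this pivot step.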
Wednesday, December 5,
4:00 p.m.
Location: Ricketts 211
|
|
|
|
|