Upcoming ISE Seminars

In today's interconnected world, large-scale networked systems are ubiquitous. Examples include communication networks, the electricity grid and sensor networks. In this talk, we describe two recent results related to these networked systems. In the first part, we present a fast distributed asynchronous Alternating Direction Method of Multipliers (ADMM) based method for solving general separable convex problems in large-scale systems, which can be applied to the LASSO and many other machine learning problems. We show that this method has convergence guarantees and the best known rate of convergence. In the second part, we discuss our model of competitive equilibrium in electricity markets, where price fluctuation imposes difficulties in budgeting and planning. We introduce an explicit penalty on price volatility and establish that this penalty can be implemented through the use of storage.
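As a point of reference (a minimal sketch, not the speakers' asynchronous distributed method), the standard centralized ADMM iteration for the LASSO alternates a quadratic solve, a soft-thresholding step, and a dual update; all names and parameter choices below (lasso_admm, rho, n_iter) are illustrative.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by standard (centralized) ADMM."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)    # u is the scaled dual variable
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse every iteration
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step on the l1 term
        z = soft_threshold(x + u, lam / rho)
        # dual update
        u = u + x - z
    return z
```

In the separable setting described in the abstract, such updates decompose across components of the problem, which is what makes distributed and asynchronous variants possible.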

Decision-making problems that arise in complex systems (e.g., power grids, emergency logistics, communications networks and supply chains) invariably involve uncertainty and risk. These problems are further complicated by the combinatorial nature of the decisions involved. First, we consider multi-stage linear optimization problems under reliability or quality-of-service considerations, which we model as chance-constrained linear programs. At the first stage, decisions must be made when some parameters are uncertain. As the uncertainty unfolds, recourse decisions are made at later stages to mitigate the risk. Using an example from inventory control, we demonstrate that significant cost savings are observed at desirable service levels when the decisions are adaptive to the realizations of uncertain data. We propose branch-and-cut and decomposition algorithms to solve the resulting large-scale mixed-integer programs. Next, we address the additional challenges that arise in two-stage stochastic programs with discrete variables at both stages. We give finitely convergent decomposition algorithms based on the convexification of the second-stage integer programs. Our computational experiments show that our methods scale well with an increasing number of scenarios.
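For reference, a generic chance-constrained linear program of the kind described above can be written (in our notation, not necessarily the speaker's) as

\[
\min_{x \in X} \; c^\top x \qquad \text{subject to} \qquad \mathbb{P}\bigl\{ A(\xi)\, x \ge b(\xi) \bigr\} \;\ge\; 1 - \varepsilon ,
\]

where \(\xi\) denotes the uncertain data and \(\varepsilon\) the tolerated risk. With finitely many scenarios, the probabilistic constraint is typically reformulated with binary scenario-selection variables, which is one source of the large-scale mixed-integer programs mentioned above.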

Massive graph datasets are used operationally by providers of internet, social network and search services. Sampling can reduce storage requirements as well as query execution times, while prolonging the useful life of the data for baselining and retrospective analysis. Sampling must mediate between the characteristics of the data, the available resources, and the accuracy needs of queries. This talk concerns a cost-based formulation to express these opposing priorities, and how this formulation leads to optimal sampling schemes without prior statistical assumptions. The talk concludes with a discussion of open technical problems and potential applications of the methods beyond the Internet.
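The cost-based designs discussed in the talk are not reproduced here, but the sketch below shows the standard building block such schemes rest on: sample each record with a known inclusion probability and reweight by its inverse (Horvitz-Thompson), so that subset-sum queries over the sample remain unbiased. All names, thresholds, and the toy edge data are illustrative.

```python
import random

def sample_with_weights(records, inclusion_prob):
    """Keep record i independently with probability p_i and attach the
    Horvitz-Thompson weight 1/p_i."""
    sample = []
    for rec in records:
        p = inclusion_prob(rec)
        if random.random() < p:
            sample.append((rec, 1.0 / p))
    return sample

def estimate_sum(sample, value, predicate=lambda rec: True):
    """Unbiased estimate of sum(value(rec)) over all records matching predicate."""
    return sum(w * value(rec) for rec, w in sample if predicate(rec))

# Toy example: estimate total traffic on a graph's edges from a sample that
# keeps heavy edges with higher probability (capped at 1).
edges = [{"src": i, "dst": i + 1, "bytes": (i % 7 + 1) * 100} for i in range(10000)]
tau = 500.0  # illustrative size threshold
sample = sample_with_weights(edges, lambda e: min(1.0, e["bytes"] / tau))
print(estimate_sum(sample, lambda e: e["bytes"]))  # close to the true total in expectation
```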

Nature loves symmetry. Artists love symmetry. People love symmetry. Mathematicians and computer scientists also love symmetry, with the sole exception of mathematical programmers, who always want to break it. Why? Symmetry is of great help in simplifying optimization in a convex setting, but in the discrete case it can trick search algorithms because symmetric solutions are visited time and again. The usual approach to dealing with this issue is to destroy symmetry by introducing artificial constraints in the formulation, or by using a clever branching strategy during the search. We will outline a different approach, called orbital shrinking, where additional integer variables expressing variable sums within each orbit are used to "encapsulate" model symmetry.
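In symbols (our notation): if the symmetry group of the model partitions the variables into orbits \(\omega_1, \dots, \omega_k\), orbital shrinking introduces, for each orbit \(\omega\), an aggregate integer variable

\[
y_\omega \;=\; \sum_{i \in \omega} x_i , \qquad y_\omega \in \mathbb{Z},
\]

and works with a shrunken relaxation stated in the y variables, so that solutions differing only by a permutation of variables within an orbit collapse to a single point.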

Friday, September 28, 2012 - 2:30pm

One of the core technologies for separating protein mixtures is two-dimensional polyacrylamide gel electrophoresis (2D-gel). In order to analyze variations among protein gels obtained from different groups that reflect biological variation, we must first eliminate distortions and properly align the images. Image alignment is recognized as a major bottleneck in proteomics. We formulate the image alignment problem as a large-scale quadratic programming problem, which can be solved in polynomial time by interior point methods. Unlike previous approaches, which are based on pairwise alignment, we use the information contained in the whole family of gels to create an ideal gel while simultaneously determining the transformations that optimally align all images to this ideal gel. The ideal landmarks and the coefficients defining the transformations are the unknowns of the quadratic programming problem. The constraints of the problem ensure smoothness and consistency. Numerical results illustrate the effectiveness of this approach.
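Schematically (notation ours, not necessarily the authors'): with landmark positions \(z_{g,j}\) observed in gel \(g\), unknown ideal landmarks \(z_j^*\), and an unknown transformation \(T_{\theta_g}\) for each gel, the joint alignment problem has the form

\[
\min_{\{\theta_g\},\, \{z_j^*\}} \;\; \sum_{g} \sum_{j} \bigl\| T_{\theta_g}(z_{g,j}) - z_j^* \bigr\|^2
\qquad \text{subject to smoothness and consistency constraints,}
\]

which is a quadratic program whenever each \(T_{\theta_g}\) is linear in its coefficients \(\theta_g\) and the constraints are linear.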

Friday, October 26, 2012 - 2:30pm

How should the Centers for Disease Control and Prevention revise national immunization recommendations so that gaps in vaccination coverage will be filled in a cost-effective manner? What is the most cost-effective way to use limited HIV prevention and treatment resources? To what extent should local communities stockpile antibiotics for response to a potential bioterror attack? This talk will describe examples from past and ongoing model-based analyses of public health policy questions. We also provide perspectives on key elements of a successful policy analysis and discuss ways in which such analysis can influence policy.

Wednesday, October 31, 2012 - 4:00pm

The software systems commonly used to solve linear and integer programming problems today make use of floating-point computation and the inexactness of these computations can lead to errors in the returned results. Although such numerical errors are sometimes tolerable, there are situations when exact results are necessary. This talk will describe a variety of methods for computing exact solutions to LPs and MIPs over the rational numbers. A straightforward approach to obtain exact results could be to replace the floating-point operations in current software entirely by exact rational arithmetic; unfortunately, this approach can increase the running times dramatically, motivating the development of more sophisticated techniques. As an alternative, many recently developed methods use a hybrid approach, combining numeric and exact computation.

The key observation is that, although numerically obtained results are not necessarily correct, one can often extract useful information that can be used to reduce the number of exact computations necessary to arrive at an exact solution. We will start by describing an iterative refinement procedure for computing extended precision and exact solutions to LPs. Arbitrarily precise solutions are computed by solving a sequence of closely related LPs with limited precision arithmetic. Exact arithmetic is used only for some inexpensive bookkeeping operations and to reconstruct the exact rational solution, either by rounding an approximate solution to a suitable rational representation or by recomputing the corresponding basic solution exactly. Next we discuss the use of hybrid methods for exact MIPs. In particular, within a branch-and-bound process, we compare different strategies for computing valid LP dual bounds. Extensive computational results will be presented showing that exact solutions can often be obtained without a large overhead, compared to inexact methods. (Includes joint work with Bill Cook, Ambros Gleixner, Thorsten Koch and Kati Wolter)
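The following sketch is only meant to convey the flavor of iterative refinement with hybrid arithmetic; it works on a square rational linear system rather than an LP, and all names and tolerances (refine_solve, max_den) are ours.

```python
from fractions import Fraction
import numpy as np

def refine_solve(A_exact, b_exact, n_iter=10, max_den=10**12):
    """Iterative refinement for a square rational system A x = b.

    Floating-point solves supply the corrections; exact rational arithmetic
    handles only cheap bookkeeping (residuals) and the final rounding and
    verification.  (The talk applies this principle to LPs, not plain systems.)
    """
    A_float = np.array([[float(a) for a in row] for row in A_exact])
    x = [Fraction(0)] * len(b_exact)
    for _ in range(n_iter):
        # Exact residual r = b - A x.
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))
             for row, bi in zip(A_exact, b_exact)]
        rmax = max(abs(ri) for ri in r)
        if rmax == 0:
            return x                                   # already exact
        # Scale the residual into a well-represented floating-point range,
        # solve the correction system approximately, and update x exactly.
        scale = 1 / rmax
        d = np.linalg.solve(A_float, np.array([float(scale * ri) for ri in r]))
        x = [xi + Fraction(float(di)).limit_denominator(max_den) / scale
             for xi, di in zip(x, d)]
    # Round each entry to a compact rational and verify the result exactly.
    x = [xi.limit_denominator(max_den) for xi in x]
    exact = all(sum(aij * xj for aij, xj in zip(row, x)) == bi
                for row, bi in zip(A_exact, b_exact))
    return x if exact else None

# Illustrative use on a small rational system.
A = [[Fraction(3), Fraction(1, 7)], [Fraction(1, 3), Fraction(2)]]
b = [Fraction(1), Fraction(5, 11)]
print(refine_solve(A, b))  # exact rational solution, verified with exact arithmetic
```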

Wednesday, November 7, 2012 - 4:00pm

Set functions, i.e., real mappings from the family of subsets of a finite set to the reals, have been known and widely used in discrete mathematics for almost a century, and in particular in the last 50 years. If we identify each subset with its characteristic vector, then the same set function can be interpreted as a mapping from the set of binary vectors to the reals. Such mappings are called pseudo-Boolean functions and were introduced in the works of Peter L. Hammer in the 1960s. Pseudo-Boolean functions differ from set functions only in the sense that their algebraic representation, a multilinear polynomial expression, is usually assumed to be available as an input representation.
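For example (our illustration), the function on binary vectors \(x \in \{0,1\}^3\) given by

\[
f(x_1, x_2, x_3) \;=\; 3 \;-\; 2x_1 \;+\; 5\,x_2 x_3 \;-\; 4\,x_1 x_2 x_3
\]

is the multilinear polynomial representation of the set function that assigns \(f(\chi_S)\) to each subset \(S \subseteq \{1,2,3\}\): it assigns 3 to the empty set and 3 - 2 + 5 - 4 = 2 to \(\{1,2,3\}\).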

The problem of minimizing a pseudo-Boolean function (over the set of binary vectors) appears to be the common form of numerous optimization problems, including the well-known MAX-SAT and MAX-CUT problems, and has applications in areas ranging from physics through chip design to computer vision. Some of these applications lead to the minimization of a quadratic pseudo-Boolean function, and hence such quadratic binary optimization problems have received ample attention in the past decades. One of the most frequently used techniques is based on roof-duality, and aims at finding in polynomial time a simpler form of the given quadratic minimization problem, by fixing some of the variables at their provably optimum value (persistency) and decomposing the residual problem into variable-disjoint smaller subproblems. The method was in fact found to be very effective in computer vision problems, where it can frequently fix up to 80-90% of the variables at their provably optimum value. This algorithm has been used by computer vision experts, and a very efficient implementation, called QPBO, is freely downloadable.

In many applications of pseudo-Boolean optimization, in particular in vision problems, the objective function is a higher-degree multilinear polynomial. For such problems there are substantially fewer effective techniques available. In particular, there is no analogue of the persistencies (fixing variables at their provably optimum value) provided by roof-duality in the quadratic case. On the other hand, more and more applications demand efficient methods for the minimization of such higher-degree pseudo-Boolean functions. This increased interest, in particular in the computer vision community, has led to a systematic study of methods to reduce a higher-degree minimization problem to a quadratic one. We report on the most recent techniques and their computational success. This is joint research with Alex Fix, Ramin Zabih (Cornell), and Aritanan Gruber (Rutgers).
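One classical reduction of the kind mentioned above, shown here purely as an illustration (the talk covers more recent techniques), replaces a cubic monomial with negative coefficient by a quadratic expression with one auxiliary binary variable \(z\):

\[
a\, x_1 x_2 x_3 \;=\; \min_{z \in \{0,1\}} \; a\, z\,\bigl(x_1 + x_2 + x_3 - 2\bigr), \qquad a < 0 ,
\]

so minimizing the quadratized objective over the augmented variable set reproduces the value of the original higher-degree objective.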

Friday, November 16, 2012 - 2:30pm

We will explore the development of efficient batch optimization algorithms for solving large-scale statistical learning applications, particularly those that can be formulated as nonlinear programming problems. We first investigate smooth, unconstrained problems, with applications in speech recognition. To reduce the computational cost of the optimization process, we will introduce two effective strategies: stochastic Hessian information and dynamic gradient sampling. We will then focus on developing second-order methods for non-smooth optimization problems, specifically devising a semi-smooth Newton framework that can be used to generate several popular methods for machine learning.
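As a rough illustration of the first strategy only (not the speaker's implementation; the logistic-regression objective, sampling rule, and unit step below are simplifications), a batch method can combine a full gradient with Hessian-vector products formed on a random subsample inside a Newton-CG step.

```python
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss, with labels y in {-1, +1}."""
    z = y * (X @ w)
    return X.T @ (-y / (1.0 + np.exp(z))) / len(y)

def subsampled_hessian_vec(w, X, y, v, idx):
    """Hessian-vector product of the logistic loss estimated on a subsample."""
    Xs = X[idx]
    z = y[idx] * (Xs @ w)
    d = 1.0 / ((1.0 + np.exp(z)) * (1.0 + np.exp(-z)))  # sigma(z) * (1 - sigma(z))
    return Xs.T @ (d * (Xs @ v)) / len(idx)

def newton_cg_step(w, X, y, sample_size=256, cg_iters=10):
    """One step: full gradient, stochastic (subsampled) Hessian information."""
    g = logistic_grad(w, X, y)
    idx = np.random.choice(len(y), size=min(sample_size, len(y)), replace=False)
    # Plain conjugate gradient on H p = -g (no truncation safeguards here).
    p = np.zeros_like(w)
    r = -g.copy()
    d = r.copy()
    for _ in range(cg_iters):
        Hd = subsampled_hessian_vec(w, X, y, d, idx)
        alpha = (r @ r) / (d @ Hd + 1e-12)
        p += alpha * d
        r_new = r - alpha * Hd
        beta = (r_new @ r_new) / (r @ r + 1e-12)
        d = r_new + beta * d
        r = r_new
    return w + p  # a line search would normally replace this unit step
```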

Monday, November 19, 2012 - 2:00pm

A symmetric matrix is called copositive if it generates a quadratic form taking no negative values over the positive orthant. In contrast to positive semidefiniteness, checking copositivity is NP-hard. In a copositive optimization problem, we minimize a linear function of a symmetric matrix over the copositive cone subject to linear constraints. This convex program has no non-global local solutions. On the other hand, there are several hard non-convex programs which can be formulated as copositive programs, among them mixed-binary QPs and Standard QPs. Applications range from machine learning and optimization under uncertainty to several combinatorial optimization problems, including the maximum-clique problem and the maximum-cut problem.

The dual conic program, unlike the more popular SDP case, involves a different matrix cone, that of completely positive matrices (those which allow for a symmetric, possibly rectangular factorization with no negative entries). This conic optimization technique shifts complexity from global optimization towards sheer feasibility questions. Therefore it is of central importance to devise positive and negative certificates of copositivity and/or complete positivity.
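In symbols, the two matrix cones in duality here (over the symmetric n x n matrices, with the trace inner product) are

\[
\mathcal{C} \;=\; \bigl\{\, X = X^\top : v^\top X v \ge 0 \ \text{for all } v \ge 0 \,\bigr\},
\qquad
\mathcal{C}^{*} \;=\; \Bigl\{\, \textstyle\sum_k v_k v_k^\top : v_k \ge 0 \,\Bigr\} \;=\; \bigl\{\, B B^\top : B \ \text{entrywise nonnegative} \,\bigr\},
\]

so the primal program described above is a linear program over \(\mathcal{C}\), while its dual involves \(\mathcal{C}^{*}\); all of the difficulty is concentrated in testing membership in these cones.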
