Upcoming ISE Seminars

Nature loves symmetry. Artists love symmetry. People love symmetry. Mathematicians and computer scientists also love symmetry, with the sole exception of mathematical programmers, who always want to break it. Why? Symmetry is of great help in simplifying optimization in a convex setting, but in the discrete case it can trick search algorithms because symmetric solutions are visited time and again. The usual approach to dealing with this issue is to destroy symmetry by introducing artificial constraints into the formulation, or by using a clever branching strategy during the search. We will outline a different approach, called orbital shrinking, in which additional integer variables expressing variable sums within each orbit are used to "encapsulate" model symmetry.
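As a rough illustration of the aggregation idea (the notation here is assumed, not taken from the abstract): if the symmetry group of the model partitions the variables $x_1,\dots,x_n$ into orbits $O_1,\dots,O_k$, orbital shrinking works with the aggregated integer variables
\[
  y_i \;=\; \sum_{j \in O_i} x_j, \qquad i = 1,\dots,k,
\]
so that the many symmetric copies of a solution, which differ only by permuting variables within an orbit, collapse to a single point in the shrunken $y$-space.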

Friday, September 28, 2012 - 2:30pm

One of the core technologies for analyzing protein mixtures is two-dimensional polyacrylamide gel electrophoresis (2D-gel). In order to analyze variations between the protein gels obtained from different groups, which account for biological variation, we must first eliminate distortions and properly align the images. Image alignment is recognized as a major bottleneck in proteomics. We formulate the image alignment problem as a large-scale quadratic programming problem, which can be solved in polynomial time by interior point methods. Unlike previous approaches, which are based on pairwise alignment, we use the information contained in the whole family of gels to create an ideal gel and simultaneously determine the transformations that optimally align all images to this ideal gel. The ideal landmarks and the coefficients defining the transformations are the unknowns of the quadratic programming problem. The constraints of the problem ensure smoothness and consistency. Numerical results illustrate the effectiveness of this approach.
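A minimal sketch of a formulation of this kind (the notation and the choice of affine warps are illustrative assumptions, not details from the abstract): if landmark $k$ is observed at position $p_{g,k}$ in gel $g$, one may seek ideal landmark positions $z_k$ and per-gel transformations $T_g$ solving
\[
  \min_{\{T_g\},\,\{z_k\}} \;\; \sum_{g}\sum_{k} \bigl\| T_g(p_{g,k}) - z_k \bigr\|^2 \;+\; \lambda \sum_g R(T_g),
\]
where $R$ is a smoothness penalty and each $T_g$ (e.g. an affine map) is linear in its unknown coefficients, so the objective is a convex quadratic in all unknowns and linear constraints can express the consistency requirements.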

Friday, October 26, 2012 - 2:30pm

How should the Centers for Disease Control and Prevention revise national immunization recommendations so that gaps in vaccination coverage will be filled in a cost-effective manner? What is the most cost-effective way to use limited HIV prevention and treatment resources? To what extent should local communities stockpile antibiotics for response to a potential bioterror attack? This talk will describe examples from past and ongoing model-based analyses of public health policy questions. We also provide perspectives on key elements of a successful policy analysis and discuss ways in which such analysis can influence policy.

Wednesday, October 31, 2012 - 4:00pm

The software systems commonly used to solve linear and integer programming problems today make use of floating-point computation, and the inexactness of these computations can lead to errors in the returned results. Although such numerical errors are sometimes tolerable, there are situations when exact results are necessary. This talk will describe a variety of methods for computing exact solutions to LPs and MIPs over the rational numbers. A straightforward approach to obtaining exact results would be to replace the floating-point operations in current software entirely by exact rational arithmetic; unfortunately, this approach can increase the running times dramatically, motivating the development of more sophisticated techniques. As an alternative, many recently developed methods use a hybrid approach, combining numeric and exact computation.
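A tiny generic illustration of the underlying issue and of the exact alternative (not code from the talk): binary floating point already rounds simple decimal data on input, while rational arithmetic is exact but more expensive per operation.

```python
from fractions import Fraction

# Floating point: 0.1, 0.2 and 0.3 are rounded to nearby binary fractions.
print(0.1 + 0.2 == 0.3)                                       # False

# Exact rational arithmetic: no rounding, at a higher cost per operation.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```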

The key observation is that, although numerically obtained results are not necessarily correct, one can often extract useful information that can be used to reduce the number of exact computations necessary to arrive at an exact solution. We will start by describing an iterative refinement procedure for computing extended-precision and exact solutions to LPs. Arbitrarily precise solutions are computed by solving a sequence of closely related LPs with limited-precision arithmetic. Exact arithmetic is used only for some inexpensive bookkeeping operations and to reconstruct the exact rational solution, either by rounding an approximate solution to a suitable rational representation or by recomputing the corresponding basic solution exactly. Next we discuss the use of hybrid methods for exact MIPs. In particular, within a branch-and-bound process, we compare different strategies for computing valid LP dual bounds. Extensive computational results will be presented showing that exact solutions can often be obtained without a large overhead compared to inexact methods. (This is joint work with Bill Cook, Ambros Gleixner, Thorsten Koch, and Kati Wolter.)
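The following is a minimal sketch of the flavor of this procedure, applied for brevity to a square rational linear system rather than to an LP; the data and tolerances are illustrative assumptions. Limited-precision solves do the heavy lifting, while exact rational arithmetic handles only the residual bookkeeping and the final reconstruction by rounding.

```python
from fractions import Fraction
import numpy as np

A = [[Fraction(3), Fraction(1)], [Fraction(1), Fraction(2)]]   # exact rational data
b = [Fraction(1), Fraction(1)]
A_float = np.array([[float(a) for a in row] for row in A])

x = [Fraction(0), Fraction(0)]                                 # exact iterate
for _ in range(20):
    # Exact residual r = b - A x (inexpensive rational bookkeeping).
    r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    if all(ri == 0 for ri in r):
        break                                                  # exactly solved
    # Limited-precision correction step: solve A d ~ r in floating point.
    d = np.linalg.solve(A_float, np.array([float(ri) for ri in r]))
    # Round the correction to nearby rationals and update the exact iterate.
    x = [x[i] + Fraction(float(d[i])).limit_denominator(10**6) for i in range(2)]

print(x)   # [Fraction(1, 5), Fraction(2, 5)], certified by the exact zero residual
```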

Wednesday, November 7, 2012 - 4:00pm

Set functions, i.e., real-valued mappings from the family of subsets of a finite set to the reals, have been known and widely used in discrete mathematics for almost a century, and in particular over the last 50 years. If we identify each subset with its characteristic vector, then the same set function can be interpreted as a mapping from the set of binary vectors to the reals. Such mappings are called pseudo-Boolean functions and were introduced in the works of Peter L. Hammer in the 1960s. Pseudo-Boolean functions differ from set functions only in the sense that their algebraic representation, a multilinear polynomial expression, is usually assumed to be available as the input representation.
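A small generic example of this input representation (the particular polynomial is an illustrative assumption): every binary vector is the characteristic vector of a subset, so the multilinear polynomial below defines both a pseudo-Boolean function and the corresponding set function.

```python
from itertools import product

# A toy pseudo-Boolean function given by its multilinear polynomial.
def f(x1, x2, x3):
    return 2 - x1 + 3 * x1 * x2 - x2 * x3

# Each binary vector is the characteristic vector of a subset of {1, 2, 3}.
for x in product((0, 1), repeat=3):
    subset = {i + 1 for i, xi in enumerate(x) if xi}
    print(x, subset, f(*x))
```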

The problem of minimizing a pseudo-Boolean function (over the set of binary vectors) turns out to be the common form of numerous optimization problems, including the well-known MAX-SAT and MAX-CUT problems, and has applications in areas ranging from physics through chip design to computer vision. Some of these applications lead to the minimization of a quadratic pseudo-Boolean function, and hence such quadratic binary optimization problems have received ample attention in the past decades. One of the most frequently used techniques is based on roof duality and aims at finding, in polynomial time, a simpler form of the given quadratic minimization problem by fixing some of the variables at their provably optimum values (persistency) and decomposing the residual problem into variable-disjoint smaller subproblems. The method was in fact found to be very effective in computer vision problems, where it can frequently fix up to 80-90% of the variables at their provably optimum values. This algorithm has been used by computer vision experts, and a very efficient implementation, called QPBO, is freely downloadable.
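To make the notion of persistency concrete, here is a toy quadratic example (the function is an illustrative assumption, and the enumeration below is exactly what roof duality avoids): one variable takes the same value in every minimizer, so it can safely be fixed.

```python
from itertools import product

# Toy quadratic pseudo-Boolean function: f(x) = 1 - 2*x1 - x2 - x3 + x2*x3.
def f(x1, x2, x3):
    return 1 - 2 * x1 - x2 - x3 + x2 * x3

points = list(product((0, 1), repeat=3))
best = min(f(*p) for p in points)
minimizers = [p for p in points if f(*p) == best]
print(best, minimizers)               # -2  [(1, 0, 1), (1, 1, 0), (1, 1, 1)]

# x1 = 1 in every minimizer; roof duality aims to certify such persistent values
# in polynomial time, without the brute-force enumeration done here.
print(all(p[0] == 1 for p in minimizers))   # True
```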

In many applications of pseudo-Boolean optimization, in particular in vision problems, the objective function is a higher-degree multilinear polynomial. For such problems there are substantially fewer effective techniques available. In particular, there is no analogue of the persistencies (fixing variables at their provably optimum values) provided by roof duality in the quadratic case. On the other hand, more and more applications demand efficient methods for the minimization of such higher-degree pseudo-Boolean functions. This increased interest, in particular in the computer vision community, has led to a systematic study of methods for reducing a higher-degree minimization problem to a quadratic one. We report on the most recent techniques and their computational success. This is joint research with Alex Fix, Ramin Zabih (Cornell), and Aritanan Gruber (Rutgers).
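As one classical example of such a reduction (a generic textbook substitution, not necessarily among the newest techniques reported in the talk), a cubic term can be quadratized by introducing an auxiliary variable y intended to equal x1*x2, together with a penalty that vanishes exactly when it does:

```python
from itertools import product

M = 10  # any sufficiently large penalty constant

def cubic(x1, x2, x3):
    return x1 * x2 * x3

def quadratized(x1, x2, x3, y):
    # Penalty is 0 if y == x1*x2 and strictly positive otherwise.
    penalty = x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y
    return y * x3 + M * penalty

for x in product((0, 1), repeat=3):
    assert cubic(*x) == min(quadratized(*x, y) for y in (0, 1))
print("minimizing over y recovers the cubic term at every binary point")
```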

Friday, November 16, 2012 - 2:30pm

We will explore the development of efficient batch optimization algorithms for solving large-scale statistical learning applications, particularly those that can be formulated as nonlinear programming problems. We first investigate smooth, unconstrained problems, with applications in speech recognition. To reduce the computational cost of the optimization process, we will introduce two effective strategies: stochastic Hessian information and dynamic gradient sampling. We will then focus on developing second-order methods for non-smooth optimization problems, specifically on devising a semi-smooth Newton framework that can be used to generate several popular methods for machine learning.
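The following is a hedged generic sketch of one way stochastic Hessian information can be used (L2-regularized logistic regression; all data, parameters, and the subsample size are illustrative assumptions, and this is not the speaker's code): the gradient uses the full dataset, while the Hessian-vector products inside a conjugate-gradient solve of the Newton system use only a small random subsample.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 2000, 10, 1e-2
X = rng.standard_normal((n, d))
y = (X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient(w):                        # full-data gradient
    return X.T @ (sigmoid(X @ w) - y) / n + lam * w

def hess_vec(w, v, sample):             # Hessian-vector product on a subsample
    Xs = X[sample]
    p = sigmoid(Xs @ w)
    return Xs.T @ ((p * (1 - p)) * (Xs @ v)) / len(sample) + lam * v

def cg(apply_H, g, iters=10):           # approximately solve H s = -g
    s, r = np.zeros_like(g), -g.copy()
    p = r.copy()
    for _ in range(iters):
        if r @ r < 1e-12:
            break
        Hp = apply_H(p)
        alpha = (r @ r) / (p @ Hp)
        s, r_new = s + alpha * p, r - alpha * Hp
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return s

w = np.zeros(d)
for it in range(10):
    sample = rng.choice(n, size=200, replace=False)   # small Hessian subsample
    w += cg(lambda v: hess_vec(w, v, sample), gradient(w))
    print(it, np.linalg.norm(gradient(w)))            # gradient norm shrinks
```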

Monday, November 19, 2012 - 2:00pm

A symmetric matrix is called copositive if it generates a quadratic form taking no negative values over the positive orthant. In contrast to positive semidefiniteness, checking copositivity is NP-hard. In a copositive optimization problem, we have to minimize a linear function of a symmetric matrix over the copositive cone subject to linear constraints. This convex program has no non-global local solutions. On the other hand, there are several hard non-convex programs which can be formulated as copositive programs, among them mixed-binary QPs and standard QPs. Applications range from machine learning and optimization under uncertainty to several combinatorial optimization problems, including the maximum-clique and maximum-cut problems.
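In standard notation (assumed here, not quoted from the abstract), the cone and the problem read
\[
  \mathcal{COP}_n \;=\; \{ X \in \mathcal{S}^n : v^\top X v \ge 0 \ \text{for all } v \ge 0 \},
\]
\[
  \min_{X} \; \langle C, X \rangle \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i \;\; (i = 1,\dots,m), \qquad X \in \mathcal{COP}_n .
\]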

The dual conic program, unlike in the more popular SDP case, involves a different matrix cone, that of completely positive matrices (those which admit a symmetric factorization X = BB^T with a possibly rectangular factor B having no negative entries). This conic optimization technique shifts complexity from global optimization towards sheer feasibility questions. Therefore it is of central importance to devise positive and negative certificates of copositivity and/or complete positivity.
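In the same assumed notation, the cone appearing in the dual is the completely positive cone
\[
  \mathcal{CP}_n \;=\; \{ B B^\top : B \ \text{entrywise nonnegative, possibly rectangular} \} \;=\; \operatorname{conv}\{ v v^\top : v \ge 0 \},
\]
which is dual to $\mathcal{COP}_n$ under the trace inner product.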

Thursday, November 8, 2012 - 4:00pm

We study the design of reliably connected networks. Given a graph with arcs that may fail at random, the goal is to select a minimum cost set of arcs such that a connectivity requirement is met with high probability. We first compare this model with a well-known deterministic model of reliable network design: survivable network design. We demonstrate that, if distributional information on arc failures is known, the chance constraint model can yield a significantly richer set of solutions on the efficient frontier of reliability and cost. We then present two solution approaches for our model, which we formulate as a chance-constrained stochastic integer program. The first approach is based on a formulation that uses binary variables to determine if the connectivity requirement is satisfied in each arc failure scenario, and enforces the connectivity requirement in selected scenarios using scenario-based graph cuts. We derive additional classes of valid inequalities for this formulation and study their facet-inducing properties. The second formulation is based on the idea of probabilistic graph cuts, which is an extension of graph cuts to graphs with random arc failures. Inequalities defined by such cuts are sufficient to define the set of feasible solutions and can be separated efficiently at integer solutions, allowing this formulation to be solved by a branch-and-cut algorithm. Computational results will be presented.
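In outline (the notation below is assumed, not taken from the abstract), the model has the form
\[
  \min_{x \in \{0,1\}^{|A|}} \; c^\top x \quad \text{s.t.} \quad \mathbb{P}\bigl(\text{the surviving selected arcs meet the connectivity requirement}\bigr) \;\ge\; 1 - \epsilon,
\]
and in the scenario-based formulation a binary variable $z_k$ indicates whether the requirement may be violated in failure scenario $k$; scenario-based graph cuts enforce connectivity whenever $z_k = 0$, while a knapsack-type constraint such as $\sum_k p_k z_k \le \epsilon$ keeps the total probability of violated scenarios within the allowed risk.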

This is joint work with Yongjia Song.

Wednesday, December 5, 2012 - 4:00pm

The role of hospital bed management staff and processes has gained increased attention in recent years due to the impact of bed management practices on hospital performance metrics, including average boarding time, patient safety, overflow rate, and patient diversions. One of the key tasks of the bed manager is to balance the available capacity with competing requests for beds, accounting for the unique requirements and clinical characteristics of the patients, as well as expectations about future demand and patient discharges. Due to the constantly changing system state, the process of matching patients with beds resembles a dynamic assignment problem. We identify structural properties of the bed assignment process by modeling the hospital system as a tandem queueing network with multiple customer classes and cross-trained server pools. Utilizing these properties, we develop new algorithms for making dynamic assignment decisions and use simulation to test their performance against current hospital bed assignment practices.
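As a deliberately tiny toy sketch (every number, the two patient classes, and the greedy rule below are invented for illustration; this is not the speaker's queueing model or algorithm), the flavor of dynamic assignment with cross-trained bed pools can be seen in a few lines of simulation:

```python
import random

random.seed(1)
CAPACITY = {"specialized": 5, "flexible": 5}
COMPATIBLE = {"A": ["specialized", "flexible"], "B": ["flexible"]}

occupied = {"specialized": 0, "flexible": 0}
queue = []            # patients waiting for a bed, by class
boarding = 0          # total patient-periods spent waiting

for t in range(10000):
    # Discharges: each occupied bed empties with probability 0.1 this period.
    for pool in occupied:
        occupied[pool] -= sum(random.random() < 0.1 for _ in range(occupied[pool]))
    # Arrivals of the two patient classes.
    if random.random() < 0.4:
        queue.append("A")
    if random.random() < 0.3:
        queue.append("B")
    # Greedy dynamic assignment: try compatible pools, most specialized first.
    still_waiting = []
    for cls in queue:
        for pool in COMPATIBLE[cls]:
            if occupied[pool] < CAPACITY[pool]:
                occupied[pool] += 1
                break
        else:
            still_waiting.append(cls)
    queue = still_waiting
    boarding += len(queue)

print("average number of boarding patients per period:", boarding / 10000)
```

Richer policies would weigh expected discharges and future demand before placing a class-A patient in a flexible bed; comparing such rules against a greedy baseline is the kind of question the simulation study addresses.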

Wednesday, September 18, 2013 - 4:00pm

The first part of this talk reviews some modern randomized linear algebra techniques. The goal of these methods is to perform approximate matrix multiplication or matrix factorizations (e.g., SVD) with lower computational cost than conventional methods. We then discuss using these methods inside optimization algorithms. The two main questions are (1) is the randomized approach faster, and (2) does the error from the randomized linear algebra affect the optimization algorithms? There are powerful recent results that bound the error of the linear algebra, but they are rarely applied to obtain rigorous guarantees for an optimization algorithm. We also touch on stochastic gradient descent, and apply all of these topics to some specific matrix recovery problems, such as quantum state tomography and matrix completion of the Netflix dataset.
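A standard randomized SVD sketch in the spirit of the methods reviewed above (a generic textbook version with illustrative parameters, not the speaker's implementation): sample the range of the matrix with a random test matrix, then compute an exact SVD of a much smaller matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, rank, oversample=10, power_iters=2):
    m, n = A.shape
    # Sample the range of A with a Gaussian test matrix.
    Y = A @ rng.standard_normal((n, rank + oversample))
    # A few power iterations sharpen the basis when the spectrum decays slowly.
    for _ in range(power_iters):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for the sampled range
    B = Q.T @ A                       # small (rank + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Quick check on a synthetic matrix of exact rank 40.
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
U, s, Vt = randomized_svd(A, rank=40)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))   # tiny, since rank(A) = 40
```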
