For all-quadratic problems (without any linear constraints), it is well known that the semidefinite relaxation essentially coincides with the Lagrangian dual problem. Here we study a more general case in which the constraints may be either quadratic or linear. More precisely, we include explicit sign constraints on the problem variables, and study both the full Lagrangian dual and the Semi-Lagrangian relaxation. We show that the stronger Semi-Lagrangian dual bounds coincide with those resulting from copositive relaxation. In this way, we arrive at a full hierarchy of tractable conic bounds stronger than the usual Lagrangian dual (and thus than the SDP) bounds. We also specify sufficient conditions for tightness of the Semi-Lagrangian (i.e., copositive) relaxation and show that copositivity of the slack matrix guarantees global optimality for KKT points of this problem.
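For readers less familiar with the notion: a symmetric matrix M is copositive when x^T M x >= 0 for all x >= 0. In the 2x2 case this can be checked in closed form; a minimal illustrative sketch (my own example, not from the abstract):

```python
import math

def is_copositive_2x2(m11, m12, m22):
    # M = [[m11, m12], [m12, m22]] is copositive iff
    # m11 >= 0, m22 >= 0, and m12 + sqrt(m11 * m22) >= 0
    # (the classical characterization for the 2x2 case).
    if m11 < 0 or m22 < 0:
        return False
    return m12 + math.sqrt(m11 * m22) >= 0

# [[1, -1], [-1, 1]] is copositive (it is even positive semidefinite),
# while [[1, -2], [-2, 1]] is not: e.g. x = (1, 1) gives x^T M x = -2.
```

In higher dimensions no such closed form exists; deciding copositivity is co-NP-hard, which is why tractable hierarchies approximating the copositive cone matter.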
For efficient and physically reliable numerical computation of time-dependent differential equations, it is very important to find positively invariant subsets of the state space and to determine time step sizes of the numerical method that guarantee this invariance. In this talk, we present the state-of-the-art theory, and novel approaches beyond it, for finding positively invariant sets and suitable time step sizes via optimization techniques.
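As a toy illustration of the step-size question (my example, not from the talk): for the scalar test equation x' = -x under explicit Euler, the interval [0, 1] is positively invariant exactly when 0 <= h <= 1, which a brute-force check confirms empirically:

```python
def euler_step(x, h):
    # Explicit Euler for the scalar test equation x' = -x.
    return (1.0 - h) * x

def stays_in_unit_interval(h, x0_grid, steps=100):
    # Empirically check whether [0, 1] is positively invariant
    # under the discrete map for step size h.
    for x0 in x0_grid:
        x = x0
        for _ in range(steps):
            x = euler_step(x, h)
            if not (0.0 <= x <= 1.0):
                return False
    return True

grid = [i / 10 for i in range(11)]
# h = 0.5 preserves the interval; h = 1.5 does not, since (1 - h) < 0
# flips the sign and the iterate leaves [0, 1].
```

For genuinely nonlinear systems no such hand computation is available, which is where the optimization-based search for invariant sets and admissible step sizes comes in.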
Bayesian statistical models can be used to represent the beliefs of a decision-maker about an uncertain environment. For example, in revenue management, a seller formulates beliefs about customers' willingness to pay; in energy, we may have a belief about the suitability of a candidate location for a new wind farm. Conjugate priors model the evolution of these beliefs over time as new information is collected, either from stochastic simulation or field experiments. However, there are relatively few of these conjugate models, and they simply do not exist in many problems of interest. We show how approximate Bayesian inference can be used to create computationally tractable, "nearly conjugate" models that optimally approximate the actual belief distributions and enable the use of anticipatory information collection and optimization policies. We consider two broad problem classes: simulation selection with unknown correlation structures, and Bayesian learning in logistic regression.
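A minimal sketch of the conjugate updating the abstract refers to, using the Beta-Bernoulli pair (illustrative only; the talk's models go well beyond this):

```python
def beta_bernoulli_update(alpha, beta, observations):
    # Conjugate update: a Beta(alpha, beta) prior on a Bernoulli
    # success probability stays Beta after each 0/1 observation.
    for y in observations:
        alpha += y
        beta += 1 - y
    return alpha, beta

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Uniform prior Beta(1, 1), then three successes and one failure:
a, b = beta_bernoulli_update(1, 1, [1, 1, 0, 1])
# posterior is Beta(4, 2), posterior mean 4/6
```

The appeal of such models is exactly this closed-form recursion; "nearly conjugate" approximations aim to retain it in settings (e.g. logistic regression) where no exact conjugate pair exists.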
We provide a new proof of the fact that Mathematical Programming is Turing-complete, and show how it can be useful in the analysis of code, by presenting two applications. The first aims to find hard inputs for given programs. The second finds relaxations of the set of values taken by the program variables during execution, without actually executing the code.
Joint work with Fabrizio Marinelli, Università Politecnica delle Marche.
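One standard building block in such encodings is expressing program logic with linear constraints over binary variables, e.g. z = x AND y. A brute-force verification of this classic gadget (my illustration, not the construction used in the talk):

```python
from itertools import product

def feasible_and(x, y, z):
    # Linear constraints encoding z = x AND y over binaries:
    # z <= x, z <= y, z >= x + y - 1.
    return z <= x and z <= y and z >= x + y - 1

# For every binary (x, y) exactly one z is feasible, and it is x & y.
for x, y in product((0, 1), repeat=2):
    zs = [z for z in (0, 1) if feasible_and(x, y, z)]
    assert zs == [x & y]
```

Chaining such gadgets lets arbitrary boolean circuits, and ultimately program semantics, be written as mathematical programs, which is the flavor of construction behind Turing-completeness results.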
A major challenge for machine learning in the next decade is the development of methods that continuously predict and learn from a stream of data. The scale of data and the non-stationary nature of the environment make the prevalent "learn from a batch of i.i.d. data" paradigm inadequate. In this talk, we will formally define the problem of sequential prediction as a (computationally difficult) optimization problem. We will present novel techniques for approximately solving it. These techniques recover many prediction algorithms known in the literature, but also yield new and surprising methods with near-optimal performance. The advances are possible due to a confluence of ideas from empirical process theory, game theory, and optimization. The field of sequential prediction is largely unexplored, and we hope to present a framework that might be of interest to the optimization community.
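As a concrete example of a sequential prediction method that such frameworks recover, here is a minimal Hedge/exponential-weights sketch (illustrative only; the learning rate eta is chosen arbitrarily):

```python
import math

def exponential_weights(losses, eta=0.5):
    # Hedge / multiplicative weights: keep one weight per expert and
    # down-weight each expert exponentially in its incurred loss.
    n = len(losses[0])
    w = [1.0] * n
    total_loss = 0.0
    for round_losses in losses:
        s = sum(w)
        p = [wi / s for wi in w]                      # play the mixture
        total_loss += sum(pi * li for pi, li in zip(p, round_losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    return total_loss

# Two experts, one always right (loss 0), one always wrong (loss 1):
losses = [[0.0, 1.0]] * 20
# The algorithm's cumulative loss stays close to the best expert's (0),
# since the bad expert's weight decays geometrically.
```

The regret of this scheme against the best fixed expert grows only logarithmically in the number of experts, the prototypical "near-optimal performance" guarantee in sequential prediction.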
Over the past 10 years there has been a growing body of work at the intersection of mathematical programming as commonly studied in Operations Research and constraint programming (CP) with its origins in Artificial Intelligence (AI) and programming languages. Much of CP's success in solving challenging combinatorial problems comes from the exploitation of inference as embodied in so-called global constraints. This talk provides a brief introduction to CP and then discusses two patterns of hybridization of mixed integer programming (MIP) and CP that have been proposed. I show how these patterns can be used to solve hard combinatorial problems and that they can also inform each other, leading to a more general view of hybridizations centred around the exploitation of problem structure embodied in global constraints. In particular, I propose that the further development of solving techniques associated with global constraints leads to novel research directions in both mathematical and constraint programming.
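To make "inference embodied in global constraints" concrete, here is a deliberately naive filtering routine for alldifferent (this sketch is mine; real CP solvers use much stronger, e.g. matching-based, propagation):

```python
def propagate_alldifferent(domains):
    # Naive alldifferent filtering: repeatedly remove any value that is
    # fixed (singleton domain) from every other variable's domain.
    domains = [set(d) for d in domains]
    changed = True
    while changed:
        changed = False
        for i, d in enumerate(domains):
            if len(d) == 1:
                v = next(iter(d))
                for j, e in enumerate(domains):
                    if j != i and v in e:
                        e.discard(v)
                        changed = True
    return domains

# x1 in {1}, x2 in {1,2}, x3 in {1,2,3}  ->  {1}, {2}, {3}
```

Even this weak propagator fixes all three variables without search in the example above; the stronger inference of dedicated global-constraint algorithms is what both CP and hybrid MIP/CP methods exploit.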
The pooling problem is a challenging non-convex optimization problem that is motivated by refinery processes in the petroleum industry and also finds application in other areas such as waste water treatment, emissions regulation, and the agricultural industry, among others. In this talk, we will present an analysis of mixed integer linear programming (MILP) based techniques to solve the pooling problem. In particular, the talk will focus on three results: (1) We will analyze the quality of dual bounds produced by solving MILP-based relaxations of the pooling problem. These MILP relaxations are derived using piecewise-linear approximations of the bilinear terms appearing in the model for the pooling problem. (2) We will present an approximation algorithm for the pooling problem and show that, unless NP-hard problems have randomized polynomial time algorithms, this algorithm obtains the best possible approximation ratio. (3) Motivated by the approximation algorithm, we will present an MILP-based heuristic for the pooling problem. This heuristic is guaranteed to provide solutions within a factor of n, where n is the number of output nodes. On medium and large-scale test instances, this MILP-based heuristic performs surprisingly well. In particular, it finds the best known solutions for many instances reported in the literature. This is joint work with Akshay Gupte.
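For context, the piecewise-linear relaxations mentioned in (1) build on linear envelopes of bilinear terms; the single-piece case is the classical McCormick envelope, sketched below (my illustration, not from the talk):

```python
import random

def mccormick_bounds(x, y, xl, xu, yl, yu):
    # McCormick envelope for w = x * y on the box [xl, xu] x [yl, yu]:
    # four linear inequalities in (x, y, w) that sandwich the product.
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# Spot-check: the true product always lies inside the envelope.
random.seed(0)
for _ in range(1000):
    x = random.uniform(0, 2)
    y = random.uniform(-1, 3)
    lo, hi = mccormick_bounds(x, y, 0, 2, -1, 3)
    assert lo - 1e-9 <= x * y <= hi + 1e-9
```

Partitioning the box and applying such envelopes on each piece, with binary variables selecting the active piece, yields the MILP relaxations whose dual-bound quality the talk analyzes.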
A convex feasibility problem is concerned with finding a point in the intersection of convex sets. These problems are fundamental in optimization because any convex optimization problem can be cast in this form. Elementary algorithms are iterative algorithms for solving convex feasibility problems that involve simple operations such as matrix-vector multiplications, vector updates and separation oracles. The simplicity and low computational cost of these algorithms are desirable features for large-scale problems. A major drawback of traditional elementary algorithms is their slow convergence rate. This talk presents new versions of elementary algorithms that are enhanced via smoothing and dilation techniques. These enhancements yield algorithms that retain the attractive simplicity of elementary algorithms while achieving substantially improved convergence rates.
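A minimal instance of such an elementary algorithm is alternating projections onto halfspaces, where the projection itself is a one-line vector update (illustrative sketch; the talk's smoothing and dilation enhancements go beyond this):

```python
def project_halfspace(x, a, b):
    # Euclidean projection of x onto the halfspace {y : a . y <= b}.
    ax = sum(ai * xi for ai, xi in zip(a, x))
    if ax <= b:
        return list(x)
    t = (ax - b) / sum(ai * ai for ai in a)
    return [xi - t * ai for xi, ai in zip(x, a)]

def alternating_projections(x, halfspaces, iters=200):
    # Classic elementary algorithm: cycle through the sets,
    # projecting onto one at a time.
    for _ in range(iters):
        for a, b in halfspaces:
            x = project_halfspace(x, a, b)
    return x

# Feasible region: x1 >= 0 and x1 + x2 <= 1.
hs = [([-1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]
x = alternating_projections([3.0, 3.0], hs)
```

Each iteration costs only a few inner products, which is the appeal for large-scale problems; the known drawback is that convergence of the plain method can be arbitrarily slow when the sets meet at a shallow angle.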
Sampling large operational datasets such as ISP usage measurements can be effective for reducing storage requirements and execution time, while prolonging the useful life of the data for baselining and retrospective analysis. Sampling needs to mediate between the characteristics of the data and the accuracy needs of queries. This talk presents a cost-based formulation that expresses these opposing priorities and shows how it leads to optimal sampling schemes without prior statistical assumptions.
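One concrete instance of a size-aware sampling scheme is threshold sampling with Horvitz-Thompson reweighting, which keeps the estimate of the total unbiased. A small sketch (my example; the threshold z is an assumed tuning parameter, not a quantity from the talk):

```python
import random

def threshold_sample(values, z, rng):
    # Threshold sampling: keep an item of size v with probability
    # min(1, v / z); a kept item is reweighted to max(v, z), i.e.
    # v / p, so the estimated total is unbiased (Horvitz-Thompson).
    sample = []
    for v in values:
        if rng.random() < min(1.0, v / z):
            sample.append(max(v, z))
    return sample

rng = random.Random(42)
values = [1, 2, 5, 100, 3, 250]
# Large items (>= z) are always kept exactly; small items are kept
# occasionally but reweighted. Averaging the estimated total over
# many trials approaches the true total.
trials = 2000
est = sum(sum(threshold_sample(values, 10.0, rng)) for _ in range(trials)) / trials
true_total = sum(values)
```

Cost-based formulations of the kind the abstract describes choose inclusion probabilities (here, the threshold) to trade sample size against estimation variance.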