# Upcoming ISE Seminars

Wing shape is a crucial characteristic with a large impact on aircraft performance. Wing design optimization has been an active area of research for several decades, but achieving practical designs has remained a challenge. One of the main difficulties is wing flexibility, which requires the simultaneous consideration of aerodynamics and structures. To address this, the simultaneous optimization of the outer mold line of a wing and its structural sizing is proposed. The solution of such design optimization problems is made possible by a framework for high-fidelity aerostructural optimization, which combines a three-dimensional CFD solver, a finite-element structural model of the wingbox, a geometry modeler, and a gradient-based optimizer. The framework computes the flying shape of a wing and can optimize aircraft configurations with respect to hundreds of aerodynamic shape and internal structural sizing variables. The theoretical developments include coupled-adjoint sensitivity analysis and an automatic-differentiation adjoint approach. The resulting algorithms are all implemented to take advantage of massively parallel computers. Applications to the optimization of aircraft configurations demonstrate the effectiveness of these approaches in designing aircraft wings for minimum fuel burn. The results show optimal tradeoffs with respect to wing span and sweep, which were previously not attainable with high-fidelity single-discipline models.

We consider Poisson hail on an infinite d-dimensional ground; in other words, there is a Poisson rain of “hailstones” of random size (height and width). In the case of a cold ground, we analyze conditions for at most linear growth. In the case of a hot ground (hailstones melt when they touch the ground), we are interested in stability conditions. In the case of a mixed ground, we look at the shape of the growth.

The talk is based on the following papers: http://arxiv.org/abs/1105.2070, http://arxiv.org/abs/1403.2166, and http://arxiv.org/abs/1410.0911.
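As a purely illustrative toy version of the cold-ground case, one can simulate stones of random width and height falling on a discrete one-dimensional ground, with each stone resting on the maximum height beneath it; the growth of the height profile is the quantity the linear-growth conditions concern. The discretization and all names below are assumptions for illustration, not the model of the papers.

```python
import random

def simulate_poisson_hail(length, n_stones, seed=0):
    """1-D toy: stones with random width and height land at random
    positions on a cold ground; each stone rests on the current
    maximum height of the cells it covers."""
    rng = random.Random(seed)
    h = [0.0] * length
    for _ in range(n_stones):
        left = rng.randrange(length)            # left edge of the stone
        width = rng.randint(1, 3)               # random width (cells)
        height = rng.random()                   # random height
        seg = range(left, min(left + width, length))
        top = max(h[i] for i in seg) + height   # stone settles on the max
        for i in seg:
            h[i] = top
    return h
```

Tracking `max(h)` as a function of the number of stones gives a crude picture of the growth rate the cold-ground analysis quantifies.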

## Inventory Control in Assemble-to-Order Systems with Identical Lead Times: Lower Bound, Control Policies, and Asymptotic Analysis

Assemble-to-order (ATO) is a widely adopted supply-chain strategy to facilitate product variety, mitigate demand-forecasting error, and enhance the overall efficiency of a manufacturing process. A general ATO inventory system serves demands for multiple products, which are assembled from different and overlapping components according to a fixed bill of materials. Inventories are kept at the component level. Component supplies are not subject to capacity constraints but involve positive replenishment lead times. The inventory manager controls the system by deciding how many components of each type to order and which product demands to serve. These two decisions are intertwined and are made continuously (or periodically) over an infinite time horizon. The objective is to minimize the long-run average expected inventory cost, which includes both the cost of backlogging demands and the cost of holding component inventory. Developing an optimal control policy for such systems is known to be difficult, and past work has focused on particular, sub-optimal policy types and/or systems with special structures and restrictive parameter values. In this talk, I will present a new approach that uses a stochastic program (SP) as a proxy model, both to establish a lower bound on the inventory cost and to define dynamic inventory control policies. I will describe the application of this approach to an important special case, ATO inventory systems with identical component lead times, and present an asymptotic analysis proving that our approach is optimal on the diffusion scale, i.e., as the lead time grows, the percentage difference between the long-run average inventory cost under our policies and its lower bound converges to zero.

This paper discusses a number of important spatial optimization problems, including path routing and location planning, highlighting how they have evolved from simplified formulations to more advanced formalizations. Computing, enhanced data, and geographic information systems (GIS) are shown to be central in this evolution. Trends suggest continued advancement, but also the need for further technical and theoretical development.

Step-down units (SDUs) provide an intermediate level of care between the intensive care units (ICUs) and the general medical-surgical wards. Because SDUs are less richly staffed than ICUs, they are less costly to operate; however, they are also unable to provide the level of care required by the sickest patients. There is an ongoing debate in the medical community as to whether and how SDUs should be used. On one hand, an SDU alleviates ICU congestion by providing a safe environment for post-ICU patients before they are stable enough to be transferred to the general wards. On the other hand, an SDU can take capacity away from the already over-congested ICU. In this work, we propose a queueing model of patient flow through the ICU and SDU in order to determine how to size the two units. We account for the fact that some patients may abandon if they have to wait too long for a bed, while others may be bumped from a bed if a new patient is more critical. Using fluid and diffusion analysis, we examine the tradeoff between reserving capacity in the ICU for the most critical patients and the additional capacity gained by allocating nurses to the SDU, which has a lower staffing requirement. Despite the complex patient-flow dynamics, we leverage a state-space collapse result in our diffusion analysis to establish the optimal allocation of nurses to units. We find that under some circumstances the optimal size of the SDU is zero, while in other cases a sizable SDU may be beneficial. The insights from our work provide justification for the variation in SDU use seen in practice.

Joint work with Carri Chan and Bo Zhu.
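The talk's model and its fluid/diffusion analysis are far richer than a textbook queue, but as a back-of-envelope proxy for the congestion side of the sizing tradeoff, one can compare the probability that an arriving patient must wait for a bed under different bed counts using the classical Erlang-C formula for an M/M/c queue. This sketch is standard queueing background, not the authors' model.

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """Erlang-C probability of waiting in an M/M/c queue: a crude
    proxy for bed congestion under a given unit size."""
    a = arrival_rate / service_rate          # offered load (in beds)
    rho = a / servers                        # utilization
    if rho >= 1.0:
        return 1.0                           # overloaded: everyone waits
    below = sum(a ** k / math.factorial(k) for k in range(servers))
    top = a ** servers / math.factorial(servers) / (1.0 - rho)
    return top / (below + top)
```

For a fixed offered load, the waiting probability drops sharply as beds are added, which is the basic force behind reserving ICU capacity versus shifting nurses to a cheaper unit.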

Flexibility from energy storage and flexible load aggregations is essential to renewable energy integration. The broad adoption of storage in power systems is hindered by its cost and by regulatory rules that are ill suited to it. In this talk, we present a new financial mechanism that widens the economic viability of energy storage.

We begin with the question: Should energy storage buy and sell power at wholesale prices like utilities and generators, or should its physical and financial operation be asynchronous, as with transmission lines? In the first case, storage profits straightforwardly through intertemporal arbitrage, also known as load shifting and peak shaving. In this talk, we consider the latter case, which we refer to as passive storage. Because passive storage does not make nodal price transactions, new mechanisms are necessary for its integration into electricity markets.

This issue is addressed by defining financial rights for storage. Like financial transmission rights, the new financial storage rights redistribute the system operator's merchandising surplus and enable risk-averse market participants to hedge against nodal price volatility resulting from storage congestion.
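To make the intertemporal arbitrage mentioned above concrete, the toy sketch below finds the most profitable single charge-then-discharge cycle over an hourly price series. The function and its parameters are hypothetical illustrations; it ignores multiple cycles, ramp limits, and actual market rules.

```python
def best_arbitrage(prices, capacity_mwh, efficiency=0.9):
    """Best profit from one charge/discharge cycle: buy capacity_mwh
    at hour i, sell efficiency * capacity_mwh at a later hour j.
    Returns (profit, buy_hour, sell_hour)."""
    best = (0.0, None, None)
    for i in range(len(prices)):
        for j in range(i + 1, len(prices)):
            # revenue is discounted by round-trip efficiency losses
            profit = capacity_mwh * (efficiency * prices[j] - prices[i])
            if profit > best[0]:
                best = (profit, i, j)
    return best
```

Buying in a low-price hour and selling in a later high-price hour is exactly the load-shifting/peak-shaving behavior that active storage monetizes, and that passive storage, by design, does not.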

In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup over the serial method, measured in the number of iterations needed to approximately solve the problem with high probability, is a simple expression depending on the number of parallel processors and on a natural, easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there may be no speedup; in the best case, when the problem is fully separable, the speedup equals the number of processors. Our analysis also covers the setting in which the number of blocks updated at each iteration is random, which allows us to model situations with busy or unreliable processors. We show that our algorithm can solve a LASSO problem involving a matrix with 20 billion nonzeros (300 GB) in 2 hours on a large-memory node with 24 cores.
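As background for the method being parallelized, here is a minimal serial sketch of randomized coordinate descent on LASSO, minimizing (1/2)||Ax - b||^2 + lam*||x||_1 with a soft-thresholding update per coordinate. This is only the basic serial building block under my own naming, not the parallel block method of the talk.

```python
import random

def soft_threshold(v, t):
    """Proximal operator of t*|.|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def lasso_rcd(A, b, lam, iters=5000, seed=0):
    """Serial randomized coordinate descent for
    (1/2)*||A x - b||^2 + lam*||x||_1, with A given as a list of rows."""
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    x = [0.0] * n
    r = [-bi for bi in b]                       # residual A x - b (x = 0)
    col_sq = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    for _ in range(iters):
        j = rng.randrange(n)                    # uniform random coordinate
        if col_sq[j] == 0.0:
            continue
        g = sum(A[i][j] * r[i] for i in range(m))   # partial gradient
        xj_new = soft_threshold(x[j] - g / col_sq[j], lam / col_sq[j])
        delta = xj_new - x[j]
        if delta != 0.0:
            for i in range(m):                  # keep residual in sync
                r[i] += A[i][j] * delta
            x[j] = xj_new
    return x
```

The parallel speedup analyzed in the talk comes from updating many such randomly chosen blocks simultaneously, with the admissible degree of parallelism governed by the separability of the smooth term.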

In today's interconnected world, large-scale networked systems are ubiquitous; examples include communication networks, the electricity grid, and sensor networks. In this talk, we describe two recent results related to such networked systems. In the first part, we present a fast distributed asynchronous Alternating Direction Method of Multipliers (ADMM) based method for solving general separable convex problems in large-scale systems, which can be applied to LASSO and many other machine learning problems. We show that this method has a convergence guarantee and achieves the best known rate of convergence. In the second part, we discuss our model of competitive equilibrium in electricity markets, where price fluctuation makes budgeting and planning difficult. We introduce an explicit penalty on price volatility and establish that this penalty can be implemented via the use of storage.
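The talk's contribution is a distributed asynchronous ADMM; the sketch below shows only the basic ADMM splitting such methods build on, applied to the one-dimensional problem min_x (1/2)(x - a)^2 + lam*|x| via the split x = z. It is an illustrative scalar sketch with my own names, not the authors' algorithm.

```python
def soft_threshold(v, t):
    # proximal operator of t*|.|
    return max(v - t, 0.0) + min(v + t, 0.0)

def lasso_admm_1d(a, lam, rho=1.0, iters=200):
    """Scalar ADMM for min_x (1/2)*(x - a)**2 + lam*|x|."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # smooth (quadratic) step
        z = soft_threshold(x + u, lam / rho)   # nonsmooth (shrinkage) step
        u = u + x - z                          # scaled dual update
    return z
```

In the separable multi-agent setting, the x-step decomposes across agents, which is what makes distributed (and, with care, asynchronous) execution possible.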

Decision-making problems that arise in complex systems (e.g., power grids, emergency logistics, communication networks, and supply chains) invariably involve uncertainty and risk. These problems are further complicated by the combinatorial nature of the decisions involved. First, we consider multi-stage linear optimization problems under reliability or quality-of-service considerations, which we model as chance-constrained linear programs. At the first stage, decisions must be made while some parameters are uncertain. As the uncertainty unfolds, recourse decisions are made at later stages to mitigate the risk. Using an example from inventory control, we demonstrate that significant cost savings can be achieved at desirable service levels when the decisions adapt to the realizations of uncertain data. We propose branch-and-cut and decomposition algorithms to solve the resulting large-scale mixed-integer programs. Next, we address the additional challenges that arise in two-stage stochastic programs with discrete variables at both stages. We give finitely convergent decomposition algorithms based on the convexification of the second-stage integer programs. Our computational experiments show that our methods scale well as the number of scenarios increases.
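As a one-dimensional illustration of what a chance constraint means (far simpler than the multistage mixed-integer formulations in the talk), choosing the smallest order quantity q with P(demand <= q) >= service level, over equally likely demand scenarios, reduces to a sample quantile. The function name and inventory framing are illustrative assumptions.

```python
import math

def min_order_quantity(demand_scenarios, service_level):
    """Smallest q with P(demand <= q) >= service_level, where the
    probability is taken over equally likely demand scenarios."""
    d = sorted(demand_scenarios)
    k = math.ceil(service_level * len(d))  # scenarios that must be covered
    return d[k - 1]
```

Adding recourse (later-stage corrective decisions) and integer variables turns this one-liner into the large-scale mixed-integer programs for which the talk's branch-and-cut and decomposition algorithms are designed.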

Massive graph datasets are used operationally by providers of Internet, social-network, and search services. Sampling can reduce storage requirements as well as query execution times, while prolonging the useful life of the data for baselining and retrospective analysis. Sampling must mediate between the characteristics of the data, the available resources, and the accuracy needs of queries. This talk presents a cost-based formulation that expresses these opposing priorities and shows how it leads to optimal sampling schemes without prior statistical assumptions. The talk concludes with a discussion of open technical problems and potential applications of the methods beyond the Internet.
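Weight-sensitive sampling schemes of this flavor include classical priority sampling, sketched below as background (whether the talk builds on this particular scheme is my assumption). Each weighted item gets priority w/u with u uniform on (0, 1]; the k highest-priority items are kept, and each kept item receives the unbiased weight estimate max(w, tau), where tau is the (k+1)-th largest priority.

```python
import random

def priority_sample(items, k, seed=0):
    """Priority sampling of (key, weight) pairs: keep k items, each
    with an unbiased estimate of its weight."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], avoiding division by zero
    pri = [(w / (1.0 - rng.random()), key, w) for key, w in items]
    pri.sort(reverse=True)
    if len(pri) <= k:
        return [(key, w) for _, key, w in pri]  # everything fits: exact
    tau = pri[k][0]                             # (k+1)-th largest priority
    return [(key, max(w, tau)) for _, key, w in pri[:k]]
```

Heavy items are kept almost surely with their exact weights, while light items survive with probability proportional to weight, which is the kind of data/accuracy mediation a cost-based formulation can optimize over.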