Upcoming ISE Seminars

In this talk we present an asynchronous multistart algorithm for identifying high-quality local minima. We highlight strong theoretical results that limit the number of local optimization runs under reasonable assumptions. Though the results are valid whether or not the derivative of the objective function exists, the method's efficient use of previously evaluated points makes it well-suited for finding minima when the objective is expensive to evaluate.
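
As a rough illustration of the multistart idea (not the asynchronous method or the run-limiting theory presented in the talk), the sketch below launches local optimization runs from random starting points on an invented test function and keeps the distinct minima found; scipy is assumed to be available.

```python
# Minimal synchronous multistart sketch: start local runs from random points
# and collect distinct local minima. The talk's method instead schedules runs
# asynchronously and reuses previously evaluated points to decide when and
# where to start new runs.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Toy multimodal test function (illustrative only).
    return np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * np.dot(x, x)

rng = np.random.default_rng(0)
minima = []
for _ in range(20):
    x0 = rng.uniform(-2.0, 2.0, size=2)               # random starting point
    res = minimize(objective, x0, method="L-BFGS-B")  # one local optimization run
    # Keep the result only if it is not a near-duplicate of a known minimum.
    if res.success and not any(np.linalg.norm(res.x - m) < 1e-3 for m in minima):
        minima.append(res.x)

print(f"{len(minima)} distinct local minima found from 20 starts")
```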

All are welcome to attend! Come and ask questions of the best in the field!

Our Advisory Council Members include:
Ray Hoving '69, '71G - Formerly of Bernard Hodes Group
Richard Simek '94, '95G - Hypertherm, Inc.
Karen LaRochelle '99 - LaRochelle Advisors
Ray Pressburger '05 - Accenture
Kathleen Taylor '87 - Johnson & Johnson
Ray Trakimas '76, '77 MBA - IBM
Kurt Lesker IV '05 - Kurt J. Lesker Company
Ray Glemser '83, '84G & '91 Ph.D. - Glemser Technologies
Charles Searight '73 - Vector Growth Partners
Lane Jorgensen '64, '65G - Stonebridge Group
Jennifer Kennedy '02 - Full Circle Systems (Amazon)
Scott Nestler '89 - The University of Notre Dame
Ravi Kulasekaran '88G - Colabus & Stridus
Sunil Misser '88G - AccountAbility
Jason Lambert '99 - Sikorsky Aircraft of United Tech. Corp.

Please register here: https://www.eventville.com/catalog/eventregistration1.asp?eventid=1011619

Lehigh’s Industrial and Systems Engineering (ISE) Council will be holding its sixth ISE Career Fair on September 16, 2015, the day before the Lehigh University Career Fair on September 17. Employers and students will be able to meet in a personal setting and discuss each company's internship, co-op, and job opportunities! *This event is for both ISE & HSE (Healthcare Systems Engineering) students. All companies attending will receive a resume book. Registration fees are per company:
Sponsor - $130 (sponsors of the networking event will give a 5-minute presentation during the first half hour of the event; sponsorship is limited!)
Regular - $80
Government - $50
Non-Profit - $30 (must send a copy of 501(c)(3) documentation)

Wing shape is a crucial characteristic that has a large impact on aircraft performance. Wing design optimization has been an active area of research for several decades, but achieving practical designs has been a challenge. One of the main challenges is wing flexibility, which requires considering both aerodynamics and structures. To address this, the simultaneous optimization of the outer mold line of a wing and its structural sizing is proposed. The solution of such design optimization problems is made possible by a framework for high-fidelity aerostructural optimization. This framework combines a three-dimensional CFD solver, a finite-element structural model of the wingbox, a geometry modeler, and a gradient-based optimizer. It computes the flying shape of a wing and is able to optimize aircraft configurations with respect to hundreds of aerodynamic shape and internal structural sizing variables. The theoretical developments include coupled-adjoint sensitivity analysis and an automatic differentiation adjoint approach. The algorithms resulting from these developments are all implemented to take advantage of massively parallel computers. Applications to the optimization of aircraft configurations demonstrate the effectiveness of these approaches in designing aircraft wings for minimum fuel burn. The results show optimal tradeoffs with respect to wing span and sweep, which were previously not possible to capture with high-fidelity single-discipline models.
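
As a rough illustration of the adjoint sensitivity idea behind the coupled-adjoint development (the actual framework applies it to coupled CFD and finite-element residuals), the sketch below computes the gradient of an objective constrained by a toy linear state equation with a single adjoint solve and checks it against finite differences; the problem and all names are invented.

```python
# Toy adjoint sensitivity sketch: the state w satisfies R(w, x) = A w - b(x) = 0
# and the objective is I(w) = c^T w. One adjoint solve A^T psi = c gives the
# full design gradient dI/dx = psi^T db/dx without re-solving the state
# equation for each design variable. The real framework applies the same idea
# to coupled aerodynamic and structural residuals.
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3                          # state size, number of design variables
A = rng.normal(size=(n, n)) + 5 * np.eye(n)
B = rng.normal(size=(n, m))          # db/dx (b is linear in x here)
c = rng.normal(size=n)

def objective(x):
    w = np.linalg.solve(A, B @ x)    # solve the state equation R(w, x) = 0
    return c @ w

# Adjoint: a single linear solve yields the whole gradient.
psi = np.linalg.solve(A.T, c)
grad_adjoint = B.T @ psi             # dI/dx = psi^T db/dx

# Finite-difference check.
x0 = rng.normal(size=m)
eps = 1e-6
grad_fd = np.array([(objective(x0 + eps * e) - objective(x0 - eps * e)) / (2 * eps)
                    for e in np.eye(m)])
print(np.allclose(grad_adjoint, grad_fd, atol=1e-5))   # expect True
```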

We consider Poisson hail on an infinite d-dimensional ground. In other words, there is a Poisson rain of “hailstones” of random size (height and width). In the case of a cold ground, we analyze conditions for at most linear growth. In the case of a hot ground (hailstones melt when they touch the ground), we are interested in stability conditions. In the case of a mixed ground, we look at the shapes of growth.
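
A crude one-dimensional simulation of the cold-ground case (stones accumulate where they land) is sketched below; the discretized finite ground, arrival count, and size distributions are illustrative stand-ins for the model's infinite ground and general stone sizes.

```python
# Toy 1-D "Poisson hail" on a cold ground: stones arrive one after another,
# each with a random footprint (width) and height, and come to rest on top of
# the current heap over their footprint. We track the growth of the maximum
# heap height, which the talk shows is at most linear under suitable conditions.
import numpy as np

rng = np.random.default_rng(0)
L = 1000                          # discretized ground sites (stand-in for an infinite ground)
height = np.zeros(L)
T = 5000                          # number of arrivals to simulate

for _ in range(T):
    center = rng.integers(L)
    width = rng.integers(1, 6)                     # random footprint half-width
    h = rng.exponential(1.0)                       # random stone height
    lo, hi = max(0, center - width), min(L, center + width + 1)
    rest = height[lo:hi].max()                     # stone rests on top of the heap
    height[lo:hi] = rest + h

print(f"max heap height after {T} stones: {height.max():.1f}")
```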

The talk is based on these papers: http://arxiv.org/abs/1105.2070, http://arxiv.org/abs/1403.2166 & http://arxiv.org/abs/1410.0911

Inventory Control in Assemble-to-Order Systems with Identical Lead Times: Lower Bound, Control Policies, and Asymptotic Analysis
Assemble-to-order (ATO) is a widely adopted supply-chain strategy used to facilitate product variety, mitigate demand forecasting error, and enhance the overall efficiency of a manufacturing process. A general ATO inventory system serves demands for multiple products, which are assembled from different and overlapping components according to a fixed Bill of Materials. Inventories are kept at the component level. Component supplies are not subject to capacity constraints but involve positive replenishment lead times. The inventory manager controls the system by deciding how many components of each type to order and which product demands to serve. The two decisions are intertwined and are made continuously (or periodically) over an infinite time horizon. The objective is to minimize the long-run average expected inventory cost, which includes both the cost of backlogging demands and the cost of holding component inventory. Developing an optimal control policy for such systems is known to be difficult, and past work has focused on particular, sub-optimal policy types and/or systems with special structures and restrictive parameter values. In this talk, I will present a new approach that uses a stochastic program (SP) as a proxy model to establish a lower bound on the inventory cost and to define dynamic inventory control policies. I will describe the application of this approach to an important special case, ATO inventory systems with identical component lead times, and present an asymptotic analysis proving that our approach is optimal on the diffusion scale: as the lead time grows, the percentage difference between the long-run average inventory cost under our policies and its lower bound converges to zero.
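
For intuition, the sketch below solves a heavily simplified single-period, sample-average version of such an SP proxy: first-stage component quantities, second-stage allocation of components to product demands, minimizing expected holding plus backlog cost. The Bill of Materials, costs, demand model, and the use of cvxpy are assumptions made for illustration; in particular, the single-period formulation ignores the lead-time dynamics that are central to the talk.

```python
# Simplified single-period stochastic-program proxy for an ATO system, solved
# by sample average approximation. First stage: component quantities q.
# Second stage (per demand scenario): products served z, limited by demand and
# by component availability through the BOM. All data are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
# BOM[j, i] = units of component j needed per unit of product i (a "W" system).
BOM = np.array([[1, 0],
                [1, 1],
                [0, 1]])
n_comp, n_prod = BOM.shape
h = np.array([1.0, 2.0, 1.0])     # holding cost per leftover component
b = np.array([10.0, 8.0])         # backlog cost per unserved product demand
S = 200                           # demand scenarios
D = rng.poisson(lam=[20, 30], size=(S, n_prod)).astype(float)

q = cp.Variable(n_comp, nonneg=True)        # component quantities (first stage)
z = cp.Variable((S, n_prod), nonneg=True)   # products served in each scenario

used = z @ BOM.T                            # components consumed, per scenario
constraints = [z <= D] + [used[s] <= q for s in range(S)]
cost_terms = [h @ (q - used[s]) + b @ (D[s] - z[s]) for s in range(S)]

prob = cp.Problem(cp.Minimize(sum(cost_terms) / S), constraints)
prob.solve()
print("component quantities:", np.round(q.value, 1))
print("optimal sample-average cost:", round(prob.value, 2))
```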

This paper discusses a number of important spatial optimization problems, including path routing and location planning, highlighting how they have evolved from simplified expressions to more advanced formalizations. Computing, enhanced data and GIS (geographic information systems) are shown to be central in this evolution. Trends suggest continued advancement, but also the need for further technical and theoretical development.

Step Down Units (SDUs) provide an intermediate level of care between Intensive Care Units (ICUs) and the general medical-surgical wards. Because SDUs are less richly staffed than ICUs, they are less costly to operate; however, they are also unable to provide the level of care required by the sickest patients. There is an ongoing debate in the medical community as to whether and how SDUs should be used. On one hand, an SDU alleviates ICU congestion by providing a safe environment for post-ICU patients before they are stable enough to be transferred to the general wards. On the other hand, an SDU can take capacity away from the already over-congested ICU. In this work, we propose a queueing model that captures the dynamics of patient flows through the ICU and SDU in order to determine how these units should be sized. We account for the fact that patients may abandon if they have to wait too long for a bed, while others may get bumped out of a bed if a new patient is more critical. Using fluid and diffusion analysis, we examine the tradeoff between reserving capacity in the ICU for the most critical patients and the additional capacity gained by allocating nurses to the SDU, which has a lower staffing requirement. Despite the complex patient flow dynamics, we leverage a state-space collapse result in our diffusion analysis to establish the optimal allocation of nurses to units. We find that under some circumstances the optimal size of the SDU is zero, while in other cases having a sizable SDU may be beneficial. The insights from our work provide justification for the variation in SDU use seen in practice.
Joint work with Carri Chan and Bo Zhu.
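
As a much-simplified illustration of the abandonment dynamics in a single unit, the sketch below simulates an M/M/c queue with exponentially distributed patience (an Erlang-A model) and estimates the fraction of arrivals that abandon before getting a bed; the rates are invented, and the paper's coupled ICU/SDU flows, bumping, and nurse-allocation decision are not modeled.

```python
# Minimal M/M/c+M (Erlang-A) sketch: a single unit with c beds, Poisson
# arrivals, exponential service, and exponential patience for waiting
# patients. Thanks to memorylessness the system is a continuous-time Markov
# chain on the number of patients present, simulated by competing clocks.
import numpy as np

def erlang_a_abandonment(lam, mu, theta, c, horizon=100_000, seed=0):
    rng = np.random.default_rng(seed)
    t, n = 0.0, 0                        # time, number of patients in the unit
    arrivals = abandons = 0
    while t < horizon:
        rate_arr = lam                   # new patient arrives
        rate_srv = mu * min(n, c)        # a bed frees up
        rate_abn = theta * max(n - c, 0) # a waiting patient abandons
        total = rate_arr + rate_srv + rate_abn
        t += rng.exponential(1.0 / total)
        u = rng.uniform(0, total)
        if u < rate_arr:
            arrivals += 1
            n += 1
        elif u < rate_arr + rate_srv:
            n -= 1
        else:
            abandons += 1
            n -= 1
    return abandons / arrivals

# Fewer beds -> more abandonment; compare two unit sizes (illustrative rates).
for beds in (18, 22):
    frac = erlang_a_abandonment(lam=2.0, mu=0.1, theta=0.05, c=beds)
    print(f"{beds} beds: abandonment fraction ~ {frac:.3f}")
```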

Flexibility from energy storage and flexible load aggregations is essential to renewable energy integration. The broad adoption of storage in power systems is hindered by its cost and awkward regulatory rules. In this talk, we present a new financial mechanism that widens the economic viability of energy storage.

We begin with the question: Should energy storage buy and sell power at wholesale prices like utilities and generators, or should its physical and financial operation be asynchronous as with transmission lines? In the first case, storage straightforwardly profits through intertemporal arbitrage, also known as load shifting and peak shaving. In this talk, we consider the latter case, which we refer to as passive storage. Because passive storage does not make nodal price transactions, new mechanisms are necessary for its integration into electricity markets.
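
For concreteness, the first case (actively traded storage profiting from intertemporal arbitrage) can be written as a small linear program: buy energy when prices are low and sell when they are high, subject to power and energy limits. The price profile, efficiencies, and limits below are invented, and cvxpy is assumed to be available; passive storage settled through financial rights is not modeled here.

```python
# Intertemporal arbitrage sketch for an actively traded storage unit: choose
# charge/discharge power each hour to maximize wholesale-price revenue subject
# to state-of-charge dynamics and capacity limits.
import numpy as np
import cvxpy as cp

T = 24
price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))   # illustrative $/MWh profile
p_max, e_max, eta = 5.0, 20.0, 0.9                        # MW limit, MWh capacity, one-way efficiency

charge = cp.Variable(T, nonneg=True)       # MW drawn from the grid
discharge = cp.Variable(T, nonneg=True)    # MW injected into the grid
soc = cp.Variable(T + 1)                   # MWh state of charge

constraints = [soc[0] == 0, soc[T] == 0,
               charge <= p_max, discharge <= p_max,
               soc >= 0, soc <= e_max]
for t in range(T):
    constraints.append(soc[t + 1] == soc[t] + eta * charge[t] - discharge[t] / eta)

profit = price @ (discharge - charge)
prob = cp.Problem(cp.Maximize(profit), constraints)
prob.solve()
print("arbitrage profit ($):", round(prob.value, 1))
```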

This issue is addressed by defining financial rights for storage. Like financial transmission rights, the new financial storage rights redistribute the system operator's merchandising surplus and enable risk-averse market participants to hedge against nodal price volatility resulting from storage congestion.

In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex function and a simple separable convex function. The theoretical speedup over the serial method, measured by the number of iterations needed to approximately solve the problem with high probability, is a simple expression depending on the number of parallel processors and a natural and easily computable measure of separability of the smooth component of the objective function. In the worst case, when no degree of separability is present, there may be no speedup; in the best case, when the problem is separable, the speedup is equal to the number of processors. Our analysis also covers the setting in which the number of blocks updated at each iteration is random, which allows us to model situations with busy or unreliable processors. We show that our algorithm is able to solve a LASSO problem involving a matrix with 20 billion nonzeros (300GB) in 2 hours on a large memory node with 24 cores.
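
To make the iteration concrete, the sketch below serially emulates one flavor of such a parallel scheme on a tiny LASSO instance: each step updates a random subset of tau coordinates, all evaluated at the same point, with a stepsize damped by a factor that grows with the coupling between coordinates. The damping constant and the problem data are illustrative choices, not the paper's exact method, analysis, or parameters.

```python
# Toy serial emulation of parallel randomized coordinate descent for LASSO:
#   minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
# Each iteration updates a random subset of tau coordinates with a stepsize
# damped by beta = 1 + (omega - 1)(tau - 1)/(n - 1), where omega is the largest
# number of coordinates any single row of A touches (a simple separability
# measure). Data sizes are tiny; the abstract's experiment uses a 300GB matrix.
import numpy as np

rng = np.random.default_rng(0)
m, n, tau, lam = 200, 500, 8, 0.1
A = rng.normal(size=(m, n)) * (rng.random((m, n)) < 0.05)   # sparse-ish matrix
x_true = np.zeros(n)
x_true[:10] = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=m)

L = (A ** 2).sum(axis=0) + 1e-12          # per-coordinate Lipschitz constants
omega = max(np.count_nonzero(row) for row in A)
beta = 1 + (omega - 1) * (tau - 1) / (n - 1)

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(n)
residual = A @ x - b
for _ in range(2000):
    S = rng.choice(n, size=tau, replace=False)     # coordinates updated "in parallel"
    g = A[:, S].T @ residual                       # partial gradients at the same point
    new_xS = soft(x[S] - g / (beta * L[S]), lam / (beta * L[S]))
    residual += A[:, S] @ (new_xS - x[S])          # apply all tau updates together
    x[S] = new_xS

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print("final objective:", round(obj, 4))
```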
