Hybrid Strategies using Piecewise-Linear Decision Rules
07-28, 19:30–20:00 (UTC), JuMP Track

In this talk, we discuss planned extensions to the features provided by JuMPeR along three lines: (1) new policy types for adaptive decisions, (2) a stochastic programming objective paradigm, and (3) moving/folding-horizon simulator features for assessing the robust/stochastic affine policies. The third extension is closely related to what is known as Pareto optimality of robust adaptive solutions. A sketch of where these extensions would sit in a JuMPeR model is given below.
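
To make these extension points concrete, here is a minimal two-stage sketch built on JuMPeR's existing adaptive-optimization syntax (RobustModel, @uncertain, and @adaptive with policy=Affine). The problem data, the PiecewiseLinear policy name, and the comments marking the planned extensions are illustrative assumptions on our part, not released API.

```julia
using JuMP, JuMPeR

# Two-stage toy problem: a here-and-now order x and a wait-and-see recourse y(u).
m = RobustModel()
@variable(m, x >= 0)            # first-stage decision
@variable(m, t)                 # epigraph variable for the worst-case cost
@uncertain(m, 0 <= u <= 1)      # uncertain demand

# Existing JuMPeR feature: an affine (linear) decision rule y(u) = y0 + y1*u,
# whose coefficients y0 and y1 are chosen by the optimizer.
@adaptive(m, y, policy=Affine, depends_on=u)
@constraint(m, y >= 0)
@constraint(m, y >= u - x)      # recourse covers unmet demand for every u
@constraint(m, t >= 2x + 10y)   # worst-case cost via the epigraph constraint
@objective(m, Min, t)
solve(m)

# Planned extension (1), hypothetical syntax: a richer policy type, e.g.
#   @adaptive(m, y, policy=PiecewiseLinear, depends_on=u)
# with the rule linear on each segment of a user-chosen breakpoint grid.
# Extension (2) would allow an expectation-type (stochastic programming)
# objective over u instead of the worst case, and extension (3) would add a
# moving/folding-horizon simulator that re-solves the model as realizations
# of u are revealed, to evaluate the resulting policies out of sample.
```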


Decision rules offer a rich and tractable framework for solving certain classes of multistage adaptive optimization problems. Recent literature has shown the promise of linear and nonlinear decision rules, in which wait-and-see decisions are represented as functions of the underlying uncertain parameters whose coefficients are decision variables to be optimized. Despite this growing success, solving real-world stochastic optimization problems can become computationally prohibitive with nonlinear decision rules, and in some cases even with linear ones. Consequently, decision rules that offer a competitive trade-off between solution quality and computation time become attractive. Whereas the extant research has thus far used homogeneous (i.e., either all-linear or all-piecewise-linear) decision rules, the major contribution of this work is a computational exploration of hybrid decision rules that combine the benefits of the two classes. We also demonstrate a case in which, unexpectedly, a linear decision rule outperforms a more complex piecewise-linear decision rule within a simulator. This observation underscores the need to assess the quality of decision rules obtained from a look-ahead model within a simulator, rather than relying solely on the optimal look-ahead objective value.
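
For concreteness, one standard way to write the two rule classes being hybridized (the notation here is illustrative and not taken from the paper) is a linear decision rule for a stage-$t$ decision and a piecewise-linear one built from ramp functions over fixed breakpoints:

$$
x_t^{\mathrm{LDR}}(\xi) = x_{t,0} + \sum_{s \le t} X_{t,s}\,\xi_s,
\qquad
x_t^{\mathrm{PLDR}}(\xi) = x_{t,0} + \sum_{s \le t} \sum_{k=1}^{K} X_{t,s,k}\,\max\{\xi_s - \beta_k,\, 0\},
$$

where the $\xi_s$ are the uncertain parameters revealed up to stage $t$, the coefficients $x_{t,0}$, $X_{t,s}$, $X_{t,s,k}$ are decision variables, and $\beta_1 < \cdots < \beta_K$ are fixed breakpoints. A hybrid strategy applies the linear form to some decisions or stages and the piecewise-linear form to others, trading solution quality against growth in model size.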