A Derivative-Free Local Optimizer for Multi-Objective Problems
2021-07-30, JuMP Track

In real-world applications, optimization problems often arise that have more than one objective.
Additionally, some objectives can be computationally expensive to evaluate, with no gradient information available.
I present a derivative-free local optimizer (written in Julia) aimed at such problems. It employs a trust-region strategy and local surrogate models (e.g., polynomial or radial basis function models) to save expensive function evaluations.
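To make the surrogate idea concrete, below is a minimal sketch of one such model: a cubic RBF interpolant with a linear polynomial tail, a common choice in derivative-free optimization. The kernel, the helper names fit_rbf and eval_rbf, and the sample data are illustrative assumptions, not the solver's actual construction.

```julia
using LinearAlgebra

# Illustrative cubic RBF surrogate with a linear polynomial tail;
# not necessarily the exact model construction used in the solver.
φ(r) = r^3

# Fit the surrogate to sample sites X (one column per point) and values y.
function fit_rbf(X::AbstractMatrix, y::AbstractVector)
    n, m = size(X)
    Φ = [φ(norm(X[:, i] - X[:, j])) for i in 1:m, j in 1:m]
    P = [ones(m) X']                      # linear tail basis [1, x₁, …, xₙ]
    A = [Φ P; P' zeros(n + 1, n + 1)]     # standard RBF saddle-point system
    coef = A \ [y; zeros(n + 1)]
    return coef[1:m], coef[m+1:end]       # RBF weights, tail coefficients
end

# Evaluate the fitted surrogate at a new point x.
function eval_rbf(X, w, c, x)
    m = size(X, 2)
    sum(w[i] * φ(norm(x - X[:, i])) for i in 1:m) + c[1] + dot(c[2:end], x)
end

# Example: interpolate four samples of f(x) = x₁² + x₂ in R².
X = [0.0 1.0 0.0 0.5;
     0.0 0.0 1.0 0.7]
y = [0.0, 1.0, 1.0, 0.95]
w, c = fit_rbf(X, y)
eval_rbf(X, w, c, [0.3, 0.3])             # cheap prediction, no call to f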


I will revisit the basic concepts of multi-objective optimization and introduce the notions of Pareto optimality and Pareto criticality. Based on these, the steepest descent direction for multi-objective problems (MOPs) is derived. Combined with a trust-region strategy, this direction can be used to generate iterates that converge to first-order critical points.
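As a quick reminder of this standard background (following Fliege and Svaiter; the talk may use a slightly different formulation), the steepest descent direction of an MOP with objectives f_1, …, f_k is the minimizer of a min-max subproblem:

```latex
% Steepest descent subproblem for an MOP with objectives f_1, \dots, f_k
% (cf. Fliege & Svaiter, 2000); its minimizer d(x) is the descent direction:
\min_{d \in \mathbb{R}^n} \; \max_{i = 1, \dots, k}
    \nabla f_i(x)^\top d \; + \; \tfrac{1}{2} \lVert d \rVert^2 .
% A point x is Pareto critical iff d(x) = 0, i.e., no direction
% strictly decreases all objectives simultaneously.
```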
Besides the mathematical background, I want to describe how the local surrogate models are constructed and how our implementation uses other available packages (JuMP, NLopt, DynamicPolynomials, etc.).
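Since JuMP is mentioned, here is a hedged sketch of how the descent subproblem above could be posed with JuMP as a smooth QP via the usual epigraph reformulation (minimize t + ½‖d‖² subject to ∇f_i(x)ᵀd ≤ t). The gradient matrix G and the choice of Ipopt are made-up placeholders, not the talk's actual setup.

```julia
using JuMP, Ipopt, LinearAlgebra

# Hypothetical gradient data: row i holds ∇fᵢ(x) for a bi-objective problem.
G = [1.0  2.0;
    -1.0  0.5]
k, n = size(G)

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, t)                  # epigraph variable for the max term
@variable(model, d[1:n])
@constraint(model, [i in 1:k], dot(G[i, :], d) <= t)
@objective(model, Min, t + 0.5 * dot(d, d))
optimize!(model)

d_star = value.(d)                   # descent direction at x
```

At a Pareto critical point the optimal d is (numerically) zero, which also yields a natural stopping criterion.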
Moreover, I will show the results of a few numerical experiments demonstrating the efficiency of the approach and talk a bit about how the local solver could be embedded in a global(ish) framework.

PhD student at Paderborn University. My main research interest lies in multi-objective (non-linear) optimization.