Relax! The Case For Rockafellian Relaxation In Stochastic Optimization and Learning
Johannes O. Royset, Professor of Operations Research, Naval Postgraduate School
Abstract: We can approach stochastic optimization and learning conservatively through distributionally robust formulations and adversarial training. This is often meaningful, but not always, as we illustrate through several examples. Rockafellian relaxation is an alternative technique. It approaches optimization and learning optimistically, which is especially useful in the presence of distributional shifts, label noise, and outliers. Rockafellian relaxation explores a decision space broadly and discovers solutions that remain hidden from conservative, “robust” approaches. We review Rockafellian relaxation, its underpinning Rockafellian functions, and their central role in sensitivity analysis, optimality conditions, algorithmic developments, and duality theory. The theory is illustrated with examples from computer vision and natural language processing, with an application to toxic online comments.
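As a rough sketch of the idea (using notation common in the Rockafellian literature, not necessarily the talk's own), a Rockafellian for a problem min over x of phi(x) is an extended-real-valued function f of a perturbation vector u and the decision x that recovers the actual objective at u = 0, i.e., f(0, x) = phi(x). Rather than insisting on u = 0, the relaxation optimizes over perturbations as well, typically with a penalty:

```latex
% A Rockafellian f embeds the actual problem (u = 0) in a family of
% perturbed problems: f(0, x) = \varphi(x) for all x.
\min_{x}\; \varphi(x)
\quad\longrightarrow\quad
\min_{x,\,u}\; f(u, x) + \theta\,\|u\|_1,
```

where theta > 0 is a penalty parameter. Allowing u to move away from zero is what lets the approach "explore a decision space broadly" and absorb label noise or outliers optimistically instead of guarding against them.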
Audience
- Faculty/Staff
- Post Docs/Docs
- Graduate Students
Interest
- Academic (general)