Abstracts

Abstract: Probabilistic constraints provide a useful framework for dealing with uncertain parameters in optimization problems. By taking into account information on the distribution of the underlying random parameter, one arrives at models that are more robust (or come at considerably lower cost) than simplified models based only on expectations of the random vector (or on the worst case). The aim of this series of lectures is to provide insight into both the practical use and the theoretical properties of probabilistic constraints. The first lecture presents some main models and conveys an intuitive idea of them by means of concrete examples (including numerical results) from engineering applications. The second lecture clarifies some important structural properties (continuity, differentiability, convexity) of probabilistic constraints. The third and fourth lectures are devoted to the central issue of stability of such optimization problems under perturbations of the underlying probability measure. First, in the general setting, tools from variational analysis are used to derive qualitative and quantitative stability of (localized) optimal solution sets and of the optimal value function. These general results are then specialized to a convex-like setting, with derivations concerning (unlocalized) solution sets. An application to empirical approximations of probabilistic constraints rounds off the presentation.
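As a toy illustration of the modeling idea, the feasibility of a fixed decision under a probabilistic (chance) constraint can be estimated by sampling the random parameter. The following is a minimal sketch; the decision, distribution, and probability level are made-up assumptions, not data from the lectures:

```python
import numpy as np

# Hypothetical illustration: check a probabilistic constraint
#   P( xi^T x <= b ) >= p
# for a fixed decision x by Monte Carlo sampling (all data made up).
rng = np.random.default_rng(0)

x = np.array([1.0, 2.0])   # candidate decision
b = 5.0                    # right-hand side
p = 0.9                    # required probability level

# xi ~ N(mu, Sigma): the uncertain constraint coefficients
mu = np.array([1.0, 1.0])
Sigma = np.array([[0.2, 0.0], [0.0, 0.2]])
xi = rng.multivariate_normal(mu, Sigma, size=100_000)

prob = np.mean(xi @ x <= b)   # empirical satisfaction probability
feasible = prob >= p
print(f"estimated probability: {prob:.3f}, feasible: {feasible}")
```

Here xi^T x is normally distributed with mean 3 and unit variance, so the true probability is about 0.98 and the constraint holds comfortably at level p = 0.9.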

Abstract.

Lecture 1: Introduction. Two central results of classical analysis: the implicit function theorem and Sard's theorem. A brief introduction to variational analysis. Regularity and stability. Slopes and regularity criteria. Subdifferentials.

Lecture 2: Semi-algebraic Sets and Functions. O-minimality. Semi-linear and semi-algebraic sets and mappings. The Tarski-Seidenberg theorem and stability with respect to functional operations. Basic facts: monotonicity theorem, selection theorem, path selection lemma. Whitney stratification. Dimension of a semi-algebraic set. Generic properties in semi-algebraic geometry. Dimension of graphs of subdifferentials of functions. Sard's theorem for semi-algebraic multifunctions.

Lecture 3: Tame Optimization. Example: semi-definite optimization. Semi-definite representable sets. Critical points and critical values of optimization problems. Natural perturbations: generic normality, generic finiteness of critical points, generic uniqueness of Lagrange multipliers, and other generic properties in tame optimization.

Lecture 4: Curves of Maximal and Near-Maximal Slope. Classical results on gradient curves: theorems of Łojasiewicz and Kurdyka. What is gradient descent when gradients do not exist? Existence and length of curves of near-maximal slope for continuous semi-algebraic functions.
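To picture the question of descent when gradients do not exist, here is a minimal sketch (with a made-up function and step rule, not the lectures' construction) of a subgradient method on a nonsmooth convex function, a crude discrete analogue of following a curve of near-maximal slope:

```python
# Hypothetical illustration: "gradient descent without gradients" on the
# nonsmooth convex function f(x) = |x - 1| + |x|, which is minimized
# (with value 1) at every point of [0, 1]. All data are made up.

def f(x):
    return abs(x - 1.0) + abs(x)

def subgrad(x):
    # one selection from the subdifferential of f at x
    s = 1.0 if x > 1.0 else (-1.0 if x < 1.0 else 0.0)
    s += 1.0 if x > 0.0 else (-1.0 if x < 0.0 else 0.0)
    return s

x = 3.0
for k in range(1, 2000):
    x -= subgrad(x) / k   # diminishing steps: classical subgradient method

print(x, f(x))  # x lands in the set of minimizers [0, 1]
```

Note that the iterates need not decrease f monotonically, which is exactly why the continuous-time notion of curves of (near) maximal slope is a more robust object of study.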

Abstract: In four or five lectures, we will describe and formulate recent economic equilibrium models for markets that include goods and financial transactions. We will then show how variational analysis allows us to prove existence results for equilibrium points and to develop the corresponding sensitivity analysis. In the last lecture we will introduce some new approaches for computing these equilibria.

Abstract.

Lecture 1: Second-order Subdifferential Calculus. This lecture is mainly devoted to basic and recent results on second-order subdifferential calculus in variational analysis. In contrast to the first-order generalized differential theory, the second-order theory is much less developed. Developing a dual-space approach to first-order and second-order variational analysis, we discuss major motivations, recent advances, and applications of the second-order subdifferential theory.

Lecture 2: Optimal Control of the Sweeping Process. This lecture is devoted to applications of the first-order and second-order generalized differential theory of variational analysis to optimal control of the sweeping (Moreau) process, which is important for variational theory and mechanical applications. The main results provide verifiable necessary optimality conditions, in terms of the initial data, for this new class of control problems governed by discontinuous differential inclusions with variable right-hand sides. Our approach is based on the method of discrete approximations and advanced tools of variational analysis and generalized differentiation.

Lecture 3: Variational Analysis in Semi-infinite and Infinite Programming. The lecture concerns the study of new classes of nonlinear and nonconvex optimization problems of so-called infinite programming, which are generally defined on infinite-dimensional spaces of decision variables and contain infinitely many equality and inequality constraints with arbitrary (possibly noncompact) index sets. These problems reduce to semi-infinite programs in the case of finite-dimensional spaces of decision variables. We extend the classical Mangasarian-Fromovitz and Farkas-Minkowski constraint qualifications to such infinite and semi-infinite programs. The new qualification conditions are used for efficiently computing the appropriate normal cones to the sets of feasible solutions of these programs by employing advanced tools of variational analysis and generalized differentiation. As a further development, we derive first-order necessary optimality conditions for infinite and semi-infinite programs, which are new in both finite-dimensional and infinite-dimensional settings.
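A semi-infinite constraint of the kind discussed here is a single inequality indexed by an infinite set. The following sketch, with a made-up constraint function and a naive grid discretization of the index set (an assumption for illustration, not the lecture's method), checks feasibility numerically:

```python
import numpy as np

# Hypothetical illustration of a semi-infinite constraint:
#   g(x, t) = t * x - t**2 <= 0   for all t in the infinite index set [0, 1].
# Here the feasible set is exactly { x : x <= 0 }.

def violated(x, n=10_001):
    """Approximate sup over t in [0, 1] of g(x, t) on a grid."""
    t = np.linspace(0.0, 1.0, n)
    return np.max(t * x - t**2) > 1e-12

print(violated(0.5))    # x = 0.5 violates the constraint near t = 0.25
print(violated(-0.2))   # x = -0.2 is feasible: g(x, t) <= 0 for all t
```

Discretization replaces the semi-infinite program by an ordinary one with finitely many constraints; the constraint qualifications discussed in the lecture are what make the passage to the true infinite index set rigorous.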

Lecture 4: Generalized Newton Methods for Nonsmooth Equations and Robust Optimization. In this lecture we consider applications of variational analysis to numerical methods of Newton's type for solving nonlinear equations described by nonsmooth continuous functions. Based on advanced techniques of variational analysis and generalized differentiation, we establish the well-posedness of the proposed algorithm involving graphical derivatives, its local superlinear and quadratic convergence, and its global convergence of the Kantorovich type. Some implementations and further developments are given for problems of robust best approximation with interpolation constraints under ellipsoid uncertainty.
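As a one-dimensional caricature of a Newton-type step for a nonsmooth equation, the following sketch iterates x_{k+1} = x_k - f(x_k)/g_k, where g_k is taken from the Clarke generalized derivative of f at x_k; the function and starting point are made up and are not the methods or examples of the lecture:

```python
# Hypothetical sketch: generalized Newton step for the nonsmooth
# scalar equation f(x) = 0 (all data made up).

def f(x):
    # piecewise-linear, nondifferentiable at x = 0; unique root at x = 0.5
    return max(x, 2.0 * x) - 1.0

def g(x):
    # a selection from the Clarke generalized derivative of f at x
    return 2.0 if x >= 0 else 1.0

x = -3.0
for _ in range(50):
    if abs(f(x)) < 1e-12:
        break
    x -= f(x) / g(x)      # Newton step with a generalized derivative

print(x)  # converges (here in finitely many steps) to the root 0.5
```

For piecewise-linear equations such iterations can terminate finitely; the superlinear and quadratic convergence results mentioned above concern the general semismooth setting.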