
1.3 Continuous optimization problems: general form

As explained before, this course focuses exclusively on continuous optimization problems, where the variables belong to continuous spaces (e.g., $\mathbb{X} = \mathbb{R}^n$), and where tools from calculus and convex analysis are applicable. Throughout, we adopt the presentation and terminology of Boyd & Vandenberghe (2004), which is widespread in the optimization literature.

A general continuous optimization problem can be formulated as

$$
\begin{array}{lll}
\text{minimize}   & f_0(x) & \\
\text{subject to} & f_i(x) \leq 0, & i = 1, \ldots, m\\
                  & h_i(x) = 0,    & i = 1, \ldots, p
\end{array}
$$

This notation describes the problem of finding a vector $x$ that minimizes the objective function $f_0(x)$ among those satisfying $f_i(x) \leq 0$, $i = 1, \ldots, m$, and $h_i(x) = 0$, $i = 1, \ldots, p$.
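As a concrete instance of this general form (an illustrative example of our own, with $n = 2$, $m = 1$, $p = 1$):

```latex
\begin{array}{lll}
\text{minimize}   & x_1^2 + x_2^2 & \\
\text{subject to} & x_1 + x_2 - 1 \leq 0 & \quad (f_1(x) = x_1 + x_2 - 1)\\
                  & x_1 - x_2 = 0        & \quad (h_1(x) = x_1 - x_2)
\end{array}
```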

Terminology: objective, constraints, domain and feasible set

Here $f_0$ is the objective function, the functions $f_i$ are the inequality constraints, the functions $h_j$ are the equality constraints, and $\mathcal{D}$ denotes the domain of the problem, i.e., the set of points on which all these functions are defined. The feasible set collects the points of the domain satisfying all constraints:

$$
\Omega = \{ x \in \mathcal{D} \mid f_i(x) \leq 0,\ i = 1, \ldots, m,\; h_j(x) = 0,\ j = 1, \ldots, p \}.
$$

If $\Omega = \emptyset$, there are no feasible points and the problem is said to be infeasible. Otherwise, if there is at least one feasible point, the problem is said to be feasible.
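These definitions can be sketched directly in code. The following toy problem and helper names are our own, not taken from any library; the point is only that feasibility depends on the constraints alone, not on the objective:

```python
# Toy problem (illustrative):
#   minimize   (x1 - 1)^2 + (x2 - 2)^2
#   subject to x1 + x2 - 1 <= 0   (one inequality constraint, m = 1)
#              x1 - x2      = 0   (one equality constraint,  p = 1)

def f0(x):
    """Objective function; plays no role in deciding feasibility."""
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

ineq = [lambda x: x[0] + x[1] - 1.0]   # each f_i, required to satisfy f_i(x) <= 0
eq   = [lambda x: x[0] - x[1]]         # each h_j, required to satisfy h_j(x) = 0

def is_feasible(x, tol=1e-9):
    """x is in the feasible set iff every f_i(x) <= 0 and every h_j(x) = 0."""
    return (all(fi(x) <= tol for fi in ineq)
            and all(abs(hj(x)) <= tol for hj in eq))

print(is_feasible([0.5, 0.5]))  # True:  0.5 + 0.5 - 1 = 0 and 0.5 - 0.5 = 0
print(is_feasible([1.0, 1.0]))  # False: 1 + 1 - 1 = 1 > 0
```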

Optimal value and optimal points

An optimization problem seeks to find the optimal value of the objective function over the feasible set. Formally, the optimal value $p^\star$ is defined as:

$$
p^\star = \inf_{x \in \Omega} f_0(x),
$$

where $\Omega$ is the feasible set defined above. Depending on the problem, three situations may arise:

- if the problem is infeasible ($\Omega = \emptyset$), the convention $p^\star = +\infty$ is adopted;
- if $f_0$ is unbounded below on $\Omega$, then $p^\star = -\infty$;
- otherwise, $p^\star$ is finite (though it may or may not be attained at a feasible point).
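Simple one-dimensional examples (of our own choosing) realize each of these situations:

```latex
\inf_{x \in \mathbb{R}} x^2 = 0 \quad \text{(finite, attained at } x^\star = 0\text{)}, \qquad
\inf_{x > 0} \tfrac{1}{x} = 0 \quad \text{(finite, not attained)}, \qquad
\inf_{x \in \mathbb{R}} (-x) = -\infty \quad \text{(unbounded below)}.
```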

In the next section, we will introduce the notions of global and local optima.

About notations and terminology

Many names are used in the literature to refer to the field of optimization, including:

- mathematical optimization;
- mathematical programming;
- numerical optimization.

All these terms refer to the study of optimization problems and are used interchangeably.

How to write an optimization problem?

In the literature, you will encounter several equivalent notations for expressing the same optimization problem. In this lecture, we adopt the notation of Boyd & Vandenberghe (2004), which uses the minimize and subject to keywords, for example:

$$
\begin{array}{ll}
\text{minimize}   & f_0(x) \\
\text{subject to} & x \in \Omega
\end{array}
$$

In more mathematical or theoretical contexts, the same problem is often written as:

$$
\inf_{x \in \Omega} f_0(x).
$$

Some classical textbooks (e.g., Nocedal & Wright, 2006) use the more compact notation:

$$
\min_{x \in \Omega} f_0(x),
$$

which is widely accepted, even though technically the minimum may not exist (e.g., if the problem is unbounded below).

To emphasize that one is interested in the minimizers themselves, and not only in the optimal value, one often writes:

$$
x^\star \in \arg\min_{x \in \Omega} f_0(x),
$$

a notation frequently encountered in fields like signal processing.
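As a rough numerical illustration of this notation (a brute-force sketch with names of our own, not an actual solver), one can approximate a minimizer by sampling the feasible set:

```python
# Approximate  x* in argmin_{x in [0, 1]} (x - 2)^2  by sampling Omega on a grid.
# This is a crude illustrative method, not how real solvers work.

def f0(x):
    return (x - 2.0) ** 2

# Feasible set Omega = [0, 1], discretized into 1001 points.
grid = [i / 1000.0 for i in range(1001)]

x_star = min(grid, key=f0)   # a minimizer over the sampled points
p_star = f0(x_star)          # the corresponding (approximate) optimal value

print(x_star)  # 1.0: the minimum lies on the boundary of Omega
print(p_star)  # 1.0, since (1 - 2)^2 = 1
```

Since $f_0(x) = (x - 2)^2$ is decreasing on $[0, 1]$, the minimizer sits at the boundary point $x^\star = 1$, which the grid search recovers exactly here.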

References
  1. Boyd, S. P., & Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
  2. Nocedal, J., & Wright, S. J. (2006). Numerical optimization (Second Edition). Springer.