Several optimization problems are so frequently encountered that they are given specific names. Recall the most general form of optimization problems encountered in this course:

$$
\begin{array}{ll}
\underset{x \in \mathbb{R}^n}{\text{minimize}} & f(x) \\
\text{subject to} & g_i(x) \leq 0, \quad i = 1, \dots, m \\
& h_j(x) = 0, \quad j = 1, \dots, p
\end{array}
$$
For specific choices of objective and constraint functions, one obtains different classes of optimization problems.
Some important examples are:
- Linear programming,
- Quadratic programming,
- Convex programming,
- Multi-objective programming, etc.
For each of these cases, we will review the standard form and give some examples.
## Linear programming
When both the objective function and constraints are affine, one obtains a Linear Program (LP).
### LP general form

$$
\begin{array}{ll}
\underset{x \in \mathbb{R}^n}{\text{minimize}} & c^\top x + d \\
\text{subject to} & G x \leq h \\
& A x = b
\end{array}
$$
### Comments

- The constant term $d$ does not affect the solution to the optimization problem; it only shifts the optimal value.
- The feasible set is a polyhedron.
- LPs are frequently encountered in operations research.
- Solving LPs requires specific algorithms (outside the scope of this course), such as Dantzig's simplex algorithm.
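In practice, LPs of modest size can be solved with off-the-shelf solvers. The sketch below uses SciPy's `linprog` (which wraps the HiGHS solvers) on a small made-up instance; the data is purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative LP: minimize x1 + 2*x2 subject to x1 + x2 >= 1 and x >= 0.
c = np.array([1.0, 2.0])      # objective coefficients
G = np.array([[-1.0, -1.0]])  # linprog expects G x <= h, so x1 + x2 >= 1 becomes -x1 - x2 <= -1
h = np.array([-1.0])

res = linprog(c, A_ub=G, b_ub=h)  # x >= 0 is linprog's default bound
print(res.x, res.fun)             # optimal point and optimal value
```

Note that the optimum is attained at a vertex of the feasible polyhedron, which is exactly what the simplex method exploits.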
### Example: diet problem
This example is reproduced from Boyd & Vandenberghe (2004, Section 4.3.1.).
A healthy diet contains $m$ different nutrients in quantities at least equal to $b_1, \dots, b_m$. We can compose such a diet by choosing quantities $x_1, \dots, x_n$ of $n$ different foods. One unit of food $j$ contains an amount $a_{ij}$ of nutrient $i$ and has a cost of $c_j$.
Question: Write the optimization problem corresponding to choosing the cheapest healthy diet.
Solution
The corresponding optimization problem is

$$
\begin{array}{ll}
\underset{x \in \mathbb{R}^n}{\text{minimize}} & c^\top x \\
\text{subject to} & A x \geq b \\
& x \geq 0
\end{array}
$$

where $x = (x_1, \dots, x_n)$ contains the (non-negative) quantities of each food.
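A small numerical sketch of the diet problem, with made-up nutrient contents, requirements, and costs (the data below is illustrative, not from the reference). Since `linprog` only accepts upper-bound inequalities, $Ax \geq b$ is passed as $-Ax \leq -b$.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: 2 nutrients, 3 foods.
A = np.array([[2.0, 1.0, 0.0],   # nutrient content per unit of each food
              [0.0, 1.0, 3.0]])
b = np.array([8.0, 6.0])         # minimum required quantity of each nutrient
c = np.array([1.5, 1.0, 2.0])    # cost per unit of each food

# A x >= b is rewritten as -A x <= -b; x >= 0 is linprog's default bound.
res = linprog(c, A_ub=-A, b_ub=-b)
print(res.x)  # cheapest healthy diet
```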
## Quadratic programming
When the objective function is quadratic and the constraints are affine, we obtain a Quadratic Program (QP).
### QP general form

$$
\begin{array}{ll}
\underset{x \in \mathbb{R}^n}{\text{minimize}} & \frac{1}{2} x^\top P x + q^\top x + r \\
\text{subject to} & G x \leq h \\
& A x = b
\end{array}
$$

where $P$ is a symmetric matrix (but not necessarily positive semidefinite).
### Comments

- As in LPs, the feasible set is a polyhedron.
- QPs are fundamental in many fields, including signal processing, parameter estimation, regression, machine learning, and econometrics.
- QPs have a direct connection with the least-squares problems that we will consider later on.
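As an illustration of this connection: when $P$ is positive definite and there are no constraints, the QP has a closed-form solution obtained by setting the gradient $Px + q$ to zero. A minimal sketch with made-up data:

```python
import numpy as np

# Unconstrained QP: minimize (1/2) x^T P x + q^T x, with P symmetric positive definite.
P = np.array([[4.0, 1.0],
              [1.0, 3.0]])
q = np.array([1.0, -2.0])

# The gradient is P x + q; setting it to zero gives the minimizer in closed form.
x_star = np.linalg.solve(P, -q)

# Sanity check: the gradient vanishes at x_star.
grad = P @ x_star + q
print(x_star, grad)
```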
### Example: Gaussian maximum likelihood estimation
We want to estimate the mean $\mu$ of a Gaussian random variable $y \sim \mathcal{N}(\mu, \Sigma)$, with known covariance matrix $\Sigma$, from $N$ i.i.d. measurements $y_1, \dots, y_N$.

Recall that

$$
p(y; \mu) = \frac{1}{(2\pi)^{n/2} \, |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (y - \mu)^\top \Sigma^{-1} (y - \mu) \right).
$$

Maximum likelihood (ML) principle

The function $L(\mu) = \prod_{i=1}^{N} p(y_i; \mu)$ is called the likelihood function. The goal of ML estimation is to find the parameter $\mu$ for which the observed data has the highest joint probability.
Question: what is the corresponding optimization problem?
Solution

Maximizing the likelihood is equivalent to minimizing the negative log-likelihood. Dropping the terms that do not depend on $\mu$, we obtain

$$
\underset{\mu \in \mathbb{R}^n}{\text{minimize}} \quad \sum_{i=1}^{N} (y_i - \mu)^\top \Sigma^{-1} (y_i - \mu),
$$

which is an (unconstrained) QP in $\mu$.
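This can be checked numerically: the minimizer of the ML objective above is the sample mean of the measurements. In the sketch below, the covariance and true mean are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5],    # known covariance (made up)
                  [0.5, 1.0]])
mu_true = np.array([1.0, -1.0])  # ground truth, unknown to the estimator
y = rng.multivariate_normal(mu_true, Sigma, size=500)  # i.i.d. measurements

# The minimizer of sum_i (y_i - mu)^T Sigma^{-1} (y_i - mu) is the sample mean.
mu_ml = y.mean(axis=0)
print(mu_ml)
```

With 500 samples, the estimate lands close to the true mean, as expected from the law of large numbers.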
## Convex programming
LPs and QPs (under some conditions) are instances of convex optimization problems.
In the next chapter, we will precisely define this notion and see why it plays such an important role in optimization problems.
### General form of a convex problem

$$
\begin{array}{ll}
\underset{x \in \mathbb{R}^n}{\text{minimize}} & f(x) \\
\text{subject to} & g_i(x) \leq 0, \quad i = 1, \dots, m \\
& a_j^\top x = b_j, \quad j = 1, \dots, p
\end{array}
$$

where $f$ and the $g_i$ are convex functions.
### Comments

- In convex problems, equality constraints must be affine functions, i.e., of the form $a^\top x = b$. Proof in exercise!
- CPs have the important property that any local optimum is also a global optimum.
- Many practical problems in engineering, economics, and data science can be formulated as convex programs.
- The theory of convex optimization provides strong guarantees on solution existence, uniqueness, and stability.
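The local-equals-global property can be illustrated numerically: running a local method from very different starting points on a smooth convex objective yields the same minimizer. A sketch with a least-squares objective (made-up data) and SciPy's general-purpose `minimize`:

```python
import numpy as np
from scipy.optimize import minimize

# Smooth convex objective: f(x) = ||A x - b||^2 (a least-squares problem).
A = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
f = lambda x: np.sum((A @ x - b) ** 2)

# Two runs of a local method from very different starting points...
x1 = minimize(f, x0=np.array([10.0, -10.0])).x
x2 = minimize(f, x0=np.array([-5.0, 5.0])).x
# ...reach the same minimizer: for convex problems, local optima are global.
print(x1, x2)
```

For a non-convex objective, the two runs could instead end up in different local minima; convexity is what rules this out.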
In this course, we will mostly focus on convex optimization problems.
- Boyd, S. P., & Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.