Convex Eval

1 Convex Sets

1.1 Generalities

1.1.1 Definitions and first examples

Definition 1.1.1

The set \(C \subset \mathbb {R}^n\) is said to be convex if \( \alpha x + (1-\alpha )x'\) is in \(C\) whenever \(x\) and \(x'\) are in \(C\), and \(\alpha \in ]0,1[\) (or equivalently \(\alpha \in [0,1]\)).
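To make the definition concrete, here is a minimal numerical sketch in Python (assuming numpy is available); the unit disc and its membership test `in_C` are hypothetical illustrations, not part of the text.

```python
import numpy as np

# Membership oracle for C = closed unit disc in R^2 (hypothetical example).
in_C = lambda z: np.linalg.norm(z) <= 1.0

rng = np.random.default_rng(0)
for _ in range(1000):
    x, xp = rng.uniform(-1, 1, 2), rng.uniform(-1, 1, 2)
    if in_C(x) and in_C(xp):
        a = rng.uniform(0.0, 1.0)
        # The defining property: the segment [x, x'] stays in C.
        assert in_C(a * x + (1 - a) * xp)
```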

1.1.2 Convexity preserving operations on sets

Proposition 1.1.1

Let \(\{ C_j\} _{j\in J}\) be an arbitrary family of convex sets. Then

\[ C:=\bigcap \{ C_j:\; j\in J\} \]

is convex.

Proof

Immediate from the very Definition 1.1.1.

Proposition 1.1.2

For \(i=1,\dots ,k\), let \(C_i\subset \mathbb {R}^{n_i}\) be convex sets. Then \(C_1\times \cdots \times C_k\) is a convex set of \(\mathbb {R}^{n_1}\times \cdots \times \mathbb {R}^{n_k}\).

Proof

Straightforward.

Proposition 1.1.3

Let \(A:\mathbb {R}^n\to \mathbb {R}^m\) be an affine mapping and \(C\) a convex set of \(\mathbb {R}^n\). The image \(A(C)\) of \(C\) under \(A\) is convex in \(\mathbb {R}^m\).

If \(D\) is a convex set of \(\mathbb {R}^m\), the inverse image

\[ A^{-1}(D):=\{ x\in \mathbb {R}^n:\; A(x)\in D\} \]

is convex in \(\mathbb {R}^n\).

Proof

For \(x\) and \(x'\) in \(\mathbb {R}^n\), the image under \(A\) of the segment \([x,x']\) is clearly the segment \([A(x),A(x')]\subset \mathbb {R}^m\). This proves the first claim, but also the second: indeed, if \(x\) and \(x'\) are such that \(A(x)\) and \(A(x')\) are both in the convex set \(D\), then every point of the segment \([x,x']\) has its image in \([A(x),A(x')]\subset D\).

Proposition 1.1.4

If \(C\) is convex, so are its interior \(\operatorname {int}C\) and its closure \(\overline{C}\).

Proof

For given different \(x\) and \(x'\), and \(\alpha \in ]0,1[\), we set \(x''=\alpha x+(1-\alpha )x'\in ]x,x'[\).

Take first \(x\) and \(x'\) in \(\operatorname {int}C\). Choosing \(\delta {\gt}0\) such that \(B(x',\delta )\subset C\), we show that \(B(x'',(1-\alpha )\delta )\subset C\). As often in convex analysis, it is probably best to draw a picture. The ratio \(\| x''-x\| /\| x'-x\| \) being precisely \(1-\alpha \), Fig. 1.2.3 clearly shows that \(B(x'',(1-\alpha )\delta )\) is just the set \(\alpha x+(1-\alpha )B(x',\delta )\), each of whose points lies on a segment joining \(x\in C\) to a point of \(B(x',\delta )\subset C\); by convexity this set is contained in \(C\), hence \(x''\in \operatorname {int}C\).

Now, take \(x\) and \(x'\) in \(\operatorname {cl}C\): we select in \(C\) two sequences \(\{ x_k\} \) and \(\{ x_k'\} \) converging to \(x\) and \(x'\) respectively. Then, \(\alpha x_k + (1-\alpha )x_k'\) is in \(C\) and converges to \(\alpha x + (1-\alpha )x'\), which is therefore in \(\operatorname {cl}C\).

1.1.3 Convex combinations and convex hulls

Definition 1.1.2

A convex combination of elements \(x_1,\ldots ,x_k\) in \(\mathbb {R}^n\) is an element of the form

\[ \sum _{i=1}^k \alpha _i x_i \]

where

\[ \sum _{i=1}^k \alpha _i = 1 \quad \text{and}\quad \alpha _i \ge 0\ \text{for } i=1,\ldots ,k. \]
Proposition 1.1.5

A set \(C\subset \mathbb {R}^n\) is convex if and only if it contains every convex combination of its elements.

Proof

The condition is sufficient: convex combinations of two elements just make up the segment joining them. To prove necessity, take \(x_1,\dots ,x_k\) in \(C\) and \(\alpha =(\alpha _1,\dots ,\alpha _k)\in \Delta _k\), the unit simplex of coefficients \(\alpha _i\ge 0\) summing to \(1\). At least one of the \(\alpha _i\)’s is positive, say \(\alpha _1{\gt}0\). Then form

\[ y_2:=\frac{\alpha _1}{\alpha _1+\alpha _2}x_1+\frac{\alpha _2}{\alpha _1+\alpha _2}x_2 \quad \left[= \frac{1}{\alpha _1+\alpha _2}(\alpha _1x_1+\alpha _2x_2)\right] \]

which is in \(C\) by Definition 1.1.1 itself. Therefore,

\[ y_3:=\frac{\alpha _1+\alpha _2}{\alpha _1+\alpha _2+\alpha _3}y_2+\frac{\alpha _3}{\alpha _1+\alpha _2+\alpha _3}x_3 \quad \left[= \frac{1}{\sum _{i=1}^3\alpha _i}\sum _{i=1}^3\alpha _i x_i\right] \]

is in \(C\) for the same reason; and so on until

\[ y_k:=\frac{\alpha _1+\cdots +\alpha _{k-1}}{1}y_{k-1}+\frac{\alpha _k}{1}x_k \quad \left[= \tfrac {1}{1}\sum _{i=1}^k\alpha _i x_i\right]. \]
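The proof is effectively an algorithm: fold the combination two points at a time, carrying the cumulated weight. A short Python sketch (assuming numpy; the random data is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))          # x_1, ..., x_5 in R^3
alpha = rng.random(5); alpha /= alpha.sum()

y, w = X[0], alpha[0]                    # start with x_1, cumulated weight alpha_1
for xi, ai in zip(X[1:], alpha[1:]):
    y = (w * y + ai * xi) / (w + ai)     # y_i, as in the proof
    w += ai                              # cumulated weight alpha_1 + ... + alpha_i

assert np.isclose(w, 1.0) and np.allclose(y, alpha @ X)
```
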
Proposition 1.1.6

The convex hull can also be described as the set of all convex combinations:

\[ \operatorname {co}S:=\bigcap \{ C: C\ \text{is convex and contains }S\} = \Big\{ x\in \mathbb {R}^n : \text{for some }k\in \mathbb {N}_*,\ \text{there exist }x_1,\dots ,x_k\in S\ \text{and } \alpha =(\alpha _1,\dots ,\alpha _k)\in \Delta _k\ \text{such that }\sum _{i=1}^k\alpha _i x_i=x\Big\} . \tag {1.3.2} \]
Proof

Call \(T\) the set described in the rightmost side of (1.3.2). Clearly, \(T\supset S\). Also, if \(C\) is convex and contains \(S\), then it contains all convex combinations of elements of \(S\) (Proposition 1.1.5), hence \(C\supset T\); taking intersections, \(\operatorname {co}S\supset T\). For the converse inclusion, it suffices to show that \(T\) is itself convex: being then a convex set containing \(S\), it contains \(\operatorname {co}S\).

For this, take two points \(x\) and \(y\) in \(T\), characterized respectively by \((x_{1},\alpha _{1}),\dots ,\) \((x_{k},\alpha _{k})\) and by \((y_{1},\beta _{1}),\dots ,(y_{\ell },\beta _{\ell })\); take also \(\lambda \in [0,1]\). Then \(\lambda x+(1-\lambda )y\) is a certain combination of \(k+\ell \) elements of \(S\); this combination is convex because its coefficients \(\lambda \alpha _{i}\) and \((1-\lambda )\beta _{j}\) are nonnegative, and their sum is

\[ \lambda \sum _{i=1}^{k}\alpha _{i}+(1-\lambda )\sum _{j=1}^{\ell }\beta _{j}=\lambda +1-\lambda =1. \]
Theorem 1.1.1

(C. Carathéodory) Any \(x\in \operatorname {co}S\subset \mathbb {R}^n\) can be represented as a convex combination of \(n+1\) elements of \(S\).

Proof

Take an arbitrary convex combination \(x=\sum _{i=1}^k \alpha _i x_i\) of elements of \(S\); if \(k\le n+1\), there is nothing to prove, so suppose \(k{\gt}n+1\). We will show that one of the \(x_i\)’s can be assigned a \(0\)-coefficient without changing \(x\). For this, assume that all coefficients \(\alpha _i\) are positive (otherwise we are done).

The \(k{\gt}n+1\) elements \(x_i\) are certainly affinely dependent: (1.3.1) tells us that we can find \(\delta _1,\dots ,\delta _k\), not all zero, such that

\[ \sum _{i=1}^k \delta _i x_i = 0 \quad \text{and}\quad \sum _{i=1}^k \delta _i = 0. \]

There is at least one positive \(\delta _i\) (the \(\delta _i\)’s sum to \(0\) without all being \(0\)); we can therefore set \(\alpha _i':=\alpha _i-t^*\delta _i\) for \(i=1,\dots ,k\), where

\[ t^*:=\max \{ t\ge 0:\ \alpha _i-t\delta _i\ge 0\ \text{for }i=1,\dots ,k\} =\min _{\delta _j{\gt}0}\frac{\alpha _j}{\delta _j}. \]

Clearly enough,

\[ \alpha _i'\ge 0\quad \text{for }i=1,\dots ,k \qquad [\text{automatic if }\delta _i\le 0,\ \text{by construction of }t^*\text{ if }\delta _i{\gt}0] \]
\[ \sum _{i=1}^k \alpha _i'=\sum _{i=1}^k \alpha _i - t^*\sum _{i=1}^k \delta _i =1; \]
\[ \sum _{i=1}^k \alpha _i' x_i = x - t^* \sum _{i=1}^k \delta _i x_i = x; \]

and there exists \(i_0\) such that \(\alpha _{i_0}'=0\).   [by construction of \(t^*\)]

In other words, we have expressed \(x\) as a convex combination of \(k-1\) among the \(x_i\)’s; our claim is proved.

Now, if \(k-1=n+1\), the proof is finished. If not, we can apply the above construction to the convex combination \(x=\sum _{i=1}^{k-1}\alpha _i' x_i\) and so on. The process can be continued until there remain only \(n+1\) elements (which may be affinely independent).
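The coefficient-zeroing step of this proof translates directly into code. Below is a sketch in Python (numpy only); the helper `caratheodory_reduce` and its tolerance are our own illustrative choices, not part of the text. The affine dependence is obtained as a null vector of the stacked system \(\sum _i\delta _ix_i=0\), \(\sum _i\delta _i=0\), and one point is eliminated per pass:

```python
import numpy as np

def caratheodory_reduce(points, weights, tol=1e-12):
    """Rewrite a convex combination in R^n using at most n + 1 points,
    zeroing one coefficient per pass as in the proof above."""
    pts = [np.asarray(p, dtype=float) for p in points]
    w = np.asarray(weights, dtype=float)
    n = pts[0].size
    while len(pts) > n + 1:
        # Affine dependence: sum(delta_i x_i) = 0 and sum(delta_i) = 0,
        # i.e. a null vector of the (n+1) x k matrix [x_i ; 1].
        A = np.vstack([np.column_stack(pts), np.ones(len(pts))])
        delta = np.linalg.svd(A)[2][-1]
        if delta.max() <= 0:
            delta = -delta                    # ensure some delta_i > 0
        pos = delta > tol
        t_star = np.min(w[pos] / delta[pos])  # t* = min over delta_j > 0 of alpha_j / delta_j
        w = w - t_star * delta                # new coefficients alpha_i'
        keep = w > tol                        # at least one coefficient vanished
        pts = [p for p, k in zip(pts, keep) if k]
        w = w[keep] / w[keep].sum()           # renormalize against round-off
    return pts, w

# Example: a combination of 6 points in R^2 reduced to at most 3.
rng = np.random.default_rng(2)
P = rng.standard_normal((6, 2))
alpha = rng.random(6); alpha /= alpha.sum()
x = alpha @ P
pts, w = caratheodory_reduce(P, alpha)
assert len(pts) <= 3 and np.allclose(sum(wi * p for wi, p in zip(w, pts)), x)
```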

Theorem 1.1.2

(W. Fenchel and L. Bunt) If \(S\subset \mathbb {R}^n\) has no more than \(n\) connected components (in particular, if \(S\) is connected), then any \(x\in \operatorname {co}S\) can be expressed as a convex combination of \(n\) elements of \(S\).

1.1.4 Closed convex sets and hulls

Definition 1.1.3

The closed convex hull of a nonempty set \(S\subset \mathbb {R}^n\) is the intersection of all closed convex sets containing \(S\). It will be denoted by \(\overline{\operatorname {co}}\, S\).

Proposition 1.1.7

The closed convex hull \(\overline{\operatorname {co}}S\) of Definition 1.1.3 is the closure \(\operatorname {cl}(\operatorname {co}S)\) of the convex hull of \(S\).

Proof

Because \(\operatorname {cl}(\operatorname {co}S)\) is a closed convex set containing \(S\), it contains \(\overline{\operatorname {co}}S\) as well. On the other hand, take a closed convex set \(C\) containing \(S\); being convex, \(C\) contains \(\operatorname {co}S\); being closed, it also contains the closure of \(\operatorname {co}S\). Since \(C\) was arbitrary, we conclude \(\overline{\operatorname {co}}S\supset \operatorname {cl}(\operatorname {co}S)\).

Theorem 1.1.3

If \(S\) is bounded [resp. compact], then \(\operatorname {co}S\) is bounded [resp. compact].

Proof

Let \(x\in \operatorname {co}S\); thanks to Theorem 1.1.1, we can write \(x=\sum _{i=1}^{n+1}\alpha _i x_i\) with the \(x_i\)’s in \(S\) and \(\alpha \in \Delta _{n+1}\). If \(S\) is bounded, say by \(M\), we can write

\[ \| x\| \le \sum _{i=1}^{n+1}\alpha _i\| x_i\| \le M\sum _{i=1}^{n+1}\alpha _i = M. \]

Now take a sequence \(\{ x^k\} \subset \operatorname {co}S\). For each \(k\) we can choose

\[ x^k_1,\dots ,x^k_{n+1}\in S\qquad \text{and}\qquad \alpha ^k=(\alpha ^k_1,\dots ,\alpha ^k_{n+1})\in \Delta _{n+1} \]

such that \(x^k=\sum _{i=1}^{n+1}\alpha ^k_i x^k_i\). Note that \(\Delta _{n+1}\) is compact. If \(S\) is compact, we can extract a subsequence as many times as necessary (not more than \(n+2\) times) so that \(\{ \alpha ^k\} \) and each \(\{ x^k_i\} \) converge: we end up with an index set \(K\subset \mathbb {N}\) such that, when \(k\to +\infty \),

\[ \{ x^k_i\} _{k\in K}\to x_i\in S\qquad \text{and}\qquad \{ \alpha ^k\} _{k\in K}\to \alpha \in \Delta _{n+1}. \]

Passing to the limit for \(k\in K\), we see that \(\{ x^k\} _{k\in K}\) converges to a point \(x\), which can be expressed as a convex combination of points of \(S\): \(x\in \operatorname {co}S\), whose compactness is thus established.

Definition 1.1.4

A conical combination of elements \(x_1,\dots ,x_k\) is an element of the form \(\sum _{i=1}^k \alpha _i x_i\), where the coefficients \(\alpha _i\) are nonnegative.

The set of all conical combinations from a given nonempty \(S\subset \mathbb {R}^n\) is the conical hull of \(S\). It is denoted by \(\operatorname {cone} S\).

Definition 1.1.5

The closed conical hull (or rather closed convex conical hull) of a nonempty set \(S\subset \mathbb {R}^n\) is

\[ \overline{\operatorname {cone} S}:=\operatorname {cl}\big(\operatorname {cone} S\big) =\overline{\Big\{ \sum _{i=1}^k \alpha _i x_i:\ \alpha _i\ge 0,\ x_i\in S\ \text{ for } i=1,\dots ,k\Big\} }. \]
Proposition 1.1.8

Let \(S\) be a nonempty compact set such that \(0\notin \operatorname {co}S\). Then

\[ \overline{\operatorname {cone}S}=\mathbb {R}^{+}(\operatorname {co}S)\quad [=\operatorname {cone}S]. \]
Proof

The set \(C:=\operatorname {co}S\) is compact and does not contain the origin; we prove that \(\mathbb {R}^{+}C\) is closed. Let \(\{ t_kx_k\} \subset \mathbb {R}^{+}C\) converge to \(y\); extracting a subsequence if necessary, we may suppose \(x_k\to x\in C\); note: \(x\neq 0\). We write

\[ t_k\frac{x_k}{\| x_k\| }\longrightarrow \frac{y}{\| x\| }, \]

which implies \(t_k\to \| y\| /\| x\| =:t\ge 0\). Then, \(t_kx_k\to tx=y\), which is thus in \(\mathbb {R}^{+}C\).

1.2 Convex Sets attached to a Convex Set

1.2.1 Relative interior

Definition 1.2.1

The relative interior \(\operatorname {ri} C\) (or \(\operatorname {relint} C\)) of a convex set \(C \subset \mathbb {R}^n\) is the interior of \(C\) for the topology relative to the affine hull of \(C\). In other words: \(x \in \operatorname {ri} C\) if and only if

\[ x \in \operatorname {aff} C \quad \text{and}\quad \exists \delta {\gt}0\ \text{such that}\ (\operatorname {aff} C)\cap B(x,\delta )\subset C . \]

The dimension of a convex set \(C\) is the dimension of its affine hull, that is to say the dimension of the subspace parallel to \(\operatorname {aff} C\).

Theorem 1.2.1

If \(C\neq \varnothing \), then \(\operatorname {ri}C\neq \varnothing \). In fact, \(\dim (\operatorname {ri}C)=\dim C\).

Proof

Let \(k:=1+\dim C\). Since \(\operatorname {aff}C\) has dimension \(k-1\), \(C\) contains \(k\) affinely independent elements \(x_1,\dots ,x_k\). Call \(\Delta :=\operatorname {co}\{ x_1,\dots ,x_k\} \) the simplex that they generate; see Fig. 2.1.1; \(\operatorname {aff}\Delta =\operatorname {aff}C\) because \(\Delta \subset C\) and \(\dim \Delta =k-1\). The proof will be finished if we show that \(\Delta \) has nonempty relative interior.

Take \(\bar{x}:=\tfrac {1}{k}\sum _{i=1}^k x_i\) (the “center” of \(\Delta \)) and describe \(\operatorname {aff}\Delta \) by points of the form

\[ \bar{x}+y=\bar{x}+\sum _{i=1}^k \alpha _i(y)x_i =\sum _{i=1}^k\Big[\tfrac {1}{k}+\alpha _i(y)\Big]x_i, \]

where \(\alpha (y)=(\alpha _1(y),\dots ,\alpha _k(y))\in \mathbb R^k\) solves

\[ \sum _{i=1}^k \alpha _i x_i = y,\qquad \sum _{i=1}^k \alpha _i = 0. \]

Because this system has a unique solution, the mapping \(y\mapsto \alpha (y)\) is (linear and) continuous: we can find \(\delta {\gt}0\) such that \(\| y\| \le \delta \) implies

\[ |\alpha _i(y)|\le 1/k\quad \text{for }i=1,\dots ,k, \]

hence \(\bar x+y\in \Delta \).

In other words, \(\bar x\in \operatorname {ri}\Delta \subset \operatorname {ri}C\).

It follows in particular \(\dim \operatorname {ri}C=\dim \Delta =\dim C\).
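The linear system defining \(\alpha (y)\) is easy to exhibit numerically. A small Python sketch (assuming numpy; the simplex data is hypothetical), checking that \(\bar x+y\in \Delta \) for small \(y\):

```python
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # k = 3 affinely independent rows
k = X.shape[0]
xbar = X.mean(axis=0)                                 # the "center" of the simplex

def alpha_of(y):
    """Solve sum_i alpha_i x_i = y, sum_i alpha_i = 0 (square system, k = n + 1)."""
    A = np.vstack([X.T, np.ones(k)])
    return np.linalg.solve(A, np.append(y, 0.0))

# For small y, the coefficients 1/k + alpha_i(y) stay nonnegative: xbar + y is in the simplex.
rng = np.random.default_rng(3)
for y in rng.uniform(-0.05, 0.05, size=(100, 2)):
    assert np.all(1.0 / k + alpha_of(y) >= 0)
```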

Lemma 1.2.1

Let \(x\in \operatorname {cl}C\) and \(x'\in \operatorname {ri}C\). Then the half-open segment

\[ ]x,x']=\{ \alpha x+(1-\alpha )x':\; 0\le \alpha {\lt}1\} \]

is contained in \(\operatorname {ri}C\).

Proof

Take \(x''=\alpha x+(1-\alpha )x'\), with \(1{\gt}\alpha \ge 0\). To avoid writing “\(\operatorname {ri}C\)” every time, we assume without loss of generality that \(\operatorname {aff}C=\mathbb {R}^n\).

Since \(x\in \operatorname {cl}C\), for all \(\varepsilon {\gt}0\), \(x\in C+B(0,\varepsilon )\) and we can write

\[ B(x'',\varepsilon ) = \alpha x+(1-\alpha )x' + B(0,\varepsilon ) \subset \alpha C + (1-\alpha )x' + B\bigl(0,(1+\alpha )\varepsilon \bigr) = \alpha C + (1-\alpha )\bigl\{ x'+B\bigl(0,\tfrac {1+\alpha }{1-\alpha }\varepsilon \bigr)\bigr\} . \]

Since \(x'\in \operatorname {int}C\), we can choose \(\varepsilon \) so small that \(x'+B\bigl(0,\tfrac {1+\alpha }{1-\alpha }\varepsilon \bigr)\subset C\). Then we have

\[ B(x'',\varepsilon )\subset \alpha C + (1-\alpha )C = C \]

(where the last equality is just the definition of a convex set): thus \(x''\in \operatorname {int}C\).

Proposition 1.2.1

The three convex sets \(\operatorname {ri}C\), \(C\) and \(\operatorname {cl}C\) have the same affine hull (and hence the same dimension), the same relative interior and the same closure (and hence the same relative boundary).

Proof

The case of the affine hull was already seen in Theorem 1.2.1. For the others, the key result is Lemma 1.2.1 (as it is for most other properties involving closures and relative interiors). We illustrate it by restricting our proof to one of the properties, say: \(\operatorname {ri}C\) and \(C\) have the same closure.

Thus, we have to prove that \(\operatorname {cl}C\subset \operatorname {cl}(\operatorname {ri}C)\). Let \(x\in \operatorname {cl}C\) and take \(x'\in \operatorname {ri}C\) (possible by virtue of Theorem 1.2.1). Because \(]x,x']\subset \operatorname {ri}C\) (Lemma 1.2.1), \(x\) is indeed a limit of points in \(\operatorname {ri}C\) (and even a "radial" limit); hence \(x\) is in the closure of \(\operatorname {ri}C\).

Proposition 1.2.2

Let the two convex sets \(C_1\) and \(C_2\) satisfy \(\operatorname {ri}C_1\cap \operatorname {ri}C_2\neq \varnothing \). Then

\begin{align} \operatorname {ri}(C_1\cap C_2) & = \operatorname {ri}C_1\cap \operatorname {ri}C_2 \tag {2.1.1}\\ \operatorname {cl}(C_1\cap C_2) & = \operatorname {cl}C_1\cap \operatorname {cl}C_2 .\tag {2.1.2} \end{align}
Proof

First we show that \(\operatorname {cl}C_1\cap \operatorname {cl}C_2 \subset \operatorname {cl}(C_1\cap C_2)\) (the converse inclusion is always true). Given \(x\in \operatorname {cl}C_1\cap \operatorname {cl}C_2\), we pick \(x'\) in the nonempty \(\operatorname {ri}C_1\cap \operatorname {ri}C_2\). From Lemma 1.2.1 applied to \(C_1\) and to \(C_2\),

\[ ]x,x'[\subset \operatorname {ri}C_1\cap \operatorname {ri}C_2 . \]

Taking the closure of both sides, we conclude

\[ x\in \operatorname {cl}(\operatorname {ri}C_1\cap \operatorname {ri}C_2)\subset \operatorname {cl}(C_1\cap C_2), \]

which proves (2.1.2) because \(x\) was arbitrary; the above inclusion is actually an equality.

Now, we have just seen that the two convex sets \(\operatorname {ri}C_1\cap \operatorname {ri}C_2\) and \(C_1\cap C_2\) have the same closure. According to Remark 2.1.9, they have the same relative interior:

\[ \operatorname {ri}(C_1\cap C_2)=\operatorname {ri}(\operatorname {ri}C_1\cap \operatorname {ri}C_2)\subset \operatorname {ri}C_1\cap \operatorname {ri}C_2. \]

It remains to prove the converse inclusion, so let \(y\in \operatorname {ri}C_1\cap \operatorname {ri}C_2\). If we take \(x'\in C_1\) [resp. \(C_2\)], the segment \([x',y]\) is in aff \(C_1\) [resp. aff \(C_2\)] and, by definition of the relative interior, this segment can be stretched beyond \(y\) and yet stay in \(C_1\) [resp. \(C_2\)] (see Fig. 2.1.3). Take in particular \(x'\in \operatorname {ri}(C_1\cap C_2)\), \(x'\neq y\) (if such an \(x'\) does not exist, we are done). The above stretching singles out an \(x\in C_1\cap C_2\) such that \(y\in ]x,x'[:\)

\[ y=\alpha x+(1-\alpha )x' \quad \text{for some }\alpha \in ]0,1[. \]

Then Lemma 1.2.1 applied to \(C_1\cap C_2\) tells us that \(y\in \operatorname {ri}(C_1\cap C_2)\).

Proposition 1.2.3

For \(i=1,\ldots ,k\), let \(C_i\subset \mathbb {R}^{n_i}\) be convex sets. Then

\[ \operatorname {ri}(C_1\times \cdots \times C_k) = (\operatorname {ri} C_1)\times \cdots \times (\operatorname {ri} C_k). \]
Proof

It suffices to apply Definition 1.2.1 alone, observing that

\[ \operatorname {aff}(C_1\times \cdots \times C_k) = (\operatorname {aff} C_1)\times \cdots \times (\operatorname {aff} C_k). \]
Proposition 1.2.4

Let \(A:\mathbb {R}^n\to \mathbb {R}^m\) be an affine mapping and \(C\) a convex set of \(\mathbb {R}^n\). Then

\[ \operatorname {ri}[A(C)] = A(\operatorname {ri} C). \tag {2.1.3} \]

If \(D\) is a convex set of \(\mathbb {R}^m\) satisfying \(A^{-1}(\operatorname {ri}D)\neq \varnothing \), then

\[ \operatorname {ri}\bigl[A^{-1}(D)\bigr] = A^{-1}(\operatorname {ri} D). \tag {2.1.4} \]
Proof

First, note that the continuity of \(A\) implies \(A(\operatorname {cl}S)\subset \operatorname {cl}[A(S)]\) for any \(S\subset \mathbb {R}^n\). Apply this result to \(\operatorname {ri}C\), whose closure is \(\operatorname {cl} C\) (Proposition 1.2.1), and use the monotonicity of the closure operation:

\[ A(C)\subset A(\operatorname {cl}C)=A[\operatorname {cl}(\operatorname {ri}C)]\subset \operatorname {cl}[A(\operatorname {ri}C)]\subset \operatorname {cl}[A(C)]; \]

the closed set \(\operatorname {cl}[A(\operatorname {ri}C)]\) is therefore \(\operatorname {cl}[A(C)]\). Because \(A(\operatorname {ri}C)\) and \(A(C)\) have the same closure, they have the same relative interior (Remark 2.1.9):

\[ \operatorname {ri}\, A(C)=\operatorname {ri}[A(\operatorname {ri}C)]\subset A(\operatorname {ri}C). \]

To prove the converse inclusion, let \(w=A(y)\in A(\operatorname {ri}C)\), with \(y\in \operatorname {ri}C\). We choose \(z'=A(x')\in \operatorname {ri}A(C)\), with \(x'\in C\) (we assume \(z'\neq w\), hence \(x'\neq y\)).

Using in \(C\) the same stretching mechanism as in Fig. 2.1.3, we single out \(x\in C\) such that \(y\in ]x,x'[\), to which corresponds \(z=A(x)\in A(C)\). By affinity, \(A(y)\in ]A(x),A(x')[=]z,z'[\). Thus, \(z\) and \(z'\) fulfill the conditions of Lemma 2.1.6 applied to the convex set \(A(C)\): \(w\in \operatorname {ri}A(C)\), and (2.1.3) is proved.

The proof of (2.1.4) uses the same technique.

1.2.2 The asymptotic cone

Proposition 1.2.5

The closed convex cone \(C_{\infty }(x):=\{ d\in \mathbb {R}^n:\ x+td\in C\ \text{for all }t{\gt}0\} \) does not depend on \(x\in C\).

Proof

See Theorem I.2.3.1 and the pantographic Figure I.2.3.1. Take two different points \(x_{1}\) and \(x_{2}\) in \(C\); it suffices to prove one inclusion, say \(C_{\infty }(x_{1})\subset C_{\infty }(x_{2})\). Let \(d\in C_{\infty }(x_{1})\) and \(t{\gt}0\); we have to prove \(x_{2}+td\in C\). With \(\varepsilon \in ]0,1[\), consider the point

\[ \bar{x}_{\varepsilon }:=x_{1}+td+(1-\varepsilon )(x_{2}-x_{1}). \]

Writing it as

\[ \bar{x}_\varepsilon = \varepsilon \bigl(x_1 + \tfrac {t}{\varepsilon } d\bigr) + (1-\varepsilon )x_2, \]

we see that \(\bar{x}_\varepsilon \in C\) (use the definitions of \(C_\infty (x_1)\) and of a convex set). On the other hand,

\[ x_2 + t d = \lim _{\varepsilon \downarrow 0}\bar{x}_\varepsilon \in \overline{C}=C . \]
Definition 1.2.2

The asymptotic cone, or recession cone, of the closed convex set \(C\) is the closed convex cone \(C_{\infty }\) defined by (2.2.1) or (2.2.2), in which Proposition 1.2.5 is exploited.

Proposition 1.2.6

A closed convex set \(C\) is compact if and only if \(C_\infty =\{ 0\} \).

Proof

If \(C\) is bounded, it is clear that \(C_\infty \) cannot contain any nonzero direction.

Conversely, suppose \(C\) is unbounded: there is \(\{ x_k\} \subset C\) with \(\| x_k\| \to +\infty \) (we may assume \(x_k\neq 0\)). The sequence \(\{ d_k:=x_k/\| x_k\| \} \) is bounded; extract a convergent subsequence: \(d=\lim _{k\in K}d_k\) with \(K\subset \mathbb N\) (note that \(\| d\| =1\)). Now, given \(x\in C\) and \(t{\gt}0\), take \(k\in K\) so large that \(\| x_k\| \ge t\). Then, we see that

\[ x+td=\lim _{k\in K}\Big[(1-\tfrac {t}{\| x_k\| })x+\tfrac {t}{\| x_k\| }x_k\Big] \]

is in the closed convex set \(C\), hence \(d\in C_\infty \).

Proposition 1.2.7
  • If \(\{ C_j\} _{j\in J}\) is a family of closed convex sets having a point in common, then

    \[ \left(\bigcap _{j\in J} C_j\right)_\infty = \bigcap _{j\in J}(C_j)_\infty . \]
  • If, for \(j=1,\dots ,m\), \(C_j\) are closed convex sets in \(\mathbb {R}^{n_j}\), then

    \[ (C_1\times \cdots \times C_m)_\infty = (C_1)_\infty \times \cdots \times (C_m)_\infty . \]
  • Let \(A:\mathbb {R}^n\to \mathbb {R}^m\) be an affine mapping. If \(C\) is closed convex in \(\mathbb {R}^n\) and \(A(C)\) is closed, then

    \[ A(C_\infty )\subset [A(C)]_\infty . \]
  • If \(D\) is closed convex in \(\mathbb {R}^m\) with nonempty inverse image, then

    \[ \bigl[\; A^{-1}(D)\; \bigr]_\infty = A^{-1}(D_\infty ). \]

1.2.3 Extreme points

Definition 1.2.3

We say that \(x\in C\) is an extreme point of \(C\) if there are no two different points \(x_1\) and \(x_2\) in \(C\) such that \(x=\tfrac {1}{2}(x_1+x_2)\).

Proposition 1.2.8

If \(C\) is compact, then \(\operatorname {ext}C\neq \varnothing \).

Proof

Because \(C\) is compact, there is \(\bar x\in C\) maximizing the continuous function \(x\mapsto \| x\| ^2\). We claim that \(\bar x\) is extremal. Indeed, suppose there are two different points \(x_1\) and \(x_2\) in \(C\) with \(\bar x=\tfrac 12(x_1+x_2)\). Then, using (2.3.1) (strict convexity of the squared norm), we obtain the contradiction

\[ \| \bar x\| ^2=\Big\| \tfrac 12(x_1+x_2)\Big\| ^2{\lt}\tfrac 12\big(\| x_1\| ^2+\| x_2\| ^2\big)\le \tfrac 12\big(\| \bar x\| ^2+\| \bar x\| ^2\big)=\| \bar x\| ^2. \]
Theorem 1.2.2

(H. Minkowski) Let \(C\) be compact, convex in \(\mathbb {R}^n\). Then \(C\) is the convex hull of its extreme points: \(C=\operatorname {co}(\operatorname {ext}C)\).

Definition 1.2.4

A nonempty convex subset \(F\subset C\) is a face of \(C\) if it satisfies the following property: every segment of \(C\), having in its relative interior an element of \(F\), is entirely contained in \(F\). In other words,

\[ \left. \begin{array}{c} (x_1,x_2)\in C\times C\\[4pt] \exists \alpha \in ]0,1[:\ \alpha x_1+(1-\alpha )x_2\in F \end{array} \right\} \implies [x_1,x_2]\subset F. \tag {2.3.2} \]
Proposition 1.2.9

Let \(F\) be a face of \(C\). Then any extreme point of \(F\) is an extreme point of \(C\).

Proof

Take \(x\in F\subset C\) and assume that \(x\) is not an extreme point of \(C\): there are different \(x_1,x_2\) in \(C\) and \(\alpha \in \, ]0,1[\) such that \(x=\alpha x_1+(1-\alpha )x_2\in F\). From the very definition (2.3.2) of a face, this implies that \(x_1\) and \(x_2\) are in \(F\): \(x\) cannot be an extreme point of \(F\). \(\square \)

1.2.4 Exposed faces

Definition 1.2.5 Supporting Hyperplane

An affine hyperplane \(H_{s,r}:=\{ y\in \mathbb {R}^n:\ \langle s,y\rangle =r\} \) (with \(s\neq 0\)) is said to support the set \(C\) when \(C\) is entirely contained in one of the two closed half-spaces delimited by \(H_{s,r}\): say

\[ \langle s,y\rangle \le r\qquad \text{for all }y\in C. \tag {2.4.1} \]

It is said to support \(C\) at \(x\in C\) when, in addition, \(x\in H_{s,r}\); (2.4.1) holds, as well as

\[ \langle s,x\rangle = r. \]
Definition 1.2.6

The set \(F\subset C\) is an exposed face of \(C\) if there is a supporting hyperplane \(H_{s,r}\) of \(C\) such that \(F=C\cap H_{s,r}\).

An exposed point, or vertex, is a \(0\)-dimensional exposed face, i.e. a point \(x\in C\) at which there is a supporting hyperplane \(H_{s,r}\) of \(C\) such that \(H_{s,r}\cap C\) reduces to \(\{ x\} \).

Proposition 1.2.10

An exposed face is a face.

Proof

Let \(F\) be an exposed face, with its associated support \(H_{s,r}\). Take \(x_1\) and \(x_2\) in \(C\):

\[ \langle s,x_i\rangle \le r \quad \text{for } i=1,2; \tag {2.4.2} \]

take also \(\alpha \in ]0,1[\) such that \(\alpha x_1+(1-\alpha )x_2\in F\subset H_{s,r}\):

\[ \langle s,\alpha x_1+(1-\alpha )x_2\rangle = r. \]

Suppose that one of the relations (2.4.2) holds as strict inequality. By convex combination, we obtain \((0{\lt}\alpha {\lt}1!)\)

\[ \langle s,\alpha x_1+(1-\alpha )x_2\rangle {\lt} r, \]

a contradiction.

Proposition 1.2.11

Let \(C\) be convex and compact. For \(s\in \mathbb {R}^n\), there holds

\[ \max _{x\in C}\langle s,x\rangle =\max _{x\in \operatorname {ext} C}\langle s,x\rangle . \]

Furthermore, the solution-set of the first problem is the convex hull of the solution-set of the second:

\[ \operatorname {Argmax}_{x\in C}\langle s,x\rangle = \operatorname {co}\{ \operatorname {Argmax}_{x\in \operatorname {ext}C}\langle s,x\rangle \} . \]
Proof

Because \(C\) is compact, \(\langle s,\cdot \rangle \) attains its maximum on \(C\), and the solution-set is the exposed face \(F_C(s)\). The latter set is convex and compact, and as such is the convex hull of its extreme points (Minkowski’s Theorem 1.2.2); these extreme points are also extreme in \(C\) (Proposition 1.2.9 and Remark 2.4.4).
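A quick numerical illustration of the first equality (Python with numpy): for \(C\) the unit square, \(\operatorname {ext}C\) is its four corners, and a linear function sampled over \(C\) never beats its best corner. The data is a hypothetical stand-in.

```python
import numpy as np

corners = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # ext C
rng = np.random.default_rng(4)
s = rng.standard_normal(2)

best_corner = (corners @ s).max()                 # max over ext C
samples = rng.uniform(0.0, 1.0, size=(10000, 2))  # points of C
assert (samples @ s).max() <= best_corner + 1e-12
```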

1.3 Projection Onto Closed Convex Set

1.3.1 The projection operator

Theorem 1.3.1

A point \(y_x\in C\) is the projection \(p_C(x)\) if and only if

\[ \langle x-y_x,\; y-y_x\rangle \le 0\qquad \text{for all }y\in C . \tag {3.1.3} \]
Proof

Call \(y_x\) the solution of (3.1.1), i.e. the minimizer over \(C\) of \(f_x:=\tfrac 12\| \cdot -x\| ^2\) in the notation (3.1.2); take \(y\) arbitrary in \(C\), so that \(y_x+\alpha (y-y_x)\in C\) for any \(\alpha \in ]0,1[\). Then we can write

\[ f_x(y_x)\le f_x\big(y_x+\alpha (y-y_x)\big)=\tfrac 12\| y_x-x+\alpha (y-y_x)\| ^2. \]

Developing the square, we obtain after simplification

\[ 0\le \alpha \langle y_x-x,\; y-y_x\rangle +\tfrac 12\alpha ^2\| y-y_x\| ^2. \]

Divide by \(\alpha \ ({\gt}0)\) and let \(\alpha \downarrow 0\) to obtain (3.1.3).

Conversely, suppose that \(y_x\in C\) satisfies (3.1.3). If \(y_x=x\), then \(y_x\) certainly solves (3.1.1). If not, write for arbitrary \(y\in C\):

\[ 0\ge \langle x-y_x,\; y-y_x\rangle =\langle x-y_x,\; y-x+x-y_x\rangle \]
\[ =\| x-y_x\| ^2+\langle x-y_x,\; y-x\rangle \ge \| x-y_x\| ^2-\| x-y\| \; \| x-y_x\| , \]

where the Cauchy-Schwarz inequality is used. Divide by \(\| x-y_x\| {\gt}0\) to see that \(y_x\) solves (3.1.1).
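The characterization (3.1.3) is easy to test numerically when the projection is explicit. A sketch in Python (assuming numpy) for \(C\) a box, where the projection is coordinatewise clipping; the bounds below are hypothetical data.

```python
import numpy as np

lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
p_C = lambda z: np.clip(z, lo, hi)     # projection onto the box C = [lo, hi]

rng = np.random.default_rng(5)
x = 3 * rng.standard_normal(2)
yx = p_C(x)

# (3.1.3): <x - yx, y - yx> <= 0, tested on random points y of C.
Y = rng.uniform(lo, hi, size=(1000, 2))
assert np.all((Y - yx) @ (x - yx) <= 1e-12)

# Consequently yx is the closest point: no sampled y does better.
assert np.all(np.linalg.norm(Y - x, axis=1) >= np.linalg.norm(yx - x) - 1e-12)
```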

Proposition 1.3.1

For all \((x_1,x_2)\in \mathbb {R}^n\times \mathbb {R}^n\), there holds

\[ \| p_C(x_1)-p_C(x_2)\| ^2 \le \langle p_C(x_1)-p_C(x_2),\, x_1-x_2\rangle . \]
Proof

Write (3.1.3) with \(x=x_1\), \(y=p_C(x_2)\in C\):

\[ \langle p_C(x_2)-p_C(x_1),\, x_1-p_C(x_1)\rangle \le 0; \]

likewise,

\[ \langle p_C(x_1)-p_C(x_2),\, x_2-p_C(x_2)\rangle \le 0, \]

and conclude by addition

\[ \langle p_C(x_1)-p_C(x_2),\, x_2-x_1 + p_C(x_1)-p_C(x_2)\rangle \le 0. \]
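Combined with the Cauchy-Schwarz inequality, this gives \(\| p_C(x_1)-p_C(x_2)\| \le \| x_1-x_2\| \): the projection is nonexpansive. A numerical spot-check (Python with numpy, on the same kind of hypothetical box as above):

```python
import numpy as np

lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
p_C = lambda z: np.clip(z, lo, hi)     # projection onto the box [lo, hi]

rng = np.random.default_rng(6)
for _ in range(1000):
    x1, x2 = 3 * rng.standard_normal(2), 3 * rng.standard_normal(2)
    d = p_C(x1) - p_C(x2)
    # ||p(x1) - p(x2)||^2 <= <p(x1) - p(x2), x1 - x2>
    assert d @ d <= d @ (x1 - x2) + 1e-12
```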

1.3.2 Projection onto a closed convex cone

Definition 1.3.1

Let \(K\) be a convex cone, as defined in Example 1.1.4. The polar cone of \(K\) (called negative polar cone by some authors) is:

\[ K^{\circ } := \{ s \in \mathbb {R}^n : \langle s,x\rangle \le 0 \text{ for all } x \in K \} . \]
Proposition 1.3.2

Let \(K\) be a closed convex cone. Then \(y_x = p_K(x)\) if and only if

\[ y_x \in K,\qquad x-y_x\in K^\circ ,\qquad \langle x-y_x,y_x\rangle =0. \tag {3.2.1} \]
Proof

We know from Theorem 1.3.1 that \(y_x = p_K(x)\) satisfies

\[ \langle x-y_x,y-y_x\rangle \le 0\qquad \text{for all }y\in K. \tag {3.2.2} \]

Taking \(y=\alpha y_x\), with arbitrary \(\alpha \ge 0\), this inequality implies

\[ (\alpha -1)\langle x-y_x,y_x\rangle \le 0\qquad \text{for all }\alpha \ge 0. \]

Since \(\alpha -1\) can have either sign, this implies \(\langle x-y_x,y_x\rangle =0\) and (3.2.2) becomes

\[ \langle y,x-y_x\rangle \le 0\quad \text{for all }y\in K,\qquad \text{i.e.}\quad x-y_x\in K^\circ . \]

Conversely, let \(y_x\) satisfy (3.2.1). For arbitrary \(y\in K\), use the notation (3.1.2):

\[ f_x(y)=\tfrac 12\| x-y_x+y_x-y\| ^2\ge f_x(y_x)+\langle x-y_x,y_x-y\rangle ; \]

but (3.2.1) shows that

\[ \langle x-y_x,y_x-y\rangle =-\langle x-y_x,y\rangle \ge 0, \]

hence \(f_x(y)\ge f_x(y_x)\): \(y_x\) solves (3.1.1).

Theorem 1.3.2 J.-J. Moreau

Let \(K\) be a closed convex cone. For the three elements \(x,x_1\) and \(x_2\) in \(\mathbb {R}^n\), the properties below are equivalent:

  1. \(x=x_1+x_2\) with \(x_1\in K\), \(x_2\in K^\circ \) and \(\langle x_1,x_2\rangle =0\);

  2. \(x_1=\mathrm{p}_K(x)\) and \(x_2=\mathrm{p}_{K^\circ }(x)\).

Proof

Straightforward, from (3.2.3) and the characterization (3.2.1) of \(x_1=\mathrm{p}_K(x)\).
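For \(K=\mathbb {R}^n_+\), whose polar is the nonpositive orthant, both projections are explicit and the decomposition can be checked in a few lines. A Python sketch (assuming numpy; the choice of \(K\) is ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(5)
x1 = np.maximum(x, 0.0)   # p_K(x) for K the nonnegative orthant
x2 = np.minimum(x, 0.0)   # projection onto the polar cone (nonpositive orthant)

assert np.allclose(x1 + x2, x)              # x = x1 + x2
assert np.all(x1 >= 0) and np.all(x2 <= 0)
assert x1 @ x2 == 0.0                       # <x1, x2> = 0, coordinatewise
```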

1.4 Separation and Applications

1.4.1 Separation between convex sets

Theorem 1.4.1

Let \(C\subset \mathbb {R}^n\) be nonempty closed convex, and let \(x\notin C\). Then there exists \(s\in \mathbb {R}^n\) such that

\[ \langle s,x\rangle {\gt} \sup _{y\in C}\langle s,y\rangle . \tag {4.1.1} \]
Proof

Set \(s := x - p_C(x) \neq 0\). We write (3.1.3) as

\[ 0 \ge \langle s,y-x+s\rangle = \langle s,y\rangle - \langle s,x\rangle + \| s\| ^2. \]

Thus we have

\[ \langle s,x\rangle - \| s\| ^2 \ge \langle s,y\rangle \qquad \text{for all }y\in C, \]

and, since \(\| s\| {\gt}0\), our \(s\) is a convenient answer for (4.1.1).
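The construction \(s=x-p_C(x)\) can be tested numerically whenever \(p_C\) is explicit. A sketch (Python with numpy) for \(C\) the unit disc, a hypothetical example sampled at random:

```python
import numpy as np

p_C = lambda z: z / max(1.0, np.linalg.norm(z))   # projection onto the unit disc
x = np.array([2.0, 1.0])                          # a point outside C
s = x - p_C(x)                                    # the separating vector of the proof

rng = np.random.default_rng(8)
Z = rng.standard_normal((1000, 2))
Y = Z / np.maximum(1.0, np.linalg.norm(Z, axis=1))[:, None]   # points of C
assert (Y @ s).max() < s @ x                      # (4.1.1) on the samples
```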

Corollary 1.4.1 Strict Separation of Convex Sets

Let \(C_1,\, C_2\) be two nonempty closed convex sets with \(C_1\cap C_2=\varnothing \). If \(C_2\) is bounded, there exists \(s\in \mathbb {R}^n\) such that

\[ \sup _{y\in C_1}\langle s,y\rangle \; {\lt}\; \min _{y\in C_2}\langle s,y\rangle . \tag {4.1.2} \]
Proof

The set \(C_1-C_2\) is convex (combine Propositions 1.1.2 and 1.1.3) and closed (because \(C_2\) is compact). To say that \(C_1\) and \(C_2\) are disjoint is to say that \(0\notin C_1-C_2\), so we have by Theorem 1.4.1 an \(s\in \mathbb {R}^n\) separating \(\{ 0\} \) from \(C_1-C_2\):

\[ \sup \{ \langle s,y\rangle : y\in C_1-C_2\} {\lt}\langle s,0\rangle =0. \]

This means:

\[ 0 \; {\gt}\; \sup _{y_1\in C_1}\langle s,y_1\rangle +\sup _{y_2\in C_2}\langle s,-y_2\rangle =\sup _{y_1\in C_1}\langle s,y_1\rangle -\inf _{y_2\in C_2}\langle s,y_2\rangle . \]

Because \(C_2\) is bounded, the last infimum (is a min and) is finite and can be moved to the left-hand side.

Theorem 1.4.2 Proper Separation of Convex Sets

If the two nonempty convex sets \(C_1\) and \(C_2\) satisfy \(\operatorname {ri}C_1\cap \operatorname {ri}C_2=\varnothing \), they can be properly separated.

1.4.2 First consequences of the separation properties

Lemma 1.4.1

Let \(x\in \partial C\), where \(C\neq \varnothing \) is convex in \(\mathbb {R}^n\) (naturally \(C\neq \mathbb {R}^n\)). There exists a hyperplane supporting \(C\) at \(x\).

Proof

Because \(C\), \(\operatorname {cl}C\) and their complements have the same boundary (remember Remark 2.1.9), a sequence \(\{ x_k\} \) can be found such that

\[ x_k\notin \operatorname {cl}C\quad \text{for }k=1,2,\dots \quad \text{and}\quad \lim _{k\to +\infty }x_k=x. \]

For each \(k\), Theorem 1.4.1 applied to \(\operatorname {cl}C\) gives, after normalization, some \(s_k\) with \(\| s_k\| =1\) such that

\[ \langle s_k,\; x_k-y\rangle {\gt}0\quad \text{for all }y\in C\subset \operatorname {cl}C . \]

Extract a subsequence if necessary so that \(s_k\to s\) (note: \(s\neq 0\)) and pass to the limit to obtain

\[ \langle s,\; x-y\rangle \ge 0\quad \text{for all }y\in C, \]

which is the required result \(\langle s,x\rangle =r\ge \langle s,y\rangle \) for all \(y\in C\).

Proposition 1.4.1

Let \(S\subset \mathbb {R}^n\) and \(C:=\operatorname {co}S\). Any \(x\in C\cap \operatorname {bd}C\) can be represented as a convex combination of \(n\) elements of \(S\).

Proof

Because \(x\in \operatorname {bd}C\), there is a hyperplane \(H_{s,r}\) supporting \(C\) at \(x\): for some \(s\neq 0\) and \(r\in \mathbb {R}\),

\[ \langle s,x\rangle - r = 0\qquad \text{and}\qquad \langle s,y\rangle - r \le 0\ \text{ for all } y\in C. \tag {4.2.1} \]

On the other hand, Carathéodory’s Theorem 1.1.1 implies the existence of points \(x_1,\dots ,x_{n+1}\) in \(S\) and convex multipliers \(\alpha _1,\dots ,\alpha _{n+1}\) such that \(x=\sum _{i=1}^{n+1}\alpha _i x_i\); and each \(\alpha _i\) can be assumed positive (otherwise the proof is finished).

Setting successively \(y=x_i\) in (4.2.1), we obtain by convex combination

\[ 0=\langle s,x\rangle - r = \sum _{i=1}^{n+1}\alpha _i\bigl(\langle s,x_i\rangle - r\bigr)\le 0, \]

so each \(\langle s,x_i\rangle - r\) is actually \(0\). Each \(x_i\) is therefore not only in \(S\), but also in \(H_{s,r}\), an affine manifold of dimension \(n-1\). Applying Carathéodory’s Theorem 1.1.1 within this manifold, our starting \(x\), which is in \(\operatorname {co}\{ x_1,\dots ,x_{n+1}\} \), can be written as a convex combination of only \(n\) among these \(x_i\)’s.

Theorem 1.4.3

Let \(\varnothing \neq C\subseteq \mathbb {R}^n\) be convex. The set \(C^*\) defined above is the closure of \(C\).

Proof

By construction, \(C^*\supseteq \operatorname {cl}C\). Conversely, take \(x\notin \operatorname {cl}C\); we can separate \(x\) and \(\operatorname {cl}C\): there exists \(s_0\neq 0\) such that

\[ \langle s_0,x\rangle {\gt}\sup _{y\in C}\langle s_0,y\rangle =:\, r_0. \]

Then \((s_0,r_0)\in \Sigma _C\); but \(\langle s_0,x\rangle {\gt}r_0\), hence \(x\notin C^*\).

Corollary 1.4.2

The data \((s_j,r_j)\in \mathbb {R}^n\times \mathbb {R}\) for \(j\) in an arbitrary index set \(J\) is equivalent to the data of a closed convex set \(C\) via the relation

\[ C=\bigcap _{j\in J}\{ x\in \mathbb {R}^n:\langle s_j,x\rangle \le r_j\} . \]
Proof

If \(C\) is given, define \(\{ (s_j,r_j)\} _{J}:=\Sigma _C\) as in (4.2.2). If \(\{ (s_j,r_j)\} _J\) is given, the intersection of the corresponding half-spaces is a closed convex set. Note that the two extreme cases are covered: \(J=\varnothing \) yields the whole of \(\mathbb {R}^n\), while an inconsistent system yields the empty set.

Definition 1.4.1 Polyhedral Sets

A closed convex polyhedron is an intersection of finitely many half-spaces. Take \((s_1,r_1),\ldots ,(s_m,r_m)\) in \(\mathbb {R}^n\times \mathbb {R}\), with \(s_j\neq 0\) for \(j=1,\ldots ,m\); then define

\[ P := \{ x\in \mathbb {R}^n : \langle s_j,x\rangle \le r_j\ \text{ for } j=1,\ldots ,m\} , \]

or in matrix notations (assuming the dot-product for \(\langle \cdot ,\cdot \rangle \)),

\[ P = \{ x\in \mathbb {R}^n : Ax \le b\} , \]

if \(A\) is the matrix whose rows are \(s_j\) and \(b\in \mathbb {R}^m\) has coordinates \(r_j\).

A closed convex polyhedral cone is the special case where \(b=0\).
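In code, membership in \(P\) is a single matrix inequality. A minimal sketch (Python with numpy; the unit square below is hypothetical data):

```python
import numpy as np

# P = {x : Ax <= b}, here the unit square 0 <= x_i <= 1.
A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])            # rows are the s_j
b = np.array([1.0, 0.0, 1.0, 0.0])      # coordinates r_j

in_P = lambda x, tol=1e-12: bool(np.all(A @ x <= b + tol))
assert in_P(np.array([0.5, 0.5])) and not in_P(np.array([1.5, 0.5]))
```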

Proposition 1.4.2

Let \(K\) be a convex cone with polar \(K^{\circ }\); then, the polar \(K^{\circ \circ }\) of \(K^{\circ }\) is the closure of \(K\).

Proof

We exploit Remark 4.1.2: due to its conical character (\(\alpha x\in K\) if \(x\in K\) and \(\alpha {\gt}0\)), \(\operatorname {cl}K\) has a very special support function:

\[ \sigma _{\operatorname {cl}K}(s)= \begin{cases} \langle s,0\rangle =0 & \text{if }\langle s,x\rangle \le 0\text{ for all }x\in \operatorname {cl}K,\\[4pt] +\infty & \text{otherwise}. \end{cases} \]

In other words, \(\sigma _{\operatorname {cl}K}\) is \(0\) on \(K^{\circ }\), \(+\infty \) elsewhere. Thus, the characterization

\[ x\in \operatorname {cl}K \iff \langle \cdot ,x\rangle \le \sigma _{\operatorname {cl}K}(\cdot ) \]

becomes

\[ x\in \operatorname {cl}K \iff \begin{cases} \langle s,x\rangle \le 0 & \text{for all }s\in K^{\circ }\\[4pt] (\langle s,x\rangle \text{ arbitrary}) & \text{for }s\notin K^{\circ }, \end{cases} \]

in which we recognize the definition of \(K^{\circ \circ }\).

1.4.3 The lemma of Minkowski and Farkas

Lemma 1.4.2 Farkas I

Let \(b,s_1,\dots ,s_m\) be given in \(\mathbb {R}^n\). The set

\[ \{ x\in \mathbb {R}^n:\ \langle s_j,x\rangle \le 0\quad \text{for }j=1,\dots ,m\} \tag {4.3.1} \]

is contained in the set

\[ \{ x\in \mathbb {R}^n:\ \langle b,x\rangle \le 0\} \tag {4.3.2} \]

if and only if (see Definition 1.4.5 of a conical hull)

\[ b\in \operatorname {cone}\{ s_1,\dots ,s_m\} . \tag {4.3.3} \]
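Numerically, the Farkas alternative (see Farkas II just below) can be decided by projecting \(b\) onto \(K=\operatorname {cone}\{ s_1,\dots ,s_m\} \): a nonnegative least-squares solve computes \(p_K(b)\), and by (3.2.1) the residual \(b-p_K(b)\) is a certificate when \(b\notin K\). A sketch assuming scipy is available; the data and the helper `farkas` are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

S = np.array([[1.0, 0.0],
              [1.0, 1.0]]).T            # columns s_1 = (1,0), s_2 = (1,1)

def farkas(b, tol=1e-10):
    # nnls solves min_{alpha >= 0} ||S alpha - b||, i.e. computes p_K(b).
    alpha, resid = nnls(S, b)
    if resid <= tol:
        return ("b in cone{s_j}", alpha)          # case (4.3.3)
    x = b - S @ alpha                             # x = b - p_K(b)
    # By (3.2.1): <s_j, x> <= 0 for all j, while <b, x> = ||x||^2 > 0.
    return ("certificate x", x)

print(farkas(np.array([2.0, 1.0])))    # inside the cone
print(farkas(np.array([-1.0, 1.0])))   # outside: a separating x is returned
```
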
Lemma 1.4.3 Farkas II

Let \(b,s_1,\dots ,s_m\) be given in \(\mathbb {R}^n\). Then exactly one of the following statements is true.

  1. The system \(\sum _{j=1}^m\alpha _j s_j=b\), \(\alpha _j\ge 0\) for \(j=1,\dots ,m\), has a solution \(\alpha \in \mathbb {R}^m\), i.e. (4.3.3) holds.

  2. \[ \left\{ \begin{aligned} & \langle b,x\rangle {\gt}0,\\ & \langle s_j,x\rangle \le 0\quad \text{for }j=1,\dots ,m \end{aligned} \right. \]

    has a solution \(x\in \mathbb {R}^n\).

Lemma 1.4.4 Farkas III

Let \(s_1,\dots ,s_m\) be given in \(\mathbb {R}^n\). Then the convex cone

\[ K:=\operatorname {cone}\{ s_1,\dots ,s_m\} =\Big\{ \sum _{j=1}^m\alpha _j s_j:\ \alpha _j\ge 0\ \text{for }j=1,\dots ,m\Big\} \]

is closed.

Proof

It is quite similar to that of Carathéodory’s Theorem 1.1.1. First, the proof is easy if the \(s_j\)’s are linearly independent: then, the convergence for \(k\to \infty \) of

\[ x^k=\sum _{j=1}^m\alpha _j^k s_j \tag {4.3.5} \]

is equivalent to the convergence of each \(\{ \alpha _j^k\} \) to some \(\alpha _j\), which must be nonnegative if each \(\alpha _j^k\) in (4.3.5) is nonnegative.

Suppose, on the contrary, that the system \(\sum _{j=1}^m\beta _j s_j=0\) has a nonzero solution \(\beta \in \mathbb {R}^m\) and assume \(\beta _j{\lt}0\) for some \(j\) (change \(\beta \) to \(-\beta \) if necessary). As in the proof of Theorem 1.1.1, write each \(x\in K\) as

\[ x=\sum _{j=1}^m\alpha _j s_j=\sum _{j=1}^m[\alpha _j+t^*(x)\beta _j]s_j=\sum _{j\ne i(x)}\alpha '_j s_j, \]

where

\[ i(x)\in \operatorname {Argmin}_{\beta _j{\lt}0}\frac{-\alpha _j}{\beta _j},\qquad t^*(x):=\frac{-\alpha _{i(x)}}{\beta _{i(x)}}, \]

so that each \(\alpha '_j=\alpha _j+t^*(x)\beta _j\) is nonnegative. Letting \(x\) vary in \(K\), we thus construct a decomposition

\[ K=\bigcup \{ K_i:\; i=1,\ldots ,m\} , \]

where \(K_i\) is the conical hull of the \(m-1\) generators \(s_j,\; j\neq i\).

Now, if there is some \(i\) such that the generators of \(K_i\) are linearly dependent, we repeat the argument for a further decomposition of this \(K_i\). After finitely many such operations, we end up with a decomposition of \(K\) as a finite union of polyhedral convex cones, each having linearly independent generators. All these cones are therefore closed (first part of the proof), so \(K\) is closed as well.

Theorem 1.4.4 Generalized Farkas

Let \((b,r)\) and \((s_j,\rho _j)\) be given in \(\mathbb {R}^n\times \mathbb {R}\), where \(j\) varies in an (arbitrary) index set \(J\). Suppose that the system of inequalities

\[ \langle s_j,x\rangle \le \rho _j\qquad \text{for all }j\in J \tag {4.3.6} \]

has a solution \(x\in \mathbb {R}^n\) (the system is consistent). Then the following two properties are equivalent:

(i) \(\langle b,x\rangle \le r\) for all \(x\) satisfying (4.3.6);

(ii) \((b,r)\) is in the closed convex conical hull of \(S:=\{ (0,1)\} \cup \{ (s_j,\rho _j)\} _{j\in J}\).

Proof

[(ii) \(\Rightarrow \) (i)]

Let first \((b,r)\) be in \(K:=\operatorname {cone}S\). In other words, there exists a finite set \(\{ 1,\dots ,m\} \subset J\) and nonnegative \(\alpha _0,\alpha _1,\dots ,\alpha _m\) such that (we adopt the convention \(\sum \emptyset =0\))

\[ b=\sum _{j=1}^m\alpha _j s_j\qquad \text{and}\qquad r=\alpha _0+\sum _{j=1}^m\alpha _j\rho _j. \]

For each \(x\) satisfying (4.3.6) we can write

\[ \langle b,x\rangle \le r-\alpha _0\le r. \tag {4.3.7} \]

If, now, \((b,r)\) is in the closure of \(K\), pass to the limit in (4.3.7) to establish the required conclusion (i) for all \((b,r)\) described by (ii).

[(i) \(\Rightarrow \) (ii)] If \((b,r)\notin \operatorname {cl}K\), separate \((b,r)\) from \(\operatorname {cl}K\): equipping \(\mathbb {R}^n\times \mathbb {R}\) with the scalar product

\[ \langle \! \langle (b,r),(d,t)\rangle \! \rangle :=\langle b,d\rangle +rt, \]

there exists \((d,-t)\in \mathbb {R}^n\times \mathbb {R}\) such that

\[ \sup _{(s,\rho )\in K}\; [\langle s,d\rangle - \rho t] {\lt} \langle b,d\rangle - r t . \tag {4.3.8} \]

It follows first that the left-hand supremum is a finite number \(\kappa \). Then the conical character of \(K\) implies \(\kappa \le 0\), because \(\alpha \kappa \le \kappa \) for all \(\alpha {\gt}0\); actually \(\kappa =0\) because \((0,0)\in K\). In summary, we have singled out \((d,t)\in \mathbb {R}^n\times \mathbb {R}\) such that

\[ t \ge 0 \qquad [\text{take }(0,1)\in K] \]
\[ (\ast )\qquad \langle s_j,d\rangle - \rho _j t \le 0\ \text{ for all } j\in J \qquad [\text{take }(s_j,\rho _j)\in K] \]
\[ (\ast \ast )\qquad \langle b,d\rangle - r t {\gt} 0. \qquad [\text{don't forget (4.3.8)}] \]

Now consider two cases:

  • If \(t{\gt}0\), divide \((\ast )\) and \((\ast \ast )\) by \(t\) to exhibit the point \(x=d/t\) violating (i).

  • If \(t=0\), take \(x_0\) satisfying (4.3.6). Observe from \((\ast )\) that, for all \(\alpha {\gt}0\), the point \(x(\alpha )=x_0+\alpha d\) satisfies (4.3.6) as well. Yet, let \(\alpha \to +\infty \) in

    \[ \langle b,x(\alpha )\rangle = \langle b,x_0\rangle + \alpha \langle b,d\rangle \]

    to realize from \((\ast \ast )\) that \(x(\alpha )\) violates (i) if \(\alpha \) is large enough.

Thus we have proved in both cases that "not (ii) \(\Rightarrow \) not (i)".

1.5 Conical approximations of convex sets

1.5.1 Convenient definitions of tangent cones

Definition 1.5.1

Let \(S\subset \mathbb {R}^n\) be nonempty. We say that \(d\in \mathbb {R}^n\) is a direction tangent to \(S\) at \(x\in S\) when there exists a sequence \(\{ x_k\} \subset S\) and a sequence \(\{ t_k\} \) such that, when \(k\to +\infty \),

\[ x_k\to x,\qquad t_k\downarrow 0,\qquad \frac{x_k-x}{t_k}\to d. \tag {5.1.1} \]

The set of all such directions is called the tangent cone (also called the contingent cone, or Bouligand’s cone) to \(S\) at \(x\in S\), denoted by \(T_S(x)\).

Proposition 1.5.1

A direction \(d\) is tangent to \(S\) at \(x\in S\) if and only if

\[ \exists (d_k)\to d,\ \exists (t_k)\downarrow 0\quad \text{such that}\quad x+t_kd_k\in S\ \text{for all }k. \]
Proposition 1.5.2

The tangent cone is closed.

Proof

Let \(\{ d_\ell \} \subset T_S(x)\) be converging to \(d\); for each \(\ell \) take sequences \(\{ x_{\ell ,k}\} _k\) and \(\{ t_{\ell ,k}\} _k\) associated with \(d_\ell \) in the sense of Definition 1.5.1. Fix \(\ell {\gt}0\): we can find \(k_\ell \) such that \(t_{\ell ,k_\ell }\le \tfrac {1}{\ell }\) and

\[ \left\| \frac{x_{\ell ,k_\ell }-x}{t_{\ell ,k_\ell }}-d_\ell \right\| \le \frac{1}{\ell }. \]

Letting \(\ell \to \infty \), we then obtain sequences \(\{ x_{\ell ,k_\ell }\} _\ell \) and \(\{ t_{\ell ,k_\ell }\} _\ell \) which define \(d\) as an element of \(T_S(x)\).

1.5.2 The tangent cones and normal cones to a convex set

Proposition 1.5.3

The tangent cone to \(C\) at \(x\) is the closure of the cone generated by \(C-\{ x\} \):

\begin{align} T_C(x) & =\overline{\operatorname {cone}}(C-x)=\operatorname {cl}\, \mathbb {R}^+(C-x)\nonumber \\ & =\operatorname {cl}\{ d\in \mathbb {R}^n:\; d=\alpha (y-x),\; y\in C,\; \alpha \ge 0\} . \tag {5.2.1} \end{align}
Proof

We have just said that \(C-\{ x\} \subset T_C(x)\). Because \(T_C(x)\) is a closed cone (Proposition 1.5.2), it immediately follows that \(\operatorname {cl}\, \mathbb {R}^+(C-x)\subset T_C(x)\). Conversely, for \(d\in T_C(x)\), take \(\{ x_k\} \) and \(\{ t_k\} \) as in the definition (5.1.1): the point \((x_k-x)/t_k\) is in \(\mathbb {R}^+(C-x)\), hence its limit \(d\) is in the closure of this latter set.

Definition 1.5.2

The direction \(s\in \mathbb {R}^n\) is said to be normal to \(C\) at \(x\in C\) when

\[ \langle s,\, y-x\rangle \le 0\quad \text{for all }y\in C . \tag {5.2.2} \]

The set of all such directions is called the normal cone to \(C\) at \(x\), denoted by \(N_C(x)\).

Proposition 1.5.4

The normal cone is the polar of the tangent cone.

Proof

If \(\langle s,d\rangle \le 0\) for all \(d\in C-x\), the same holds for all \(d\in \mathbb R^+(C-x)\), as well as for all \(d\) in the closure \(T_C(x)\) of the latter. Thus, \(N_C(x)\subset [T_C(x)]^\circ \).

Conversely, take \(s\) arbitrary in \([T_C(x)]^\circ \). The relation \(\langle s,d\rangle \le 0\), which holds for all \(d\in T_C(x)\), a fortiori holds for all \(d\in C-x\subset T_C(x)\); this is just (5.2.2).

Corollary 1.5.1

The tangent cone is the polar of the normal cone:

\[ T_C(x)=\{ d\in \mathbb {R}^n:\ \langle s,d\rangle \le 0\ \text{for all }s\in N_C(x)\} . \]

1.5.3 Some properties of tangent and normal cones

Proposition 1.5.5

Here, the \(C\)’s are nonempty closed convex sets.

  1. For \(x\in C_1\cap C_2\), there holds

    \[ T_{C_1\cap C_2}(x)\subset T_{C_1}(x)\cap T_{C_2}(x) \qquad \text{and}\qquad N_{C_1\cap C_2}(x)\supset N_{C_1}(x)+N_{C_2}(x). \]
  2. With \(C_i\subset \mathbb R^{n_i}\), \(i=1,2\) and \((x_1,x_2)\in C_1\times C_2\),

    \[ T_{C_1\times C_2}(x_1,x_2)=T_{C_1}(x_1)\times T_{C_2}(x_2), \qquad N_{C_1\times C_2}(x_1,x_2)=N_{C_1}(x_1)\times N_{C_2}(x_2). \]
  3. With an affine mapping \(A(x)=y_0+A_0x\) (\(A_0\) linear) and \(x\in C\),

    \[ T_{A(C)}[A(x)]=\operatorname {cl}[A_0T_C(x)] \qquad \text{and}\qquad N_{A(C)}[A(x)]=A_0^{-*}[N_C(x)]. \]
  4. In particular (start from (ii), (iii) and proceed as when proving (1.2.2)):

    \[ T_{C_1+C_2}(x_1+x_2)=\operatorname {cl}[T_{C_1}(x_1)+T_{C_2}(x_2)], \qquad N_{C_1+C_2}(x_1+x_2)=N_{C_1}(x_1)\cap N_{C_2}(x_2). \]
Proposition 1.5.6

For \(x\in C\) and \(s\in \mathbb {R}^n\), the following properties are equivalent:

  1. \(s\in N_C(x)\);

  2. \(x\) is in the exposed face \(F_C(s)\): \(\langle s,x\rangle =\max _{y\in C}\langle s,y\rangle \);

  3. \(x=p_C(x+s)\).

Proof

Nothing really new: everything comes from the definitions of normal cones, supporting hyperplanes, exposed faces, and the characteristic property (3.1.3) of the projection operator.

Proposition 1.5.7

For given \(x\in C\) and \(d\in \mathbb {R}^n\), there holds

\[ \lim _{t\downarrow 0}\frac{\operatorname {P}_{C}(x+td)-x}{t}=\operatorname {P}_{T_C(x)}(d). \tag {5.3.3} \]
Proof

HINT. Start from the characterization (3.1.3) of a projection, to observe that the difference quotient \([\operatorname {P}_C(x+td)-x]/t\) is the projection of \(d\) onto \((C-x)/t\). Then let \(t\downarrow 0\); the result follows with the help of (5.1.4) and Remark 5.2.2.
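A numerical check of (5.3.3) (Python with numpy) for \(C\) the unit disc and \(x\) on its boundary, where \(T_C(x)=\{ d:\ \langle x,d\rangle \le 0\} \) and both projections are explicit; the data is hypothetical, for illustration.

```python
import numpy as np

p_C = lambda z: z / max(1.0, np.linalg.norm(z))   # projection onto the unit disc
x = np.array([1.0, 0.0])                          # boundary point, ||x|| = 1
p_T = lambda d: d - max(0.0, x @ d) * x           # projection onto the half-space T_C(x)

d = np.array([0.8, 0.6])
for t in [1e-2, 1e-4, 1e-6]:
    quotient = (p_C(x + t * d) - x) / t
    print(t, np.linalg.norm(quotient - p_T(d)))   # tends to 0 as t decreases
```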