OK, so it’s been a painfully long time since I wrote, and I really don’t have any good excuse for that, sigh. I will try to change this. Anyways, in this post, I want to talk about LPs (*linear programs*, known in more modern terms as *linear optimization problems*). More specifically, I want to talk about duality. Before I start with the interesting stuff, however, I want to establish the setup of the problem, which I’ll try to do as concisely as possible, only introducing the concepts we will need.

**Setup and terminology**. In general, a linear optimization problem is concerned with optimizing a *linear* function $f : \mathbb{R}^n \to \mathbb{R}$, called the *objective function*, subject to some finite number (say $m$) of *linear* non-strict inequalities (or equations), called the *constraints*. To be more precise, we want to find a point $x \in \mathbb{R}^n$, satisfying all the constraints, for which the objective function has maximum (or minimum, depending on the problem) value. Such an $x$ is known as an *optimal solution*. In general, any $x$ which satisfies the constraints is known as a *feasible solution*. Note that optimal solutions may not always exist (let alone be unique), and there might not even *be* any feasible solutions! If we have two vectors $x, y \in \mathbb{R}^n$, then we will write $x \le y$ to mean that $x_i \le y_i$ for all $1 \le i \le n$; in other words, the components of $x$ are all at most the corresponding ones of $y$. This clearly isn't a total order, but that's OK.

**A concise way of expressing the problem**. In general, a problem like I described above will look something like this:

Maximize (or minimize) a linear function $f(x)$ subject to $m$ constraints, each of the form $g_i(x) \le b_i$, $g_i(x) \ge b_i$, or $g_i(x) = b_i$, with each $g_i$ linear.

Our hope, however, is to convert the problem to *standard form*, that is, write it as

Maximize $c^\top x$ subject to the constraints $Ax \le b$, $x \ge 0$.

This might mean converting some $\ge$ constraints into $\le$ constraints, by multiplying by $-1$ or something. Also, notice how we've encoded the objective function as the action of a covector $c^\top$ on the vector $x$. Similarly, we've conveniently "collapsed" all of the $m$ linear inequalities into the concise matrix inequality $Ax \le b$, where $A$ is an $m \times n$ real matrix and $b \in \mathbb{R}^m$.
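To make the standard-form bookkeeping concrete, here is a tiny Python sketch with a made-up two-variable problem (the numbers are my own, not from the post): one $\ge$ constraint is flipped to $\le$ by multiplying through by $-1$, and feasibility of a point is just a componentwise check of $Ax \le b$ together with $x \ge 0$.

```python
# Toy problem (my own example): maximize 3x + 2y
# subject to  x + y <= 4  and  x >= 1,  with x, y >= 0.
# The constraint x >= 1 is flipped to -x <= -1 to match the form Ax <= b.

A = [[1, 1],    # x + y <= 4
     [-1, 0]]   # -x    <= -1   (was x >= 1)
b = [4, -1]
c = [3, 2]      # objective: the covector acting on x

def is_feasible(x, A=A, b=b):
    """Check Ax <= b componentwise, plus the sign constraint x >= 0."""
    Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
    return all(v <= bi for v, bi in zip(Ax, b)) and all(xj >= 0 for xj in x)

def objective(x, c=c):
    """The value c^T x of the objective function at x."""
    return sum(cj * xj for cj, xj in zip(c, x))

print(is_feasible([2, 1]))   # True: 2 + 1 <= 4 and 2 >= 1
print(is_feasible([0, 3]))   # False: violates x >= 1
print(objective([2, 1]))     # 3*2 + 2*1 = 8
```

Nothing deep here, but it shows that "collapsing" the constraints into $Ax \le b$ is exactly what lets feasibility be a single componentwise comparison.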

**Geometric intuition**. With a bit of thought, we notice that each of the $m$ constraints "splits" the whole space $\mathbb{R}^n$ into two parts: the vectors which satisfy that constraint, and those that don't. These "constraint regions", as I will call them, turn out to actually be *half-spaces*. The "feasible region" (that is, the space of all *feasible solutions*) we speak of is merely the intersection of all the separate constraint regions. The problem then reduces to finding the *optimal* real number $\alpha$ such that the feasible region has nontrivial intersection with the level set $\{x \in \mathbb{R}^n : c^\top x = \alpha\}$. Each level set will always be an affine space, obtained by translating $\ker c^\top$, which (due to rank-nullity) is a hyperplane, ignoring the degenerate case $c = 0$. In other words, we're "sliding" along an uncountable stack of hyperplanes sitting in $\mathbb{R}^n$, optimizing $\alpha$ as much as we can until our hyperplane is *just touching the edge*, in some sense, of the feasible region.
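The "sliding hyperplane" picture can be mimicked numerically with a brute-force Python sketch (the problem data is my own toy example): sample the feasible region on a fine grid and track the largest value of $c^\top x$ seen. The maximizer sits on the last level set that still touches the region, which for this triangle is a vertex.

```python
# Brute-force illustration of "sliding" the level sets c.x = alpha over
# the feasible region {(x, y) : x + y <= 4, x >= 0, y >= 0}, with c = (3, 2).

c = (3, 2)

def feasible(x, y):
    return x + y <= 4 and x >= 0 and y >= 0

# Sample the region on a grid and track the best alpha seen so far.
step = 0.01
n = int(4 / step)
best_alpha, best_pt = float("-inf"), None
for i in range(n + 1):
    for j in range(n + 1):
        x, y = i * step, j * step
        if feasible(x, y):
            alpha = c[0] * x + c[1] * y
            if alpha > best_alpha:
                best_alpha, best_pt = alpha, (x, y)

# The last level set to touch the region passes through the vertex (4, 0),
# giving alpha = 12.
print(best_alpha, best_pt)
```

Of course, grid search scales terribly; the point is only to see that "optimize $\alpha$ until the hyperplane just touches the region" is the same thing as maximizing $c^\top x$ over feasible points.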

Supplemental readings: http://www.math.uwaterloo.ca/~harvey/F10/

It might be my geometric intuition that's messing me up, but I assume that this is analogous to somewhat of a quotient space of $\mathbb{R}^n$, where we are just quotienting out some one-dimensional space to obtain a stack of hyperplanes. By intersecting these half-spaces (the constraints) with the hyperplanes, it seems that now we have a set of "half-hyperplanes" or some lower-dimensional variant.

Is there anything that we can use to determine whether our space of feasible solutions is one-, two-, or $n$-dimensional? Or whether there is only one unique solution? Or whether the solutions are just constrained to a line segment?

… Whoops. I think I answered my own question. I forgot that intersecting spaces can only either lower the dimension or keep it the same.

Well, if you identify points that share a hyperplane, $\mathbb{R}^n$ effectively becomes a one-dimensional space, i.e. the orthogonal complement (which in this case coincides with the quotient space) of some $(n-1)$-dimensional subspace, namely the kernel of that linear functional. I don't think viewing it like that really helps at all, though, since you care about whether or not each hyperplane intersects the feasible region.

The feasible region is just the set of points in $\mathbb{R}^n$ that satisfy all the constraints (i.e. lie in all of what I call the "constraint regions", which are half-spaces). This region will be convex, since half-spaces are convex and the intersection of a family of convex sets is again convex. Also, it could be empty, or unbounded (in the sense of MATH 247: for every $r > 0$ there is some point $x$ in the region so that $\|x\| > r$). Also, it's pretty improbable that two (let alone more) half-spaces intersect in a subspace (in fact, the two half-spaces would have to be closed and perfectly complemented, from my own geometric intuition).
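The convexity claim is easy to sanity-check numerically. Here's a small Python sketch (the constraint data is my own toy example, not from the thread) that walks along the segment between two feasible points and confirms that every point on it stays feasible, as convexity demands.

```python
# Numerical sanity check that an intersection of half-spaces is convex:
# every convex combination of two feasible points is again feasible.

A = [[1, 1], [-1, 0]]   # x + y <= 4,  -x <= -1  (i.e. x >= 1)
b = [4, -1]

def feasible(x):
    """Componentwise check of Ax <= b."""
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))

p, q = [2.0, 1.0], [1.0, 3.0]
assert feasible(p) and feasible(q)

# Walk along the segment (1 - t) p + t q; every point should stay feasible.
for k in range(101):
    t = k / 100
    z = [(1 - t) * pi + t * qi for pi, qi in zip(p, q)]
    assert feasible(z)

print("all points on the segment are feasible")
```

A check like this obviously proves nothing in general, but it's a quick way to catch a sign error in a constraint matrix, since convexity would fail visibly.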

@ 2nd paragraph: Hmm, in regards to that unbounded definition, are you saying that we can have some intersection of half-spaces that will give us a "space" that is the complement of a closed ball (with radius $r$) centered at the origin? In that case, it would really be a convex set… then again, since I haven't taken 247 yet, I might just be misinterpreting the wording.

@1st paragraph: What do you mean "identify points that share a hyperplane"? Do you mean that if you find points that satisfy the criterion that they lie exclusively in one of the level sets $\{x : c^\top x = \alpha\}$ for some $\alpha$, then for any two values $\alpha_1, \alpha_2$ we have $\{x : c^\top x = \alpha_1\}$ equal to some translated version of $\{x : c^\top x = \alpha_2\}$?

Anyways, I can see the idea that this identified space is just isomorphic to the orthogonal complement $(\ker c^\top)^\perp$ and can be viewed as a quotient space of sorts, but something I don't get from the original post is what you mean by "just touching the edge". Are you talking about choosing $\alpha$ such that the hyperplane is tangential to the feasible solution space?

First question: First off, the complement of a closed ball is not going to be convex. The possible "shapes" of the feasible region of an LP problem are significantly restricted by the fact that we only allow finitely many constraints. Therefore, the feasible region is an intersection of finitely many half-spaces. For example, I know of no way to generate a ball as a finite intersection of half-spaces, although with an infinite (probably uncountable) intersection I think it could be done.
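That last remark can be illustrated in Python (the construction is my own, not from the thread): the closed unit disk is the intersection of the uncountably many tangent half-spaces $\cos(t)\,x + \sin(t)\,y \le 1$, and any finite subfamily only gives a polygon that over-approximates the disk.

```python
import math

# The closed unit disk is the intersection of ALL half-spaces
#   cos(t) x + sin(t) y <= 1,   t in [0, 2*pi).
# A finite sample of k of them gives a circumscribed polygon instead.

def in_polytope(x, y, k):
    """Intersection of k evenly spaced tangent half-spaces to the unit circle."""
    return all(math.cos(2 * math.pi * i / k) * x +
               math.sin(2 * math.pi * i / k) * y <= 1
               for i in range(k))

# (0.9, 0.9) lies OUTSIDE the unit disk (norm ~1.27), but with only
# k = 4 half-spaces the "polytope" is the square [-1, 1]^2, which contains it.
print(in_polytope(0.9, 0.9, 4))    # True: the coarse polygon is too big
print(in_polytope(0.9, 0.9, 64))   # False: a finer polygon excludes it
print(in_polytope(0.5, 0.5, 64))   # True: points of the disk stay feasible
```

So each extra half-space shaves the polygon closer to the disk, but no finite number of them ever reaches it, which matches the remark above.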

Second question: When I say “identify” a collection of objects, I mean “consider everything in that collection to be the same”.

Third question: It doesn't really have much to do with tangency, but more to do with the fact that we want to choose an "optimal" $\alpha$ (the meaning of "optimal" depends on whether it is a maximization or minimization problem) such that the hyperplane $\{x : c^\top x = \alpha\}$ has nontrivial intersection with the feasible region. If we can do this, then we can just pluck some $x$ that sits both on that "optimal hyperplane" and in the feasible region. Such an $x$ will then be an optimal solution to the problem (since the objective function's value is optimized there), if you think about it.

@First: Whoops, a typo, it really should be “wouldn’t”. To add to this, would it no longer be called linear programming if we had a sequence of constraints rather than some finite collection?

@Third: Oh, okay. The choice of $\alpha$ would depend on the type of optimization problem that we're inspecting.

To come back to your first question above: it turns out that much of the rich theory (in particular, the extremely useful so-called simplex algorithm) of linear programming relies on the fact that feasible regions of linear programs are polyhedra in $\mathbb{R}^n$. That is, they are sets of the form $\{x \in \mathbb{R}^n : Ax \le b\}$, where $A$ is an $m \times n$ matrix and $b \in \mathbb{R}^m$.
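For a taste of why polyhedra make life easy, here is a brute-force Python sketch (a toy stand-in of my own, emphatically not a real simplex implementation): for a bounded, nonempty two-dimensional polyhedron $\{x : Ax \le b\}$, an optimum is attained at a vertex, i.e. at the intersection of two constraint lines. So we can intersect every pair of lines, discard the infeasible intersection points, and take the best objective value among the rest.

```python
from itertools import combinations

# "Baby simplex" by vertex enumeration, for the toy 2-D problem:
#   maximize 3x + 2y  over  {x + y <= 4, x >= 0, y >= 0}.

A = [[1, 1], [-1, 0], [0, -1]]   # x + y <= 4,  x >= 0,  y >= 0
b = [4, 0, 0]
c = [3, 2]

def solve_2x2(r1, b1, r2, b2):
    """Intersect the lines r1.x = b1 and r2.x = b2 by Cramer's rule."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None                      # parallel lines: no vertex here
    x = (b1 * r2[1] - b2 * r1[1]) / det
    y = (r1[0] * b2 - r2[0] * b1) / det
    return (x, y)

def feasible(p):
    return all(row[0] * p[0] + row[1] * p[1] <= bi + 1e-9
               for row, bi in zip(A, b))

vertices = [v for i, j in combinations(range(len(A)), 2)
            if (v := solve_2x2(A[i], b[i], A[j], b[j])) and feasible(v)]
best = max(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # should be the vertex (4, 0), with objective value 12
```

Enumerating all $\binom{m}{n}$ vertex candidates is hopeless in higher dimensions; the simplex algorithm's trick is to walk from vertex to neighboring vertex, improving the objective as it goes, rather than listing them all.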

If you have infinitely many constraints, then you can't write them as a single matrix inequality anymore. I guess you'd have to write things like $a_i^\top x \le b_i$ for all $i \in I$, with $I$ infinite, for your constraint system. It's interesting, but yeah, it basically ceases to be considered linear programming at that point.