Linear programming is a branch of applied mathematics that deals with finding optimal solutions to problems where the output is constrained by the inputs.
I took a course in it years ago… I have never used it.
I recall that it was quite interesting, but again, I have never seen a problem in the real world where it was actually used.
Is it something that is mainly theoretical, with limited (practical) use?
Fifteen years ago, I am pretty sure it was used quite a bit for manufacturing/production types of problems. I am not sure what the state of it is today, as I have not been involved in those areas.
I am interested in hearing other answers, as, like you, I always enjoyed it, but don’t know how useful it is today.
Linear programming and its relatives, especially semidefinite programming and generally everything falling under the ‘convex optimization’ umbrella, are used quite extensively in quantum information theory; I use it, for instance, to find ways to maximally violate certain inequalities. Of course, how much one considers this a ‘real world’ application may be up for debate…
It was used at my university to assign students to freshman writing seminar classes (and we did this experiment as a final project and got roughly the same statistics as the real assignments).
Nowadays it would not surprise me if there are applications for manufacturing and production that do linear programming “under the hood”, but I doubt many industrial engineers are frequently coding up custom linear programs.
Although I can’t recall ever using linear programming in my engineering career, my impression was the opposite: that it is still frequently used in a wide variety of problems. The simplex method used to find optimal strategies for a 2-person game is a form of linear programming. (I was intrigued recently to learn that Joseph Fourier anticipated the simplex method more than a century before Dantzig developed it.)
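Here’s a minimal sketch of that game trick, assuming SciPy is available (the rock-paper-scissors payoff matrix and everything else here is just my illustration):

```python
# Sketch: the row player's optimal mixed strategy in a zero-sum game,
# found with one linear program. linprog minimizes, so we minimize -v.
import numpy as np
from scipy.optimize import linprog

# Payoff matrix for the row player (rock-paper-scissors).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])
m, n = A.shape

# Variables: x_1..x_m (the mixed strategy) followed by v (the game value).
c = np.zeros(m + 1)
c[-1] = -1.0                               # maximize v == minimize -v

# For every column j the opponent might play: v - sum_i x_i * A[i, j] <= 0
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# The probabilities must sum to 1.
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])

bounds = [(0, 1)] * m + [(None, None)]     # v is free in sign
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:m], -res.fun)                 # should give (1/3, 1/3, 1/3), value 0
```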
When I teach it in algebra, it is as an application of solving a system of equations, since the optimum points are at the intersections of the constraint boundaries. While I’m sure there are uses for linear programming, the constraints have to be so contrived in even a basic problem that I’m sure there are better methods for most real-life applications.
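For what it’s worth, here is a toy Python sketch of that corner-point idea (the problem data is invented): enumerate the intersections of the constraint boundary lines, keep the feasible ones, and pick the best objective value.

```python
# Sketch: maximize 3x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0
# by checking the corners of the feasible region (invented toy problem).
from itertools import combinations

# Each constraint is a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

corners = [p for c1, c2 in combinations(constraints, 2)
           if (p := intersect(c1, c2)) is not None and feasible(p)]
print(max(corners, key=lambda p: 3 * p[0] + 2 * p[1]))   # (3.0, 1.0)
```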
A certain subset of linear programming - shortest-path problems - is incredibly useful.
Not just the obvious stuff like your GPS finding the best way to the new Culver’s, but phone lines and computer networking.
There, you’re not so concerned with getting the shortest path as with getting a path, but it’s still mostly modifications and enhancements of Dijkstra’s algorithm.
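A bare-bones sketch of it, for the curious (my own illustration, nothing from a real routing stack; the graph maps each node to a list of (neighbor, weight) pairs):

```python
# Sketch of Dijkstra's algorithm with a binary heap.
import heapq

def dijkstra(graph, source):
    """Return a dict of shortest-path distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(dijkstra(graph, "A"))               # {'A': 0, 'C': 1, 'B': 3}
```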
Dorigo’s ant colony method has some limited applications too, for fleets of LTL vehicles, travelling salesman problems, etc.
It is useful in portfolio optimization, industrial design, and project management. In fact, I have met people in these fields who think that “simplex” is the sum total of all optimization techniques known to humanity.
It’s really difficult to overstate the importance of linear programming and related methods. Rather than give a laundry list of applications, I’ll just note that the simplex method made SIAM’s list of the top ten algorithms of the twentieth century.
It is/was used in the cement industry. You have a number of raw materials of known chemistry which are mixed together to provide a raw meal meeting specified targets or constraints. The raw meal is then burnt in a kiln to produce a clinker of the required quality.
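A toy version of that kind of blend, with completely made-up numbers and SciPy’s linprog doing the work: choose tonnages of three raw materials to minimize cost while hitting a minimum CaO content and a fixed total.

```python
# Sketch: least-cost raw-meal blend. All figures are invented.
from scipy.optimize import linprog

cost = [20.0, 35.0, 15.0]     # $/tonne: limestone, clay, shale (made up)
cao  = [0.50, 0.10, 0.05]     # CaO fraction of each material (made up)

# Minimize cost subject to: total tonnage = 100, CaO >= 42 tonnes
# (linprog wants <=, so the CaO floor becomes -CaO <= -42).
A_ub = [[-f for f in cao]]
b_ub = [-42.0]
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [100.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x, res.fun)         # roughly 82.2 t limestone, 17.8 t shale
```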
We use it a lot, lot, lot in my industry - electric utilities - to find the best ways to meet load demand using various generation capabilities over the course of several years, and to build upon that info for projecting fuel inventory requirements, long and short power positions, emissions outputs, when to build or retire units, etc.
We combine the LP with stochastics, but it is hugely important for us, and we use it for some other stuff too (like how to optimize fuel deliveries when considering contract and transport constraints).
It sounds like what you taught in algebra was the graphical approach, where you’re actually drawing the feasible region and finding intersection points of the boundary lines. In that approach, you’re limited to two independent variables. But linear programming in general, using the simplex method, has no such limitation.
In fact, the problems that people are solving with linear programs these days can involve about a hundred thousand variables and constraints.
Does the term “linear programming” apply to all optimization algorithms, or just some subset? Because in my field, nobody ever uses simplexes any more; it seems like it’s all Markov chain Monte Carlo of various sorts.
A linear program is a problem where you’re trying to minimize cᵀx subject to the restrictions Ax = b and x ≥ 0. There are other forms, but they’re all equivalent to this one. The simplex algorithm is an algorithm for solving problems of this form, although it’s not the only one. If you don’t have linearity, you need a different algorithm.
(Truth be told, the relevant distinction is between convex and non-convex problems, but that’s not something that the average student is going to encounter in their math classes.)
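To make that concrete, here’s a tiny instance of exactly that form, solved with SciPy’s linprog, which happens to take the problem in the same shape (the numbers are arbitrary):

```python
# Sketch: minimize c^T x subject to Ax = b, x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 0.0])          # objective coefficients
A = np.array([[1.0, 1.0, 1.0]])        # one equality constraint
b = np.array([10.0])                   # x1 + x2 + x3 = 10

# linprog's bounds default to (0, None), i.e. x >= 0, matching the form above.
res = linprog(c, A_eq=A, b_eq=b)
print(res.x, res.fun)                  # x = (0, 0, 10), objective 0
```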
I’ve seen tons of papers using linear programming for engineering optimization problems, and I know of one EDA product which makes use of the method, so it is not just theoretical. I’ve used it only once myself. Linear programming capability is built into a lot of math packages, so it must be useful - the market speaks!
It’s worth billions of dollars to businesses worldwide: production planning, vehicle routing, and crew scheduling (I doubt any airline in the world doesn’t use it). I should add that there is an important “extension” to linear programming called integer programming (or integer linear programming, often with a “mixed” in front). This requires that the variables be whole numbers. That might not seem like a big distinction, but it allows you to solve things like the knapsack problem and the travelling salesman problem using linear programming; the travelling salesman problem over the US state capitals was solved by hand using the simplex algorithm in the 1950s. Even if a problem can’t be expressed as a linear program, you might be able to express a relaxation of the problem as one and use the information from solving that relaxation to greatly help you find better solutions - which is in fact how integer programs are solved.
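A sketch of that integer-programming extension (toy data; it assumes a reasonably recent SciPy, whose linprog grew an integrality option): a 0/1 knapsack is just a linear program plus the requirement that each variable be an integer.

```python
# Sketch: 0/1 knapsack as a (mixed-)integer linear program. Data invented.
import numpy as np
from scipy.optimize import linprog

values  = np.array([10.0, 13.0, 7.0, 4.0])
weights = np.array([ 5.0,  7.0, 3.0, 2.0])
capacity = 10.0

res = linprog(
    -values,                            # linprog minimizes, so negate values
    A_ub=[weights], b_ub=[capacity],    # total weight within capacity
    bounds=[(0, 1)] * len(values),
    integrality=np.ones(len(values)),   # each x_i must be an integer (0 or 1)
)
print(res.x, -res.fun)                  # picks items 1, 3, 4 for value 21
```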
It’s a core application for animal nutritionists formulating least-cost feed rations for intensive livestock, particularly pigs and poultry.
I use it for several logistics models in the FMCG business I’m currently with.
Even built a model of the state’s court system with integer programming when we were doing a project on debottlenecking.
In Operations Research (a business course), we had to use it to solve simultaneous equations - where the same variables appear in each equation, but the constant/coefficient on each variable is different in each equation.
IIRC we took each coefficient and plugged it into a cell of a matrix. Then the computer would iterate (multiply) the matrix by itself a certain number of times (which I guess was set arbitrarily). If the matrix reached a steady state (i.e., further iterations produced the same result), then the solution would be along the diagonal of the matrix, I think. If there was no steady state, then the set of equations had no solution.
The only constraint was that you needed as many equations as there were variables (or maybe # var +1, not sure).
Anyway, that always seemed like magic to me. I eventually took a course in linear algebra but the prof never finished the syllabus so it still seemed like magic.
Is it possible to explain how having a steady state relates to the existence of a solution? I know this is a bit off topic and I can put it in a new thread if appropriate. Thanks.