# Lagrange Multipliers
Turning a constrained problem into an unconstrained one
The basic idea behind the method of Lagrange multipliers is to convert a constrained optimization problem into a form where the usual derivative-based tests for maxima and minima can still be applied.
Suppose we want to maximize or minimize a function $f(x)$ subject to one or more equality constraints $g_i(x) = 0$:

$$\max_x \; f(x) \quad \text{subject to} \quad g(x) = 0 \tag{1}$$
Instead of solving this directly, we define a new function, called the **Lagrangian**:

$$\mathcal{L}(x, \lambda) = f(x) - \langle \lambda, g(x) \rangle \tag{2}$$

where $\langle \cdot, \cdot \rangle$ is the inner product. In simpler cases, this reduces to a dot product:

$$\mathcal{L}(x, \lambda) = f(x) - \lambda \cdot g(x) = f(x) - \sum_i \lambda_i \, g_i(x) \tag{3}$$
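To make the dot-product form concrete, here is a minimal numeric sketch; the particular `f` and `g` below are placeholders invented purely for illustration.

```python
# A small sketch of the dot-product form of the Lagrangian, using the
# sign convention L(x, λ) = f(x) - λ·g(x). The functions f and g here
# are illustrative placeholders, not part of the worked example below.

def lagrangian(f, g, x, lam):
    """L(x, λ) = f(x) - λ·g(x); lam and g(x) are same-length lists."""
    return f(x) - sum(l * gi for l, gi in zip(lam, g(x)))

# One equality constraint: f(x, y) = x + y restricted to the line x = y.
f = lambda p: p[0] + p[1]
g = lambda p: [p[0] - p[1]]                 # g(x, y) = x - y = 0
print(lagrangian(f, g, (2.0, 1.0), [3.0]))  # 3.0 - 3.0*1.0 = 0.0
```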
To find optimal points, we look for stationary points of $\mathcal{L}$. That means we solve:

$$\nabla_x \mathcal{L}(x, \lambda) = 0 \tag{4}$$

$$\nabla_\lambda \mathcal{L}(x, \lambda) = 0 \tag{5}$$
Equation (5) is equivalent to enforcing the original constraint:

$$g(x) = 0 \tag{6}$$
while (4) ensures the gradients of $f$ and $g$ are aligned:

$$\nabla f(x) = \sum_i \lambda_i \, \nabla g_i(x) \tag{7}$$
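The alignment condition can be checked numerically. This sketch assumes, for illustration, the objective $f(x, y) = xy$ on the unit circle $g(x, y) = x^2 + y^2 - 1 = 0$ (the same setup as the worked example later), evaluated at a candidate point.

```python
import math

# Hedged check of the gradient-alignment condition ∇f = λ∇g for an
# assumed example: f(x, y) = x*y constrained to the unit circle
# g(x, y) = x^2 + y^2 - 1 = 0, at the candidate point (1/√2, 1/√2).

def grad_f(x, y):
    return (y, x)            # ∇f for f = x*y

def grad_g(x, y):
    return (2 * x, 2 * y)    # ∇g for g = x^2 + y^2 - 1

x = y = 1 / math.sqrt(2)
gf, gg = grad_f(x, y), grad_g(x, y)
lam = gf[0] / gg[0]          # multiplier implied by the first component
# Alignment holds iff the same λ works for every component of ∇f = λ∇g:
aligned = all(abs(a - lam * b) < 1e-12 for a, b in zip(gf, gg))
print(lam, aligned)          # 0.5 True
```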
This approach avoids needing to parameterize the constraint explicitly, and the solution to the original problem turns out to be a saddle point of the Lagrangian function. In more advanced settings, checking definiteness via the bordered Hessian can confirm the nature of the point.
This method also extends beyond equality constraints using the more general Karush–Kuhn–Tucker (KKT) conditions, which can handle inequality constraints like $g(x) \le 0$.
## Example
Let’s find the maximum and minimum values of:

$$f(x, y) = xy$$

subject to:

$$g(x, y) = x^2 + y^2 - 1 = 0$$

This keeps us on the unit circle.
### Setting up the Lagrangian and equations
We define the Lagrangian:

$$\mathcal{L}(x, y, \lambda) = xy - \lambda \left( x^2 + y^2 - 1 \right) \tag{8}$$
Take partial derivatives and set them to zero:

$$\frac{\partial \mathcal{L}}{\partial x} = y - 2\lambda x = 0 \tag{9}$$

$$\frac{\partial \mathcal{L}}{\partial y} = x - 2\lambda y = 0 \tag{10}$$

$$\frac{\partial \mathcal{L}}{\partial \lambda} = -\left( x^2 + y^2 - 1 \right) = 0 \tag{11}$$
### Solving the system
Substituting $y = 2\lambda x$ from (9) into (10) gives $x = 4\lambda^2 x$. Since $x = 0$ would force $y = 0$, violating the constraint, we need $4\lambda^2 = 1$, i.e. $\lambda = \pm \tfrac{1}{2}$. This gives two cases:

- $\lambda = \tfrac{1}{2}$, so $y = x$ and $2x^2 = 1$:
  - $\left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right)$ and $\left( -\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}} \right)$
- $\lambda = -\tfrac{1}{2}$, so $y = -x$ and $2x^2 = 1$:
  - $\left( \tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}} \right)$ and $\left( -\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right)$
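The case analysis can be spot-checked numerically. This sketch assumes the objective $f(x, y) = xy$ used in this example and substitutes all four candidate points into the stationarity system:

```python
import math

# Hedged check: substitute the four candidate points (±1/√2, ±1/√2),
# with their multipliers, into the stationarity system and confirm
# every residual vanishes. Assumes the example objective f(x, y) = x*y.

s = 1 / math.sqrt(2)
candidates = [
    (s, s, 0.5), (-s, -s, 0.5),      # case y = x,  λ = 1/2
    (s, -s, -0.5), (-s, s, -0.5),    # case y = -x, λ = -1/2
]

def residuals(x, y, lam):
    return (y - 2 * lam * x,          # ∂L/∂x = 0
            x - 2 * lam * y,          # ∂L/∂y = 0
            x * x + y * y - 1)        # constraint x^2 + y^2 = 1

ok = all(max(map(abs, residuals(*c))) < 1e-12 for c in candidates)
print(ok)  # True
```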
### Checking which ones are max or min
Evaluate $f(x, y) = xy$ at each critical point:

$$f\!\left( \pm\tfrac{1}{\sqrt{2}}, \pm\tfrac{1}{\sqrt{2}} \right) = \tfrac{1}{2}, \qquad f\!\left( \pm\tfrac{1}{\sqrt{2}}, \mp\tfrac{1}{\sqrt{2}} \right) = -\tfrac{1}{2}$$

Therefore:

- Maximum value: $\tfrac{1}{2}$, at $\left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right)$ and $\left( -\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}} \right)$
- Minimum value: $-\tfrac{1}{2}$, at $\left( \tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}} \right)$ and $\left( -\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right)$
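As a final sanity check, since the constraint here is the unit circle, we can parameterize it as $(\cos t, \sin t)$ and scan the assumed objective $f(x, y) = xy$ directly; no multipliers needed.

```python
import math

# Hedged numeric confirmation: parameterize the unit circle as
# (cos t, sin t) and scan the assumed objective f(x, y) = x*y to
# confirm the extreme values found with the multiplier method.

f = lambda x, y: x * y
n = 100_000
values = [f(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n)]
print(round(max(values), 6), round(min(values), 6))  # 0.5 -0.5
```

This brute-force scan only works because the constraint set is one-dimensional and easy to parameterize; the point of Lagrange multipliers is precisely that no such parameterization is needed.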