[latexpage]The Lagrangian is another mathematical tool, extended to inequality constraints by Kuhn and Tucker, for solving linear and nonlinear constrained maximization programs. Instead of maximizing $U$ directly, we maximize $\Lambda$. Consider the program,
\[
\left\{
\begin{array}{c}
\max_{k_{t+1},h_{t}}\sum\nolimits_{t=0}^{\infty }\beta ^{t}\left( \ln
c_{t}+A\ln \left( 1-h_{t}\right) \right) \\
\text{s.t.:}\quad f\left( \lambda _{t},k_{t},h_{t}\right) =\lambda _{t}\left( k_{t}\right)
^{\theta }\left( h_{t}\right) ^{1-\theta } \\
\text{s.t.:}\quad k_{t+1}=\left( 1-\delta \right) k_{t}+i_{t} \\
\text{s.t.:}\quad f\left( \lambda _{t},k_{t},h_{t}\right) \geq c_{t}+i_{t}
\end{array}
\right.
\] We write the Lagrangian as the sum of the objective function and the constraints, each weighted by its own Lagrange multiplier. Since the optimization runs over an infinite horizon, the Lagrangian is dynamic. The first and third constraints can be combined into one by substituting the production function into the resource constraint, leaving $\lambda _{t}\left( k_{t}\right) ^{\theta }\left( h_{t}\right) ^{1-\theta }\geq c_{t}+i_{t}$, which simplifies the calculations. We therefore adapt our maximization program by attaching two Lagrange multipliers $\zeta _{t}$ and $\upsilon _{t}$ to the two remaining constraints,
\[
\Lambda \left( \zeta _{t},\upsilon _{t}\right)
=\max_{c_{t},i_{t},k_{t+1},h_{t}}\sum\nolimits_{t=0}^{\infty }\beta ^{t}\left(
\begin{array}{c}
\left( \ln c_{t}+A\ln \left( 1-h_{t}\right) \right) \\
-\zeta _{t}\left[ \lambda _{t}\left( k_{t}\right) ^{\theta }\left(
h_{t}\right) ^{1-\theta }-\left( c_{t}+i_{t}\right) \right] \\
-\upsilon _{t}\left[ \left( 1-\delta \right) k_{t}+i_{t}-k_{t+1}\right] \end{array}%
\right)
\] We must now compute the first-order conditions, setting the gradient of the Lagrangian to zero with respect to each of its variables,
\begin{eqnarray*}
FOC &:&\frac{\partial \Lambda \left( \zeta _{t},\upsilon _{t}\right) }{%
\partial k_{t+1}}=0 \\
&\Leftrightarrow &\beta ^{t}\upsilon _{t}-\beta ^{t+1}\left( \zeta
_{t+1}\theta \lambda _{t+1}\left( \frac{k_{t+1}}{h_{t+1}}\right) ^{\theta
-1}+\upsilon _{t+1}\left( 1-\delta \right) \right) =0 \\
&\Leftrightarrow &\zeta _{t+1}r_{t+1}+\upsilon _{t+1}\left( 1-\delta \right) =%
\frac{\upsilon _{t}}{\beta }
\end{eqnarray*}
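Here $r_{t+1}=\theta \lambda _{t+1}\left( k_{t+1}/h_{t+1}\right) ^{\theta -1}$ is the marginal product of capital in $t+1$, which is why the timing is $t+1$ rather than $t$. Because $k_{t+1}$ appears in both the period-$t$ and period-$(t+1)$ terms of the infinite sum, this is the easiest derivative to get wrong. As a sanity check, here is a minimal sympy sketch (the symbol names are ours, purely illustrative) that differentiates just the terms of $\Lambda$ containing $k_{t+1}$:

```python
import sympy as sp

beta, theta, delta, t = sp.symbols('beta theta delta t', positive=True)
k1, h1, lam1 = sp.symbols('k_{t+1} h_{t+1} lambda_{t+1}', positive=True)
u, z1, u1 = sp.symbols('upsilon_t zeta_{t+1} upsilon_{t+1}')

# Only the terms of the Lagrangian that contain k_{t+1}:
#   period t:   -upsilon_t*((1-delta)k_t + i_t - k_{t+1})  contributes +upsilon_t*k_{t+1}
#   period t+1: -zeta_{t+1}*lambda_{t+1}*k_{t+1}^theta*h_{t+1}^(1-theta)
#               -upsilon_{t+1}*(1-delta)*k_{t+1}
terms = (beta**t * u * k1
         - beta**(t + 1) * (z1 * lam1 * k1**theta * h1**(1 - theta)
                            + u1 * (1 - delta) * k1))

foc = sp.diff(terms, k1) / beta**t
print(sp.simplify(foc))
# matches the condition above, up to rewriting
# k_{t+1}**(theta-1) * h_{t+1}**(1-theta) as (k_{t+1}/h_{t+1})**(theta-1)
```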
\begin{eqnarray*}
FOC &:&\frac{\partial \Lambda \left( \zeta _{t},\upsilon _{t}\right) }{%
\partial h_{t}}=0 \\
&\Leftrightarrow &\beta ^{t}\left( -\frac{A}{1-h_{t}}-\zeta _{t}\left(
1-\theta \right) \lambda _{t}\left( k_{t}\right) ^{\theta }\left(
h_{t}\right) ^{-\theta }\right) =0 \\
&\Leftrightarrow &-\frac{A}{1-h_{t}}=\zeta _{t}\left( 1-\theta \right)
\lambda _{t}\left( \frac{k_{t}}{h_{t}}\right) ^{\theta } \\
&\Leftrightarrow &-\frac{A}{1-h_{t}}=\zeta _{t}w_{t}
\end{eqnarray*}
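The same sketch pattern (again with our own illustrative symbol names) verifies the labor condition, differentiating the period-$t$ terms of $\Lambda$ that contain $h_{t}$:

```python
import sympy as sp

A, theta = sp.symbols('A theta', positive=True)
h, k, lam = sp.symbols('h_t k_t lambda_t', positive=True)
z = sp.Symbol('zeta_t')

# Period-t terms of the Lagrangian containing h_t
terms = A * sp.log(1 - h) - z * lam * k**theta * h**(1 - theta)
print(sp.diff(terms, h))
# setting this to zero gives
# -A/(1-h_t) = zeta_t*(1-theta)*lambda_t*(k_t/h_t)**theta = zeta_t * w_t
```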
Because the constraints are no longer substituted into the objective, the Lagrangian has two additional variables to determine, $c_{t}$ and $i_{t}$; there are therefore more gradients to compute than there are constraints,
\begin{eqnarray*}
FOC &:&\frac{\partial \Lambda \left( \zeta _{t},\upsilon _{t}\right) }{
\partial c_{t}}=0 \\
&\Leftrightarrow &\beta ^{t}\left( \frac{1}{c_{t}}+\zeta _{t}\right) =0 \\
&\Leftrightarrow &\frac{1}{c_{t}}=-\zeta _{t}
\end{eqnarray*}
\begin{eqnarray*}
FOC &:&\frac{\partial \Lambda \left( \zeta _{t},\upsilon _{t}\right) }{
\partial i_{t}}=0 \\
&\Leftrightarrow &\beta ^{t}\left( \zeta _{t}-\upsilon _{t}\right) =0 \\
&\Leftrightarrow &\zeta _{t}=\upsilon _{t}=-\frac{1}{c_{t}}
\end{eqnarray*}
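With all four conditions in hand, the substitution $\zeta _{t}=\upsilon _{t}=-1/c_{t}$ can also be checked mechanically. A short sketch (same caveat: the names are ours) solves the intertemporal and intratemporal conditions:

```python
import sympy as sp

beta, delta, A = sp.symbols('beta delta A', positive=True)
c, c1, r1, w, h = sp.symbols('c_t c_{t+1} r_{t+1} w_t h_t', positive=True)

# FOCs on c_t and i_t give zeta_t = upsilon_t = -1/c_t in every period
z = u = -1 / c
z1 = u1 = -1 / c1

# Intertemporal FOC: zeta_{t+1} r_{t+1} + upsilon_{t+1}(1-delta) = upsilon_t / beta
euler = sp.Eq(z1 * r1 + u1 * (1 - delta), u / beta)
print(sp.solve(euler, c1))  # -> c_{t+1} = beta*c_t*(r_{t+1} + 1 - delta)

# Intratemporal FOC: -A/(1-h_t) = zeta_t w_t
intra = sp.Eq(-A / (1 - h), z * w)
print(sp.solve(intra, c))   # -> c_t = w_t*(1 - h_t)/A, i.e. A c_t = (1 - h_t) w_t
```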
Our four first-order conditions, after a little algebra, lead to the following system,
\[
\left\{
\begin{array}{c}
\frac{c_{t+1}}{\beta c_{t}}=r_{t+1}+\left( 1-\delta \right) \\
Ac_{t}=\left( 1-h_{t}\right) w_{t}%
\end{array}%
\right.
\] While the Lagrangian is slightly longer to write out from a purely mathematical point of view, it has the advantage of simplifying the optimization problem. In practice, calculation errors are far less frequent with the Lagrangian than when the constraints are substituted into the objective and the partial derivatives taken directly.
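As a final illustration, in a steady state where $c_{t+1}=c_{t}$ the Euler equation reduces to $1/\beta =r+1-\delta $. A tiny numerical sketch, with purely illustrative parameter values (they are not from the text), shows the implied rental rate and capital-labor ratio:

```python
# Steady state of the Euler equation: c_{t+1} = c_t implies 1/beta = r + 1 - delta.
# Parameter values are illustrative only.
beta, delta, theta = 0.99, 0.025, 0.36

r = 1 / beta - (1 - delta)                 # steady-state rental rate of capital
k_over_h = (theta / r)**(1 / (1 - theta))  # invert r = theta*(k/h)**(theta-1), with lambda = 1
print(f"r = {r:.4f}, k/h = {k_over_h:.2f}")
```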