MLEF as a non-differentiable minimization algorithm

Wednesday, March 5, 2014

The MLEF can be derived without the common differentiability and linearity assumptions (Zupanski et al. 2008). As a consequence, non-differentiable minimization algorithms can be derived as generalizations of gradient-based methods, such as the nonlinear conjugate gradient (CG) and quasi-Newton (QN) methods. The non-differentiable aspect of the MLEF algorithm is illustrated in an example with a one-dimensional Burgers model and simulated observations. A comparison between the generalized non-differentiable CG and the standard differentiable CG method is shown, in an example with a cubic, non-differentiable observation operator. Both the cost function and the gradient norm show the superior performance of the MLEF-based non-differentiable minimization algorithm. These results indicate an important advantage of the MLEF for assimilation of cloud observations and processes, which are particularly challenging due to their discontinuous nature.
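The idea can be sketched in one dimension: replace the analytic gradient of the variational cost function with a generalized gradient built from finite perturbations, so that no derivative of the observation operator is required. The following is a minimal illustrative sketch, not the authors' Burgers-model experiment; the operator `h`, the background/observation values, and the perturbation size `eps` are all assumed for illustration.

```python
# Illustrative 1D sketch (assumed values, not the paper's setup): a
# variational cost with a "cubic" observation operator that has a kink.
xb, B = 1.0, 1.0   # background state and its error variance (assumed)
y, R = 8.0, 0.5    # observation and its error variance (assumed)

def h(x):
    """Non-differentiable cubic-type observation operator: kink at x = 0."""
    return x**3 if x >= 0 else 3.0 * x

def cost(x):
    """Variational cost: background term plus observation term."""
    return 0.5 * (x - xb)**2 / B + 0.5 * (h(x) - y)**2 / R

def grad_generalized(x, eps=1e-2):
    """Generalized gradient from finite perturbations of the cost, in the
    spirit of the MLEF: no differentiability of h is required."""
    return (cost(x + eps) - cost(x - eps)) / (2.0 * eps)

def cg_minimize(x0, grad, n_iter=40):
    """Nonlinear CG (Polak-Ribiere+) with Armijo backtracking line search."""
    x = x0
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        # backtracking line search along the search direction d
        step = 1.0
        while cost(x + step * d) > cost(x) + 1e-4 * step * g * d and step > 1e-12:
            step *= 0.5
        x += step * d
        g_new = grad(x)
        beta = 0.0 if g == 0.0 else max(0.0, g_new * (g_new - g) / (g * g))
        d = -g_new + beta * d
        if g_new * d >= 0.0:   # safeguard: restart along steepest descent
            d = -g_new
        g = g_new
    return x

x_opt = cg_minimize(-0.5, grad_generalized)
print(x_opt, cost(x_opt))
```

Starting from x = -0.5, the iterates must cross the kink at x = 0 on their way to the minimum; the finite-perturbation gradient remains well defined there, which is the property the generalized CG exploits.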

Images: The cost function (left panel) and the gradient norm (right panel) for the non-differentiable CG method (solid blue line) and for the standard differentiable CG method (dashed red line).