DAKOTA OPT++ Finite Differences Newton (FDN)

Description
The Newton method assumes that the objective function can be approximated as a quadratic function around the local optimum. This algorithm uses a finite difference scheme to compute the gradients and the second-derivative matrix (the Hessian). Constraints are handled by an interior-point penalty function.
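The following Python sketch illustrates the idea, not OPT++'s actual implementation: it builds a forward-difference gradient and a central-difference Hessian, then takes one Newton step. All function names are illustrative.

    import numpy as np

    def fd_gradient(f, x, h=1e-6):
        # Forward differences: n+1 evaluations of f for n variables.
        n = len(x)
        fx = f(x)
        g = np.zeros(n)
        for i in range(n):
            step = np.zeros(n)
            step[i] = h
            g[i] = (f(x + step) - fx) / h
        return g

    def fd_hessian(f, x, h=1e-4):
        # Central second differences: O(n^2) evaluations of f.
        n = len(x)
        H = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n)
            ei[i] = h
            for j in range(n):
                ej = np.zeros(n)
                ej[j] = h
                H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                           - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
        return H

    def newton_step(f, x):
        # Solve H dx = -g for the (unconstrained) Newton step.
        g = fd_gradient(f, x)
        H = fd_hessian(f, x)
        return x + np.linalg.solve(H, -g)

    # One step on a quadratic lands (nearly) on the minimizer [1, -2].
    f = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2
    print(newton_step(f, np.array([0.0, 0.0])))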

References
DAKOTA Version 5.0 Reference Manual

Control Parameters

CenteringParameter: A parameter (between 0 and 1) that controls how closely the algorithm follows the CentralPath. The larger the value, the more closely the algorithm follows the central path, which results in smaller steps. A value of 0 indicates that the algorithm will take a pure Newton step. Default values are 0.2, 0.2, and 0.1 for the ElBakry, ArgaezTapia, and VanderbeiShanno MeritFunction selections, respectively.

CentralPath: Represents a measure of proximity to the central path and specifies an update strategy for the perturbation parameter mu. Valid options are ElBakry, ArgaezTapia, and VanderbeiShanno (the same choices as MeritFunction, described below). The default value for CentralPath is the value of MeritFunction.
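As a rough illustration of how these two options interact (the actual OPT++ update rules depend on the MeritFunction choice and are more involved), mu can be updated from the average complementarity scaled by the centering parameter; the names below are assumptions for illustration:

    import numpy as np

    def update_mu(lam, s, sigma):
        # Average complementarity lambda_i * s_i measures proximity to
        # the central path; sigma in [0, 1] is the centering parameter.
        # sigma = 0 targets a pure Newton step (mu driven to zero);
        # sigma near 1 keeps iterates close to the central path,
        # yielding smaller, more conservative steps.
        return sigma * np.dot(lam, s) / len(s)

    lam = np.array([0.5, 2.0])  # Lagrange multipliers (illustrative)
    s = np.array([1.0, 0.25])   # slack variables (illustrative)
    print(update_mu(lam, s, 0.2))  # 0.1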

MaxStep: Specifies the maximum step that can be taken when computing a change in the current design point (e.g., limiting the Newton step computed from current gradient and Hessian information). It is equivalent to a move limit or a maximum trust region size. The option accepts a positive value.
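For instance, a move limit of this kind can be enforced by scaling an over-long step back to MaxStep (a generic sketch, not the OPT++ source):

    import numpy as np

    def clamp_step(dx, max_step):
        # Scale the computed step so that ||dx|| <= MaxStep.
        norm = np.linalg.norm(dx)
        return dx if norm <= max_step else dx * (max_step / norm)

    print(clamp_step(np.array([3.0, 4.0]), 2.0))  # [1.2, 1.6], norm 2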

MeritFunction: A function in constrained optimization that attempts to provide joint progress toward reducing the objective function and satisfying the constraints. The available selections are as follows:

  • The ElBakry merit function is the L2-norm of the first-order optimality conditions of the nonlinear programming problem (see the sketch after this list). The cost per line search iteration is N+1 function evaluations.
  • The ArgaezTapia merit function can be classified as a modified augmented Lagrangian function. The augmented Lagrangian is modified by adding to its penalty term a potential reduction function to handle the perturbed complementarity condition. The cost per line search iteration is one function evaluation.
  • The VanderbeiShanno merit function can be classified as a penalty function for the logarithmic barrier formulation of the nonlinear programming problem. The cost per line search iteration is one function evaluation.
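As a rough sketch of the first selection, the snippet below evaluates the L2-norm of perturbed first-order (KKT) conditions for an inequality-constrained problem in slack form; the residual blocks and names are illustrative assumptions, not the exact OPT++ formulation:

    import numpy as np

    def kkt_residual_norm(grad_f, g, jac_g, x, lam, s, mu):
        # Stationarity:              grad f(x) - J(x)^T lambda
        # Primal feasibility:        g(x) - s     (slack form, g(x) >= 0)
        # Perturbed complementarity: lambda_i * s_i - mu
        r_stat = grad_f(x) - jac_g(x).T @ lam
        r_feas = g(x) - s
        r_comp = lam * s - mu
        return np.linalg.norm(np.concatenate([r_stat, r_feas, r_comp]))

    # Illustrative problem: minimize x^2 subject to x - 1 >= 0.
    grad_f = lambda x: 2.0 * x
    g = lambda x: x - 1.0
    jac_g = lambda x: np.array([[1.0]])
    x, lam, s = np.array([1.0]), np.array([2.0]), np.array([0.0])
    print(kkt_residual_norm(grad_f, g, jac_g, x, lam, s, mu=0.0))  # 0.0 at the KKT point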

SearchMethod: The GradientBasedLineSearch option satisfies sufficient decrease and curvature conditions, whereas ValueBasedLineSearch only satisfies the sufficient decrease condition. At each line search iteration, the GradientBasedLineSearch method computes the function and gradient at the trial point. Consequently, when function evaluations are expensive, the ValueBasedLineSearch method is preferred over the GradientBasedLineSearch method. The algorithm additionally supports the TrustRegionPatternSearch selection for unconstrained problems; this option performs a robust trust region search using pattern search techniques. Use of a line search is the default for bound-constrained and generally-constrained problems, and use of a trust region search method is the default for unconstrained problems.
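The distinction can be seen in a minimal backtracking sketch that enforces only the sufficient decrease (Armijo) condition, i.e. a value-based search; a gradient-based search would additionally evaluate the gradient at each trial point to test a curvature condition (names are illustrative):

    import numpy as np

    def value_based_line_search(f, x, dx, grad, c1=1e-4, shrink=0.5):
        # Backtrack until f(x + a*dx) <= f(x) + c1 * a * grad.dx;
        # only function values are needed at the trial points.
        fx = f(x)
        slope = np.dot(grad, dx)   # must be negative for a descent step
        a = 1.0
        while f(x + a * dx) > fx + c1 * a * slope:
            a *= shrink
        return a

    f = lambda x: np.dot(x, x)
    x = np.array([2.0, -1.0])
    grad = 2.0 * x
    print(value_based_line_search(f, x, -grad, grad))  # 0.5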

SteplengthToBoundary: A parameter (between 0 and 1) that controls how close to the boundary of the feasible region the algorithm is allowed to move. A value of 1 means the algorithm may take steps that reach the boundary of the feasible region. If the user wishes to maintain strict feasibility of the design parameters, this value should be less than 1. Default values are 0.8, 0.99995, and 0.95 for the ElBakry, ArgaezTapia, and VanderbeiShanno merit functions, respectively.
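This corresponds to the usual fraction-to-the-boundary rule of interior-point methods; a minimal sketch, assuming the positivity constraints apply to slack variables s (illustrative names):

    import numpy as np

    def max_feasible_alpha(s, ds, tau):
        # Largest alpha in (0, 1] with s + alpha*ds >= (1 - tau)*s, so
        # each component keeps at least a (1 - tau) fraction of its
        # distance to the boundary; tau = 1 allows reaching the boundary.
        limits = [tau * si / -dsi for si, dsi in zip(s, ds) if dsi < 0]
        return min([1.0] + limits)

    s = np.array([1.0, 0.5])
    ds = np.array([-2.0, 0.3])
    print(max_feasible_alpha(s, ds, 0.95))  # 0.475: stops short of the boundary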

ConvergenceTolerance: Defines the threshold value on the relative change in the objective function that indicates convergence. The option must have a value greater than 0.

GradientTolerance: Defines the threshold value on the L2 norm of the objective function gradient that indicates convergence to an unconstrained minimum (no active constraints). The option must have a value greater than 0.
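A minimal sketch of how these two tolerances could act as stopping tests (the exact tests are internal to OPT++; names are illustrative):

    import numpy as np

    def converged(f_prev, f_curr, grad, conv_tol, grad_tol):
        # ConvergenceTolerance: small relative change in the objective.
        rel_change = abs(f_curr - f_prev) / max(abs(f_prev), 1e-30)
        # GradientTolerance: small L2 norm of the gradient (unconstrained).
        return rel_change < conv_tol or np.linalg.norm(grad) < grad_tol

    print(converged(1.0, 1.00005, np.array([1e-5, 0.0]),
                    conv_tol=1e-4, grad_tol=1e-4))  # True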

MaxFunctionEvaluations: A function evaluation is a call to the Analyzer to evaluate the objective function at a specified point. MaxFunctionEvaluations is an integer limit on the number of such evaluations the algorithm may perform; the algorithm terminates on this criterion if no other criterion is satisfied first. A single iteration can contain multiple function evaluations, such as the evaluations of neighboring points used to form finite difference derivatives. MaxFunctionEvaluations must be a positive integer value.

MaxIterations: An integer limit on the number of iterations the algorithm can run; note that a single iteration can include multiple function evaluations. The option must have a positive integer value.
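The two limits interact through the per-iteration cost of the finite difference scheme: for n design variables, a forward-difference gradient takes roughly n+1 evaluations (consistent with the N+1 cost quoted for the ElBakry merit function above), and a finite-difference Hessian adds O(n^2) more. The sketch below gives a rough budget estimate under an assumed, illustrative cost model:

    def rough_eval_budget(n_vars, max_iterations, line_search_trials=3):
        # Assumed per-iteration cost: ~n+1 evaluations for an FD gradient,
        # ~n*(n+1)/2 for a symmetric FD Hessian, plus line search trials.
        per_iter = (n_vars + 1) + n_vars * (n_vars + 1) // 2 + line_search_trials
        return max_iterations * per_iter

    # e.g. 10 variables and 50 iterations suggest a MaxFunctionEvaluations
    # on the order of a few thousand:
    print(rough_eval_budget(10, 50))  # 3450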

IntermediateFilesPath: Specifies the location where the intermediate files of the optimization are generated. By default, files are written to the user's temporary directory.

Output: Controls the verbosity of the messages the user receives from DAKOTA. The options range from Silent to Debug, with an increasing amount of messages returned from the infrastructure. View Output > Details shows the messages from the algorithm. When Debug is selected, the user can see the objective function values, design variable values, finite difference gradients, etc. for all iterations. This detailed information can help the user better analyze the design space and algorithm convergence.

TabularGraphicsData: Setting this option to true generates a file named dakota_tabular in the IntermediateFilesPath directory. This file stores the values of the design variables, constraints, and objective function for each evaluation in a tabular format.
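For post-processing, the file can be read with standard tools; for example, assuming the usual whitespace-delimited layout with a single header row (the exact file name, extension, and column names depend on the study):

    import pandas as pd

    # One row per evaluation: an evaluation id, the design variables,
    # and the responses, with column names taken from the header row.
    df = pd.read_csv("dakota_tabular", sep=r"\s+")
    print(df.columns.tolist())  # e.g. ['%eval_id', 'x1', 'x2', 'obj_fn']
    print(df.iloc[-1])          # the last completed evaluation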