DAKOTA Asynchronous Parallel Pattern Search Algorithm (APPS)
Description
The Asynchronous Parallel Pattern Search (APPS) is a non-gradient-based optimization algorithm and a variation of the Hooke-Jeeves pattern search. In its non-blocking (asynchronous) mode, the pattern search moves to the next iteration as soon as a new trial point improves the current design. In its blocking mode, all designs in the search pattern are evaluated before moving to the next iteration. APPS uses a penalty function to handle nonlinear constraints.
More Information
This algorithm supports an asynchronous pattern search technique in which the search along each offset direction continues without waiting for searches along other directions to finish. The algorithm is a derivative-free optimization method for single-objective optimization problems. Variable bounds are handled directly in the construction of the search directions, while general nonlinear constraints are handled by a penalty function.
The algorithm moves along its search directions by a step length, which is contracted after an unsuccessful iterate. Constraint penalty and constraint tolerance factors can be specified to adjust constraint handling. The algorithm terminates when the convergence tolerance is reached or when the maximum number of function evaluations or iterations is exceeded.
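As a rough illustration of these mechanics, the sketch below implements a simplified serial, synchronous coordinate pattern search with a penalized objective. It is not DAKOTA's implementation: the real APPS evaluates trial points asynchronously in parallel, and the function names and default values here are hypothetical, assuming constraint violations are already aggregated into a single nonnegative measure.

```python
import numpy as np

def pattern_search(f, violation, x0, lower, upper,
                   initial_delta=0.5, threshold_delta=0.01,
                   contraction_factor=0.5, constraint_penalty=1.0,
                   max_evals=500):
    """Simplified serial sketch of a pattern search with a penalty function.

    f         -- objective to minimize
    violation -- nonnegative measure of nonlinear-constraint violation
    """
    x = np.asarray(x0, dtype=float)
    var_range = np.asarray(upper, dtype=float) - np.asarray(lower, dtype=float)
    step = initial_delta * var_range              # initial step length per variable

    def merit(p):
        return f(p) + constraint_penalty * violation(p)

    best = merit(x)
    evals = 1
    while evals < max_evals and np.any(step >= threshold_delta * var_range):
        improved = False
        # Coordinate offsets +/- step; variable bounds are respected directly.
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + sign * step[i], lower[i], upper[i])
                m = merit(trial)
                evals += 1
                if m < best:                      # improving point becomes the iterate
                    x, best, improved = trial, m, True
        if not improved:
            step *= contraction_factor            # contract after an unsuccessful iterate
    return x, best
```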
References
- DAKOTA Version 5.0 Reference Manual
- "Nonlinearly-constrained optimization using heuristic penalty methods and asynchronous parallel generating set search," J. D. Griffin and T. G. Kolda, Applied Mathematics Research eXpress 25(5):36-62, October 2010.
- APPSPACK 5.0.1: Asynchronous Parallel Pattern Search
- APPSPACK 5.0.1: cddlib
Control Parameters
ConstraintPenalty: This is the multiplier applied to constraint violations; the resulting penalty term is added to the objective function in a minimization problem. Higher penalty values drive the algorithm toward solutions with smaller constraint violations. The value must be a positive real number.
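As a small numerical illustration (the values below are made up and are not DAKOTA defaults), the penalty simply scales the measured violation before it is added to the objective:

```python
# Hypothetical numbers, only to show how ConstraintPenalty scales the violation term.
objective = 12.0          # f(x) at a trial point
violation = 0.3           # aggregated nonlinear-constraint violation (see MeritFunction)
for constraint_penalty in (1.0, 10.0, 100.0):
    penalized = objective + constraint_penalty * violation
    print(constraint_penalty, penalized)   # 12.3, 15.0, 42.0
```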
ConstraintTolerance: This specification determines the maximum amount of infeasibility that any constraint in the optimization problem may possess and still be considered satisfied. If a constraint function exceeds this value, it is considered violated by the algorithm. This gives some control over how tightly the constraints are satisfied at convergence; however, if the value is set too small, the algorithm may terminate with one or more constraints violated. It is specified as a positive real value.
ContractionFactor: The APPS algorithm searches along its directions using a step length. This option is the factor by which the algorithm rescales the step length after an unsuccessful iterate, i.e., one with no improvement. The smaller the value, the faster the step length shrinks toward ThresholdDelta. The acceptable range for ContractionFactor is strictly between 0 and 1.
InitialDelta: The APPS algorithm uses a set of offsets from the current iterate to locate improved points in the design space. The initial step length is obtained by scaling: (InitialDelta) * (variable range). The acceptable range for InitialDelta is (0, 1] only.
MeritFunction: APPS uses a penalty function to handle nonlinear constraints, and several merit (penalty) functions can be specified. Some are variants that use smoothing to help convergence in the case of a continuous objective function. The available options are listed below; the non-smoothed variants are illustrated in the sketch that follows the list.
MeritMax: Maximum of constraint violations (L-infinity norm)
MeritMaxSmooth: L-infinity norm with smoothing
Merit1: Sum of constraint violations (L-1 norm)
Merit1Smooth: L-1 norm with smoothing
Merit2: Square root of the sum of squares of constraint violations (L-2 norm)
Merit2Smooth: L-2 norm with smoothing
Merit2Squared: Sum of squares of constraint violations (square of the L-2 norm)
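The non-smoothed variants reduce to familiar vector norms of the constraint-violation vector. The sketch below is a plain reading of those definitions, assuming constraints written as g(x) <= 0 and violations within ConstraintTolerance treated as satisfied; the smoothed variants, which depend on an internal smoothing parameter, are omitted, and the function names are local to this example.

```python
import numpy as np

def violations(g, tolerance=0.0):
    """Per-constraint violations for constraints written as g_i(x) <= 0."""
    v = np.maximum(np.asarray(g, dtype=float), 0.0)
    v[v <= tolerance] = 0.0          # within ConstraintTolerance counts as satisfied
    return v

def merit_max(g, tol=0.0):           # L-infinity norm of the violations
    return np.max(violations(g, tol))

def merit1(g, tol=0.0):              # L-1 norm: sum of violations
    return np.sum(violations(g, tol))

def merit2(g, tol=0.0):              # L-2 norm: sqrt of sum of squared violations
    return np.sqrt(np.sum(violations(g, tol) ** 2))

def merit2_squared(g, tol=0.0):      # square of the L-2 norm
    return np.sum(violations(g, tol) ** 2)

g = [0.2, -0.5, 0.1]                 # two violated constraints, one satisfied
print(merit_max(g), merit1(g), merit2(g), merit2_squared(g))
# approximately: 0.2  0.3  0.2236  0.05
```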
SmoothingFactor: This is a positive number that controls the smoothness of the penalty function. If the value is close to zero, the penalty function is close to the original function (i.e., less smooth). The default value is 1.
Synchronization: The synchronization specification selects either a blocking or a non-blocking scheduler for APPS. The blocking option causes APPS to behave as a synchronous algorithm: all points in the pattern are evaluated and the best of them, if any improve on the current iterate, becomes the next iterate. In the non-blocking case, the first improving point becomes the next iterate.
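The difference between the two schedulers can be illustrated schematically with a thread pool: blocking waits for every trial point in the pattern, while non-blocking accepts the first improving result it sees. DAKOTA manages its own evaluation scheduling, so the evaluate function below is only a hypothetical stand-in.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(point):
    """Hypothetical stand-in for one objective/constraint evaluation."""
    return sum(v * v for v in point)

def next_iterate(pattern, current_merit, blocking=True):
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(evaluate, p): p for p in pattern}
        best_point, best_merit = None, current_merit
        for fut in as_completed(futures):
            merit = fut.result()
            if merit < best_merit:
                best_point, best_merit = futures[fut], merit
                if not blocking:
                    # Non-blocking: first improving point becomes the next iterate.
                    return best_point, best_merit
        # Blocking: all points were evaluated; return the best improvement, if any.
        return best_point, best_merit

pattern = [(1.0, 0.5), (0.5, 1.0), (0.2, 0.2), (1.5, 0.0)]
print(next_iterate(pattern, current_merit=2.0, blocking=True))
print(next_iterate(pattern, current_merit=2.0, blocking=False))  # may vary with thread timing
```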
ThresholdDelta: This is the threshold on the step length at which the algorithm can terminate. Reducing this value may lead to better answers but can increase the number of function evaluations and the optimization effort. The threshold step length is computed as (ThresholdDelta) * (variable range). The acceptable range is (0, 1), but it is advisable to keep the value below 0.5 times InitialDelta. The algorithm reduces the step length by multiplying it by ContractionFactor after each non-improving iterate, and the step length continues to shrink until it is less than (ThresholdDelta) * (variable range).
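Because the step length shrinks geometrically, the number of consecutive unsuccessful iterates needed to reach the threshold follows directly from InitialDelta, ContractionFactor, and ThresholdDelta. A small worked sketch with made-up values:

```python
import math

initial_delta, threshold_delta, contraction_factor = 0.5, 0.01, 0.5

# Step lengths start at InitialDelta * range and are multiplied by
# ContractionFactor after each unsuccessful iterate until they drop below
# ThresholdDelta * range.  The variable range cancels, so the count is
# independent of the bounds.
k = math.ceil(math.log(threshold_delta / initial_delta) / math.log(contraction_factor))
print(k)  # 6 contractions: 0.5 -> 0.25 -> ... -> 0.0078125 < 0.01
```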
ConvergenceTolerance: The convergence tolerance specification provides a real value for controlling the termination of iteration. It defines the threshold on the relative change in the objective function that indicates convergence. The value must be positive.
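A relative-change test of this kind can be sketched as follows; the exact formula DAKOTA applies internally is not reproduced here, so treat this as an illustration only.

```python
def has_converged(f_prev, f_curr, convergence_tolerance=1e-4):
    """Illustrative relative-change test between successive objective values."""
    denom = max(abs(f_prev), 1e-12)            # guard against division by zero
    return abs(f_curr - f_prev) / denom < convergence_tolerance

print(has_converged(10.0, 9.9995))   # True  (relative change 5e-5)
print(has_converged(10.0, 9.9))      # False (relative change 1e-2)
```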
MaxFunctionEvaluations: A function evaluation is a call to the Analyzer to evaluate the objective function at a specified point. MaxFunctionEvaluations is an integer limit on the number of such evaluations; the algorithm terminates on this criterion if no other criterion is satisfied first. A single iteration can contain multiple function evaluations. The value must be an integer greater than 0.
MaxIterations: A single iteration comprises multiple offset evaluations. This is the integer limit on the number of iterations the algorithm may run. The value must be a positive integer.
SolutionTarget: This termination criterion can be used when the solution of the optimization problem is known. The algorithm terminates when the objective function value falls below this target (when minimizing) or rises above it (when maximizing).
Output: This option controls the verbosity of the messages the user receives from DAKOTA. The options range from Silent to Debug, with an increasing amount of information returned from the infrastructure. View Output > Details shows the messages from the DAKOTA infrastructure, including the final direction, the methods by which directions were selected, the reason for convergence, and the coordinates of the design variables at each iteration.
IntermediateFilesPath: The user can specify the location where the intermediate files of the optimization should be generated. By default, files are written to the user's temporary directory.
TabularGraphicsData: Setting this option to true generates a file named dakota_tabular in the IntermediateFilesPath directory. This file stores the values of the design variables, constraints, and objective function for each evaluation in tabular format.
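Assuming the tabular file is whitespace-delimited with a single header row (the exact column names depend on your model and DAKOTA version), it can be post-processed with a few lines of Python, for example:

```python
import pandas as pd

# Minimal post-processing sketch.  Adjust the path to your IntermediateFilesPath
# and replace "obj_fn" with the objective column name used in your file.
table = pd.read_csv("dakota_tabular", sep=r"\s+")
print(table.head())                              # one row per function evaluation
if "obj_fn" in table.columns:                    # hypothetical column name
    print(table.loc[table["obj_fn"].idxmin()])   # best evaluation found (minimization)
```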