OPTLIB Gradient Optimizer
Description
The Gradient Optimizer uses a Sequential Quadratic Programming (SQP) algorithm to solve constrained optimization problems. The objective function and constraints are assumed to be continuous and to have continuous first and second derivatives.
More Information
Non-differentiable functions can cause unpredictable performance for SQP. If the objectives or constraints have any of the following characteristics, this algorithm may perform poorly:
- Non-smooth interpolation of data
- Iterative procedures inside the function evaluation process
- Discontinuous behavior caused by branching or IF tests
- Non-differentiable functions such as ABS, MAX, or MIN
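For example, a forward-difference gradient computed across the kink of ABS can come back with the wrong sign, sending the search in the wrong direction. The snippet below is a minimal illustration in plain Python (not OPTLIB code):

```python
# Forward differences mislead SQP near non-differentiable points:
# just left of the kink of |x| the true derivative is -1, but the
# forward difference straddles the kink and reports roughly +1.
def forward_difference(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

x = -1e-9                          # just left of the kink at 0
print(forward_difference(abs, x))  # ~ +1.0; true derivative is -1
```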
Trade Study Resume
OPTLIB supports resuming a trade study after a halt or crash. The last best design is restored as the starting point, and the optimization continues from there. See the general Optimization Tool documentation for more information on resuming trade studies.
Control Parameters
Maximum number of function evaluations: A single function evaluation is a validation of the Components needed to compute values for the objectives and constraints given a set of design variable values. Since each evaluation can be expensive, depending on the run time of the Components involved, this option caps the number of evaluations performed before the optimizer gives up. The maximum value this option can take is 99,999.
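As an illustration of what this cap means in practice, the hypothetical wrapper below (the class name and error message are illustrative, not OPTLIB API) counts evaluations and stops once the budget is exhausted:

```python
# Hypothetical sketch of an evaluation cap, for illustration only.
class EvaluationBudget:
    def __init__(self, func, max_evals=99_999):
        self.func = func
        self.max_evals = max_evals
        self.count = 0

    def __call__(self, x):
        if self.count >= self.max_evals:
            raise RuntimeError("Maximum number of function evaluations reached")
        self.count += 1
        return self.func(x)  # run the (possibly expensive) Components
```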
Maximum number of iterations: This setting caps the number of iterations the algorithm is allowed to perform before giving up. The Gradient Optimizer works by (a simplified sketch of this loop follows the list):
- evaluating the objective function and constraints at a given starting point
- using finite differences to compute the gradients of the objective and constraints at the starting point
- choosing a search direction that will minimize/maximize the objective while satisfying the constraints
- moving along the search direction until an improved design is found
- using the improved design as a starting point to begin another iteration
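The sketch below walks through this loop for a simple unconstrained problem, using forward differences with a relative perturbation and a basic backtracking line search. It is an illustration only: a real SQP implementation additionally solves a quadratic subproblem with linearized constraints to choose the search direction, which is omitted here.

```python
import numpy as np

def objective(x):
    # A simple smooth test function, assumed for illustration.
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def forward_diff_gradient(f, x, rel_step=1e-4):
    # Forward differences with a relative perturbation per variable.
    g = np.zeros_like(x)
    f0 = f(x)                          # evaluate at the current point
    for i in range(len(x)):
        h = rel_step * max(abs(x[i]), 1.0)
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

x = np.array([0.0, 0.0])               # user-supplied starting point
for iteration in range(50):            # "Maximum number of iterations"
    g = forward_diff_gradient(objective, x)
    if np.linalg.norm(g) < 1e-3:       # gradient-based stopping test
        break
    direction = -g                     # SQP would solve a QP here
    step = 1.0
    # Move along the search direction until an improved design is found.
    while objective(x + step * direction) >= objective(x) and step > 1e-12:
        step *= 0.5
    x = x + step * direction           # improved design starts next iteration

print(x)                               # converges to the minimizer (3, -1)
```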
Number of optimization runs: By default, one optimization run is performed beginning from the provided starting point. It is possible, however, to tell the algorithm to perform multiple optimization runs, each from a different starting point. In this case, the first run begins from the user-supplied starting point, and subsequent runs are initiated from random starting points. This approach (sometimes called "multi-start" optimization) is well suited to design spaces with many local extrema. By starting the optimizer from different locations in the design space, the probability of getting stuck in a local extremum is reduced (at the expense of additional function evaluations).
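A minimal multi-start sketch is shown below, using SciPy's SLSQP method as a stand-in for OPTLIB's SQP (an assumption for illustration; the bumpy objective, bounds, and run count are likewise made up):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # A bumpy function with several local minima on [-5, 5].
    return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

bounds = [(-5.0, 5.0)]
rng = np.random.default_rng(seed=0)
n_runs = 5                             # "Number of optimization runs"

best = None
for run in range(n_runs):
    # First run uses the supplied starting point; later runs are random.
    x0 = np.array([1.0]) if run == 0 else rng.uniform(-5.0, 5.0, size=1)
    result = minimize(objective, x0, method="SLSQP", bounds=bounds)
    if result.success and (best is None or result.fun < best.fun):
        best = result

print(best.x, best.fun)                # best local optimum found
```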
Finite difference step size for calculating gradients: The relative perturbation size used when computing forward-difference derivatives of the objectives and constraints.
Tolerance on objective function: Relative convergence tolerance on the objective function value; the optimization is considered converged when the relative change in the objective between successive iterations falls below this value.
Tolerance on constraint feasibility: Constraints with a relative violation smaller than this value are considered feasible.
Tolerance on projected gradient: Convergence tolerance on the projected gradient.
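The sketch below illustrates how tolerances like these are commonly combined into a convergence test (a generic illustration, not OPTLIB's actual convergence logic):

```python
import numpy as np

obj_tol = 1e-6     # tolerance on objective function
feas_tol = 1e-4    # tolerance on constraint feasibility
grad_tol = 1e-6    # tolerance on projected gradient

def converged(f_prev, f_curr, constraint_violations, projected_grad):
    # Relative change in the objective between successive iterations.
    obj_change = abs(f_curr - f_prev) / max(abs(f_prev), 1.0)
    # Constraints whose relative violation is below the feasibility
    # tolerance are treated as satisfied.
    feasible = all(v <= feas_tol for v in constraint_violations)
    # Stopping test on the projected gradient.
    small_grad = np.linalg.norm(projected_grad) <= grad_tol
    return feasible and (obj_change <= obj_tol or small_grad)

# Tiny objective change and near-feasible constraints -> converged.
print(converged(10.0, 10.000001, [0.0, 5e-5], np.array([1e-3, 0.0])))
```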