The goal of the optimization task is to find the maximum or minimum of the specified goal function.
Formally, we have a set of variables:
x1, x2, ..., xn
and restrictions:
a1 <= x1 <= b1, a2 <= x2 <= b2, ..., an <= xn <= bn.
Each variable and its restriction corresponds to a variation. (If the variation is of the Move to any direction type, it corresponds to two variables.)
The set of all possible variable values that meet the restrictions is called the search space, or the optimization domain.
The goal of the optimization is to find the maximum or minimum of a function f(x1, x2, ..., xn), where f is specified by the goal value. If the goal type is Close to, then the goal function is the distance between the target value and the calculated value.
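For example, a Close to goal can be written as a simple distance function to be minimized. The sketch below is illustrative only (it is not LabelMover's actual code, and the function name is our own):

```python
def close_to_goal(calculated_value, target_value):
    """Goal function for a 'Close to' goal type: the distance between
    the target value and the value calculated by the simulation.
    Minimizing this distance drives the result toward the target."""
    return abs(calculated_value - target_value)
```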
There are two main types of optimization: local optimization and global optimization.
Local optimization means that we look for the maximum or minimum of the goal function in the neighbourhood of some point in the search space.
Global optimization means that we look for the maximum or minimum of the
goal function within the full range of the search space.
The current version of LabelMover performs local optimization only.
The current version of LabelMover uses the Nelder-Mead method (also known as the amoeba method) for multi-dimensional optimization. The method is slightly modified to improve its performance on non-smooth goal functions.
For one-dimensional optimization, the Brent method is used.
Both methods require relatively few calls to the goal function, so it is possible to find an optimal solution quickly enough even for complex QuickField problems.
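As an illustration of the two methods (this is not LabelMover's implementation), the sketch below applies SciPy's standard Nelder-Mead and Brent routines to simple stand-in goal functions; the functions and their minima are our own assumptions:

```python
from scipy.optimize import minimize, minimize_scalar

# Hypothetical stand-in for a QuickField simulation result: a smooth
# two-variable goal function with its minimum at (1, 2).
def goal(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Multi-dimensional case: the Nelder-Mead (amoeba) method.
res = minimize(goal, x0=[0.0, 0.0], method="Nelder-Mead")

# One-dimensional case: the Brent method, here on a goal function
# with its minimum at x = 3.
res1d = minimize_scalar(lambda x: (x - 3.0) ** 2, method="brent")
```

Both routines evaluate only the goal function itself, never its derivatives, which is why a relatively small number of (possibly expensive) calls is enough.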
1. If the search space is expected to contain several local minima, try running the optimization with several modifications of the base problem.
The optimization uses the base problem as the starting point for the search. Therefore, if there are several local minima in the search domain, it makes sense to repeat the optimization starting from several different modifications of the base problem.
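The effect can be sketched with a toy goal function that has two local minima (SciPy's Nelder-Mead is used here as a stand-in for LabelMover's search, and the two starting points play the role of two modifications of the base problem):

```python
from scipy.optimize import minimize

# Toy goal function with two local minima, near x = -0.707 and x = +0.707.
def goal(x):
    return x[0] ** 4 - x[0] ** 2

# Two "modifications of the base problem" = two different starting points.
res_left = minimize(goal, x0=[-2.0], method="Nelder-Mead")
res_right = minimize(goal, x0=[+2.0], method="Nelder-Mead")
# Each run converges to the local minimum nearest its starting point,
# so only by trying both starting points are both minima discovered.
```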
2. It is better to run the optimization on problems that provide high enough accuracy of the field simulation (i.e., with dense finite-element meshes).
For a low-precision QuickField problem, the random variations of the solution caused by simulation errors can be too high, so the optimization process may stop at a 'false' local minimum created by these random variations.
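The mechanism can be illustrated with a deterministic stand-in for simulation error: adding a small high-frequency ripple to a smooth goal function creates many spurious local minima at which a local search could stop. The functions and the noise model below are our own assumptions, not QuickField's error behavior:

```python
import math

def smooth(x):
    # Idealized goal function with a single minimum, at x = 0.
    return x * x

def noisy(x):
    # The same goal function plus a small high-frequency ripple,
    # a deterministic stand-in for coarse-mesh simulation error.
    return x * x + 0.05 * math.sin(50.0 * x)

def count_local_minima(f, lo=-1.0, hi=1.0, n=2001):
    """Count strict local minima of f sampled on a uniform grid."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    return sum(1 for i in range(1, n - 1)
               if ys[i] < ys[i - 1] and ys[i] < ys[i + 1])
```

The smooth function has exactly one minimum, while the noisy version has many; a local search started anywhere on the noisy function can get trapped in one of the spurious ripple minima instead of the true one.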
For more information, please see
Optimization Main Features