itk::AdaptiveStochasticGradientDescentOptimizer Class Reference
#include <itkAdaptiveStochasticGradientDescentOptimizer.h>
This class implements a gradient descent optimizer with adaptive gain.
If \(C(x)\) is a cost function that has to be minimised, the following iterative algorithm is used to find the optimal parameters \(x\):

\[ x(k+1) = x(k) - a(t_k) \, dC/dx \]

The gain \(a(t_k)\) at each iteration \(k\) is defined by:

\[ a(t_k) = a / (A + t_k + 1)^\alpha \]

And the time \(t_k\) is updated according to:

\[ t_{k+1} = [\, t_k + \mathrm{sigmoid}( -g_k^T g_{k-1} ) \,]^+ \]

where \(g_k\) equals \(dC/dx\) at iteration \(k\), and \([\,\cdot\,]^+ = \max(0, \cdot)\). For \(t_0\) any constant can be chosen.
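Intuitively, the inner product \(g_k^T g_{k-1}\) measures whether successive gradients point in the same direction. A sketch of the two regimes, using the default sigmoid bounds documented below (SigmoidMin = -0.8, SigmoidMax = 1.0):

    \[ g_k^T g_{k-1} > 0 \;\Rightarrow\; \mathrm{sigmoid}(-g_k^T g_{k-1}) \to -0.8 \;\Rightarrow\; t_{k+1} \le t_k \;\Rightarrow\; a(t_k) \text{ stays large.} \]
    \[ g_k^T g_{k-1} < 0 \;\Rightarrow\; \mathrm{sigmoid}(-g_k^T g_{k-1}) \to 1.0 \;\Rightarrow\; t_{k+1} > t_k \;\Rightarrow\; a(t_k) \text{ decays.} \]

Steady descent thus keeps large steps, while oscillation (typical near a minimum, or with very noisy gradients) shrinks them.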
This method is described in the following references:
[1] P. Cruz, "Almost sure convergence and asymptotical normality of a generalization of Kesten's stochastic approximation algorithm for multidimensional case." Technical Report, 2005. http://hdl.handle.net/2052/74
[2] S. Klein, J.P.W. Pluim, M. Staring, and M.A. Viergever, "Adaptive stochastic gradient descent optimisation for image registration," International Journal of Computer Vision, vol. 81, no. 3, pp. 227-239, 2009. http://dx.doi.org/10.1007/s11263-008-0168-y
It is very suitable to be used in combination with a stochastic estimate of the gradient \(g\). For example, in image registration problems it is often advantageous to compute the metric derivative (\(g\)) on a new set of randomly selected image samples in each iteration. You may set the parameter NewSamplesEveryIteration to "true" to achieve this effect. For more information on this strategy, see reference [2] above.
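For example, in an elastix parameter file (a sketch; AdaptiveStochasticGradientDescent is the elastix wrapper of this optimizer, mentioned under the member documentation below):

    (Optimizer "AdaptiveStochasticGradientDescent")
    (NewSamplesEveryIteration "true")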
Definition at line 72 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
Inheritance diagram for itk::AdaptiveStochasticGradientDescentOptimizer.

Static Public Member Functions | |
| static Pointer | New () |
Static Public Member Functions inherited from itk::StandardGradientDescentOptimizer | |
| static Pointer | New () |
Static Public Member Functions inherited from itk::GradientDescentOptimizer2 | |
| static Pointer | New () |
Static Public Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer | |
| static Pointer | New () |
Protected Member Functions | |
| AdaptiveStochasticGradientDescentOptimizer () | |
| void | UpdateCurrentTime () override |
| ~AdaptiveStochasticGradientDescentOptimizer () override=default | |
Protected Member Functions inherited from itk::StandardGradientDescentOptimizer | |
| virtual double | Compute_a (double k) const |
| StandardGradientDescentOptimizer () | |
| ~StandardGradientDescentOptimizer () override=default | |
Protected Member Functions inherited from itk::GradientDescentOptimizer2 | |
| GradientDescentOptimizer2 () | |
| void | PrintSelf (std::ostream &os, Indent indent) const override |
| ~GradientDescentOptimizer2 () override=default | |
Protected Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer | |
| virtual void | GetScaledDerivative (const ParametersType &parameters, DerivativeType &derivative) const |
| virtual MeasureType | GetScaledValue (const ParametersType &parameters) const |
| virtual void | GetScaledValueAndDerivative (const ParametersType &parameters, MeasureType &value, DerivativeType &derivative) const |
| void | PrintSelf (std::ostream &os, Indent indent) const override |
| ScaledSingleValuedNonLinearOptimizer () | |
| void | SetCurrentPosition (const ParametersType &param) override |
| virtual void | SetScaledCurrentPosition (const ParametersType &parameters) |
| ~ScaledSingleValuedNonLinearOptimizer () override=default | |
Protected Attributes | |
| DerivativeType | m_PreviousGradient {} |
Protected Attributes inherited from itk::StandardGradientDescentOptimizer | |
| double | m_CurrentTime { 0.0 } |
| bool | m_UseConstantStep { false } |
Protected Attributes inherited from itk::GradientDescentOptimizer2 | |
| DerivativeType | m_Gradient {} |
| DerivativeType | m_SearchDirection {} |
| StopConditionType | m_StopCondition { MaximumNumberOfIterations } |
Protected Attributes inherited from itk::ScaledSingleValuedNonLinearOptimizer | |
| ScaledCostFunctionPointer | m_ScaledCostFunction {} |
| ParametersType | m_ScaledCurrentPosition {} |
Private Attributes | |
| double | m_SigmoidMax { 1.0 } |
| double | m_SigmoidMin { -0.8 } |
| double | m_SigmoidScale { 1e-8 } |
| bool | m_UseAdaptiveStepSizes { true } |
| using itk::AdaptiveStochasticGradientDescentOptimizer::ConstPointer = SmartPointer<const Self> |
Definition at line 82 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
| using itk::AdaptiveStochasticGradientDescentOptimizer::Pointer = SmartPointer<Self> |
Definition at line 81 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
| using itk::AdaptiveStochasticGradientDescentOptimizer::Self = AdaptiveStochasticGradientDescentOptimizer |
Standard ITK.
Definition at line 78 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
| using itk::AdaptiveStochasticGradientDescentOptimizer::Superclass = StandardGradientDescentOptimizer |
Definition at line 79 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
itk::AdaptiveStochasticGradientDescentOptimizer::AdaptiveStochasticGradientDescentOptimizer ()
protected

itk::AdaptiveStochasticGradientDescentOptimizer::~AdaptiveStochasticGradientDescentOptimizer ()
override protected default
virtual const char * itk::AdaptiveStochasticGradientDescentOptimizer::GetNameOfClass () const
virtual
Run-time type information (and related methods).
Reimplemented from itk::StandardGradientDescentOptimizer.
Reimplemented in elastix::AdaptiveStochasticGradientDescent< TElastix >.
virtual double itk::AdaptiveStochasticGradientDescentOptimizer::GetSigmoidMax () const
virtual

virtual double itk::AdaptiveStochasticGradientDescentOptimizer::GetSigmoidMin () const
virtual

virtual double itk::AdaptiveStochasticGradientDescentOptimizer::GetSigmoidScale () const
virtual

virtual bool itk::AdaptiveStochasticGradientDescentOptimizer::GetUseAdaptiveStepSizes () const
virtual
itk::AdaptiveStochasticGradientDescentOptimizer::ITK_DISALLOW_COPY_AND_MOVE ( AdaptiveStochasticGradientDescentOptimizer )
static Pointer itk::AdaptiveStochasticGradientDescentOptimizer::New ()
static
Method for creation through the object factory.
virtual void itk::AdaptiveStochasticGradientDescentOptimizer::SetSigmoidMax (double _arg)
virtual
Set/Get the maximum of the sigmoid. Should be >0. Default: 1.0
virtual void itk::AdaptiveStochasticGradientDescentOptimizer::SetSigmoidMin (double _arg)
virtual

Set/Get the minimum of the sigmoid. Should be <0. Default: -0.8
virtual void itk::AdaptiveStochasticGradientDescentOptimizer::SetSigmoidScale (double _arg)
virtual

Set/Get the scaling of the sigmoid width. Large values cause a wider sigmoid. Should be >0. Default: 1e-8.
virtual void itk::AdaptiveStochasticGradientDescentOptimizer::SetUseAdaptiveStepSizes (bool _arg)
virtual
Set/Get whether the adaptive step size mechanism is desired. Default: true
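A minimal usage sketch in C++ (assuming a cost function and the gain parameters a, A, and alpha of the superclass are configured elsewhere; the setter names follow the documentation above):

    #include "itkAdaptiveStochasticGradientDescentOptimizer.h"

    int main()
    {
      // Create the optimizer through the ITK object factory.
      auto optimizer = itk::AdaptiveStochasticGradientDescentOptimizer::New();

      // Enable the adaptive step size mechanism (the default).
      optimizer->SetUseAdaptiveStepSizes(true);

      // Sigmoid bounds and width; the values shown are the documented defaults.
      optimizer->SetSigmoidMax(1.0);    // should be > 0
      optimizer->SetSigmoidMin(-0.8);   // should be < 0
      optimizer->SetSigmoidScale(1e-8); // should be > 0; larger means a wider sigmoid

      // A cost function, initial position, and the gain parameters a, A, and
      // alpha would be configured here before calling
      // optimizer->StartOptimization().
      return 0;
    }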
void itk::AdaptiveStochasticGradientDescentOptimizer::UpdateCurrentTime ()
override protected virtual

Function to update the current time. If UseAdaptiveStepSizes is false, this function simply increments the CurrentTime by the constant E_0 = (SigmoidMax + SigmoidMin)/2. Else, the CurrentTime is updated according to:

time = max[ 0, time + sigmoid( -gradient * previousgradient ) ]

In that case, the m_PreviousGradient is also updated.
Reimplemented from itk::StandardGradientDescentOptimizer.
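A standalone sketch of this update in C++ (illustrative only: the exact sigmoid parameterization is not spelled out on this page; here a logistic function rescaled to the interval [SigmoidMin, SigmoidMax] with width SigmoidScale is assumed):

    #include <algorithm>
    #include <cmath>
    #include <numeric>
    #include <vector>

    // Hypothetical free-function version of the UpdateCurrentTime() logic.
    double UpdateTime(double time, const std::vector<double>& gradient,
                      const std::vector<double>& previousGradient,
                      bool useAdaptiveStepSizes,
                      double sigmoidMin = -0.8, double sigmoidMax = 1.0,
                      double sigmoidScale = 1e-8)
    {
      if (!useAdaptiveStepSizes)
      {
        // No adaptation: increment by E_0 = (SigmoidMax + SigmoidMin) / 2.
        return time + (sigmoidMax + sigmoidMin) / 2.0;
      }

      // Inner product of the current and previous gradient.
      const double inprod = std::inner_product(
        gradient.begin(), gradient.end(), previousGradient.begin(), 0.0);

      // Assumed sigmoid shape: logistic rescaled to [SigmoidMin, SigmoidMax],
      // evaluated at -inprod, with width controlled by SigmoidScale.
      const double arg = -inprod;
      const double sigmoid =
        sigmoidMin + (sigmoidMax - sigmoidMin) / (1.0 + std::exp(-arg / sigmoidScale));

      // time = max[ 0, time + sigmoid( -gradient * previousgradient ) ]
      // (In the optimizer itself, m_PreviousGradient is then updated to the
      // current gradient.)
      return std::max(0.0, time + sigmoid);
    }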
DerivativeType itk::AdaptiveStochasticGradientDescentOptimizer::m_PreviousGradient {}
protected

The previous gradient, necessary for the Cruz acceleration.
Definition at line 134 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
double itk::AdaptiveStochasticGradientDescentOptimizer::m_SigmoidMax { 1.0 }
private
Definition at line 139 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
double itk::AdaptiveStochasticGradientDescentOptimizer::m_SigmoidMin { -0.8 }
private
Definition at line 140 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
double itk::AdaptiveStochasticGradientDescentOptimizer::m_SigmoidScale { 1e-8 }
private
Definition at line 141 of file itkAdaptiveStochasticGradientDescentOptimizer.h.
bool itk::AdaptiveStochasticGradientDescentOptimizer::m_UseAdaptiveStepSizes { true }
private

Settings.
Definition at line 138 of file itkAdaptiveStochasticGradientDescentOptimizer.h.