#include <itkAdaptiveStochasticVarianceReducedGradientOptimizer.h>
This class implements a gradient descent optimizer with adaptive gain.

If C(x) is a cost function that has to be minimised, the following iterative algorithm is used to find the optimal parameters x:

x(k+1) = x(k) - a(t_k) dC/dx

The gain a(t_k) at each iteration k is defined by:

a(t_k) = a / (A + t_k + 1)^alpha

And the time t_k is updated according to:

t_{k+1} = max[ 0, t_k + sigmoid( -g_k^T g_{k-1} ) ]

where g_k denotes the gradient dC/dx at iteration k, and the maximum with 0 keeps the time nonnegative.

This method is described in the following references:

[1] P. Cruz, "Almost sure convergence and asymptotical normality of a generalization of Kesten's stochastic approximation algorithm for multidimensional case," Technical Report, 2005. http://hdl.handle.net/2052/74

[2] S. Klein, J.P.W. Pluim, M. Staring, and M.A. Viergever, "Adaptive stochastic gradient descent optimisation for image registration," International Journal of Computer Vision, vol. 81, no. 3, pp. 227-239, 2009. http://dx.doi.org/10.1007/s11263-008-0168-y

This optimizer is very suitable for use in combination with a stochastic estimate of the gradient g_k. Set the parameter NewSamplesEveryIteration to "true" to achieve this effect. For more information on this strategy, you may have a look at:
Definition at line 70 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
Inheritance diagram for itk::AdaptiveStochasticVarianceReducedGradientOptimizer:
Public Member Functions | |
| virtual const char * | GetClassName () const |
| virtual double | GetSigmoidMax () const |
| virtual double | GetSigmoidMin () const |
| virtual double | GetSigmoidScale () const |
| virtual bool | GetUseAdaptiveStepSizes () const |
| ITK_DISALLOW_COPY_AND_MOVE (AdaptiveStochasticVarianceReducedGradientOptimizer) | |
| virtual void | SetSigmoidMax (double _arg) |
| virtual void | SetSigmoidMin (double _arg) |
| virtual void | SetSigmoidScale (double _arg) |
| virtual void | SetUseAdaptiveStepSizes (bool _arg) |
Public Member Functions inherited from itk::StandardStochasticVarianceReducedGradientOptimizer | |
| void | AdvanceOneStep () override |
| virtual double | GetCurrentTime () const |
| virtual double | GetInitialTime () const |
| virtual double | GetParam_a () const |
| virtual double | GetParam_A () const |
| virtual double | GetParam_alpha () const |
| virtual double | GetParam_beta () const |
| ITK_DISALLOW_COPY_AND_MOVE (StandardStochasticVarianceReducedGradientOptimizer) | |
| virtual void | ResetCurrentTimeToInitialTime () |
| virtual void | SetInitialTime (double _arg) |
| virtual void | SetParam_a (double _arg) |
| virtual void | SetParam_A (double _arg) |
| virtual void | SetParam_alpha (double _arg) |
| virtual void | SetParam_beta (double _arg) |
| void | StartOptimization () override |
Public Member Functions inherited from itk::StochasticVarianceReducedGradientDescentOptimizer | |
| virtual unsigned int | GetCurrentInnerIteration () const |
| virtual unsigned int | GetCurrentIteration () const |
| virtual const DerivativeType & | GetGradient () |
| virtual unsigned int | GetLBFGSMemory () const |
| virtual const double & | GetLearningRate () |
| virtual const unsigned long & | GetNumberOfInnerIterations () |
| virtual const unsigned long & | GetNumberOfIterations () |
| virtual const DerivativeType & | GetPreviousGradient () |
| virtual const ParametersType & | GetPreviousPosition () |
| virtual const DerivativeType & | GetSearchDir () |
| virtual const StopConditionType & | GetStopCondition () |
| virtual const double & | GetValue () |
| ITK_DISALLOW_COPY_AND_MOVE (StochasticVarianceReducedGradientDescentOptimizer) | |
| virtual void | MetricErrorResponse (ExceptionObject &err) |
| virtual void | ResumeOptimization () |
| virtual void | SetLearningRate (double _arg) |
| virtual void | SetNumberOfIterations (unsigned long _arg) |
| void | SetNumberOfWorkUnits (ThreadIdType numberOfThreads) |
| virtual void | SetPreviousGradient (DerivativeType _arg) |
| virtual void | SetPreviousPosition (ParametersType _arg) |
| virtual void | SetUseEigen (bool _arg) |
| virtual void | SetUseMultiThread (bool _arg) |
| void | StartOptimization () override |
| virtual void | StopOptimization () |
Public Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer | |
| const ParametersType & | GetCurrentPosition () const override |
| virtual bool | GetMaximize () const |
| virtual const ScaledCostFunctionType * | GetScaledCostFunction () |
| virtual const ParametersType & | GetScaledCurrentPosition () |
| bool | GetUseScales () const |
| virtual void | InitializeScales () |
| ITK_DISALLOW_COPY_AND_MOVE (ScaledSingleValuedNonLinearOptimizer) | |
| virtual void | MaximizeOff () |
| virtual void | MaximizeOn () |
| void | SetCostFunction (CostFunctionType *costFunction) override |
| virtual void | SetMaximize (bool _arg) |
| virtual void | SetUseScales (bool arg) |
Static Public Member Functions | |
| static Pointer | New () |
Static Public Member Functions inherited from itk::StandardStochasticVarianceReducedGradientOptimizer | |
| static Pointer | New () |
Static Public Member Functions inherited from itk::StochasticVarianceReducedGradientDescentOptimizer | |
| static Pointer | New () |
Static Public Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer | |
| static Pointer | New () |
Protected Member Functions | |
| AdaptiveStochasticVarianceReducedGradientOptimizer () | |
| void | UpdateCurrentTime () override |
| ~AdaptiveStochasticVarianceReducedGradientOptimizer () override=default | |
Protected Member Functions inherited from itk::StandardStochasticVarianceReducedGradientOptimizer | |
| virtual double | Compute_a (double k) const |
| virtual double | Compute_beta (double k) const |
| StandardStochasticVarianceReducedGradientOptimizer () | |
| ~StandardStochasticVarianceReducedGradientOptimizer () override=default | |
Protected Member Functions inherited from itk::StochasticVarianceReducedGradientDescentOptimizer | |
| void | PrintSelf (std::ostream &os, Indent indent) const override |
| StochasticVarianceReducedGradientDescentOptimizer () | |
| ~StochasticVarianceReducedGradientDescentOptimizer () override=default | |
Protected Member Functions inherited from itk::ScaledSingleValuedNonLinearOptimizer | |
| virtual void | GetScaledDerivative (const ParametersType &parameters, DerivativeType &derivative) const |
| virtual MeasureType | GetScaledValue (const ParametersType &parameters) const |
| virtual void | GetScaledValueAndDerivative (const ParametersType &parameters, MeasureType &value, DerivativeType &derivative) const |
| void | PrintSelf (std::ostream &os, Indent indent) const override |
| ScaledSingleValuedNonLinearOptimizer () | |
| void | SetCurrentPosition (const ParametersType &param) override |
| virtual void | SetScaledCurrentPosition (const ParametersType &parameters) |
| ~ScaledSingleValuedNonLinearOptimizer () override=default | |
Private Attributes | |
| double | m_SigmoidMax { 1.0 } |
| double | m_SigmoidMin { -0.8 } |
| double | m_SigmoidScale { 1e-8 } |
| bool | m_UseAdaptiveStepSizes { true } |
Additional Inherited Members | |
Protected Types inherited from itk::StochasticVarianceReducedGradientDescentOptimizer | |
| using | ThreadInfoType = MultiThreaderBase::WorkUnitInfo |
| using itk::AdaptiveStochasticVarianceReducedGradientOptimizer::ConstPointer = SmartPointer<const Self> |
Definition at line 80 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
| using itk::AdaptiveStochasticVarianceReducedGradientOptimizer::Pointer = SmartPointer<Self> |
Definition at line 79 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
| using itk::AdaptiveStochasticVarianceReducedGradientOptimizer::Self = AdaptiveStochasticVarianceReducedGradientOptimizer |
Standard ITK.
Definition at line 76 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
| using itk::AdaptiveStochasticVarianceReducedGradientOptimizer::Superclass = StandardStochasticVarianceReducedGradientOptimizer |
Definition at line 77 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
| itk::AdaptiveStochasticVarianceReducedGradientOptimizer::AdaptiveStochasticVarianceReducedGradientOptimizer | ( | ) |
protected
| itk::AdaptiveStochasticVarianceReducedGradientOptimizer::~AdaptiveStochasticVarianceReducedGradientOptimizer | ( | ) |
override protected default
| virtual const char * itk::AdaptiveStochasticVarianceReducedGradientOptimizer::GetClassName | ( | ) | const |
virtual
Run-time type information (and related methods).
Reimplemented from itk::StandardStochasticVarianceReducedGradientOptimizer.
Reimplemented in elastix::AdaptiveStochasticVarianceReducedGradient< TElastix >.
| virtual double itk::AdaptiveStochasticVarianceReducedGradientOptimizer::GetSigmoidMax | ( | ) | const |
virtual
| virtual double itk::AdaptiveStochasticVarianceReducedGradientOptimizer::GetSigmoidMin | ( | ) | const |
virtual
| virtual double itk::AdaptiveStochasticVarianceReducedGradientOptimizer::GetSigmoidScale | ( | ) | const |
virtual
| virtual bool itk::AdaptiveStochasticVarianceReducedGradientOptimizer::GetUseAdaptiveStepSizes | ( | ) | const |
virtual
| itk::AdaptiveStochasticVarianceReducedGradientOptimizer::ITK_DISALLOW_COPY_AND_MOVE | ( | AdaptiveStochasticVarianceReducedGradientOptimizer | ) |
| static Pointer itk::AdaptiveStochasticVarianceReducedGradientOptimizer::New | ( | ) |
static
Method for creation through the object factory.
| virtual void itk::AdaptiveStochasticVarianceReducedGradientOptimizer::SetSigmoidMax | ( | double | _arg | ) |
virtual
Set/Get the maximum of the sigmoid. Should be >0. Default: 1.0
| virtual void itk::AdaptiveStochasticVarianceReducedGradientOptimizer::SetSigmoidMin | ( | double | _arg | ) |
virtual
Set/Get the maximum of the sigmoid. Should be <0. Default: -0.8
| virtual void itk::AdaptiveStochasticVarianceReducedGradientOptimizer::SetSigmoidScale | ( | double | _arg | ) |
virtual
Set/Get the scaling of the sigmoid width. Large values cause a more wide sigmoid. Default: 1e-8. Should be >0.
| virtual void itk::AdaptiveStochasticVarianceReducedGradientOptimizer::SetUseAdaptiveStepSizes | ( | bool | _arg | ) |
virtual
Set/Get whether the adaptive step size mechanism is desired. Default: true
| void itk::AdaptiveStochasticVarianceReducedGradientOptimizer::UpdateCurrentTime | ( | ) |
override protected virtual
Function to update the current time. If UseAdaptiveStepSizes is false, this function simply increments the CurrentTime by a constant. Otherwise, the CurrentTime is updated according to:
time = max[ 0, time + sigmoid( -gradient * previousgradient ) ]
In that case, the m_PreviousGradient is also updated.
Reimplemented from itk::StandardStochasticVarianceReducedGradientOptimizer.
| DerivativeType itk::AdaptiveStochasticVarianceReducedGradientOptimizer::m_PreviousGradient |
protected
The PreviousGradient, necessary for the Cruz acceleration.
Definition at line 132 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
| double itk::AdaptiveStochasticVarianceReducedGradientOptimizer::m_SigmoidMax { 1.0 } |
private
Definition at line 137 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
| double itk::AdaptiveStochasticVarianceReducedGradientOptimizer::m_SigmoidMin { -0.8 } |
private
Definition at line 138 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
| double itk::AdaptiveStochasticVarianceReducedGradientOptimizer::m_SigmoidScale { 1e-8 } |
private
Definition at line 139 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
| bool itk::AdaptiveStochasticVarianceReducedGradientOptimizer::m_UseAdaptiveStepSizes { true } |
private
Settings
Definition at line 136 of file itkAdaptiveStochasticVarianceReducedGradientOptimizer.h.
Generated for elastix by Doxygen 1.9.8