opengm Package

class opengm.ExplicitFunction((object)arg1, (ExplicitFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape [, (float)value=0.0]) -> object :

Construct an explicit function from shape and an optional value to fill the function with

Args :

shape : shape of the function

value : value to fill the function with (default : 0.0)

Examples:

>>> import opengm
>>> f=opengm.ExplicitFunction([2,3,4],1.0)

Notes :

Instead of adding an explicit function directly to the graphical model one can add a numpy ndarray to the gm, which will be converted to an explicit function. But it might be faster to add the explicit function directly.
__getitem__(labels)

get the values of a function for a given labeling

Arg:

labels : labeling has to be as long as the dimension of the function
__init__((object)arg1, (ExplicitFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape [, (float)value=0.0]) -> object :

Construct an explicit function from shape and an optional value to fill the function with

Args :

shape : shape of the function

value : value to fill the function with (default : 0.0)

Examples:

>>> import opengm
>>> f=opengm.ExplicitFunction([2,3,4],1.0)

Notes :

Instead of adding an explicit function directly to the graphical model one can add a numpy ndarray to the gm, which will be converted to an explicit function. But it might be faster to add the explicit function directly.
dimension

get the number of dimensions

ndim

get the number of dimensions (same as dimension)

shape

get the shape of the function

class opengm.SparseFunction((object)arg1, (SparseFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape [, (float)defaultValue=0.0]) -> object :

Construct a sparse function from shape and an optional value to fill the function with

Args :

shape : shape of the function

defaultValue : default value of the sparse function (default : 0.0)

Examples:

>>> import opengm
>>> f=opengm.SparseFunction([2,3,4],0.0)
>>> len(f.container)
0
>>> f[0,1,0]=1.0
>>> len(f.container)
1

Notes :

Instead of adding an explicit function directly to the graphical model one can add a numpy ndarray to the gm, which will be converted to an explicit function. But it might be faster to add the explicit function directly.
__getitem__(labels)

get the values of a function for a given labeling

Arg:

labels : labeling has to be as long as the dimension of the function
__init__((object)arg1, (SparseFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape [, (float)defaultValue=0.0]) -> object :

Construct a sparse function from shape and an optional value to fill the function with

Args :

shape : shape of the function

defaultValue : default value of the sparse function (default : 0.0)

Examples:

>>> import opengm
>>> f=opengm.SparseFunction([2,3,4],0.0)
>>> len(f.container)
0
>>> f[0,1,0]=1.0
>>> len(f.container)
1

Notes :

Instead of adding an explicit function directly to the graphical model one can add a numpy ndarray to the gm, which will be converted to an explicit function. But it might be faster to add the explicit function directly.
defaultValue

Default value of the sparse function

Example:
>>> import opengm
>>> f=opengm.SparseFunction([2,2],1.0)
>>> f.defaultValue
1.0
dimension

get the number of dimensions

ndim

get the number of dimensions (same as dimension)

shape

get the shape of the function

class opengm.PottsFunction((object)arg1, (PottsFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape, (float)valueEqual, (float)valueNotEqual) -> object :

Construct a PottsFunction .

Args:

shape: shape of the function (len(shape) must be 2 !)

valueEqual: value of the function where the labels are equal (on the diagonal of the function, e.g. f[0,0], f[1,1])

valueNotEqual: value of the function where the labels differ (off the diagonal of the function, e.g. f[1,0], f[0,1])

Example:

Construct a PottsFunction

>>> f=opengm.PottsFunction([2,2],0.0,1.0)
>>> f[0,0]
0.0
>>> f[0,1]
1.0

Note:

__getitem__(labels)

get the values of a function for a given labeling

Arg:

labels : labeling has to be as long as the dimension of the function
__init__((object)arg1, (PottsFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape, (float)valueEqual, (float)valueNotEqual) -> object :

Construct a PottsFunction .

Args:

shape: shape of the function (len(shape) must be 2 !)

valueEqual: value of the function where the labels are equal (on the diagonal of the function, e.g. f[0,0], f[1,1])

valueNotEqual: value of the function where the labels differ (off the diagonal of the function, e.g. f[1,0], f[0,1])

Example:

Construct a PottsFunction

>>> f=opengm.PottsFunction([2,2],0.0,1.0)
>>> f[0,0]
0.0
>>> f[0,1]
1.0

Note:

dimension

get the number of dimensions

ndim

get the number of dimensions (same as dimension)

shape

get the shape of the function

class opengm.PottsNFunction((object)arg1, (PottsNFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape, (float)valueEqual, (float)valueNotEqual) -> object :

Construct a PottsNFunction .

Args:

shape: shape of the function (len(shape) can be any order >= 2)

valueEqual: value of the function where all labels are equal (e.g. f[0,0,0])

valueNotEqual: value of the function where not all labels are equal (e.g. f[0,1,1])

Example:

Construct a PottsNFunction

>>> f=opengm.PottsNFunction([4,4,4],0.0,1.0)
>>> f[3,3,3]
0.0
>>> f[0,1,1]
1.0

Note:

__getitem__(labels)

get the values of a function for a given labeling

Arg:

labels : labeling has to be as long as the dimension of the function
__init__((object)arg1, (PottsNFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape, (float)valueEqual, (float)valueNotEqual) -> object :

Construct a PottsNFunction .

Args:

shape: shape of the function (len(shape) can be any order >= 2)

valueEqual: value of the function where all labels are equal (e.g. f[0,0,0])

valueNotEqual: value of the function where not all labels are equal (e.g. f[0,1,1])

Example:

Construct a PottsNFunction

>>> f=opengm.PottsNFunction([4,4,4],0.0,1.0)
>>> f[3,3,3]
0.0
>>> f[0,1,1]
1.0

Note:

dimension

get the number of dimensions

ndim

get the number of dimensions (same as dimension)

shape

get the shape of the function

class opengm.PottsGFunction((object)arg1, (PottsGFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape [, (object)values=()]) -> object :

Construct a PottsGFunction .

Args:

shape: shape of the function (len(shape) must be 2 !)

values: TODO!!!!!

Note:

__getitem__(labels)

get the values of a function for a given labeling

Arg:

labels : labeling has to be as long as the dimension of the function
__init__((object)arg1, (PottsGFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape [, (object)values=()]) -> object :

Construct a PottsGFunction .

Args:

shape: shape of the function (len(shape) must be 2 !)

values: TODO!!!!!

Note:

dimension

get the number of dimensions

ndim

get the number of dimensions (same as dimension)

shape

get the shape of the function

class opengm.TruncatedAbsoluteDifferenceFunction((object)arg1, (TruncatedAbsoluteDifferenceFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape, (float)truncate [, (float)weight=1.0]) -> object :

Construct a TruncatedAbsoluteDifferenceFunction .

Args:

shape: shape of the function (len(shape) must be 2 !)

truncate : truncate the function at a given value

weight: weight of the function (default : 1.0)

Example:

>>> f=opengm.TruncatedAbsoluteDifferenceFunction(shape=[255,255],truncate=20.0,weight=2.0)

Note:

__getitem__(labels)

get the values of a function for a given labeling

Arg:

labels : labeling has to be as long as the dimension of the function
__init__((object)arg1, (TruncatedAbsoluteDifferenceFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape, (float)truncate [, (float)weight=1.0]) -> object :

Construct a TruncatedAbsoluteDifferenceFunction .

Args:

shape: shape of the function (len(shape) must be 2 !)

truncate : truncate the function at a given value

weight: weight of the function (default : 1.0)

Example:

>>> f=opengm.TruncatedAbsoluteDifferenceFunction(shape=[255,255],truncate=20.0,weight=2.0)

Note:

dimension

get the number of dimensions

ndim

get the number of dimensions (same as dimension)

shape

get the shape of the function

class opengm.TruncatedSquaredDifferenceFunction((object)arg1, (TruncatedSquaredDifferenceFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape, (float)truncate [, (float)weight=1.0]) -> object :

Construct a TruncatedSquaredDifferenceFunction .

Args:

shape: shape of the function (len(shape) must be 2 !)

truncate : truncate the function at a given value

weight: weight of the function (default : 1.0)

Example:

>>> f=opengm.TruncatedSquaredDifferenceFunction(shape=[255,255],truncate=20.0,weight=2.0)

Note:

__getitem__(labels)

get the values of a function for a given labeling

Arg:

labels : labeling has to be as long as the dimension of the function
__init__((object)arg1, (TruncatedSquaredDifferenceFunction)other) → None :

copy constructor

__init__( (object)arg1) -> None :
empty constructor
__init__( (object)arg1, (object)shape, (float)truncate [, (float)weight=1.0]) -> object :

Construct a TruncatedSquaredDifferenceFunction .

Args:

shape: shape of the function (len(shape) must be 2 !)

truncate : truncate the function at a given value

weight: weight of the function (default : 1.0)

Example:

>>> f=opengm.TruncatedSquaredDifferenceFunction(shape=[255,255],truncate=20.0,weight=2.0)

Note:

dimension

get the number of dimensions

ndim

get the number of dimensions (same as dimension)

shape

get the shape of the function

opengm.relabeledPottsFunction(shape, relabelings, valueEqual=0.0, valueNotEqual=1.0, dtype=<type 'numpy.float64'>)

Factory function to construct a numpy array which encodes a potts-function. The labelings on which the potts function is computed are given by relabelings

Keyword arguments:

shape : shape / number of labels of the potts-function

relabelings : a list of relabelings for the 2 variables

valueEqual : value if labels are equal (default : 0.0)

valueNotEqual : value if labels are not equal (default : 1.0)

dtype : data type of the numpy array (default : value_type)

get a potts-function

>>> import opengm
>>> f=opengm.relabeledPottsFunction(shape=[4,3],relabelings=[[4,2,3,5],[2,4,5]],valueEqual=0.0,valueNotEqual=1.0)
>>> f[0,0] # relabeling => 4,2
1.0
>>> f[0,1] # relabeling => 4,4
0.0
Returns:
a numpy array with dtype == value_type
opengm.differenceFunction(shape, norm=2, weight=1.0, truncate=None, dtype=<type 'numpy.float64'>)

Factory function to construct a numpy array which encodes a difference-function. The difference can be of any norm (1,2,...) and can be truncated or untruncated.

Keyword arguments:

shape – shape / number of labels of the difference-function

weight – weight which is multiplied to the norm

truncate – truncate all values where the norm is bigger than truncate

dtype – data type of the numpy array

Example:
>>> import opengm
>>> f=opengm.differenceFunction([2,4],weight=0.5,truncate=5)
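
Since the factory returns a plain value table, it can be added to a graphical model like any other function; a minimal sketch (assuming a gm whose two variables have 2 and 4 labels, matching the shape of f):

>>> gm=opengm.gm([2,4])
>>> fid=gm.addFunction(f)
>>> int(gm.addFactor(fid,[0,1]))
0
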
opengm.relabeledDifferenceFunction(shape, relabelings, norm=2, weight=1.0, truncate=None, dtype=<type 'numpy.float64'>)

Factory function to construct a numpy array which encodes a difference-function. The difference can be of any norm (1,2,...) and can be truncated or untruncated. The labelings on which the difference function is evaluated are given by relabelings

Keyword arguments:

shape – shape / number of labels of the difference-function

weight – weight which is multiplied to the norm

truncate – truncate all values where the norm is bigger than truncate

dtype – data type of the numpy array

get a truncated squared difference function ::
>>> import opengm
>>> f=opengm.relabeledDifferenceFunction([2,4],[[1,2],[2,3,4,5]],weight=0.5,truncate=5)
opengm.pottsFunction(shape, valueEqual=0.0, valueNotEqual=1.0)

factory function to generate a potts-function

Args:

shape : shape of the potts-functions

valueEqual : value if all labels are equal

valueNotEqual : value if not all labels are equal

Returns:

opengm.PottsFunction if len(shape) == 2

opengm.PottsNFunction if len(shape) > 2

Example:

>>> import opengm
>>> f = opengm.pottsFunction(shape=[2,2],valueEqual=0.0,valueNotEqual=1.0)
>>> print "f[0,0]=%.1f" % (f[0,0],)
f[0,0]=0.0
>>> print "f[1,0]=%.1f" % (f[1,0],)
f[1,0]=1.0
>>> print "f[0,1]=%.1f" % (f[0,1],)
f[0,1]=1.0
>>> print "f[1,1]=%.1f" % (f[1,1],)
f[1,1]=0.0
>>> f = opengm.pottsFunction(shape=[3,3,3],valueEqual=0.0,valueNotEqual=1.0)
>>> print "f[0,0,0]=%.1f" % (f[0,0,0],)
f[0,0,0]=0.0
>>> print "f[1,0,0]=%.1f" % (f[1,0,0],)
f[1,0,0]=1.0
>>> print "f[0,1,0]=%.1f" % (f[0,1,0],)
f[0,1,0]=1.0
>>> print "f[1,1,2]=%.1f" % (f[1,1,2],)
f[1,1,2]=1.0
>>> print "f[2,2,2]=%.1f" % (f[2,2,2],)
f[2,2,2]=0.0

See also

opengm.PottsFunction, opengm.PottsNFunction

class opengm.ExplicitFunctionVector((object)arg1) → None
__init__((object)arg1) → None
append((ExplicitFunctionVector)arg1, (object)arg2) → None
extend((ExplicitFunctionVector)arg1, (object)arg2) → None
class opengm.SparseFunctionVector((object)arg1) → None
__init__((object)arg1) → None
append((SparseFunctionVector)arg1, (object)arg2) → None
extend((SparseFunctionVector)arg1, (object)arg2) → None
class opengm.PottsFunctionVector((object)arg1) → None
__init__( (object)arg1, (object)numberOfLabels1, (object)numberOfLabels2, (object)valueEqual, (object)valueNotEqual) -> object :
TODO
__init__((object)arg1) → None
__init__( (object)arg1, (object)numberOfLabels1, (object)numberOfLabels2, (object)valueEqual, (object)valueNotEqual) -> object :
TODO
append((PottsFunctionVector)arg1, (object)arg2) → None
extend((PottsFunctionVector)arg1, (object)arg2) → None
class opengm.PottsNFunctionVector((object)arg1) → None
__init__((object)arg1) → None
append((PottsNFunctionVector)arg1, (object)arg2) → None
extend((PottsNFunctionVector)arg1, (object)arg2) → None
class opengm.PottsGFunctionVector((object)arg1) → None
__init__((object)arg1) → None
append((PottsGFunctionVector)arg1, (object)arg2) → None
extend((PottsGFunctionVector)arg1, (object)arg2) → None
class opengm.TruncatedAbsoluteDifferenceFunctionVector((object)arg1) → None
__init__((object)arg1) → None
append((TruncatedAbsoluteDifferenceFunctionVector)arg1, (object)arg2) → None
extend((TruncatedAbsoluteDifferenceFunctionVector)arg1, (object)arg2) → None
class opengm.TruncatedSquaredDifferenceFunctionVector((object)arg1) → None
__init__((object)arg1) → None
append((TruncatedSquaredDifferenceFunctionVector)arg1, (object)arg2) → None
extend((TruncatedSquaredDifferenceFunctionVector)arg1, (object)arg2) → None
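
These vector classes are simple containers for functions of a single type; a minimal usage sketch based on the constructors and append shown above (the concrete values are only illustrative, and such a vector is typically handed to gm.addFunctions to add many functions at once):

>>> import opengm
>>> fVec=opengm.PottsFunctionVector()
>>> fVec.append(opengm.PottsFunction([2,2],0.0,1.0))
>>> fVec.append(opengm.PottsFunction([3,3],0.0,0.5))
>>> len(fVec)
2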

Factory functions to generate some classes:

opengm.gm(numberOfLabels, operator='adder', reserveNumFactorsPerVariable=0)

Factory function to construct a graphical model.

Args:

numberOfLabels : sequence of the number of labels for each variable (can be a list or a 1d numpy.ndarray)

operator : operator of the graphical model. Can be ‘adder’ or ‘multiplier’ (default: ‘adder’)

Construct a gm with 'adder' as operator::
>>> import opengm
>>> gm=opengm.graphicalModel([2,2,2,2,2],operator='adder')
>>> # or just
>>> gm=opengm.graphicalModel([2,2,2,2,2])

Construct a gm with 'multiplier' as operator:

gm=opengm.graphicalModel([2,2,2,2,2],operator='multiplier')
opengm.graphicalModel(numberOfLabels, operator='adder', reserveNumFactorsPerVariable=0)

Factory function to construct a graphical model.

Args:

numberOfLabels : sequence of the number of labels for each variable (can be a list or a 1d numpy.ndarray)

operator : operator of the graphical model. Can be ‘adder’ or ‘multiplier’ (default: ‘adder’)

Construct a gm with 'adder' as operator::
>>> import opengm
>>> gm=opengm.graphicalModel([2,2,2,2,2],operator='adder')
>>> # or just
>>> gm=opengm.graphicalModel([2,2,2,2,2])

Construct a gm with 'multiplier' as operator:

gm=opengm.graphicalModel([2,2,2,2,2],operator='multiplier')
opengm.movemaker(gm, labels=None)
opengm.shapeWalker(shape)

generator object to iterate over a multi-dimensional factor / value table

Args:
shape : shape of the factor / value table
Yields:
coordinate as list of integers

Example:

>>> import opengm
>>> import numpy
>>> # some graphical model 
>>> # -with 2 variables with 2 labels.
>>> # -with 1  2-order functions
>>> # -connected to 1 factor
>>> gm=opengm.gm([2]*2)
>>> f=opengm.PottsFunction(shape=[2,2],valueEqual=0.0,valueNotEqual=1.0)
>>> int(gm.addFactor(gm.addFunction(f),[0,1]))
0
>>> # iterate over all factors  of the graphical model 
>>> # (= 1 factor in this example)
>>> for factor in gm.factors():
...   # iterate over all labelings with a "shape walker"
...   for coord in opengm.shapeWalker(f.shape):
...      pass
...      print "f[%s]=%.1f" %(str(coord),factor[coord])
f[[0, 0]]=0.0
f[[1, 0]]=1.0
f[[0, 1]]=1.0
f[[1, 1]]=0.0

Note :

Only implemented for dimension<=10
opengm.visualizeGm(gm, plotUnaries=True, plotFunctions=False, plotNonShared=False, layout='neato', iterations=1000, show=True, relNodeSize=1.0)

visualize a graphical model with matplotlib, networkx and graphviz

Keyword arguments:
  • plotUnaries – plot unaries (default: True)

  • plotFunctions – plot functions (default: False)

  • plotNonShared – plot non shared functions (default: False )

  • layout – used layout to generate node positions: (default: 'neato' )
    • 'spring' :

      'spring model' layout which should be used only for very small graphical models (|V| + |F| < 50)

    • 'neato' (needs also python graphviz) :

      'spring model' layouts. This is the default tool to use if the graph is not too large (|V| + |F| < 100) and you don't know anything else about it. Neato attempts to minimize a global energy function, which is equivalent to statistical multi-dimensional scaling.

    • 'fdp' (needs also python graphviz) :

      'spring model' layouts similar to those of neato, but does this by reducing forces rather than working with energy.

    • 'sfdp' (needs also python graphviz) :

      multiscale version of fdp for the layout of large graphs.

    • 'twopi' (needs also python graphviz) :

      radial layouts, after Graham Wills 97. Nodes are placed on concentric circles depending on their distance from a given root node.

    • 'circo' (needs also python graphviz) :

      circular layout, after Six and Tollis 99, Kauffman and Wiese 02. This is suitable for certain diagrams of multiple cyclic structures, such as certain telecommunications networks.

    • show : show the graph or suppress showing (default=True)
    • relNodeSize : relative size of the nodes; must be between 0 and 1.0
class opengm.IndependentFactor((object)arg1) → None
PyIndependentFactor
__init__((object)arg1) → None
copyValuesSwitchedOrder((IndependentFactor)arg1) → object
max((IndependentFactor)arg1, (object)arg2) → IndependentFactor

max( (IndependentFactor)arg1, (tuple)arg2) -> IndependentFactor

max( (IndependentFactor)arg1, (list)arg2) -> IndependentFactor

max( (IndependentFactor)arg1) -> float

maxInplace((IndependentFactor)arg1, (object)arg2) → None

maxInplace( (IndependentFactor)arg1, (tuple)arg2) -> None

maxInplace( (IndependentFactor)arg1, (list)arg2) -> None

min((IndependentFactor)arg1, (object)arg2) → IndependentFactor

min( (IndependentFactor)arg1, (tuple)arg2) -> IndependentFactor

min( (IndependentFactor)arg1, (list)arg2) -> IndependentFactor

min( (IndependentFactor)arg1) -> float

minInplace((IndependentFactor)arg1, (object)arg2) → None

minInplace( (IndependentFactor)arg1, (tuple)arg2) -> None

minInplace( (IndependentFactor)arg1, (list)arg2) -> None

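A minimal sketch of the accumulation methods above, reusing the grid model from the subFactor example below; the overload without arguments accumulates over the whole value table, while passing a sequence of factor-local variable indices accumulates only over those variables (an assumption based on the signatures shown here):

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(10, 10,4)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> indFac=gm[100].asIndependentFactor()
>>> minValue=indFac.min()       # smallest entry of the value table (a float)
>>> reduced=indFac.min([0])     # accumulate the minimum over the first variable of the factor
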
numberOfLabels((IndependentFactor)arg1, (int)arg2) → int
numberOfVariables
product((IndependentFactor)arg1, (object)arg2) → IndependentFactor

product( (IndependentFactor)arg1, (tuple)arg2) -> IndependentFactor

product( (IndependentFactor)arg1, (list)arg2) -> IndependentFactor

product( (IndependentFactor)arg1) -> float

productInplace((IndependentFactor)arg1, (object)arg2) → None

productInplace( (IndependentFactor)arg1, (tuple)arg2) -> None

productInplace( (IndependentFactor)arg1, (list)arg2) -> None

shape

Get the shape of an independent factor, which is a sequence of the number of labels for all variables connected to this factor

subFactor(fixedVars, fixedVarsLabels)

get the value table of a sub-factor where some variables of the factor have been fixed to a given label

Args:

fixedVars : a 1d-sequence of variable indices to fix w.r.t. the factor

fixedVarsLabels : a 1d-sequence of labels for the given indices in fixedVars

Example :

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(10, 10,4)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> factor2Order=gm[100].asIndependentFactor()
>>> int(factor2Order.numberOfVariables)
2
>>> print factor2Order.shape
[4, 4, ]
>>> # fix the second variable index w.r.t. the factor to the label 3
>>> subValueTable = factor2Order.subFactor(fixedVars=[1],fixedVarsLabels=[3])
>>> subValueTable.shape
(4,)
>>> for x in range(4):
...     print factor2Order[x,3]==subValueTable[x]
True
True
True
True
sum((IndependentFactor)arg1, (object)arg2) → IndependentFactor

sum( (IndependentFactor)arg1, (tuple)arg2) -> IndependentFactor

sum( (IndependentFactor)arg1, (list)arg2) -> IndependentFactor

sum( (IndependentFactor)arg1) -> float

sumInplace((IndependentFactor)arg1, (object)arg2) → None

sumInplace( (IndependentFactor)arg1, (tuple)arg2) -> None

sumInplace( (IndependentFactor)arg1, (list)arg2) -> None

opengm.inference Package

class opengm.inference.Icm(gm, accumulator=None, parameter=None)

Icm is a movemaking inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

  • moveType : default = variable

    moveType can be:

    -'variable' : move only one variable at a time, optimally (default)

    -'factor' : move all variables of a factor at once, optimally

Examples:

>>> parameter = opengm.InfParam(moveType='variable')
>>> inference = opengm.inference.Icm(gm=gm,accumulator='minimizer',parameter=parameter)

Guarantees :

optimal within a Hamming distance of 1

Limitations :

None, this algorithm can be used for any graphical model

Cite :

    1. Besag, J.: On the Statistical Analysis of Dirty Pictures, Journal of the Royal Statistical Society, Series B 48(3):259-302, 1986.

Dependencies :

None, this inference algorithm is implemented in OpenGM by default.

Notes :

See also

opengm.inference.LazyFlipper, a generalization of Icm

bound((_Icm)arg1) → float
pythonVisitor((_Icm)arg1, (object)callbackObject[, (int)visitNth=1]) → __IcmPythonVisitor
value((_Icm)arg1) → float
verboseVisitor((_Icm)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __IcmVerboseVisitor
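
A minimal sketch of running the solver with a verbose visitor; it assumes the usual infer() / arg() calls of opengm solvers, which are not listed above:

>>> inference = opengm.inference.Icm(gm=gm,accumulator='minimizer')
>>> visitor = inference.verboseVisitor(printNth=10,multiline=False)
>>> inference.infer(visitor)     # prints the current value every 10th step
>>> labels = inference.arg()     # labeling found by Icm
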
class opengm.inference.LazyFlipper(gm, accumulator=None, parameter=None)

LazyFlipper is a movemaking inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

  • maxSubgraphSize : default = 2

    maximum subgraph size which is optimized

Examples:

>>> parameter = opengm.InfParam(maxSubgraphSize=2)
>>> inference = opengm.inference.LazyFlipper(gm=gm,accumulator='minimizer',parameter=parameter)

Guarantees :

optimal within a Hamming distance of the given subgraph size

Limitations :

None, this algorithm can be used for any graphical model

Cite :

Bjoern Andres, Joerg H. Kappes, Thorsten Beier, Ullrich Koethe, Fred A. Hamprecht:

The Lazy Flipper: Efficient Depth-Limited Exhaustive Search in Discrete Graphical Models. ECCV (7) 2012: 154-166

Dependencies :

None, this inference algorithm is implemented in OpenGM by default.
bound((_LazyFlipper)arg1) → float
pythonVisitor((_LazyFlipper)arg1, (object)callbackObject[, (int)visitNth=1]) → __LazyFlipperPythonVisitor
value((_LazyFlipper)arg1) → float
verboseVisitor((_LazyFlipper)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __LazyFlipperVerboseVisitor
class opengm.inference.BeliefPropagation(gm, accumulator=None, parameter=None)

BeliefPropagation is a message-passing inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

  • convergenceBound : default = 0.0

    Convergence bound stops message passing updates when message change is smaller than convergenceBound

  • damping : default = 0.0

    Damping must be in [0,1]

  • isAcyclic : default = Maybe

    isAcyclic can be:

    -'maybe' : if it is unknown whether the gm is acyclic (default)

    -True : if it is known that the gm is acyclic (the gm has no loops)

    -False : if it is known that the gm is not acyclic (the gm has loops)

  • steps : default = 100

    Number of message passing updates

Examples:

>>> parameter = opengm.InfParam(steps=100,damping=0.5)
>>> inference = opengm.inference.BeliefPropagation(gm=gm,accumulator='minimizer',parameter=parameter)

Limitations :

None, this algorithm can be used for any graphical model

Dependencies :

None, this inference algorithm is implemented in OpenGM by default.
bound((_BeliefPropagation)arg1) → float
factorMarginals(*args, **kwargs)

get the factor marginals for a subset of factor indices

Args:
fis : factor indices (for highest performance use a numpy.ndarray with opengm.index_type as dtype)
Returns :
a N-d numpy.ndarray where the first axis iterates over the factors passed by fis
Notes :
All factors in fis must have the same number of variables and shape
marginals(*args, **kwargs)

get the marginals for a subset of variable indices

Args:
vis : variable indices (for highest performance use a numpy.ndarray with opengm.index_type as dtype)
Returns :
a 2d numpy.ndarray where the first axis iterates over the variables passed by vis
Notes :
All variables in vis must have the same number of labels
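
A minimal sketch of querying marginals after message passing; it assumes the usual infer() call of opengm solvers and that the variables in vis (and the factors in fis) fulfill the restrictions noted above:

>>> inference = opengm.inference.BeliefPropagation(gm=gm,accumulator='minimizer',parameter=opengm.InfParam(steps=20))
>>> inference.infer()
>>> varMarginals = inference.marginals([0,1,2])      # one row per variable in vis
>>> facMarginals = inference.factorMarginals([0,1])  # first axis iterates over the factors in fis
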
pythonVisitor((_BeliefPropagation)arg1, (object)callbackObject[, (int)visitNth=1]) → __BeliefPropagationPythonVisitor
value((_BeliefPropagation)arg1) → float
verboseVisitor((_BeliefPropagation)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __BeliefPropagationVerboseVisitor
class opengm.inference.TreeReweightedBp(gm, accumulator=None, parameter=None)

TreeReweightedBp is a message-passing inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

  • convergenceBound : default = 0.0

    Convergence bound stops message passing updates when message change is smaller than convergenceBound

  • damping : default = 0.0

    Damping must be in [0,1]

  • isAcyclic : default = Maybe

    isAcyclic can be:

    -'maybe' : if it is unknown whether the gm is acyclic (default)

    -True : if it is known that the gm is acyclic (the gm has no loops)

    -False : if it is known that the gm is not acyclic (the gm has loops)

  • steps : default = 100

    Number of message passing updates

Examples:

>>> parameter = opengm.InfParam(steps=100,damping=0.5)
>>> inference = opengm.inference.TreeReweightedBp(gm=gm,accumulator='minimizer',parameter=parameter)

Limitations :

None, this algorithm can be used for any graphical model

Dependencies :

This algorithm needs the TRWS library from ???; compile OpenGM with the CMake flag WITH_TRWS set to ON
bound((_TreeReweightedBp)arg1) → float
factorMarginals(*args, **kwargs)

get the factor marginals for a subset of factor indices

Args:
fis : factor indices (for highest performance use a numpy.ndarray with opengm.index_type as dtype)
Returns :
a N-d numpy.ndarray where the first axis iterates over the factors passed by fis
Notes :
All factors in fis must have the same number of variables and shape
marginals(*args, **kwargs)

get the marginals for a subset of variable indices

Args:
vis : variable indices (for highest performance use a numpy.ndarray with opengm.index_type as dtype)
Returns :
a 2d numpy.ndarray where the first axis iterates over the variables passed by vis
Notes :
All variables in vis must have the same number of labels
pythonVisitor((_TreeReweightedBp)arg1, (object)callbackObject[, (int)visitNth=1]) → __TreeReweightedBpPythonVisitor
value((_TreeReweightedBp)arg1) → float
verboseVisitor((_TreeReweightedBp)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __TreeReweightedBpVerboseVisitor
class opengm.inference.DynamicProgramming(gm, accumulator=None, parameter=None)

DynamicProgramming is a dynamic-programming inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

Examples:

>>> inference = opengm.inference.DynamicProgramming(gm=gm,accumulator='minimizer')

Guarantees :

globally optimal

Limitations :

graphical model must be a tree / must not have loops

Dependencies :

None, this inference algorithm is implemented in OpenGM by default.
bound((_DynamicProgramming)arg1) → float
pythonVisitor((_DynamicProgramming)arg1, (object)callbackObject[, (int)visitNth=1]) → __DynamicProgrammingPythonVisitor
value((_DynamicProgramming)arg1) → float
verboseVisitor((_DynamicProgramming)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __DynamicProgrammingVerboseVisitor
class opengm.inference.AStar(gm, accumulator=None, parameter=None)

AStar is a searching inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

  • numberOfOpt : default = 1

    The number of best states to search for during inference

  • heuristic : default = default

    heuristic can be:

    -'default' : default AStar heuristic (default)

    -'standart' : standard AStar heuristic

    -'fast' : fast AStar heuristic for second-order gms

  • maxHeapSize : default = 3000000

    Maximum size of the heap which is used during inference

  • obectiveBound : default = -inf

    AStar objective bound.

    A good bound will speed up inference

Examples:

>>> parameter = opengm.InfParam(heuristic='fast')
>>> inference = opengm.inference.AStar(gm=gm,accumulator='minimizer',parameter=parameter)

Guarantees :

globally optimal

Limitations :

graphical model must be small

Cite :

Kappes, J. H.: "Inference on Highly-Connected Discrete Graphical Models with Applications to Visual Object Recognition", Ph.D. Thesis, 2011.

Bergtholdt, M. & Kappes, J. H. & Schnoerr, C.: "Learning of Graphical Models and Efficient Inference for Object Class Recognition", DAGM 2006

Bergtholdt, M. & Kappes, J. H. & Schmidt, S. & Schnoerr, C.: "A Study of Parts-Based Object Class Detection Using Complete Graphs", DAGM 2006

Dependencies :

None, this inference algorithm is implemented in OpenGM by default.

Notes :

The AStar algorithm transforms the problem into a shortest-path problem in an exponentially large graph. Due to the problem structure, this graph can be represented implicitly.

To find the shortest path we perform a best-first search and use an admissible tree-based heuristic to underestimate the cost to a goal node. This lower bound allows us to reduce the search to a manageable subspace of the exponentially large search space.

bound((_AStar)arg1) → float
pythonVisitor((_AStar)arg1, (object)callbackObject[, (int)visitNth=1]) → __AStarPythonVisitor
value((_AStar)arg1) → float
verboseVisitor((_AStar)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __AStarVerboseVisitor
class opengm.inference.GraphCut(gm, accumulator=None, parameter=None)

GraphCut is a graphCut inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

  • scale : default = 1.0

    rescale the objective function. This is only useful if the min-st-cut uses integral value types. This will be supported in one of the next releases.

  • minStCut
    : minStCut implementation of graphcut
    • 'boost-kolmogorov' (default)
    • 'push-relabel'

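Examples (a minimal sketch, assuming gm is a binary, submodular second-order model as required below):

>>> parameter = opengm.InfParam(minStCut='boost-kolmogorov')
>>> inference = opengm.inference.GraphCut(gm=gm,accumulator='minimizer',parameter=parameter)
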
Guarantees :

optimal

Limitations :

maximum order 2, binary labels, must be submodular

Dependencies :

to use 'kolmogorov' as minStCut, the Kolmogorov max-flow library is required; compile OpenGM with the CMake flag WITH_MAXFLOW set to ON
bound((_GraphCut_Boost_Push_Relabel)arg1) → float
value((_GraphCut_Boost_Push_Relabel)arg1) → float
verboseVisitor((_GraphCut_Boost_Push_Relabel)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __GraphCut_push-relabelVerboseVisitor
class opengm.inference.AlphaBetaSwap(gm, accumulator=None, parameter=None)

AlphaBetaSwap is a graphCut / movemaking inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

  • steps : default = 1000

    Maximum number of iterations

  • minStCut
    : minStCut implementation of graphcut
    • 'boost-kolmogorov' (default)
    • 'push-relabel'

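Examples (a minimal sketch, assuming gm is a submodular second-order model as required below; the keywords are the parameter names listed above):

>>> parameter = opengm.InfParam(steps=1000,minStCut='boost-kolmogorov')
>>> inference = opengm.inference.AlphaBetaSwap(gm=gm,accumulator='minimizer',parameter=parameter)
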
Limitations :

maximum order 2, must be submodular

Dependencies :

to use 'kolmogorov' as minStCut, the Kolmogorov max-flow library is required; compile OpenGM with the CMake flag WITH_MAXFLOW set to ON
bound((_AlphaBetaSwap_Boost_Kolmogorov)arg1) → float
pythonVisitor((_AlphaBetaSwap_Boost_Kolmogorov)arg1, (object)callbackObject[, (int)visitNth=1]) → __AlphaBetaSwap_boost-kolmogorovPythonVisitor
value((_AlphaBetaSwap_Boost_Kolmogorov)arg1) → float
verboseVisitor((_AlphaBetaSwap_Boost_Kolmogorov)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __AlphaBetaSwap_boost-kolmogorovVerboseVisitor
class opengm.inference.AlphaExpansion(gm, accumulator=None, parameter=None)

AlphaExpansion is a graphCut / movemaking inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

  • subInfParam

    • scale : default = 1.0

      rescale the objective function. This is only useful if the min-st-cut uses integral value types. This will be supported in one of the next releases.

  • steps : default = 1000

    Maximum number of iterations

  • minStCut
    : minStCut implementation of graphcut
    • 'boost-kolmogorov' (default)
    • 'push-relabel'

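Examples (a minimal sketch, assuming gm is a submodular second-order model as required below):

>>> parameter = opengm.InfParam(steps=1000,minStCut='boost-kolmogorov')
>>> inference = opengm.inference.AlphaExpansion(gm=gm,accumulator='minimizer',parameter=parameter)
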
Limitations :

maximum order 2, must be submodular

Dependencies :

to use 'kolmogorov' as minStCut, the Kolmogorov max-flow library is required; compile OpenGM with the CMake flag WITH_MAXFLOW set to ON
bound((_AlphaExpansion_Boost_Push_Relabel)arg1) → float
pythonVisitor((_AlphaExpansion_Boost_Push_Relabel)arg1, (object)callbackObject[, (int)visitNth=1]) → __AlphaExpansion_push-relabelPythonVisitor
value((_AlphaExpansion_Boost_Push_Relabel)arg1) → float
verboseVisitor((_AlphaExpansion_Boost_Push_Relabel)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __AlphaExpansion_push-relabelVerboseVisitor
class opengm.inference.DualDecompositionSubgradient(gm, accumulator=None, parameter=None)

DualDecompositionSubgradient is a dual-decomposition inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

The parameter object has internal dependencies:

  • subInference
    : inference algorithms for the sub-blocks
    • 'graph-cut'
    • 'dynamic-programming' (default)

    if subInference == graph-cut :

    • numberOfBlocks : default = 2

      number of blocks for block decomposition

    • decompositionId : default = spanningtrees

      type of decomposition that should be used (independent of model structure) :

      • ‘spanningtrees’
      • ‘trees’
      • ‘blocks’
      • ‘manual’ (not yet implemented in python wrapper)
    • minimalAbsAccuracy : default = 0.0

      the absolute accuracy that has to be guaranteed to stop with an approximate solution (set 0 for optimality)

    • subInfParam

      • scale : default = 1.0

        rescale the objective function. This is only useful if the min-st-cut uses integral value types. This will be supported in one of the next releases.

    • stepsizeStride : default = 1.0

      stride stepsize

    • minimalRelAccuracy : default = 0.0

      the relative accuracy that has to be guaranteed to stop with an approximate solution (set 0 for optimality)

    • maximalDualOrder : default = 18446744073709551615

      maximum order of dual variables (order of the corresponding factor)

    • maximalNumberOfIterations : default = 100

      maximum number of dual iterations

    • subProbParam : default = (False, False)

      a tuple with two Bools:

      • subProbParam[0] is useAdaptiveStepsize
      • subProbParam[1] is useProjectedAdaptiveStepsize
    • stepsizeExponent : default = 0.5

      stepsize exponent

    • stepsizeScale : default = 1.0

      scale of the stepsize

    • numberOfThreads : default = 1

      number of threads for primal problems

    • stepsizeMin : default = 0.0

      minimum stepsize

    • stepsizeMax : default = inf

      maximum stepsize

    if subInference == dynamic-programming :

    • numberOfBlocks : default = 2

      number of blocks for block decomposition

    • decompositionId : default = spanningtrees

      type of decomposition that should be used (independent of model structure) :

      • ‘spanningtrees’
      • ‘trees’
      • ‘blocks’
      • ‘manual’ (not yet implemented in python wrapper)
    • minimalAbsAccuracy : default = 0.0

      the absolute accuracy that has to be guaranteed to stop with an approximate solution (set 0 for optimality)

    • subInfParam

    • stepsizeStride : default = 1.0

      stride stepsize

    • minimalRelAccuracy : default = 0.0

      the relative accuracy that has to be guaranteed to stop with an approximate solution (set 0 for optimality)

    • maximalDualOrder : default = 18446744073709551615

      maximum order of dual variables (order of the corresponding factor)

    • maximalNumberOfIterations : default = 100

      maximum number of dual iterations

    • subProbParam : default = (False, False)

      a tuple with two Bools:

      • subProbParam[0] is useAdaptiveStepsize
      • subProbParam[1] is useProjectedAdaptiveStepsize
    • stepsizeExponent : default = 0.5

      stepsize exponent

    • stepsizeScale : default = 1.0

      scale of the stepsize

    • numberOfThreads : default = 1

      number of threads for primal problems

    • stepsizeMin : default = 0.0

      minimum stepsize

    • stepsizeMax : default = inf

      maximum stepsize

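Examples (a minimal sketch; the keywords are the parameter names listed above, and the sub-solver is selected via subInference):

>>> parameter = opengm.InfParam(subInference='dynamic-programming',decompositionId='spanningtrees',maximalNumberOfIterations=50)
>>> inference = opengm.inference.DualDecompositionSubgradient(gm=gm,accumulator='minimizer',parameter=parameter)
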
Limitations :

None, this algorithm can be used for any graphical model

Dependencies :

None, this inference algorithm is implemented in OpenGM by default.
bound((_DualDecompositionSubgradient_GraphCutBoostKolmogorov)arg1) → float
pythonVisitor((_DualDecompositionSubgradient_GraphCutBoostKolmogorov)arg1, (object)callbackObject[, (int)visitNth=1]) → __DualDecompositionSubgradient_graph-cutPythonVisitor
value((_DualDecompositionSubgradient_GraphCutBoostKolmogorov)arg1) → float
verboseVisitor((_DualDecompositionSubgradient_GraphCutBoostKolmogorov)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __DualDecompositionSubgradient_graph-cutVerboseVisitor
class opengm.inference.Bruteforce(gm, accumulator=None, parameter=None)

Bruteforce is a searching inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

Examples:

>>> inference = opengm.inference.Bruteforce(gm=gm,accumulator='minimizer')

Guarantees :

globally optimal

Limitations :

graphical model must be very small

Dependencies :

None, this inference algorithm is implemented in OpenGM by default.

Notes :

See also

opengm.inference.AStar, a globally optimal solver for small graphical models

bound((_Bruteforce)arg1) → float
value((_Bruteforce)arg1) → float
verboseVisitor((_Bruteforce)arg1[, (int)printNth=1[, (bool)multiline=True]]) → __BruteforceVerboseVisitor
class opengm.inference.PartitionMove(gm, accumulator=None, parameter=None)

PartitionMove is a multicut inference algorithm

Args :

gm : the graphical model to infer / optimize

accumulator : accumulator used for inference, can be:

-'minimizer' (default if gm.operator is 'adder')

-'maximizer' (default if gm.operator is 'multiplier')

-'integrator'

Not every accumulator can be used with every solver. Which accumulators a solver supports will be documented soon.

parameter : parameter object of the solver

Parameter :

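Examples (a minimal sketch, assuming gm fulfills the Potts-model restrictions listed below):

>>> inference = opengm.inference.PartitionMove(gm=gm,accumulator='minimizer')
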
Limitations :

second-order Potts model without unaries, where each variable must have as many labels as there are variables

Dependencies :

None, this inference algorithm is implemented in OpenGM by default.
bound((_PartitionMove)arg1) → float
value((_PartitionMove)arg1) → float

opengm.adder Package

class opengm.adder.GraphicalModel((object)arg1) → None :

The central class of opengm which holds the factor graph and functions of the graphical model

Construct an empty graphical model with no variables. Example:

Construct an empty gm

>>> gm=opengm.adder.GraphicalModel()
>>> int(gm.numberOfVariables)
0
__init__( (object)arg1, (object)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm from any iterable python object where the iterable object holds the number of labels for each variable.

The gm will have as many variables as the length of the iterable sequence

Args:

numberOfLabels: holds the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable (default: 1).

This can speedup adding factors.

Example:

Construct a gm from generator expression

>>> gm=opengm.adder.GraphicalModel(2 for x in xrange(100))
>>> int(gm.numberOfVariables)
100
>>> int(gm.numberOfLabels(0))
2

Construct a gm from list and tuples

>>> gm=opengm.adder.GraphicalModel( [3]*10 )
>>> int(gm.numberOfVariables)
10
>>> int(gm.numberOfLabels(0))
3
>>> gm=opengm.adder.GraphicalModel( (2,4,6) )
>>> int(gm.numberOfVariables)
3
>>> int(gm.numberOfLabels(0))
2
>>> int(gm.numberOfLabels(1))
4
>>> int(gm.numberOfLabels(2))
6

And factors can be reserved for the variables

>>> gm=opengm.adder.GraphicalModel(numberOfLabels=[2]*10,reserveNumFactorsPerVariable=5)
>>> gm=opengm.adder.GraphicalModel(numberOfLabels=(2,2,2),reserveNumFactorsPerVariable=3)

Note:

__init__( (object)arg1, (object)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm from a 1d numpy ndarray which holds the number of labels for each variable.

The gm will have as many variables as the length of the ndarray

Args:

numberOfLabels: holds the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable (default: 1).

This can speedup adding factors.

Example:

Construct a gm from a numpy ndarray

>>> import numpy
>>> gm=opengm.adder.GraphicalModel(numpy.ones(100,dtype=numpy.uint64)*4)
>>> int(gm.numberOfVariables)
100
>>> int(gm.numberOfLabels(0))
4

And factors can be reserved for the variables

>>> gm=opengm.adder.GraphicalModel(numpy.ones(100,dtype=numpy.uint64),reserveNumFactorsPerVariable=3)

Note:

__init__( (object)arg1, (int)numberOfVariables, (int)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm where each variable will have the same number of labels

The gm will have as many variables as given by numberOfVariables. Args:

numberOfVariables: the number of variables of the gm

numberOfLabels: is the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable.

This can speedup adding factors.

Example:

Construct a gm with 10 variables each having 2 possible labels:

>>> gm=opengm.adder.GraphicalModel(numberOfVariables=10,numberOfLabels=2)
>>> int(gm.numberOfVariables)
10
__init__( (object)arg1, (IndexVector)arg2, (int)numberOfLabels) -> object :

Construct a gm from an opengm.IndexVector

Args:

numberOfLabels: holds the number of labels for each variable
__init__((object)arg1) → None :

Construct an empty graphical model with no variables. Example:

Construct an empty gm

>>> gm=opengm.adder.GraphicalModel()
>>> int(gm.numberOfVariables)
0
__init__( (object)arg1, (object)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm from any iterable python object where the iterable object holds the number of labels for each variable.

The gm will have as many variables as the length of the iterable sequence

Args:

numberOfLabels: holds the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable (default: 1).

This can speedup adding factors.

Example:

Construct a gm from generator expression

>>> gm=opengm.adder.GraphicalModel(2 for x in xrange(100))
>>> int(gm.numberOfVariables)
100
>>> int(gm.numberOfLabels(0))
2

Construct a gm from list and tuples

>>> gm=opengm.adder.GraphicalModel( [3]*10 )
>>> int(gm.numberOfVariables)
10
>>> int(gm.numberOfLabels(0))
3
>>> gm=opengm.adder.GraphicalModel( (2,4,6) )
>>> int(gm.numberOfVariables)
3
>>> int(gm.numberOfLabels(0))
2
>>> int(gm.numberOfLabels(1))
4
>>> int(gm.numberOfLabels(2))
6

And factors can be reserved for the variables

>>> gm=opengm.adder.GraphicalModel(numberOfLabels=[2]*10,reserveNumFactorsPerVariable=5)
>>> gm=opengm.adder.GraphicalModel(numberOfLabels=(2,2,2),reserveNumFactorsPerVariable=3)

Note:

__init__( (object)arg1, (object)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm from a 1d numpy ndarray which holds the number of labels for each variable.

The gm will have as many variables as the length of the ndarray

Args:

numberOfLabels: holds the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable (default: 1).

This can speedup adding factors.

Example:

Construct a gm from a numpy ndarray

>>> import numpy
>>> gm=opengm.adder.GraphicalModel(numpy.ones(100,dtype=numpy.uint64)*4)
>>> int(gm.numberOfVariables)
100
>>> int(gm.numberOfLabels(0))
4

And factors can be reserved for the variables

>>> gm=opengm.adder.GraphicalModel(numpy.ones(100,dtype=numpy.uint64),reserveNumFactorsPerVariable=3)

Note:

__init__( (object)arg1, (int)numberOfVariables, (int)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm where each variable will have the same number of labels

The gm will have as many variables as given by numberOfVariables. Args:

numberOfVariables: the number of variables of the gm

numberOfLabels: is the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable.

This can speedup adding factors.

Example:

Construct a gm with 10 variables each having 2 possible labels:

>>> gm=opengm.adder.GraphicalModel(numberOfVariables=10,numberOfLabels=2)
>>> int(gm.numberOfVariables)
10
__init__( (object)arg1, (IndexVector)arg2, (int)numberOfLabels) -> object :

Construct a gm from an opengm.IndexVector

Args:

numberOfLabels: holds the number of labels for each variable
addFactor(fid, variableIndices, finalize=True)

add a factor to the graphical model

Args:

fid : function identifier

variableIndices
: indices of the variables of the factor w.r.t. the graphical model.
The variable indices have to be sorted.

Examples:

>>> import opengm
>>> # a graphical model with 6 variables, some variables with 2, 3 and 4 labels
>>> gm=opengm.gm([2,2,3,3,4,4])
>>> # Add unary function and factor ( factor which is connect to 1 variable )
>>> # - add function (a random function with 2 entries in the value table)
>>> fid =   gm.addFunction(opengm.randomFunction(shape=[2]))
>>> # - connect function and variables to factor 
>>> int(gm.addFactor(fid=fid,variableIndices=0))
0
addFactors(fids, variableIndices, finalize=True)
addFunction(function)

Adds a function to the graphical model.

Args:
function: a function/ value table
Returns:

A function identifier (fid) .

This fid is used to connect a factor to this function

Examples:

Explicit functions added via numpy ndarrays:

>>> import opengm
>>> import numpy
>>> # Add a first-order function with the shape [3]
>>> gm=opengm.graphicalModel([3,3,3,4,4,4,5,5,2,2])
>>> f=numpy.array([0.8,1.4,0.1])
>>> fid=gm.addFunction(f)
>>> print fid.functionIndex
0
>>> print fid.functionType
0
>>> # Add a second-order function with the shape [4,4]
>>> f=numpy.ones([4,4])
>>> #fill the function with values
>>> #..........
>>> fid=gm.addFunction(f)
>>> int(fid.functionIndex),int(fid.functionType)
(1, 0)
>>> # Add a third-order function with the shape [4,5,2]
>>> f=numpy.ones([4,5,2])
>>> #fill the function with values
>>> #..........
>>> fid=gm.addFunction(f)
>>> print fid.functionIndex
2
>>> print fid.functionType
0

Potts functions:

>>> import opengm
>>> gm=opengm.gm([2,2,3,3,3,4,4,4])
>>> # 2-order potts function 
>>> f=opengm.pottsFunction(shape=[2,2],valueEqual=0.0,valueNotEqual=1.0)
>>> f[0,0],f[1,0],f[0,1],f[1,1]
(0.0, 1.0, 1.0, 0.0)
>>> fid=gm.addFunction(f)
>>> int(fid.functionIndex),int(fid.functionType)
(0, 1)
>>> # connect a second order factor to variable 0 and 1 and the potts function
>>> int(gm.addFactor(fid,[0,1]))
0
>>> # higher order potts function
>>> f=opengm.pottsFunction(shape=[2,3,4],valueEqual=0.0,valueNotEqual=2.0)
>>> f[0,0,0],f[1,0,0],f[1,1,1],f[1,1,3]
(0.0, 2.0, 0.0, 2.0)
>>> fid=gm.addFunction(f)
>>> int(fid.functionIndex),int(fid.functionType)
(0, 2)
>>> # connect a third order factor to variable 0,2 and 5 and the potts function
>>> int(gm.addFactor(fid,(0,2,5)))
1
Notes:
addFunctions(functions)
assign((GraphicalModel)arg1, (object)numberOfLabels) → None :

Assign a graphical model from any number-of-labels sequence

Args:

numberOfLabels: holds the number of labels for each variable

Note:

assign( (GraphicalModel)arg1, (object)numberOfLabels) -> None :

Assign a graphical model from a number-of-labels sequence which is a 1d numpy.ndarray

Args:

numberOfLabels: holds the number of labels for each variable

Note:

assign( (GraphicalModel)arg1, (IndexVector)numberOfLabels) -> None :

Assign a graphical model from a number-of-labels sequence which is an opengm.IndexVector

Args:

numberOfLabels: holds the number of labels for each variable

Note:

connectedComponentsFromLabels(labels)
evaluate(labels)

evaluate a labeling to get the energy / probability of that given labeling

Args:

labels : a labeling for all variables of the graphical model

Examples:

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(2, 2,4)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> energy=gm.evaluate([0,2,2,1])
factorClass

Get the class of the factor of this gm

Example :
>>> import opengm
>>> gm=opengm.gm([2]*10)
>>> # fill gm with factors...
>>> result=gm.vectorizedFactorFunction(gm.factorClass.isSubmodular,range(gm.numberOfFactors))
factorIds(order=None, minOrder=None, maxOrder=None)
factorIndices(variableIndices)

get the factor indices of all factors connected to variables within variableIndices

Args:

variableIndices : variable indices w.r.t. the graphical model

Examples :

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(2, 2,4)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> factorIndicesNumpyArray=gm.factorIndices(variableIndices=[0,1])
>>> [fi for fi in factorIndicesNumpyArray]
[0, 1, 4, 5, 6]

Returns :

a sorted numpy.ndarray of all factor indices

Notes :

This function will be fastest if variableIndices is a numpy.ndarray. Otherwise a numpy array will be allocated and the elements of variableIndices are copied.
factorOfVariable((GraphicalModel)arg1, (int)variableIndex, (int)factorIndex) → int :

Get the factor index of a factor which is connected to the variable at variableIndex.

Args:

variableIndex: index of a variable w.r.t the gm

factorIndex: index of a factor w.r.t the number of factors which are connected to this variable``

Returns:
The factor index w.r.t. the gm
factorSubset(factorIndices=None, order=None)
factors(order=None, minOrder=None, maxOrder=None)
factorsAndIds(order=None, minOrder=None, maxOrder=None)
factorsOfVariable((GraphicalModel)arg1, (int)variableIndex) → FactorsOfVariable :

Get the sequence of factor indices (w.r.t. the graphical model) of all factors connected to the variable at variableIndex. Args:

variableIndex: index of a variable w.r.t the gm

Returns: A sequence of factor indices (w.r.t. the graphical model) of all factors connected to the variable at variableIndex

finalize((GraphicalModel)arg1) → None :

finalize the graphical model after adding all factors

this method must be called if any non-finalized factor has been added (addFactor / addFactors with finalize=False)
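
A minimal sketch of the deferred-finalization workflow (assuming the finalize keyword of addFactor shown in its signature above):

>>> gm=opengm.gm([2]*4)
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],0.0,1.0))
>>> fIndex0=gm.addFactor(fid,[0,1],finalize=False)
>>> fIndex1=gm.addFactor(fid,[2,3],finalize=False)
>>> gm.finalize()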

fixVariables(variableIndices, labels)

return a new graphical model where some variables are fixed to a given label.

Args:
variableIndices : variable indices to fix

labels : labels of the variables to fix
Returns:
new graphical model where variables are fixed.
isAcyclic((GraphicalModel)arg1) → bool :

check if the graphical model is acyclic.

Returns:

True if model has no loops / is acyclic

False if model has loops / is not acyclic
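
A minimal sketch (a two-factor chain contains no loop):

>>> import opengm
>>> gm=opengm.gm([2]*3)
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],0.0,1.0))
>>> int(gm.addFactor(fid,[0,1]))
0
>>> int(gm.addFactor(fid,[1,2]))
1
>>> bool(gm.isAcyclic())
True
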

moveLocalOpt((GraphicalModel)arg1, (str)arg2) → object
numberOfFactors

Number of factors of the graphical model

Example:

Get the number of factors of a gm

>>> import opengm
>>> gm=opengm.gm([2]*5)
>>> int(gm.numberOfFactors)
0
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],1.0,0.0))
>>> int(gm.addFactor(fid,[0,1]))
0
>>> int(gm.numberOfFactors)
1
>>> int(gm.addFactor(fid,[1,2]))
1
>>> int(gm.numberOfFactors)
2
numberOfFactorsOfVariable((GraphicalModel)arg1, (int)variableIndex) → int :

Get the number of factors which are connected to a variable

Args:

variableIndex: index of a variable w.r.t. the gm
Returns:

The number of factors which are connected to the variable at variableIndex
numberOfLabels((GraphicalModel)arg1, (int)variableIndex) → int :

Get the number of labels for a variable

Args:

variableIndex: index to a variable in this gm
Returns:
The number of labels for the variable at variableIndex
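
A minimal sketch:

>>> import opengm
>>> gm=opengm.gm([2,3,4])
>>> int(gm.numberOfLabels(0)),int(gm.numberOfLabels(2))
(2, 4)
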
numberOfVariables

Number of variables of the graphical model

Example:

Get the number of variables of a gm

>>> import opengm
>>> gm=opengm.gm([2]*5)
>>> int(gm.numberOfVariables)
5
numberOfVariablesOfFactor((GraphicalModel)arg1, (int)factorIndex) → int :

Get the number of variables which are connected to a factor

Args:

factorIndex: index to a factor in this gm
Returns:

The number of variables which are connected to the factor at factorIndex
operator

The operator of the graphical model as a string

Example:

Get the operator of a gm as string

>>> import opengm
>>> gm=opengm.gm([2]*5)
>>> gm.operator
'adder'
>>> gm=opengm.gm([2]*5,operator='adder')
>>> gm.operator
'adder'
>>> gm=opengm.gm([2]*5,operator='multiplier')
>>> gm.operator
'multiplier'
reserveFactors((GraphicalModel)arg1, (int)numberOfFactors) → None :

reserve space for factors.

This can speedup adding factors

Args:

numberOfFactors: the number of factor to reserve

Example:

Reserve some factors

>>> gm=opengm.gm([2]*10)
>>> gm.reserveFactors(10)
reserveFactorsVarialbeIndices((GraphicalModel)arg1, (int)size) → None :

reserve space for the factors' variable indices (stored in one std::vector for all factors).

This can speedup adding factors

Args:

size: total size of variable indices

Example:

Reserve space for the variable indices of 9 second-order factors

>>> gm=opengm.gm([2]*10)
>>> gm.reserveFactorsVarialbeIndices(9*2)
reserveFunctions((GraphicalModel)arg1, (int)numberOfFunctions, (str)functionTypeName) → None :

reserve space for functions of a certain type
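
A small sketch; the function-type name string 'explicit' used here is an assumption (it is meant to name the type of functions added as plain value tables / numpy arrays):

>>> import opengm
>>> import numpy
>>> gm=opengm.gm([2]*10)
>>> gm.reserveFunctions(50,'explicit')
>>> fid=gm.addFunction(numpy.ones([2,2]))
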

space((GraphicalModel)arg1) → Space :

Get the variable space of the graphical model

Returns:

A const reference to the space of the gm.

Example:

Get variable space

>>> gm=opengm.gm([2]*10)
>>> space=gm.space()
>>> len(space)
10
testf()
testf2()
variableIndices(factorIndices)

get the variable indices of all variables connected to the factors within factorIndices

Args:

factorIndices : factor indices w.r.t. the graphical model

Examples :

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(2, 2,4).astype(numpy.float64)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> variableIndicesNumpyArray=gm.variableIndices(factorIndices=[3,4])
>>> [vi for vi in variableIndicesNumpyArray]
[0, 2, 3]

Returns :

a sorted numpy.ndarray of all variable indices

Notes :

This function will be fastest if factorIndices is a numpy.ndarray. Otherwise a numpy array will be allocated and the elements of factorIndices will be copied.
variableOfFactor((GraphicalModel)arg1, (int)factorIndex, (int)variableIndex) → int :

Get the variable index of a variable which is connected to a factor.

Args:

factorIndex: index of a factor w.r.t the gm

variableIndex: index of a variable w.r.t the factor at factorIndex

Returns:
The variableIndex w.r.t. the gm of the factor at factorIndex
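
A minimal sketch combining numberOfVariablesOfFactor (documented above) with variableOfFactor:

>>> import opengm
>>> gm=opengm.gm([2]*4)
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],0.0,1.0))
>>> int(gm.addFactor(fid,[1,3]))
0
>>> int(gm.numberOfVariablesOfFactor(0))
2
>>> int(gm.variableOfFactor(0,0)),int(gm.variableOfFactor(0,1))
(1, 3)
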
variables(labels=None, minLabels=None, maxLabels=None)

generator object to iterate over all variable indices

Args:

labels : iterate only over variables with exactly labels labels if this argument is set (default: None)

minLabels : iterate only over variables which have at least minLabels labels if this argument is set (default: None)

maxLabels : iterate only over variables which have at most maxLabels labels if this argument is set (default: None)

Examples:

>>> import opengm
>>> # a graphical model with 6 variables, some variables with 2, 3 and 4 labels
>>> gm=opengm.gm([2,2,3,3,4,4])
>>> [vi for vi in gm.variables()]
[0, 1, 2, 3, 4, 5]
>>> [vi for vi in gm.variables(labels=3)]
[2, 3]
>>> [vi for vi in gm.variables(minLabels=3)]
[2, 3, 4, 5]
>>> [vi for vi in gm.variables(minLabels=2,maxLabels=3)]
[0, 1, 2, 3]
variablesAdjacency((GraphicalModel)arg1) → list :

generate variable adjacency
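
The docstring above is terse; the sketch below only checks the documented return type, since the per-entry layout of the adjacency structure is not specified here:

>>> import opengm
>>> gm=opengm.gm([2]*3)
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],0.0,1.0))
>>> int(gm.addFactor(fid,[0,1]))
0
>>> adj=gm.variablesAdjacency()
>>> isinstance(adj,list)
True
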

class opengm.adder.Factor((object)arg1) → None
__init__((object)arg1) → None
asIndependentFactor((Factor)arg1) → IndependentFactor
copyValues((Factor)arg1) → object :

Copy the value table of a factor to a newly allocated 1d-numpy array in last-coordinate-major-order

copyValuesSwitchedOrder((Factor)arg1) → object :

Copy the value table of a factor to a newly allocated 1d-numpy array in first-coordinate-major-order
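
A minimal sketch that only checks the length of the copied value table against the factor's size (the two orderings above differ only in how the coordinates are linearized):

>>> import opengm
>>> import numpy
>>> gm=opengm.gm([2,3])
>>> fid=gm.addFunction(numpy.ones([2,3]))
>>> int(gm.addFactor(fid,[0,1]))
0
>>> vals=gm[0].copyValues()
>>> int(vals.shape[0])==int(gm[0].size)
True
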

functionIndex

Get the function index of a factor, which indicates the index of the function this factor is connected to

functionType

Get the function type index of a factor, which indicates the type of the function this factor is connected to

isAbsoluteDifference((Factor)arg1) → bool
isGeneralizedPotts((Factor)arg1) → bool :

Check if the factor's value table can be written as a generalized Potts function

isPotts((Factor)arg1) → bool :

Check if the factor's value table can be written as a Potts function

isSquaredDifference((Factor)arg1) → bool
isSubmodular((Factor)arg1) → bool :

Check if the factor is submodular

isTruncatedAbsoluteDifference((Factor)arg1) → bool
isTruncatedSquaredDifference((Factor)arg1) → bool
max((Factor)arg1, (object)accVariables) → IndependentFactor :
Maximize / accumulate over some variables of the factor. These variables are given by accVariables.
The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a 1d numpy.ndarray
max( (Factor)arg1, (tuple)accVariables) -> IndependentFactor :
Maximize / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a tuple
max( (Factor)arg1, (list)accVariables) -> IndependentFactor :
Maximize / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a list
max( (Factor)arg1) -> float :
Get the maximum value of the factor (the maximum scalar in the factor's value table)
min((Factor)arg1, (object)accVariables) → IndependentFactor :
Minimize / accumulate over some variables of the factor. These variables are given by accVariables.
The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a 1d numpy.ndarray
min( (Factor)arg1, (tuple)accVariables) -> IndependentFactor :
Minimize / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a tuple
min( (Factor)arg1, (list)accVariables) -> IndependentFactor :
Minimize / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a list
min( (Factor)arg1) -> float :
Get the minimum value of the factor (the minimum scalar in the factor's value table)
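
A minimal sketch of the scalar overloads, using a Potts factor whose value table contains only 0.0 and 1.0:

>>> import opengm
>>> gm=opengm.gm([2,2])
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],0.0,1.0))
>>> int(gm.addFactor(fid,[0,1]))
0
>>> gm[0].min(),gm[0].max()
(0.0, 1.0)
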
numberOfLabels((Factor)arg1, (int)variableIndex) → int :

Get the number of labels for a variable of the factor:

gm=opengm.graphicalModel([2,3,4,5])
fid=gm.addFunction(numpy.ones([2,3],dtype=numpy.uint64))
factorIndex=gm.addFactor(fid,[0,1])
assert( gm[factorIndex].numberOfLabels(0)==2 )
assert( gm[factorIndex].numberOfLabels(1)==3 )
fid=gm.addFunction(numpy.ones([4,5],dtype=numpy.uint64))
factorIndex=gm.addFactor(fid,[2,4])
assert( gm[factorIndex].numberOfLabels(0)==4 )
assert( gm[factorIndex].numberOfLabels(1)==5 )
numberOfVariables

The number of variables which are connected to the factor:

#assuming gm,fid2 and fid3 exist:
factorIndex=gm.addFactor(fid2,[0,1])
assert( gm[factorIndex].numberOfVariables==2 )
factorIndex=gm.addFactor(fid3,[0,2,4])
assert( gm[factorIndex].numberOfVariables==3 )
product((Factor)arg1, (object)accVariables) → IndependentFactor :
Multiply / accumulate over some variables of the factor. These variables are given by accVariables.
The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a 1d numpy.ndarray
product( (Factor)arg1, (tuple)accVariables) -> IndependentFactor :
Multiply / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a tuple
product( (Factor)arg1, (list)accVariables) -> IndependentFactor :
Multiply / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a list
product( (Factor)arg1) -> float :
Get the product of all values of the factor
shape

Get the shape of a factor, which is a sequence of the numbers of labels of all variables which are connected to this factor

size

The number of entries in the factor’s value table:

gm=opengm.graphicalModel([2,2,2,2])
fid=gm.addFunction(numpy.ones([2,2],dtype=numpy.uint64))
factorIndex=gm.addFactor(fid,[0,1])
assert( gm[factorIndex].size==4 )
fid=gm.addFunction(numpy.ones([2,2,2],dtype=numpy.uint64))
factorIndex=gm.addFactor(fid,[0,1,2])
assert( gm[factorIndex].size==8 )
subFactor(fixedVars, fixedVarsLabels)

get the value table of a sub-factor where some variables of the factor have been fixed to a given label

Args:

fixedVars : a 1d-sequence of variable indices to fix w.r.t. the factor

fixedVarsLabels : a 1d-sequence of labels for the given indices in fixedVars

Example :

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(10, 10,4)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> factor2Order=gm[100]
>>> int(factor2Order.numberOfVariables)
2
>>> print factor2Order.shape
[4, 4, ]
>>> # fix the second variable index w.r.t. the factor to the label 3
>>> subValueTable = factor2Order.subFactor(fixedVars=[1],fixedVarsLabels=[3])
>>> subValueTable.shape
(4,)
>>> for x in range(4):
...     print factor2Order[x,3]==subValueTable[x]
True
True
True
True
sum((Factor)arg1, (object)accVariables) → IndependentFactor :
Integrate / accumulate over some variables of the factor. These variables are given by accVariables.
The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a 1d numpy.ndarray
sum( (Factor)arg1, (tuple)accVariables) -> IndependentFactor :
Integrate / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a tuple
sum( (Factor)arg1, (list)accVariables) -> IndependentFactor :
Integrate / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a list
sum( (Factor)arg1) -> float :
Get the sum of all values of the factor
variableIndices

Get the variable indices of a factor (the indices of all variables which are connected to this factor)

class opengm.adder.Movemaker((object)arg1, (GraphicalModel)arg2) → None :

Construct a movemaker from a graphical model

__init__( (object)arg1, (GraphicalModel)gm, (object)labels) -> object :

construct a movemaker from a graphical model and initialize movemaker with given labeling

Args:

gm : the graphical model

labels : the initial labeling (starting point)

Example:

>>> # assuming there is a graphical model named gm with ``'adder'`` as operator
>>> labels=numpy.zeros(gm.numberOfVariables,dtype=opengm.index_type)
>>> movemaker=opengm.adder.Movemaker(gm=gm,labels=labels)
__init__((object)arg1, (GraphicalModel)arg2) → None :

Construct a movemaker from a graphical model

__init__( (object)arg1, (GraphicalModel)gm, (object)labels) -> object :

construct a movemaker from a graphical model and initialize movemaker with given labeling

Args:

gm : the graphical model

labels : the initial labeling (starting point)

Example:

>>> # assuming there is a graphical model named gm with ``'adder'`` as operator
>>> labels=numpy.zeros(gm.numberOfVariables,dtype=opengm.index_type)
>>> movemaker=opengm.adder.Movemaker(gm=gm,labels=labels)
initalize((Movemaker)arg1, (object)labeling) → None :

initialize movemaker with a labeling

label((Movemaker)arg1, (int)vi) → int :

get the label for the given variable

move((Movemaker)arg1, (object)vis, (object)labels) → None :

doc todo

move( (Movemaker)arg1, (int)vis, (int)labels) -> None :
doc todo
moveOptimallyMax((Movemaker)arg1, (object)vis) → None :

doc todo

moveOptimallyMax( (Movemaker)arg1, (int)vi) -> int :
doc todo
moveOptimallyMin((Movemaker)arg1, (object)vis) → None :

doc todo

moveOptimallyMin( (Movemaker)arg1, (int)vi) -> int :
doc todo
reset((Movemaker)arg1) → None :

reset the movemaker

value((Movemaker)arg1) → float :

get the value (energy/probability) of the graphical model for the current labeling

valueAfterMove((Movemaker)arg1, (object)vis, (object)labels) → float :

doc todo

valueAfterMove( (Movemaker)arg1, (int)vis, (int)labels) -> float :
doc todo
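
A small end-to-end sketch (the dtype of the label array follows the example above; the Potts chain built here is only for illustration):

>>> import opengm
>>> import numpy
>>> gm=opengm.gm([2]*3)
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],0.0,1.0))
>>> int(gm.addFactor(fid,[0,1]))
0
>>> int(gm.addFactor(fid,[1,2]))
1
>>> labels=numpy.zeros(gm.numberOfVariables,dtype=opengm.index_type)
>>> mm=opengm.adder.Movemaker(gm=gm,labels=labels)
>>> mm.value()==gm.evaluate(labels)
True
>>> # energy if variable 1 were relabeled to 1, without changing the current state
>>> mm.valueAfterMove(1,1)==gm.evaluate([0,1,0])
True
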

opengm.multiplier Package

class opengm.multiplier.GraphicalModel((object)arg1) → None :

The central class of opengm which holds the factor graph and functions of the graphical model

Construct an empty graphical model with no variables

Example:

Construct an empty gm

>>> gm=opengm.adder.GraphicalModel()
>>> int(gm.numberOfVariables)
0
__init__( (object)arg1, (object)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm from any iterable python object where the iterable object holds the number of labels for each variable.

The gm will have as many variables as the length of the iterable sequence

Args:

numberOfLabels: holds the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable (default: 1).

This can speedup adding factors.

Example:

Construct a gm from generator expression

>>> gm=opengm.adder.GraphicalModel(2 for x in xrange(100))
>>> int(gm.numberOfVariables)
100
>>> int(gm.numberOfLabels(0))
2

Construct a gm from list and tuples

>>> gm=opengm.adder.GraphicalModel( [3]*10 )
>>> int(gm.numberOfVariables)
10
>>> int(gm.numberOfLabels(0))
3
>>> gm=opengm.adder.GraphicalModel( (2,4,6) )
>>> int(gm.numberOfVariables)
3
>>> int(gm.numberOfLabels(0))
2
>>> int(gm.numberOfLabels(1))
4
>>> int(gm.numberOfLabels(2))
6

And factors can be reserved for the variables

>>> gm=opengm.adder.GraphicalModel(numberOfLabels=[2]*10,reserveNumFactorsPerVariable=5)
>>> gm=opengm.adder.GraphicalModel(numberOfLabels=(2,2,2),reserveNumFactorsPerVariable=3)

Note:

__init__( (object)arg1, (object)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm from a 1d numpy ndarray which holds the number of labels for each variable.

The gm will have as many variables as the length of the ndarray

Args:

numberOfLabels: holds the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable (default: 1).

This can speedup adding factors.

Example:

Construct a gm from a 1d numpy ndarray

>>> import numpy
>>> gm=opengm.adder.GraphicalModel(numpy.ones(100,dtype=numpy.uint64)*4)
>>> int(gm.numberOfVariables)
100
>>> int(gm.numberOfLabels(0))
4

And factors can be reserved for the variables

>>> gm=opengm.adder.GraphicalModel(numpy.ones(100,dtype=numpy.uint64),reserveNumFactorsPerVariable=3)

Note:

__init__( (object)arg1, (int)numberOfVariables, (int)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm where each variable will have the same number of labels

The gm will have as many variables as given by numberOfVariables

Args:

numberOfVariables: is the number of variables for the gm

numberOfLabels: is the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable.

This can speedup adding factors.

Example:

Construct a gm with 10 variables each having 2 possible labels:

>>> gm=opengm.adder.GraphicalModel(numberOfVariables=10,numberOfLabels=2)
>>> gm.numberOfVariables
10
__init__( (object)arg1, (IndexVector)arg2, (int)numberOfLabels) -> object :

Construct a gm from an opengm.IndexVector

Args:

numberOfLabels: holds the number of labels for each variable
__init__((object)arg1) → None :

Construct an empty graphical model with no variables

Example:

Construct an empty gm

>>> gm=opengm.adder.GraphicalModel()
>>> int(gm.numberOfVariables)
0
__init__( (object)arg1, (object)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm from any iterable python object where the iterable object holds the number of labels for each variable.

The gm will have as many variables as the length of the iterable sequence

Args:

numberOfLabels: holds the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable (default: 1).

This can speedup adding factors.

Example:

Construct a gm from generator expression

>>> gm=opengm.adder.GraphicalModel(2 for x in xrange(100))
>>> int(gm.numberOfVariables)
100
>>> int(gm.numberOfLabels(0))
2

Construct a gm from list and tuples

>>> gm=opengm.adder.GraphicalModel( [3]*10 )
>>> int(gm.numberOfVariables)
10
>>> int(gm.numberOfLabels(0))
3
>>> gm=opengm.adder.GraphicalModel( (2,4,6) )
>>> int(gm.numberOfVariables)
3
>>> int(gm.numberOfLabels(0))
2
>>> int(gm.numberOfLabels(1))
4
>>> int(gm.numberOfLabels(2))
6

And factors can be reserved for the variables

>>> gm=opengm.adder.GraphicalModel(numberOfLabels=[2]*10,reserveNumFactorsPerVariable=5)
>>> gm=opengm.adder.GraphicalModel(numberOfLabels=(2,2,2),reserveNumFactorsPerVariable=3)

Note:

__init__( (object)arg1, (object)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm from a 1d numpy ndarray which holds the number of labels for each variable.

The gm will have as many variables as the length of the ndarray

Args:

numberOfLabels: holds the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable (default: 1).

This can speedup adding factors.

Example:

Construct a gm from a 1d numpy ndarray

>>> import numpy
>>> gm=opengm.adder.GraphicalModel(numpy.ones(100,dtype=numpy.uint64)*4)
>>> int(gm.numberOfVariables)
100
>>> int(gm.numberOfLabels(0))
4

And factors can be reserved for the variables

>>> gm=opengm.adder.GraphicalModel(numpy.ones(100,dtype=numpy.uint64),reserveNumFactorsPerVariable=3)

Note:

__init__( (object)arg1, (int)numberOfVariables, (int)numberOfLabels [, (int)reserveNumFactorsPerVariable=1]) -> object :

Construct a gm where each variable will have the same number of labels

The gm will have as many variables as given by numberOfVariables

Args:

numberOfVariables: is the number of variables for the gm

numberOfLabels: is the number of labels for each variable

reserveNumFactorsPerVariable: reserve a certain number of factors for each variable.

This can speedup adding factors.

Example:

Construct a gm with 10 variables each having 2 possible labels:

>>> gm=opengm.adder.GraphicalModel(numberOfVariables=10,numberOfLabels=2)
>>> gm.numberOfVariables
10
__init__( (object)arg1, (IndexVector)arg2, (int)numberOfLabels) -> object :

Construct a gm from an opengm.IndexVector

Args:

numberOfLabels: holds the number of labels for each variable
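
Since this class uses the multiplier operator, evaluate returns the product of the factor values rather than their sum; a minimal sketch:

>>> import opengm
>>> import numpy
>>> gm=opengm.multiplier.GraphicalModel([2,2])
>>> fid=gm.addFunction(numpy.array([0.5,2.0]))
>>> int(gm.addFactor(fid,[0]))
0
>>> fid=gm.addFunction(numpy.array([3.0,4.0]))
>>> int(gm.addFactor(fid,[1]))
1
>>> gm.evaluate([1,0])==2.0*3.0
True
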
addFactor(fid, variableIndices, finalize=True)

add a factor to the graphical model

Args:

fid : function identifier

variableIndices : indices of the variables the factor is connected to, w.r.t. the graphical model. The variable indices have to be sorted.

Examples:

>>> import opengm
>>> # a graphical model with 6 variables, some variables with 2, 3 and 4 labels
>>> gm=opengm.gm([2,2,3,3,4,4])
>>> # Add unary function and factor ( factor which is connect to 1 variable )
>>> # - add function ( a random function with 2 entries in the value table)
>>> fid =   gm.addFunction(opengm.randomFunction(shape=[2]))
>>> # - connect function and variables to factor 
>>> int(gm.addFactor(fid=fid,variableIndices=0))
0
addFactors(fids, variableIndices, finalize=True)
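
addFactors adds many factors at once; in the sketch below the variable indices are given as a 2d numpy array of opengm.index_type with one row per factor (this layout is an assumption):

>>> import opengm
>>> import numpy
>>> gm=opengm.gm([2]*4)
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],0.0,1.0))
>>> vis=numpy.array([[0,1],[1,2],[2,3]],dtype=opengm.index_type)
>>> newFactorIds=gm.addFactors(fid,vis)
>>> int(gm.numberOfFactors)
3
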
addFunction(function)

Adds a function to the graphical model.

Args:
function: a function/ value table
Returns:

A function identifier (fid).

This fid is used to connect a factor to this function

Examples:

Explicit functions added via numpy ndarrays:

>>> import opengm
>>> import numpy
>>> # Add a first-order function with shape [3]
>>> gm=opengm.graphicalModel([3,3,3,4,4,4,5,5,2,2])
>>> f=numpy.array([0.8,1.4,0.1])
>>> fid=gm.addFunction(f)
>>> print fid.functionIndex
0
>>> print fid.functionType
0
>>> # Add a second-order function with shape [4,4]
>>> f=numpy.ones([4,4])
>>> #fill the function with values
>>> #..........
>>> fid=gm.addFunction(f)
>>> int(fid.functionIndex),int(fid.functionType)
(1, 0)
>>> # Add a third-order function with shape [4,5,2]
>>> f=numpy.ones([4,5,2])
>>> #fill the function with values
>>> #..........
>>> fid=gm.addFunction(f)
>>> print fid.functionIndex
2
>>> print fid.functionType
0

Potts functions:

>>> import opengm
>>> gm=opengm.gm([2,2,3,3,3,4,4,4])
>>> # 2-order potts function 
>>> f=opengm.pottsFunction(shape=[2,2],valueEqual=0.0,valueNotEqual=1.0)
>>> f[0,0],f[1,0],f[0,1],f[1,1]
(0.0, 1.0, 1.0, 0.0)
>>> fid=gm.addFunction(f)
>>> int(fid.functionIndex),int(fid.functionType)
(0, 1)
>>> # connect a second order factor to variable 0 and 1 and the potts function
>>> int(gm.addFactor(fid,[0,1]))
0
>>> # higher order potts function
>>> f=opengm.pottsFunction(shape=[2,3,4],valueEqual=0.0,valueNotEqual=2.0)
>>> f[0,0,0],f[1,0,0],f[1,1,1],f[1,1,3]
(0.0, 2.0, 0.0, 2.0)
>>> fid=gm.addFunction(f)
>>> int(fid.functionIndex),int(fid.functionType)
(0, 2)
>>> # connect a third order factor to variable 0,2 and 5 and the potts function
>>> int(gm.addFactor(fid,(0,2,5)))
1
Notes:
addFunctions(functions)
assign((GraphicalModel)arg1, (object)numberOfLabels) → None :

Assign a graphical model from any number of labels sequence

Args:

numberOfLabels: holds the number of labels for each variable

Note:

assign( (GraphicalModel)arg1, (object)numberOfLabels) -> None :

Assign a graphical model from number of labels sequence which is a 1d numpy.ndarray

Args:

numberOfLabels: holds the number of labels for each variable

Note:

assign( (GraphicalModel)arg1, (IndexVector)numberOfLabels) -> None :

Assign a graphical model from number of labels sequence which is a opengm.IndexVector

Args:

numberOfLabels: holds the number of labels for each variable

Note:

connectedComponentsFromLabels(labels)
evaluate(labels)

evaluate a labeling to get the energy / probability of that given labeling

Args:

labels : a labeling for all variables of the graphical model

Examples:

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(2, 2,4)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> energy=gm.evaluate([0,2,2,1])
factorClass

Get the class of the factor of this gm

Example :
>>> import opengm
>>> gm=opengm.gm([2]*10)
>>> # fill gm with factors...
>>> result=gm.vectorizedFactorFunction(gm.factorClass.isSubmodular,range(gm.numberOfFactors))
factorIds(order=None, minOrder=None, maxOrder=None)
factorIndices(variableIndices)

get the factor indices of all factors connected to variables within variableIndices

Args:

variableIndices : variable indices w.r.t. the graphical model

Examples :

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(2, 2,4)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> factorIndicesNumpyArray=gm.factorIndices(variableIndices=[0,1])
>>> [fi for fi in factorIndicesNumpyArray]
[0, 1, 4, 5, 6]

Returns :

a sorted numpy.ndarray of all factor indices

Notes :

This function will be fastest if variableIndices is a numpy.ndarray. Otherwise a numpy array will be allocated and the elements of variableIndices will be copied.
factorOfVariable((GraphicalModel)arg1, (int)variableIndex, (int)factorIndex) → int :

Get the factor index of a factor which is connected to the variable at variableIndex.

Args:

variableIndex: index of a variable w.r.t the gm

factorIndex: index of a factor w.r.t. the number of factors which are connected to this variable

Returns:
The factor index w.r.t. the gm
factorSubset(factorIndices=None, order=None)
factors(order=None, minOrder=None, maxOrder=None)
factorsAndIds(order=None, minOrder=None, maxOrder=None)
factorsOfVariable((GraphicalModel)arg1, (int)variableIndex) → FactorsOfVariable :

Get the sequence of factor indices (w.r.t. the graphical model) of all factors connected to the variable at variableIndex

Args:

variableIndex: index of a variable w.r.t the gm

Returns: A sequence of factor indices (w.r.t. the graphical model) of all factors connected to the variable at variableIndex

finalize((GraphicalModel)arg1) → None :

finalize the graphical model after adding all factors

this method must be called if any non-finalized factor has been added (addFactor / addFactors with finalize=False)

fixVariables(variableIndices, labels)

return a new graphical model where some variables are fixed to a given label.

Args:
variableIndices : variable indices to fix

labels : labels of the variables to fix
Returns:
new graphical model where variables are fixed.
isAcyclic((GraphicalModel)arg1) → bool :

check if the graphical model is acyclic.

Returns:

True if model has no loops / is acyclic

False if model has loops / is not acyclic

moveLocalOpt((GraphicalModel)arg1, (str)arg2) → object
numberOfFactors

Number of factors of the graphical model

Example:

Get the number of factors of a gm

>>> import opengm
>>> gm=opengm.gm([2]*5)
>>> int(gm.numberOfFactors)
0
>>> fid=gm.addFunction(opengm.PottsFunction([2,2],1.0,0.0))
>>> int(gm.addFactor(fid,[0,1]))
0
>>> int(gm.numberOfFactors)
1
>>> int(gm.addFactor(fid,[1,2]))
1
>>> int(gm.numberOfFactors)
2
numberOfFactorsOfVariable((GraphicalModel)arg1, (int)variableIndex) → int :

Get the number of factors which are connected to a variable

Args:

variableIndex: index of a variable w.r.t. the gm
Returns:

The number of factors which are connected to the variable at variableIndex
numberOfLabels((GraphicalModel)arg1, (int)variableIndex) → int :

Get the number of labels for a variable

Args:

variableIndex: index to a variable in this gm
Returns:
The number of labels for the variable at variableIndex
numberOfVariables

Number of variables of the graphical model

Example:

Get the number of variables of a gm

>>> import opengm
>>> gm=opengm.gm([2]*5)
>>> int(gm.numberOfVariables)
5
numberOfVariablesOfFactor((GraphicalModel)arg1, (int)factorIndex) → int :

Get the number of variables which are connected to a factor

Args:

factorIndex: index to a factor in this gm
Returns:

The number of variables which are connected to the factor at factorIndex
operator

The operator of the graphical model as a string

Example:

Get the operator of a gm as string

>>> import opengm
>>> gm=opengm.gm([2]*5)
>>> gm.operator
'adder'
>>> gm=opengm.gm([2]*5,operator='adder')
>>> gm.operator
'adder'
>>> gm=opengm.gm([2]*5,operator='multiplier')
>>> gm.operator
'multiplier'
reserveFactors((GraphicalModel)arg1, (int)numberOfFactors) → None :

reserve space for factors.

This can speedup adding factors

Args:

numberOfFactors: the number of factor to reserve

Example:

Reserve some factors

>>> gm=opengm.gm([2]*10)
>>> gm.reserveFactors(10)
reserveFactorsVarialbeIndices((GraphicalModel)arg1, (int)size) → None :

reserve space for the factors' variable indices (stored in one std::vector for all factors).

This can speedup adding factors

Args:

size: total size of variable indices

Example:

Reserve space for the variable indices of 9 second-order factors

>>> gm=opengm.gm([2]*10)
>>> gm.reserveFactorsVarialbeIndices(9*2)
reserveFunctions((GraphicalModel)arg1, (int)numberOfFunctions, (str)functionTypeName) → None :

reserve space for functions of a certain type

space((GraphicalModel)arg1) → Space :

Get the variable space of the graphical model

Returns:

A const reference to the space of the gm.

Example:

Get variable space

>>> gm=opengm.gm([2]*10)
>>> space=gm.space()
>>> len(space)
10
testf()
testf2()
variableIndices(factorIndices)

get the variable indices of all variables connected to the factors within factorIndices

Args:

factorIndices : factor indices w.r.t. the graphical model

Examples :

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(2, 2,4).astype(numpy.float64)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> variableIndicesNumpyArray=gm.variableIndices(factorIndices=[3,4])
>>> [vi for vi in variableIndicesNumpyArray]
[0, 2, 3]

Returns :

a sorted numpy.ndarray of all variable indices

Notes :

This function will be fastest if factorIndices is a numpy.ndarray. Otherwise a numpy array will be allocated and the elements of factorIndices will be copied.
variableOfFactor((GraphicalModel)arg1, (int)factorIndex, (int)variableIndex) → int :

Get the variable index of a variable which is connected to a factor.

Args:

factorIndex: index of a factor w.r.t the gm

variableIndex: index of a variable w.r.t the factor at factorIndex

Returns:
The variableIndex w.r.t. the gm of the factor at factorIndex
variables(labels=None, minLabels=None, maxLabels=None)

generator object to iterate over all variable indices

Args:

labels : iterate only over variables with exactly labels labels if this argument is set (default: None)

minLabels : iterate only over variables which have at least minLabels labels if this argument is set (default: None)

maxLabels : iterate only over variables which have at most maxLabels labels if this argument is set (default: None)

Examples:

>>> import opengm
>>> # a graphical model with 6 variables, some variables with 2, 3 and 4 labels
>>> gm=opengm.gm([2,2,3,3,4,4])
>>> [vi for vi in gm.variables()]
[0, 1, 2, 3, 4, 5]
>>> [vi for vi in gm.variables(labels=3)]
[2, 3]
>>> [vi for vi in gm.variables(minLabels=3)]
[2, 3, 4, 5]
>>> [vi for vi in gm.variables(minLabels=2,maxLabels=3)]
[0, 1, 2, 3]
variablesAdjacency((GraphicalModel)arg1) → list :

generate variable adjacency

class opengm.multiplier.Factor((object)arg1) → None
__init__((object)arg1) → None
asIndependentFactor((Factor)arg1) → IndependentFactor
copyValues((Factor)arg1) → object :

Copy the value table of a factor to a newly allocated 1d-numpy array in last-coordinate-major-order

copyValuesSwitchedOrder((Factor)arg1) → object :

Copy the value table of a factor to a newly allocated 1d-numpy array in first-coordinate-major-order

functionIndex

Get the function index of a factor, which indicates the index of the function this factor is connected to

functionType

Get the function type index of a factor, which indicates the type of the function this factor is connected to

isAbsoluteDifference((Factor)arg1) → bool
isGeneralizedPotts((Factor)arg1) → bool :

Check if the factor's value table can be written as a generalized Potts function

isPotts((Factor)arg1) → bool :

Check if the factor's value table can be written as a Potts function

isSquaredDifference((Factor)arg1) → bool
isSubmodular((Factor)arg1) → bool :

Check if the factor is submodular

isTruncatedAbsoluteDifference((Factor)arg1) → bool
isTruncatedSquaredDifference((Factor)arg1) → bool
max((Factor)arg1, (object)accVariables) → IndependentFactor :
Maximize / accumulate over some variables of the factor. These variables are given by accVariables.
The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a 1d numpy.ndarray
max( (Factor)arg1, (tuple)accVariables) -> IndependentFactor :
Maximize / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a tuple
max( (Factor)arg1, (list)accVariables) -> IndependentFactor :
Maximize / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a list
max( (Factor)arg1) -> float :
Get the maximum value of the factor (the maximum scalar in the factor's value table)
min((Factor)arg1, (object)accVariables) → IndependentFactor :
Minimize / accumulate over some variables of the factor. These variables are given by accVariables.
The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a 1d numpy.ndarray
min( (Factor)arg1, (tuple)accVariables) -> IndependentFactor :
Minimize / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a tuple
min( (Factor)arg1, (list)accVariables) -> IndependentFactor :
Minimize / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a list
min( (Factor)arg1) -> float :
Get the minimum value of the factor (the minimum scalar in the factor's value table)
numberOfLabels((Factor)arg1, (int)variableIndex) → int :

Get the number of labels for a variable of the factor:

gm=opengm.graphicalModel([2,3,4,5])
fid=gm.addFunction(numpy.ones([2,3],dtype=numpy.uint64))
factorIndex=gm.addFactor(fid,[0,1])
assert( gm[factorIndex].numberOfLabels(0)==2 )
assert( gm[factorIndex].numberOfLabels(1)==3 )
fid=gm.addFunction(numpy.ones([4,5],dtype=numpy.uint64))
factorIndex=gm.addFactor(fid,[2,4])
assert( gm[factorIndex].numberOfLabels(0)==4 )
assert( gm[factorIndex].numberOfLabels(1)==5 )
numberOfVariables

The number of variables which are connected to the factor:

#assuming gm,fid2 and fid3 exist:
factorIndex=gm.addFactor(fid2,[0,1])
assert( gm[factorIndex].numberOfVariables==2 )
factorIndex=gm.addFactor(fid3,[0,2,4])
assert( gm[factorIndex].numberOfVariables==3 )
product((Factor)arg1, (object)accVariables) → IndependentFactor :
Multiply / accumulate over some variables of the factor. These variables are given by accVariables.
The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a 1d numpy.ndarray
product( (Factor)arg1, (tuple)accVariables) -> IndependentFactor :
Multiply / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a tuple
product( (Factor)arg1, (list)accVariables) -> IndependentFactor :
Multiply / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a list
product( (Factor)arg1) -> float :
Get the product of all values of the factor
shape

Get the shape of a factor, which is a sequence of the numbers of labels of all variables which are connected to this factor

size

The number of entries in the factor’s value table:

gm=opengm.graphicalModel([2,2,2,2])
fid=gm.addFunction(numpy.ones([2,2],dtype=numpy.uint64))
factorIndex=gm.addFactor(fid,[0,1])
assert( gm[factorIndex].size==4 )
fid=gm.addFunction(numpy.ones([2,2,2],dtype=numpy.uint64))
factorIndex=gm.addFactor(fid,[0,1,2])
assert( gm[factorIndex].size==8 )
subFactor(fixedVars, fixedVarsLabels)

get the value table of a sub-factor where some variables of the factor have been fixed to a given label

Args:

fixedVars : a 1d-sequence of variable indices to fix w.r.t. the factor

fixedVarsLabels : a 1d-sequence of labels for the given indices in fixedVars

Example :

>>> import opengm
>>> import numpy
>>> unaries=numpy.random.rand(10, 10,4)
>>> gm=opengm.grid2d2Order(unaries=unaries,regularizer=opengm.pottsFunction([4,4],0.0,0.4))
>>> factor2Order=gm[100]
>>> int(factor2Order.numberOfVariables)
2
>>> print factor2Order.shape
[4, 4, ]
>>> # fix the second variable index w.r.t. the factor to the label 3
>>> subValueTable = factor2Order.subFactor(fixedVars=[1],fixedVarsLabels=[3])
>>> subValueTable.shape
(4,)
>>> for x in range(4):
...     print factor2Order[x,3]==subValueTable[x]
True
True
True
True
sum((Factor)arg1, (object)accVariables) → IndependentFactor :
Integrate / accumulate over some variables of the factor. These variables are given by accVariables.
The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a 1d numpy.ndarray
sum( (Factor)arg1, (tuple)accVariables) -> IndependentFactor :
Integrate / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a tuple
sum( (Factor)arg1, (list)accVariables) -> IndependentFactor :
Integrate / accumulate over some variables of the factor. These variables are given by accVariables. The result is an IndependentFactor. This IndependentFactor is only connected to the factor's variables which were not in accVariables. In this overload accVariables has to be a list
sum( (Factor)arg1) -> float :
Get the sum of all values of the factor
variableIndices

Get the variable indices of a factor (the indices of all variables which are connected to this factor)

class opengm.multiplier.Movemaker((object)arg1, (GraphicalModel)arg2) → None :

Construct a movemaker from a graphical model

__init__( (object)arg1, (GraphicalModel)gm, (object)labels) -> object :

construct a movemaker from a graphical model and initialize movemaker with given labeling

Args:

gm : the graphical model

labels : the initial labeling (starting point)

Example:

>>> # assuming there is a graphical model named gm with ``'multiplier'`` as operator
>>> labels=numpy.zeros(gm.numberOfVariables,dtype=opengm.index_type)
>>> movemaker=opengm.multiplier.Movemaker(gm=gm,labels=labels)
__init__((object)arg1, (GraphicalModel)arg2) → None :

Construct a movemaker from a graphical model

__init__( (object)arg1, (GraphicalModel)gm, (object)labels) -> object :

construct a movemaker from a graphical model and initialize movemaker with given labeling

Args:

gm : the graphical model

labels : the initial labeling (starting point)

Example:

>>> # assuming there is a graphical model named gm with ``'multiplier'`` as operator
>>> labels=numpy.zeros(gm.numberOfVariables,dtype=opengm.index_type)
>>> movemaker=opengm.multiplier.Movemaker(gm=gm,labels=labels)
initalize((Movemaker)arg1, (object)labeling) → None :

initialize movemaker with a labeling

label((Movemaker)arg1, (int)vi) → int :

get the label for the given variable

move((Movemaker)arg1, (object)vis, (object)labels) → None :

doc todo

move( (Movemaker)arg1, (int)vis, (int)labels) -> None :
doc todo
moveOptimallyMax((Movemaker)arg1, (object)vis) → None :

doc todo

moveOptimallyMax( (Movemaker)arg1, (int)vi) -> int :
doc todo
moveOptimallyMin((Movemaker)arg1, (object)vis) → None :

doc todo

moveOptimallyMin( (Movemaker)arg1, (int)vi) -> int :
doc todo
reset((Movemaker)arg1) → None :

reset the movemaker

value((Movemaker)arg1) → float :

get the value (energy/probability) of the graphical model for the current labeling

valueAfterMove((Movemaker)arg1, (object)vis, (object)labels) → float :

doc todo

valueAfterMove( (Movemaker)arg1, (int)vis, (int)labels) -> float :
doc todo