elektronn.net package

elektronn.net.convlayer2d module

elektronn.net.convlayer2d.getOutputShape(insh, fsh, pool, mfp, r=1)[source]

Returns shape of convolution result from (bs, ch, x, y) * (nof, ch, xf, yf)
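For orientation, the valid-convolution arithmetic behind this helper can be sketched as follows (a simplified re-implementation for illustration only; the real function additionally handles MFP and the dilation factor r):

>>> def output_shape_2d(insh, fsh, pool):
...     # 'valid' convolution shrinks each spatial axis by (filter - 1),
...     # pooling then divides the result by the pool factor
...     bs, ch, x, y = insh           # (batch, channels, height, width)
...     nof, _, xf, yf = fsh          # (num filters, channels, filter h, filter w)
...     return (bs, nof, (x - xf + 1) // pool[0], (y - yf + 1) // pool[1])
>>> output_shape_2d((1, 1, 100, 100), (16, 1, 5, 5), (2, 2))
(1, 16, 48, 48)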

elektronn.net.convlayer2d.getProbShape(output_shape, mfp_strides)[source]

Given output_shape (bs, ch, x, y) and mfp_strides (sx, sy), returns the shape of the class-probability output

class elektronn.net.convlayer2d.ConvLayer2d(input, input_shape, filter_shape, pool, activation_func, enable_dropout, use_fragment_pooling, reshape_output, mfp_offsets, mfp_strides, input_layer=None, W=None, b=None, pooling_mode='max')[source]

Bases: object

Conv-Pool Layer of a CNN

Parameters:
  • input (theano.tensor.dtensor4 ('batch', 'channel', x, y)) – symbolic image tensor, of shape input_shape
  • input_shape (tuple or list of length 4) – (batch size, num input feature maps, image height, image width)
  • filter_shape (tuple or list of length 4) – (number of filters, input_channels, filter height,filter width)
  • pool (int 2-tuple) – the down-sampling (max-pooling) factor
  • activation_func (string) – Options: tanh, relu, sig, abs, linear, maxout <i>
  • enable_dropout (Bool) – whether to enable dropout in this layer. The default rate is 0.5, but it can be changed with self.activation_noise.set_value(np.float32(p)) or using cnn.setDropoutRates
  • use_fragment_pooling (Bool) – whether to use max fragment pooling (MFP) in this layer
  • reshape_output (Bool) – whether to reshape class_probabilities to (bs, cls, x, y) and re-assemble fragments to dense images if MFP was enabled. Use this for the last layer.
  • mfp_offsets (list of list of ints) – this list specifies the offsets that the MFP-fragments have w.r.t. the original patch. Only needed if MFP is enabled.
  • mfp_strides (list of int) – the strides of the output in each dimension
  • input_layer (layer object) – just for keeping track of unusual input layers
  • W (np.ndarray or T.TensorVariable) – weight matrix. If an array, the values are used to initialise a shared variable for this layer. If a TensorVariable, then this variable is used directly (weight sharing with the layer from which the variable comes)
  • b (np.ndarray or T.TensorVariable) – bias vector. If an array, the values are used to initialise a shared variable for this layer. If a TensorVariable, then this variable is used directly (weight sharing with the layer from which the variable comes)
  • pooling_mode (str) – ‘max’ or ‘maxabs’, where the first is normal max-pooling and the second also retains the sign of large negative values
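A minimal construction sketch (assuming a standard Theano setup; since MFP is disabled here, empty lists are passed for mfp_offsets and mfp_strides — the exact placeholder value is an assumption):

>>> import theano.tensor as T
>>> from elektronn.net.convlayer2d import ConvLayer2d
>>> x = T.tensor4('x')                       # symbolic (bs, ch, x, y) input
>>> layer = ConvLayer2d(x, (1, 1, 100, 100), (16, 1, 5, 5), (2, 2), 'relu',
...                     enable_dropout=False, use_fragment_pooling=False,
...                     reshape_output=False, mfp_offsets=[], mfp_strides=[])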
fragmentpool(conv_out, pool, offsets, strides, pool_func)[source]
reshapeoutput(sh)[source]
fragmentstodense(sh)[source]
randomizeWeights(scale='glorot', mode='uni')[source]
gaborInitialisation()[source]
NLL(y, class_weights=None, mask_class_labeled=None, mask_class_not_present=None, label_prop_thresh=None)[source]

Returns the symbolic mean and instance-wise negative log-likelihood of the prediction of this model under a given target distribution.

y: theano.tensor.TensorType
corresponds to a vector that gives for each example the correct label. Labels < 0 are ignored (e.g. can be used for label propagation)
class_weights: theano.tensor.TensorType
weight vector of float32 of length n_lab. Values: 1.0 (default), w < 1.0 (less important), w > 1.0 (more important class)

The following refers to lazy labels; the masks are always on a per-patch basis, depending on the origin cube of the patch. The masks are properties of the individual image cubes and must be loaded into CNNData.

mask_class_labeled: theano.tensor.TensorType
shape = (batchsize, num_classes). Binary masks indicating whether a class is properly labeled in y. If a class k is (in general) present in the image patches and mask_class_labeled[k]==1, then the labels must obey y==k for all pixels where the class is present. If a class k is present in the image but was not labeled (-> cheaper labels), set mask_class_labeled[k]=0. Then all pixels for which y==k will be ignored. Alternative: set y=-1 to ignore those pixels. Limit case: mask_class_labeled[:]==1 will result in the ordinary NLL.
mask_class_not_present: theano.tensor.TensorType
shape = (batchsize, num_classes). Binary mask indicating whether a class is present in the image patches. mask_class_not_present[k]==1 means that the image does not contain examples of class k. Then for all pixels in the patch, class k predictive probabilities are trained towards 0. Limit case: mask_class_not_present[:]==0 will result in the ordinary NLL.
label_prop_thresh: float (0.5,1)
This threshold allows unsupervised label propagation (only for examples with negative/ignore labels). If the predictive probability of the most likely class exceeds the threshold, this class is assumed to be the correct label and the training is pushed in this direction. Should only be used with pre-trained networks, and values <= 0.5 are disabled.

Examples:

  • A cube contains no class k. Instead of labelling the remaining classes, they can be marked as unlabelled by the first mask (mask_class_labeled[:]==0; whether mask_class_labeled[k] is 0 or 1 is actually irrelevant, because the labels should not be y==k anyway in this case). Additionally, set mask_class_not_present[k]==1 (otherwise 0) to suppress predictions of k in this patch (see the numpy sketch below). The actual value of the labels is irrelevant: it can either be -1, or it can be the background class if the background is marked as unlabelled (those labels are then ignored).
  • Only part of the cube is densely labelled. Set mask_class_labeled[:]=1 for all classes, but set the label values in the unlabelled part to -1 to ignore this part.
  • Only a particular class k is labelled in the cube. Either set all other label pixels to -1, or set the corresponding flags in mask_class_labeled to 0 for the unlabelled classes.

Note

Using -1 labels or declaring a class as not labelled is somewhat redundant and supported only for convenience.
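As an illustration of the first example above, the masks could be built like this (a hedged numpy sketch; int16 is assumed to match the aux arrays documented for trainingStep):

>>> import numpy as np
>>> batchsize, num_classes, k = 1, 4, 2    # class k is known to be absent
>>> mask_class_labeled = np.zeros((batchsize, num_classes), np.int16)
>>> mask_class_not_present = np.zeros((batchsize, num_classes), np.int16)
>>> mask_class_not_present[:, k] = 1       # train p(k) towards 0 in this patch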

NLL_weak(y, class_weights=None, mask_class_labeled=None, mask_class_not_present=None, label_prop_thresh=None)[source]

NLL that mixes the current CNN output and the hard labels as the target

squared_distance(y)[source]

Returns squared distance between prediction and y

Parameters:y (theano.tensor.TensorType) – corresponds to a vector that gives for each example the correct label
errors(y)[source]

Returns classification accuracy

Parameters:y (theano.tensor.TensorType) – corresponds to a vector that gives for each example the correct label

elektronn.net.convlayer3d module

elektronn.net.convlayer3d.getOutputShape(insh, fsh, pool, mfp, r=1)[source]

Returns shape of convolution result from (bs, z, ch, x, y) * (nof, zf, ch, xf, yf)

elektronn.net.convlayer3d.getProbShape(output_shape, mfp_strides)[source]

Given output_shape (bs, z, ch, x, y) and mfp_strides, returns the shape of the class-probability output

class elektronn.net.convlayer3d.ConvLayer3d(input, input_shape, filter_shape, pool, activation_func, enable_dropout, use_fragment_pooling, reshape_output, mfp_offsets, mfp_strides, input_layer=None, W=None, b=None, pooling_mode='max', affinity=False)[source]

Bases: object

Conv-Pool Layer of a CNN

Parameters:
  • input (theano.tensor.dtensor5 ('batch', z, 'channel', x, y)) – symbolic image tensor, of shape input_shape
  • input_shape (tuple or list of length 5) – (batch size, z, num input feature maps, y, x)
  • filter_shape (tuple or list of length 5) – (number of filters, filter z, num input feature maps, filter y, filter x)
  • pool (int 3-tuple) – the down-sampling (max-pooling) factor
  • activation_func (string) – Options: tanh, relu, sig, abs, linear, maxout <i>
  • enable_dropout (Bool) – whether to enable dropout in this layer. The default rate is 0.5, but it can be changed with self.activation_noise.set_value(np.float32(p)) or using cnn.setDropoutRates
  • use_fragment_pooling (Bool) – whether to use max fragment pooling (MFP) in this layer
  • reshape_output (Bool) – whether to reshape class_probabilities to (bs, cls, x, y) and re-assemble fragments to dense images if MFP was enabled. Use this for the last layer.
  • mfp_offsets (list of list of ints) – this list specifies the offsets that the MFP-fragments have w.r.t. the original patch. Only needed if MFP is enabled.
  • mfp_strides (list of int) – the strides of the output in each dimension
  • input_layer (layer object) – just for keeping track of unusual input layers
  • W (np.ndarray or T.TensorVariable) – weight matrix. If an array, the values are used to initialise a shared variable for this layer. If a TensorVariable, then this variable is used directly (weight sharing with the layer from which the variable comes)
  • b (np.ndarray or T.TensorVariable) – bias vector. If an array, the values are used to initialise a shared variable for this layer. If a TensorVariable, then this variable is used directly (weight sharing with the layer from which the variable comes)
  • pooling_mode (str) – ‘max’ or ‘maxabs’, where the first is normal max-pooling and the second also retains the sign of large negative values
fragmentpool(conv_out, pool, offsets, strides, pool_func)[source]
fragmentstodense(sh)[source]
reshapeoutput(sh)[source]
randomizeWeights(scale='glorot', mode='uni')[source]
NLL(y, class_weights=None, example_weights=None, mask_class_labeled=None, mask_class_not_present=None, label_prop_thresh=None)[source]

Returns the symbolic mean and instance-wise negative log-likelihood of the prediction of this model under a given target distribution.

y: theano.tensor.TensorType
corresponds to a vector that gives for each example the correct label. Labels < 0 are ignored (e.g. can be used for label propagation)
class_weights: theano.tensor.TensorType
weight vector of float32 of length n_lab. Values: 1.0 (default), w < 1.0 (less important), w > 1.0 (more important class)
example_weights: theano.tensor.TensorType
weight vector of float32 of shape (bs, z, x, y) that can give the individual examples (i.e. the labels for output pixels) different weights. Values: 1.0 (default), w < 1.0 (less important), w > 1.0 (more important example). Note: if this is not normalised/bounded, it may result in an effectively modified learning rate!

The following refers to lazy labels; the masks are always on a per-patch basis, depending on the origin cube of the patch. The masks are properties of the individual image cubes and must be loaded into CNNData.

mask_class_labeled: theano.tensor.TensorType
shape = (batchsize, num_classes). Binary masks indicating whether a class is properly labeled in y. If a class k is (in general) present in the image patches and mask_class_labeled[k]==1, then the labels must obey y==k for all pixels where the class is present. If a class k is present in the image but was not labeled (-> cheaper labels), set mask_class_labeled[k]=0. Then all pixels for which y==k will be ignored. Alternative: set y=-1 to ignore those pixels. Limit case: mask_class_labeled[:]==1 will result in the ordinary NLL.
mask_class_not_present: theano.tensor.TensorType
shape = (batchsize, num_classes). Binary mask indicating whether a class is present in the image patches. mask_class_not_present[k]==1 means that the image does not contain examples of class k. Then for all pixels in the patch, class k predictive probabilities are trained towards 0. Limit case: mask_class_not_present[:]==0 will result in the ordinary NLL.
label_prop_thresh: float (0.5,1)
This threshold allows unsupervised label propagation (only for examples with negative/ignore labels). If the predictive probability of the most likely class exceeds the threshold, this class is assumed to be the correct label and the training is pushed in this direction. Should only be used with pre-trained networks, and values <= 0.5 are disabled.

Examples:

  • A cube contains no class k. Instead of labelling the remaining classes, they can be marked as unlabelled by the first mask (mask_class_labeled[:]==0; whether mask_class_labeled[k] is 0 or 1 is actually irrelevant, because the labels should not be y==k anyway in this case). Additionally, set mask_class_not_present[k]==1 (otherwise 0) to suppress predictions of k in this patch. The actual value of the labels is irrelevant: it can either be -1, or it can be the background class if the background is marked as unlabelled (those labels are then ignored).
  • Only part of the cube is densely labelled. Set mask_class_labeled[:]=1 for all classes, but set the label values in the unlabelled part to -1 to ignore this part.
  • Only a particular class k is labelled in the cube. Either set all other label pixels to -1, or set the corresponding flags in mask_class_labeled to 0 for the unlabelled classes.

Note

Using -1 labels or declaring a class as not labelled is somewhat redundant and supported only for convenience.

NLL_weak(y, class_weights=None, mask_class_labeled=None, mask_class_not_present=None, label_prop_thresh=None)[source]

NLL that mixes the current CNN output and the hard labels as the target

NLL_affinity(y, class_weights=None, mask_class_labeled=None, mask_class_not_present=None, label_prop_thresh=None)[source]

TODO

squared_distance(y)[source]

Returns squared distance between prediction and y

Parameters:y (theano.tensor.TensorType) – corresponds to a vector that gives for each example the correct label
errors(y)[source]

Returns classification accuracy

Parameters:y (theano.tensor.TensorType) – corresponds to a vector that gives for each example the correct label
class elektronn.net.convlayer3d.AffinityLayer3d(input, input_shape, filter_shape, pool, activation_func, enable_dropout, use_fragment_pooling, reshape_output, mfp_offsets, mfp_strides, input_layer=None, W=None, b=None, pooling_mode='max')[source]

Bases: object

NLL_affinity(y, class_weights=None, example_weights=None, mask_class_labeled=None, mask_class_not_present=None, label_prop_thresh=None)[source]
errors(y)[source]
class elektronn.net.convlayer3d.MalisLayer(input, input_shape, filter_shape, pool, activation_func, enable_dropout, use_fragment_pooling, reshape_output, mfp_offsets, mfp_strides, input_layer=None, W=None, b=None, pooling_mode='max')[source]

Bases: elektronn.net.convlayer3d.AffinityLayer3d

NLL_Malis(aff_gt, seg_gt, unrestrict_neg=True)[source]
Parameters:
aff_gt: 4d array, (bs, #edges, x, y, z), int16
seg_gt: (bs, x, y, z), int16
unrestrict_neg: Bool

Use this to relax the restriction on neg_counts. The restriction modifies the edge weights before calculating the negative counts as: edge_weights_neg = np.maximum(affinity_pred, affinity_gt). If unrestricted, the predictions are used directly.

Returns:
pos_count: for every edge, the number of pixel-pairs that should be connected by this edge

(excluding background/ECS pixels; only edges within the same object are considered, so paths that leave an object and return to the same object are ignored)

neg_count: for every edge, the number of pixel-pairs that should be separated by this edge

(excluding background/ECS pixels; only edges between objects are considered, so minimal edges inside an object are not considered to play a role in separating objects)

elektronn.net.convnet module

class elektronn.net.convnet.MixedConvNN(input_size=None, input_depth=None, batch_size=None, enable_dropout=False, recurrent=False, dimension_calc=None)[source]

Bases: object

Parameters:
input_size: tuple
Data shapes, excluding batch and channel (used to infer the dimensionality)
input_depth: int/None
None by default, which means non-image data (no conv layers allowed). Change to 1 for b/w images, 3 for RGB and 4 for RGB-D images etc. For RNNs this is the length of the time series.
batch_size: int/None
None for variable batch size
enable_dropout: Bool
Turn on or off dropout
recurrent: Bool
Support recurrent iterations along input depth/time
dimension_calc: dimension calculator object

Examples

Note that image data must have at least 1 channel, e.g. a 2d image is (1,x,y). 3d requires data in the format (z,ch,x,y). E.g. to create an isotropic 3d CNN with 5 channels (total input shape (1,30,5,30,30)):

>>> MixedConvNN((30,30,30), input_depth=5, batch_size=1)

A non-convolutional MLP can be created as:

>>> MixedConvNN((100,), input_depth=None, batch_size=2000)
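A fuller sketch of the typical workflow, stacking layers with the methods documented below and compiling the output functions (all argument values are illustrative only, not a recommended architecture):

>>> cnn = MixedConvNN((100, 100), input_depth=1, batch_size=1)
>>> cnn.addConvLayer(nof_filters=16, filter_size=5, pool_shape=2,
...                  activation_func='relu')
>>> cnn.addConvLayer(nof_filters=2, filter_size=3, pool_shape=1,
...                  activation_func='linear', is_last_layer=True)
>>> cnn.compileOutputFunctions(target='nll')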
addPerceptronLayer(n_outputs=10, activation_func='tanh', enable_input_noise=False, add_in_output_layers=False, force_no_dropout=False, W=None, b=None)[source]

Adds a Perceptron layer to the CNN.

Normally each layer creates its own set of randomly initialised neuron weights. To reuse the weights of another layer (weight sharing), pass a T.TensorVariable via the arguments W and b. If W and b are numpy arrays, the layer's own weights are initialised with these values.

Parameters:
n_outputs: int

The size of this layer

activation_func: string

{tanh, relu, sigmoid, abs, linear, maxout <i>} Activation function

enable_input_noise: Bool

If True set 20% of input to 0 randomly (similar to dropout)

force_no_dropout: Bool

Set True for last/output layer

addConvLayer(nof_filters=None, filter_size=None, pool_shape=2, activation_func='tanh', add_in_output_layers=False, force_no_dropout=False, use_fragment_pooling=False, reshape=False, is_last_layer=False, layer_input_shape=None, layer_input=None, W=None, b=None, pooling_mode='max', affinity=False)[source]

Adds a convolutional layer to the CNN. The dimensionality is automatically inferred.

Normally the inputs are automatically connected to the outputs of the last added layer. To connect to a different layer, use the layer_input_shape and layer_input arguments.

Normally each layer creates its own set of randomly initialised neuron weights. To reuse the weights of another layer (weight sharing), pass a T.TensorVariable via the arguments W and b. If W and b are numpy arrays, the layer's own weights are initialised with these values.

Parameters:
nof_filters: int

Number of feature maps

filter_size: int/tuple

Size/shape of convolutional filters, xy/zxy (scalars are automatically extended to 2d or 3d)

pool_shape: int/tuple

Size/shape of pool, xy/zxy (scalars are automatically extended to 2d or 3d)

activation_func: string

{tanh, relu, sigmoid, abs, linear, maxout <i>} Activation function

force_no_dropout: Bool

Set True for last/output layer

use_fragment_pooling: Bool

Set to True for predicting dense images efficiently. Requires batch_size==1.

reshape: Bool

Set to True to get 2d/3d output instead of flattened class_probabilities in the last layer

is_last_layer: Bool

Shorthand for reshape=True, force_no_dropout=True and reconstruction of pooling fragments (if mfp was active)

layer_input_shape: tuple of int

Only needed if layer_input is not None

layer_input: T.TensorVariable

Symbolic input if you do not want to use the previous layer of the cnn. This requires specification of the shape of that input with layer_input_shape.

W: np.ndarray or T.TensorVariable

weight matrix. If an array, the values are used to initialise a shared variable for this layer. If a TensorVariable, then this variable is used directly (weight sharing with the layer from which the variable comes)

b: np.ndarray or T.TensorVariable

bias vector. If an array, the values are used to initialise a shared variable for this layer. If a TensorVariable, then this variable is used directly (weight sharing with the layer from which the variable comes)

pooling_mode: str

‘max’ or ‘maxabs’ where the first is normal maxpooling and the second also retains sign of large negative values

addRecurrentLayer(n_hid=None, activation_func='tanh', iterations=None)[source]

Adds a recurrent layer (only possible for non-image input of format (batch, time, features))

Parameters:
n_hid: int

Number of hidden units

activation_func: string

{tanh, relu, sigmoid, abs, linear}

iterations: int

If the layer input is not time-like (iterable on axis 1), it can be broadcast and iterated over for a fixed number of iterations

addTiedAutoencoderChain(n_layers=None, force_no_dropout=False, activation_func='tanh', input_noise=0.3, tie_W=True)[source]

Creates connected layers to invert Perceptron layers. Input is assumed to come from the first layer.

Parameters:
n_layers: int

Number of layers that will be added/inverted (input < 0 means all)

activation_func: string

{tanh, relu, sigmoid, abs, linear} Activation function

force_no_dropout: Bool

set True for last/output layer

input_noise: float

Noise rate that will be applied to the input of the first reconstructor

tie_W: Bool

Whether to share weight of dual layer pairs

compileDebugFunctions(gradients=True)[source]

Compiles the debug functions, which return the network activations / output. To use them, compile them with this function. They are accessible as cnn.debug_functions (normal output), cnn.debug_conv_output and cnn.debug_gradients_function (if gradients=True).

compileOutputFunctions(target='nll', use_class_weights=False, use_example_weights=False, use_lazy_labels=False, use_label_prop=False, only_forward=False)[source]

Compiles the output functions get_loss, get_error, class_probabilities and defines the gradient (which is not compiled)

Parameters:
target: string

‘nll’ or ‘regression’; regression uses the squared error. nll_masked allows training with lazy labels, which requires the auxiliary (*aux) masks.

use_class_weights: Bool

whether to use class weights for the error

use_example_weights: Bool

whether to use example weights for the error

use_lazy_labels: Bool

whether to use lazy labels; this requires the auxiliary (*aux) masks

use_label_prop: Bool

whether to activate label propagation on unlabelled (-1) examples

only_forward: Bool

This excludes the building of the gradient (faster)

Defined functions:
(They are accessible as methods of MixedConvNN)
get_loss: theano-function

[data, labels(, *aux)] –> [loss, loss_instance]

get_error: theano-function

[data, labels(, *aux)] –> [loss, (error,) prediction] no error for regression

class_probabilities: theano-function

[data] –> [prediction]

resetMomenta()[source]

Resets the trailing average of the gradient to the current gradient alone

randomizeWeights(reset_momenta=True)[source]

Resets weights to random values (calls randomizeWeights() on each layer)

trainingStep(*args, **kwargs)[source]

Perform one optimiser iteration. Optimisers can be chosen via the kwarg mode; they are compiled on demand (which may take a while) and cached.

Signature: cnn.trainingStep(data, label(, *aux)(,**kwargs))

Parameters:
data: float32 array

input [bs, ch (, x, y)] or [bs, z, ch, x, y]

labels: int16 array

[bs,((z,)y,x)] if output is not flattened

aux: int16 arrays

(optional) auxiliary weights/masks/etc. Should be an unpacked list

kwargs:
  • mode: string

    [‘SGD’]: (default) Good if data set is big and redundant

‘RPROP’: uses neither a fixed learning rate nor momentum. It is faster than SGD if you do full-batch training and use NO dropout; any source of noise prevents convergence entirely.

‘CG’: Good generalisation but requires large batches. Always returns the current loss.

    ‘LBFGS’: http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html

  • update_loss: Bool
    determine current loss after update step (e.g. needed for queue, but get_loss can also be called explicitly)
Returns:
loss: float32

loss (nll or squared error)

loss_instance: float32 array

loss for individual batch examples/pixels

time_per_step: float

Time spent on the GPU per step
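A hedged training-loop sketch matching the signature above, assuming the two-layer cnn from the MixedConvNN example earlier (its output is 46x46, hence the label shape; the random data is purely illustrative):

>>> import numpy as np
>>> data = np.random.rand(1, 1, 100, 100).astype(np.float32)    # [bs, ch, x, y]
>>> labels = np.random.randint(0, 2, (1, 46, 46)).astype(np.int16)
>>> for step in range(100):
...     loss, loss_instance, time_per_step = cnn.trainingStep(data, labels, mode='SGD')
>>> cnn.saveParameters('CNN.save')   # persist the trained weights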

setOptimizerParams(SGD={}, CG={}, RPROP={}, LBFGS={}, Adam={}, weight_decay=0.0)[source]

Initialise optimiser hyper-parameters prior to compilation. For SGD, CG and LBFGS this can also be done during training.

weight_decay is global to all optimisers and is identical to an L2-penalty on the weights, with the coefficient given by weight_decay

setSGDLR(value=0.09)[source]
setSGDMomentum(value=0.9)[source]
setWeightDecay(value=0.0005)[source]
setDropoutRates(rates)[source]

Assumes a vector/list/array as input, first entry <–> first layer (etc.)
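Typical usage of the hyper-parameter setters above (values illustrative; the dropout-rate list assumes a three-layer network):

>>> cnn.setSGDLR(0.01)                     # SGD learning rate
>>> cnn.setSGDMomentum(0.9)
>>> cnn.setWeightDecay(0.0005)             # coefficient of the L2 penalty
>>> cnn.setDropoutRates([0.2, 0.5, 0.5])   # first entry <-> first layer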

getDropoutRates()[source]

Returns list of dropout rates

predictDense(raw_img, show_progress=True, offset=None, as_uint8=False, pad_raw=False)[source]

Core function that performs the inference

raw_img : np.ndarray
raw data in the format (ch, x, y(, z))
show_progress: Bool
Whether to print progress state
offset: 2/3-tuple
If the cnn has no dimension calculator object, this specifies the cnn offset.
as_uint8: Bool
Return class probabilities as a uint8 image (scaled between 0 and 255!)
pad_raw: Bool
Whether to apply padding (by mirroring) to the raw input image in order to get predictions on the full image domain.
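Usage sketch (hedged; shapes and flags are illustrative, assuming a compiled 2d cnn as in the earlier examples):

>>> import numpy as np
>>> raw = np.random.rand(1, 512, 512).astype(np.float32)   # (ch, x, y)
>>> probs = cnn.predictDense(raw, as_uint8=True, pad_raw=True)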
get_activities(data)[source]
get_nonpooled_activities(data)[source]
saveParameters(path='CNN.save', layers=None, show=True)[source]

Saves parameters to file, that can be loaded by loadParameters

loadParameters(myfile='CNN.save', strict=False, n_layers_to_load=-1)[source]

Loads parameters from a file created by saveParameters. The parameter shapes do not need to fit the CNN architecture; they are “squeezed” or “padded” to fit.

Additionally the momenta of the gradients are reset

Parameters:
myfile: string

Path to file

strict: bool

If True, parameter shapes must fit exactly; this is the only way to load RNN parameters

n_layers_to_load: int

Only the first n_layers_to_load layers are initialised if this is not at its default value (-1)

gradstats(*args, **kwargs)[source]
actstats(*args, **kwargs)[source]
paramstats(*args, **kwargs)[source]

elektronn.net.gaborfilters module

Supplementary functions to initialise CNN-params with gabor filters

elektronn.net.gaborfilters.makeGabor(filter_angle, n_modes, size, offset)[source]
Parameters:
filter_angle: angle in degrees, 0 to 180
n_modes = 1,2,3 etc.
size: filter size
offset: 0 to 180
elektronn.net.gaborfilters.makeGaborFilters(size, number)[source]

Use this to generate number first-order and number second-order filters

elektronn.net.gaborfilters.blob(size)[source]

Return Gaussian blob filter

elektronn.net.introspection module

Supplementary functions to plot various CNN states

elektronn.net.introspection.plotFilters(cnn, layer=0, channel=None, normalize=False, savename='filters_layer0.png')[source]
elektronn.net.introspection.showActivations(cnn, data, show_first_class_prob=False, no_show=False)[source]

Plots activation maps given data. It requires that cnn.debug_functions contains a list of functions that return the activations (i.e. cnn.compileDebugFunctions must have been called).

Parameters:
cnn:

instance of MixedConvNN

data:

input to cnn for which activations should be shown

show_first_class_prob:

True/False whether to additionally show the probability map for the first class

no_show:

True/False whether to pop up plots or silently return a list of image arrays

elektronn.net.introspection.showParamHistogram(cnn, no_show=False, onlyW=True)[source]

Plots histograms of parameter/weight values.

Parameters:
cnn:

instance of MixedConvNN

onlyW:

True/False whether to ignore the biases b

no_show:

True/False whether to pop up plots or silently return a list of image arrays

elektronn.net.introspection.showActivityHistogram(cnn, data, no_show=False)[source]

Plots histograms of activation maps given data. It requires that cnn.debug_functions contains a list of functions that return the activations (i.e. cnn.compileDebugFunctions must have been called).

Parameters:
cnn:

instance of MixedConvNN

data:

input to cnn for which activations should be shown

no_show:

True/False whether to pop up plots or silently return a list of image arrays

elektronn.net.introspection.embedMatricesInGray(mat, border_width=1, normalize=False, output_ratio=1.7, fixed_n_horizontal=0)[source]

Creates a big matrix out of smaller ones (mat). Assumed format of mat: (index, ch, i_vert, i_horiz)

elektronn.net.introspection.showMultipleFiguresAdd(fig, n, i, image, title, isGray=True)[source]

Adds the <i>-th (of n, starting at 0) image to figure <fig> as a subplot (grayscale)

elektronn.net.netcreation module

elektronn.net.netcreation.createNet(config, input_size, n_ch, n_lab, dimension_calc)[source]

Creates a CNN according to config

Parameters:
n_ch: int

Number of input channels in data

n_lab: int

Number of labels/classes/output_neurons

param_file: string/path

Optional file to initialise parameters of CNN from

Returns:
CNN-Object
elektronn.net.netcreation.createNetfromParams(param_file, patch_size, batch_size=1, activation_func='tanh', poolings=None, MFP=None, only_prediction=False)[source]

Convenience function to create CNN without config directly from a saved parameter file. Therefore this function only allows restricted configuration and does not initialise the training optimisers.

Parameters:
param_file: string/path

File to initialise parameters of the CNN from. The file must contain a list of shapes of the W-parameters as the first entry and should ideally contain a list of pooling factors as the last entry; alternatively, they can be given as an optional argument

patch_size: tuple of int

Patch size for input data

batch_size: int

Number of input patches

activation_func: string

Activation function to use for all layers

poolings: list of int

Pooling factors per layer (if not included in the parameter file)

MFP: list of bool/{0,1}

Whether to use MFP in the respective layers

only_prediction: Bool

This excludes the building of the gradient (faster)

Returns:
CNN-Object

elektronn.net.netutils module

elektronn.net.netutils.CNNCalculator(filters, poolings, desired_input=None, MFP=False, force_center=False, desired_output=None, n_dim=1)[source]

Helper to calculate CNN architectures

This is a function, but it returns an object that has various architecture values as attributes. It is also useful to simply print the returned object d, as in the example.

Parameters:
filters: list

Filter shapes (for anisotropic filters the shapes are again a list)

poolings: list

Pooling factors

desired_input: int or list of int

Desired input size(s). If None a range of suggestions can be found in the attribute valid_inputs

MFP: list of int/{0,1}

Whether to apply Max-Fragment-Pooling in this layer and check compliance with max-fragment-pooling (requires other input sizes than normal pooling)

force_center: Bool

Check if output neurons/pixel lie at center of input neurons/pixel (and not in between)

desired_output: int or list of int

Alternative to desired_input

n_dim: int

Dimensionality of CNN

Examples

Calculation for anisotropic “flat” 3d CNN with MFP in the first layers only:

>>> desired_input   = [211, 211, 20]
>>> filters         = [[6,6,1], [4,4,4], [2,2,2], [1,1,1]]
>>> pool            = [[2,2,1], [2,2,2], [2,2,2], [1,1,1]]
>>> MFP             = [1,        1,       0,       0,   ]
>>> n_dim=3
>>> d = CNNCalculator(filters, pool, desired_input, MFP=MFP, force_center=True, desired_output=None, n_dim=n_dim)
Info: input (211) changed to (210) (size not possible)
Info: input (211) changed to (210) (size not possible)
Info: input (20) changed to (22) (size too small)
>>> print d
Input: [210, 210, 22]
Layer/Fragment sizes:     [[102, 49, 24, 24], [102, 49, 24, 24], [22, 9, 4, 4]]
Unpooled Layer sizes:     [[205, 99, 48, 24], [205, 99, 48, 24], [22, 19, 8, 4]]
Receptive fields: [[7, 15, 23, 23], [7, 15, 23, 23], [1, 5, 9, 9]]
Strides:          [[2, 4, 8, 8], [2, 4, 8, 8], [1, 2, 4, 4]]
Overlap:          [[5, 11, 15, 15], [5, 11, 15, 15], [0, 3, 5, 5]]
Offset:           [11.5, 11.5, 4.5].
    If offset is non-int: floor(offset).
    Select labels from within img[offset-x:offset+x]
    (non-int means, output neurons lie centered on input neurons,
    i.e. they have an odd field of view)
elektronn.net.netutils.initWeights(shape, scale='glorot', mode='normal', pool=None)[source]
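No docstring is given; for scale='glorot' with a uniform mode, the conventional Glorot limit sqrt(6/(fan_in+fan_out)) is presumably what is applied. A sketch of that convention for a conv filter shape (an assumption, shown for reference only):

>>> import numpy as np
>>> def glorot_uniform(shape, rng=np.random):
...     # fan_in/fan_out for a conv filter shape (nof, ch, xf, yf)
...     fan_in = np.prod(shape[1:])                # ch * xf * yf
...     fan_out = shape[0] * np.prod(shape[2:])    # nof * xf * yf
...     limit = np.sqrt(6.0 / (fan_in + fan_out))
...     return rng.uniform(-limit, limit, shape).astype(np.float32)
>>> W = glorot_uniform((16, 1, 5, 5))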

elektronn.net.optimizer module

class elektronn.net.optimizer.Optimizer(model_obj=None, X=None, Y=None, Y_aux=[], top_loss=None, params=None)[source]

Bases: object

Optimizer Base Object, initialises generic optimizer variables

Parameters:
model_obj: cnn-object

Encapsulation of the theano model (instead of giving X, Y etc. manually); all other arguments are retrieved from this object if they are None. If an argument is not None, it will override the value from the model

X: symbolic input variable

Data

Y: symbolic output variable

Target

Y_aux: symbolic output variable

Auxiliary masks/weights/etc. for the loss, type: list!

top_loss: symbolic loss function

Requires (X, Y (,*Y_aux)) for compilation

params: list of shared variables

List of parameter arrays against which the loss is optimised

Returns:
Callable optimizer object: loss = Optimizer(X, Y (,*Y_aux)) performs one iteration
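A hedged sketch of this call pattern (the contents of optimizer_params are implementation-specific; the empty dict is a hypothetical placeholder, and cnn/data/labels are as in the earlier examples):

>>> from elektronn.net.optimizer import compileSGD
>>> sgd = compileSGD(optimizer_params={}, model_obj=cnn)
>>> loss = sgd(data, labels)   # one optimisation step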
updateOptimizerParams(optimizer_params)[source]

Update the hyper-parameter dictionary

get_loss(*args)[source]

[data, labels(, *aux)] –> [loss, loss_instance] loss_instance is the loss per instance (e.g. batch-item or pixel)

compileGradients()[source]

Compile and return a function that returns list of gradients

class elektronn.net.optimizer.compileSGD(optimizer_params, model_obj=None, X=None, Y=None, Y_aux=[], top_loss=None, params=None)[source]

Bases: elektronn.net.optimizer.Optimizer

Stochastic Gradient Descent

class elektronn.net.optimizer.compileAdam(optimizer_params, model_obj=None, X=None, Y=None, Y_aux=[], top_loss=None, params=None)[source]

Bases: elektronn.net.optimizer.Optimizer

Adam (stochastic gradient descent with adaptive moment estimates)

class elektronn.net.optimizer.compileRPROP(optimizer_params, model_obj=None, X=None, Y=None, Y_aux=[], top_loss=None, params=None)[source]

Bases: elektronn.net.optimizer.Optimizer

Resilient backPROPagation

class elektronn.net.optimizer.compileCG(optimizer_params, model_obj=None, X=None, Y=None, Y_aux=[], top_loss=None, params=None)[source]

Bases: elektronn.net.optimizer.Optimizer

Conjugate Gradient

lineSearch(*args)[source]

Needed for CG

class elektronn.net.optimizer.compileLBFGS(optimizer_params, model_obj=None, X=None, Y=None, Y_aux=[], top_loss=None, params=None, debug=False)[source]

Bases: elektronn.net.optimizer.Optimizer

L-BFGS (fast, full-batch method)

References (cite one):

R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound Constrained Optimization, (1995), SIAM Journal on Scientific and Statistical Computing, 16, 5, pp. 1190-1208.

C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization (1997), ACM Transactions on Mathematical Software, 23, 4, pp. 550-560.

J.L. Morales and J. Nocedal. L-BFGS-B: Remark on Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization (2011), ACM Transactions on Mathematical Software, 38, 1.

vec2list(vec, target_list)[source]
list2vect(src_list, target_vec)[source]
loss_and_grad(params_vect_new, *args)[source]

Internal use; updates self.params

elektronn.net.perceptronlayer module

class elektronn.net.perceptronlayer.PerceptronLayer(input, n_in, n_out, batch_size, enable_dropout, activation_func='tanh', input_noise=None, input_layer=None, W=None, b=None)[source]

Bases: object

Typical hidden layer of an MLP: units are fully connected. The weight matrix W is of shape (n_in, n_out); the bias vector b is of shape (n_out,).

Parameters:
  • input (theano.tensor.dmatrix) – a symbolic tensor of shape (n_examples, n_in)
  • n_in (int) – dimensionality of input
  • n_out (int) – number of hidden units
  • batch_size (int) – batch_size
  • enable_dropout (Bool) – whether to enable dropout in this layer. The default rate is 0.5, but it can be changed with self.activation_noise.set_value(np.float32(p)) or using cnn.setDropoutRates
  • activation_func (string) – {‘relu’,’sigmoid’,’tanh’,’abs’, ‘maxout <i>’}
  • input_noise (theano.shared float32) – std of gaussian (centered) input noise. 0 or None –> no noise
  • input_layer (layer object) – just for keeping track of unusual input layers
  • W (np.ndarray or T.TensorVariable) – weight matrix. If an array, the values are used to initialise a shared variable for this layer. If a TensorVariable, then this variable is used directly (weight sharing with the layer from which the variable comes)
  • b (np.ndarray or T.TensorVariable) – bias vector. If an array, the values are used to initialise a shared variable for this layer. If a TensorVariable, then this variable is used directly (weight sharing with the layer from which the variable comes)
randomizeWeights(scale='glorot', mode='uni')[source]
NLL(y, class_weights=None, example_weights=None, label_prop_thresh=None)[source]

Returns the symbolic mean and instance-wise negative log-likelihood of the prediction of this model under a given target distribution.

y: theano.tensor.TensorType
corresponds to a vector that gives for each example the correct label. Labels < 0 are ignored (e.g. can be used for label propagation)
class_weights: theano.tensor.TensorType
weight vector of float32 of length n_lab. Values: 1.0 (default), w < 1.0 (less important), w > 1.0 (more important class)
label_prop_thresh: float (0.5,1)
This threshold allows unsupervised label propagation (only for examples with negative/ignore labels). If the predictive probability of the most likely class exceeds the threshold, this class is assumed to be the correct label and the training is pushed in this direction. Should only be used with pre-trained networks, and values <= 0.5 are disabled.
NLL_weak(y, class_weights=None, example_weights=None, label_prop_thresh=None)[source]

Returns the symbolic mean and instance-wise negative log-likelihood of the prediction of this model under a given target distribution.

y: theano.tensor.TensorType
corresponds to a vector that gives for each example the correct label. Labels < 0 are ignored (e.g. can be used for label propagation)
class_weights: theano.tensor.TensorType
weight vector of float32 of length n_lab. Values: 1.0 (default), w < 1.0 (less important), w > 1.0 (more important class)
label_prop_thresh: float (0.5,1)
This threshold allows unsupervised label propagation (only for examples with negative/ignore labels). If the predictive probability of the most likely class exceeds the threshold, this class is assumed to be the correct label and the training is pushed in this direction. Should only be used with pre-trained networks, and values <= 0.5 are disabled.
nll_mutiple_binary(y, class_weights=None)[source]

Returns the mean and instance-wise negative log-likelihood of the prediction of this model under a given target distribution.

Parameters:y (theano.tensor.TensorType) – corresponds to a vector that gives for each example the correct label
Note: we use the mean instead of the sum so that the learning rate is less dependent on the batch size.
squared_distance(Target, Mask=None, return_instancewise=True)[source]

Target is the TARGET image (vectorised) -> shape(x) = (batchsize, n_target). Output: scalar float32. Mask: vectorised, 1==hole, 0==no_hole (i.e. it does NOT train on non-holes).

errors(y)[source]

Returns classification accuracy

Parameters:y (theano.tensor.TensorType) – corresponds to a vector that gives for each example the correct label
errors_no_tn(y)[source]
cross_entropy_array(Target, Mask=None, GaussianWindow=False)[source]

Target is the TARGET image (vectorised) -> shape(x) = (batchsize, imgsize**2). The output is of length <batchsize>; use cross_entropy() to get a scalar output.

cross_entropy(Target, Mask=None, GaussianWindow=False)[source]

Target is the TARGET image (vectorised) -> shape(x) = (batchsize, imgsize**2). Output: scalar float32.

class elektronn.net.perceptronlayer.RecurrentLayer(input, n_in, n_hid, batch_size, activation_func='tanh')[source]

Bases: object

Parameters:
  • input (symbolic input carrying [time, batch, feat]) – theano.tensor.ftensor3
  • n_in (int) – dimensionality of input
  • n_hid (int) – number of hidden units
  • activation_func (string) – {‘relu’,’sigmoid’,’tanh’,’abs’}
randomizeWeights(scale_w=1.0)[source]

elektronn.net.pooling module

elektronn.net.pooling.maxabs(t1, t2)[source]
elektronn.net.pooling.my_max_pool_3d(sym_input, pool_shape=(2, 2, 2))[source]

This one is pure Theano, hence all gradient-related functionality works. No dimshuffling.

elektronn.net.pooling.maxout(conv_out, factor=2, mode='max', axis=1)[source]

Pools axis 1 (the channels) of conv_out by factor, i.e. the number of channels is decreased by this factor. The pooling can be done as either max or maxabs. Spatial dimensions are unchanged.
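The channel pooling can be illustrated in plain numpy (an equivalent sketch of the 'max' mode, not the Theano implementation itself):

>>> import numpy as np
>>> x = np.random.randn(1, 8, 10, 10).astype(np.float32)   # (bs, ch, x, y)
>>> factor = 2
>>> bs, ch, h, w = x.shape
>>> # group channels in blocks of `factor`, take the block-wise maximum
>>> x.reshape(bs, ch // factor, factor, h, w).max(axis=2).shape
(1, 4, 10, 10)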

elektronn.net.pooling.pooling2d(conv_out, pool_shape=(2, 2), mode='max')[source]

Pools axes 2,3 (x,y) of conv_out by the respective pool_shape, i.e. the spatial extent is decreased by this factor. The pooling can be done as either max or maxabs.

elektronn.net.pooling.pooling3d(conv_out, pool_shape=(2, 2, 2), mode='max')[source]

Pools the spatial axes (z and x,y) of conv_out by the respective pool_shape, i.e. the spatial extent is decreased by this factor. The pooling can be done as either max or maxabs.

Module contents