connectomics.utils
connectomics.utils.process
connectomics.utils.process.bc_connected(volume, thres1=0.8, thres2=0.5, thres_small=128, scale_factors=(1.0, 1.0, 1.0), dilation_struct=(1, 5, 5), remove_small_mode='background')

Convert binary foreground probability maps and instance contours to instance masks via connected-component labeling.
Note

The instance contour provides additional supervision to distinguish closely touching objects. However, the decoding algorithm keeps only the intersection of the foreground and non-contour regions, which systematically results in incomplete instance masks. We therefore apply morphological dilation (see dilation_struct) to enlarge the object masks.

- Parameters
volume (numpy.ndarray) – foreground and contour probability of shape \((C, Z, Y, X)\).
thres1 (float) – threshold of foreground. Default: 0.8
thres2 (float) – threshold of instance contours. Default: 0.5
thres_small (int) – size threshold of small objects to remove. Default: 128
scale_factors (tuple) – scale factors for resizing in \((Z, Y, X)\) order. Default: (1.0, 1.0, 1.0)
dilation_struct (tuple) – the shape of the structure for morphological dilation. Default: (1, 5, 5)
remove_small_mode (str) – 'background', 'neighbor' or 'none'. Default: 'background'
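The decoding rule above can be sketched with scipy.ndimage. This is a hypothetical, simplified re-implementation for illustration, not the library's actual code: resizing, small-object removal and the exact dilation procedure are omitted.

```python
# Simplified sketch of B+C connected-component decoding (assumes scipy).
# Not the library's actual implementation.
import numpy as np
from scipy import ndimage

def bc_connected_sketch(volume, thres1=0.8, thres2=0.5,
                        dilation_struct=(1, 5, 5)):
    foreground, contour = volume[0], volume[1]
    # Keep only voxels that are confidently foreground and not on a contour.
    seed = (foreground > thres1) & (contour < thres2)
    labels, _ = ndimage.label(seed)
    # Dilate the labels to recover the area carved away with the contours
    # (grey dilation takes the max label in the neighborhood).
    return ndimage.grey_dilation(labels, footprint=np.ones(dilation_struct))
```

Note that grey dilation on the label image is a rough stand-in: at boundaries where two dilated instances meet, the larger label wins.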
connectomics.utils.process.bc_watershed(volume, thres1=0.9, thres2=0.8, thres3=0.85, thres_small=128, scale_factors=(1.0, 1.0, 1.0), remove_small_mode='background', seed_thres=32, return_seed=False, precomputed_seed=None)

Convert binary foreground probability maps and instance contours to instance masks via the watershed segmentation algorithm.
Note

This function uses skimage.segmentation.watershed, which converts the input image to the np.float64 data type for processing. Therefore, please make sure enough memory is allocated when handling large arrays.

- Parameters
volume (numpy.ndarray) – foreground and contour probability of shape \((C, Z, Y, X)\).
thres1 (float) – threshold of seeds. Default: 0.9
thres2 (float) – threshold of instance contours. Default: 0.8
thres3 (float) – threshold of foreground. Default: 0.85
thres_small (int) – size threshold of small objects to remove. Default: 128
scale_factors (tuple) – scale factors for resizing in \((Z, Y, X)\) order. Default: (1.0, 1.0, 1.0)
remove_small_mode (str) – 'background', 'neighbor' or 'none'. Default: 'background'
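The seed-and-flood logic described above can be sketched as follows. This is a hypothetical simplification that substitutes scipy.ndimage.watershed_ift for skimage.segmentation.watershed; resizing, small-object removal and the seed_thres/return_seed/precomputed_seed options are omitted, and the two watershed variants may break ties differently.

```python
# Simplified sketch of B+C watershed decoding (assumes scipy).
# Not the library's actual implementation.
import numpy as np
from scipy import ndimage

def bc_watershed_sketch(volume, thres1=0.9, thres2=0.8, thres3=0.85):
    semantic, contour = volume[0], volume[1]
    # Seeds: confidently foreground and away from instance contours.
    seeds, _ = ndimage.label((semantic > thres1) & (contour < thres2))
    # watershed_ift expects an unsigned-integer cost image; high-probability
    # voxels get a low cost and are flooded first.
    cost = ((1.0 - semantic) * 255).astype(np.uint8)
    labels = ndimage.watershed_ift(cost, seeds.astype(np.int32))
    labels[semantic <= thres3] = 0  # restrict output to the foreground mask
    return labels
```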
connectomics.utils.process.bcd_watershed(volume, thres1=0.9, thres2=0.8, thres3=0.85, thres4=0.5, thres5=0.0, thres_small=128, scale_factors=(1.0, 1.0, 1.0), remove_small_mode='background', seed_thres=32, return_seed=False, precomputed_seed=None)

Convert binary foreground probability maps, instance contours and signed distance transforms to instance masks via the watershed segmentation algorithm.
Note

This function uses skimage.segmentation.watershed, which converts the input image to the np.float64 data type for processing. Therefore, please make sure enough memory is allocated when handling large arrays.

- Parameters
volume (numpy.ndarray) – foreground and contour probability of shape \((C, Z, Y, X)\).
thres1 (float) – threshold of seeds. Default: 0.9
thres2 (float) – threshold of instance contours. Default: 0.8
thres3 (float) – threshold of foreground. Default: 0.85
thres4 (float) – threshold of signed distance for locating seeds. Default: 0.5
thres5 (float) – threshold of signed distance for foreground. Default: 0.0
thres_small (int) – size threshold of small objects to remove. Default: 128
scale_factors (tuple) – scale factors for resizing in \((Z, Y, X)\) order. Default: (1.0, 1.0, 1.0)
remove_small_mode (str) – 'background', 'neighbor' or 'none'. Default: 'background'
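Compared with the two-channel decoding, the signed-distance channel tightens both the seed and foreground masks. The sketch below (a hypothetical simplification, not the library's code) shows only this mask construction; the subsequent flooding step mirrors the watershed decoding of the two-channel case.

```python
# Simplified sketch of how the distance channel enters B+C+D decoding
# (assumes scipy). Not the library's actual implementation.
import numpy as np
from scipy import ndimage

def bcd_masks_sketch(volume, thres1=0.9, thres2=0.8, thres3=0.85,
                     thres4=0.5, thres5=0.0):
    semantic, contour, distance = volume[0], volume[1], volume[2]
    # Seeds additionally require a high signed-distance value, i.e. they
    # lie well inside an instance.
    seed_map = (semantic > thres1) & (contour < thres2) & (distance > thres4)
    # The foreground keeps everything with distance above thres5.
    foreground = (semantic > thres3) & (distance > thres5)
    seeds, _ = ndimage.label(seed_map)
    return seeds, foreground
```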
connectomics.utils.process.binary_connected(volume, thres=0.8, thres_small=128, scale_factors=(1.0, 1.0, 1.0), remove_small_mode='background')

Convert binary foreground probability maps to instance masks via connected-component labeling.
- Parameters
volume (numpy.ndarray) – foreground probability of shape \((C, Z, Y, X)\).
thres (float) – threshold of foreground. Default: 0.8
thres_small (int) – size threshold of small objects to remove. Default: 128
scale_factors (tuple) – scale factors for resizing in \((Z, Y, X)\) order. Default: (1.0, 1.0, 1.0)
remove_small_mode (str) – 'background', 'neighbor' or 'none'. Default: 'background'
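A minimal sketch of this decoding, assuming scipy is available. It is a hypothetical simplification, not the library's code: resizing is omitted, and small-object removal is shown only for the 'background' mode.

```python
# Simplified sketch of binary connected-component decoding (assumes scipy).
# Not the library's actual implementation.
import numpy as np
from scipy import ndimage

def binary_connected_sketch(volume, thres=0.8, thres_small=128):
    labels, _ = ndimage.label(volume[0] > thres)
    # 'background' removal mode: merge undersized components into background.
    sizes = np.bincount(labels.ravel())
    too_small = np.flatnonzero(sizes < thres_small)
    labels[np.isin(labels, too_small) & (labels > 0)] = 0
    return labels
```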
connectomics.utils.process.binary_watershed(volume, thres1=0.98, thres2=0.85, thres_small=128, scale_factors=(1.0, 1.0, 1.0), remove_small_mode='background', seed_thres=32)

Convert binary foreground probability maps to instance masks via the watershed segmentation algorithm.
Note

This function uses skimage.segmentation.watershed, which converts the input image to the np.float64 data type for processing. Therefore, please make sure enough memory is allocated when handling large arrays.

- Parameters
volume (numpy.ndarray) – foreground probability of shape \((C, Z, Y, X)\).
thres1 (float) – threshold of seeds. Default: 0.98
thres2 (float) – threshold of foreground. Default: 0.85
thres_small (int) – size threshold of small objects to remove. Default: 128
scale_factors (tuple) – scale factors for resizing in \((Z, Y, X)\) order. Default: (1.0, 1.0, 1.0)
remove_small_mode (str) – 'background', 'neighbor' or 'none'. Default: 'background'
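Without a contour channel, the seeds come from a stricter threshold on the same probability map. The sketch below is a hypothetical simplification using scipy.ndimage.watershed_ift in place of skimage.segmentation.watershed; resizing, small-object removal and seed_thres are omitted.

```python
# Simplified sketch of binary watershed decoding (assumes scipy).
# Not the library's actual implementation.
import numpy as np
from scipy import ndimage

def binary_watershed_sketch(volume, thres1=0.98, thres2=0.85):
    semantic = volume[0]
    # Seeds are the high-confidence peaks of the probability map.
    seeds, _ = ndimage.label(semantic > thres1)
    # Low cost for high-probability voxels, so they are flooded first.
    cost = ((1.0 - semantic) * 255).astype(np.uint8)
    labels = ndimage.watershed_ift(cost, seeds.astype(np.int32))
    labels[semantic <= thres2] = 0  # keep only the foreground
    return labels
```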
connectomics.utils.process.polarity2instance(volume, thres=0.5, thres_small=128, scale_factors=(1.0, 1.0, 1.0), semantic=False, dilate_sz=5, exclusive=False)

Convert synaptic polarity predictions to instance masks via connected-component labeling. The input volume should be a 3-channel probability map of shape \((C, Z, Y, X)\) where \(C=3\), with the channels representing the pre-synaptic region, the post-synaptic region and their union, respectively. The function also handles the case where the pre- and post-synaptic masks are exclusive (i.e., a softmax function was applied before post-processing).
Note

For each pair of pre- and post-synaptic segments, the decoding function annotates the pre-synaptic region as \(2n-1\) and the post-synaptic region as \(2n\), for \(n>0\). If semantic=True, all pre-synaptic pixels are labeled with 1 while all post-synaptic pixels are labeled with 2. Both kinds of annotation are compatible with the TARGET_OPT: ['1'] configuration in training.

Note

The number of pre- and post-synaptic segments is reported when setting semantic=False. Note that the two numbers can differ due to either incomplete synapses touching the volume borders or errors in the prediction. We thus make a conservative estimate of the total number of synapses by taking the smaller of the two.

- Parameters
volume (numpy.ndarray) – 3-channel probability map of shape \((3, Z, Y, X)\).
thres (float) – probability threshold of foreground. Default: 0.5
thres_small (int) – size threshold of small objects to remove. Default: 128
scale_factors (tuple) – scale factors for resizing the output volume in \((Z, Y, X)\) order. Default: (1.0, 1.0, 1.0)
semantic (bool) – return only the semantic mask of pre- and post-synaptic regions. Default: False
dilate_sz (int) – define a struct of size (1, dilate_sz, dilate_sz) to dilate the masks. Default: 5
exclusive (bool) – whether the synaptic masks are exclusive (with softmax) or not. Default: False
- Return type
numpy.ndarray
- Examples
>>> from connectomics.data.utils import readvol, savevol
>>> from connectomics.utils.process import polarity2instance
>>> volume = readvol(input_name)
>>> instances = polarity2instance(volume)
>>> savevol(output_name, instances)
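The \(2n-1\)/\(2n\) labeling scheme described in the note above can be sketched as follows. This is a hypothetical simplification, not the library's code: small-object removal, dilation, resizing and the exclusive mode are all omitted.

```python
# Simplified sketch of the pre/post synaptic labeling scheme (assumes scipy).
# Not the library's actual implementation.
import numpy as np
from scipy import ndimage

def polarity_labels_sketch(volume, thres=0.5):
    pre = volume[0] > thres
    post = volume[1] > thres
    union = volume[2] > thres
    # Each connected component of the union channel is one synapse n > 0.
    synapses, num = ndimage.label(union)
    out = np.zeros_like(synapses)
    for n in range(1, num + 1):
        inst = synapses == n
        out[inst & pre] = 2 * n - 1   # pre-synaptic side
        out[inst & post] = 2 * n      # post-synaptic side
    return out
```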