Core modules#

Core image processing functions. Any function can be executed in the prototyping, low throughput, and high throughput workflows. In general, the typical order in which the different steps are performed is:

preprocessing > segmentation > measurement > visualization > export

However, since phenopype version 2 you can arrange them however you like. In general, any function can take either an array or a phenopype container. If an array is passed, additional input arguments may be required (e.g., to draw contours onto an image, both an array and a DataFrame containing the contours must be supplied, whereas a container already includes both).

Preprocessing#

phenopype.core.preprocessing.blur(image, kernel_size=5, method='averaging', sigma_color=75, sigma_space=75, **kwargs)#

Apply a blurring algorithm to an image.

Parameters:
  • image (array) – input image

  • kernel_size (int, optional) – size of the blurring kernel (has to be odd - even numbers will be ceiled)

  • method ({averaging, gaussian, median, bilateral} str, optional) – blurring algorithm

  • sigma_color (int, optional) – for ‘bilateral’

  • sigma_space (int, optional) – for ‘bilateral’

Returns:

image – blurred image

Return type:

ndarray
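To illustrate what the ‘averaging’ method does (and the odd-kernel coercion), here is a minimal NumPy sketch of a box blur; box_blur is a hypothetical helper for illustration, not phenopype’s actual implementation:

```python
import numpy as np

def box_blur(image, kernel_size=5):
    # Mirror phenopype's "even numbers will be ceiled" rule for kernels.
    if kernel_size % 2 == 0:
        kernel_size += 1
    pad = kernel_size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Each output pixel is the mean of its kernel_size x kernel_size window.
    windows = np.lib.stride_tricks.sliding_window_view(
        padded, (kernel_size, kernel_size)
    )
    return windows.mean(axis=(-2, -1))

img = np.zeros((7, 7))
img[3, 3] = 9.0                       # a single bright pixel
blurred = box_blur(img, kernel_size=3)
```

The bright pixel’s intensity is spread evenly over its 3 x 3 neighbourhood, which is the defining behaviour of an averaging blur.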

phenopype.core.preprocessing.clip_histogram(image, percent=1, **kwargs)#
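clip_histogram carries no docstring here; judging from its name and the percent argument, it likely performs percentile-based contrast clipping. A NumPy sketch of that general idea (an assumption, not phenopype’s actual implementation; clip_stretch is a made-up helper):

```python
import numpy as np

def clip_stretch(gray, percent=1):
    # Assumed behaviour: clip the darkest/brightest `percent` of pixels,
    # then stretch the remaining intensity range back to 0-255.
    lo, hi = np.percentile(gray, [percent, 100 - percent])
    clipped = np.clip(gray.astype(float), lo, hi)
    return ((clipped - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

gray = np.linspace(50, 200, 256).reshape(16, 16).astype(np.uint8)
stretched = clip_stretch(gray, percent=1)
```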
phenopype.core.preprocessing.create_mask(image, tool='rectangle', include=True, label=None, line_colour='default', line_width='auto', label_colour='default', label_size='auto', label_width='auto', **kwargs)#

Mask an area by drawing a rectangle or polygon. Multiple mask components count as the same mask - useful, e.g., when objects that you would like to include or exclude are scattered across the image. Rectangles are finished upon lifting the mouse button; polygons are completed by pressing CTRL.

ANNOTATION FUNCTION

Parameters:
  • image (array) – input image

  • tool ({"rectangle","polygon"} str, optional) – Type of mask tool to be used. The default is “rectangle”.

  • include (bool, optional) – include or exclude area inside mask

  • label (str, optional) – label string for this mask and all its components

  • line_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour line colour - default colour as specified in settings

  • line_width ({"auto", ... int > 0} int, optional) – contour line width - automatically scaled to image by default

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour label colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – contour label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – contour label font thickness - automatically scaled to image by default

Returns:

annotations – phenopype annotation containing mask coordinates

Return type:

dict

phenopype.core.preprocessing.detect_mask(image, include=True, label=None, shape='circle', resize=1, circle_args={'dp': 1, 'max_radius': 0, 'min_dist': 50, 'min_radius': 0, 'param1': 200, 'param2': 100}, **kwargs)#

Detects geometric shapes in a single channel image (currently only circles are implemented) and converts boundary contours to a mask to include or exclude parts of the image. Depending on the object type, different settings apply.

Parameters:
  • image (ndarray) – input image (single channel).

  • include (bool, optional) – should the resulting mask include or exclude areas. The default is True.

  • shape (str, optional) – which geometric shape to be detected. The default is “circle”.

  • resize ((0.1-1) float, optional) – resize factor for image (some shape detection algorithms are slow if the image is very large). The default is 1.

  • circle_args (dict, optional) –

    A set of options for circle detection (for details see https://docs.opencv.org/3.4.9/dd/d1a/group__imgproc__feature.html ):

    • dp: inverse ratio of the accumulator resolution to the image resolution

    • min_dist: minimum distance between the centers of the detected circles

    • param1: higher threshold passed to the canny-edge detector

    • param2: accumulator threshold - smaller = more false positives

    • min_radius: minimum circle radius

    • max_radius: maximum circle radius

    The default is:

    {
        "dp": 1,
        "min_dist": 50,
        "param1": 200,
        "param2": 100,
        "min_radius": 0,
        "max_radius": 0
    }

Returns:

annotations – phenopype annotation

Return type:

dict

phenopype.core.preprocessing.create_reference(image, unit='mm', line_colour='default', line_width='auto', label=True, label_colour='default', label_size='auto', label_width='auto', mask=False, **kwargs)#

Measure a size or colour reference card. Minimum input interaction is measuring a size reference: click on two points inside the provided image, and enter the distance - returns the pixel-to-mm-ratio.

In an optional second step, drag a rectangle mask over the reference card to exclude it from any subsequent image segmentation. The mask can also be stored as a template for automatic reference detection with the “detect_reference” function.

Parameters:
  • image (ndarray) – input image

  • mask (bool, optional) – mask a reference card inside the image and return its coordinates as a mask annotation

  • line_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour line colour - default colour as specified in settings

  • line_width ({"auto", ... int > 0} int, optional) – contour line width - automatically scaled to image by default

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour label colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – contour label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – contour label font thickness - automatically scaled to image by default

Returns:

annotations – phenopype annotation

Return type:

dict

phenopype.core.preprocessing.detect_reference(image, template, px_ratio, unit, get_mask=True, manual_fallback=True, correct_colours=False, min_matches=10, resize=1, **kwargs)#

Find a reference card from a template created with “create_reference”. Image registration is run by the “AKAZE” algorithm. Future implementations will include more algorithms to select from. First, use “create_reference” with “mask=True” and pass the template to this function. This happens automatically in the low and high throughput workflows. Use “correct_colours=True” to adjust the histograms of all colour channels to match the reference image.

AKAZE: http://www.bmva.org/bmvc/2013/Papers/paper0013/abstract0013.pdf

Parameters:
  • image (ndarray) – input image

  • template (array) – template image-crop containing reference card

  • px_ratio – px-mm-ratio of template image

  • get_mask (bool) – retrieve mask and create annotation. The default is True.

  • manual_fallback (bool) – use manual reference-tool in case detection fails. The default is True.

  • correct_colours (bool, optional) – should the provided image be colour corrected to match the template image’s histogram

  • min_matches (int, optional) – minimum key point matches for image registration

  • resize (num, optional) – resize image to speed up detection process. default: 0.5 for images with diameter > 5000px (WARNING: too low values may result in poor detection performance or even crashes)

Returns:

annotations – phenopype annotation

Return type:

dict

phenopype.core.preprocessing.detect_QRcode(image, rot_steps=20, enter_manually=False, show_results=False, label='QR-code', label_colour='default', label_size='auto', label_width='auto', **kwargs)#

Find and read a QR code contained inside an image. The image is rotated stepwise until the code is detected, or the code can be entered manually if detection fails.

Parameters:
  • image (ndarray) – input image

  • rot_steps (int, optional) – angle by which the image is rotated (until 360 is reached). The default is 20.

  • enter_manually (bool, optional) – enter the code manually if detection fails. The default is False.

  • show_results (bool, optional) – show the detection results. The default is False.

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – text colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – text label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – text label font thickness - automatically scaled to image by default

Returns:

annotations – phenopype annotation

Return type:

dict

phenopype.core.preprocessing.decompose_image(image, channel='gray', invert=False)#

Extract single channel from multi-channel array.

Parameters:
  • image (ndarray) – input image

  • channel ({"raw", "gray", "red", "green", "blue", "hue", "saturation", "value"} str, optional) – select specific image channel

  • invert (bool, optional) – invert all pixel intensities in image (e.g. 0 to 255 or 100 to 155)

Returns:

image – decomposed image.

Return type:

ndarray
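A NumPy sketch of the channel-extraction idea behind the gray/red/green/blue options (decompose is a hypothetical stand-in, not phenopype’s implementation; channel order is assumed to be BGR as in OpenCV, and phenopype additionally supports the HSV channels):

```python
import numpy as np

def decompose(image, channel="gray", invert=False):
    # Split an assumed BGR image into its single channels.
    b, g, r = image[..., 0], image[..., 1], image[..., 2]
    if channel == "gray":
        # Standard luminance weighting for grayscale conversion.
        out = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
    else:
        out = {"red": r, "green": g, "blue": b}[channel]
    if invert:
        out = 255 - out  # e.g. 0 becomes 255, 100 becomes 155
    return out

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 2] = 100                      # fill the red channel
red = decompose(img, "red")
inv = decompose(img, "red", invert=True)
```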

phenopype.core.preprocessing.write_comment(image, label='ID', label_colour='default', label_size='auto', label_width='auto', **kwargs)#

Add a comment.

Parameters:
  • image (ndarray) – input image

  • label (str, optional) – name of the comment field (useful for later processing). The default is “ID”.

Returns:

annotation_ref – phenopype annotation containing comment

Return type:

dict

Segmentation#

phenopype.core.segmentation.contour_to_mask(annotations, include=True, label=None, box_margin=0, largest=True, **kwargs)#

Converts a contour to a mask annotation, e.g. for the purpose of creating an ROI and exporting it or for subsequent segmentation inside that mask. Creates a rectangle bounding box around the largest or all contours.

Parameters:
  • annotation (dict) – phenopype annotation containing contours

  • include (bool, optional) – include or exclude area inside mask

  • label (str, optional) – text label for this mask and all its components

  • box_margin (int, optional) – margin that is added between the outer perimeter of the contour and the box

  • largest (bool, optional) – either use only the largest contour or concatenate all supplied contours

Returns:

annotation – phenopype annotation containing contours

Return type:

dict

phenopype.core.segmentation.detect_contour(image, approximation='simple', retrieval='ext', match_against=None, apply_drawing=False, offset_coords=[0, 0], min_nodes=3, max_nodes=inf, min_area=0, max_area=inf, min_diameter=0, max_diameter=inf, **kwargs)#

Find objects in a binarized image. A number of different approximation algorithms and retrieval instructions can be applied. The contours can also be filtered for minimum area, diameter, or number of nodes (= corners).

Parameters:
  • image (ndarray) – input image (binary)

  • approximation ({"none", "simple", "L1", "KCOS"} str, optional) –

    contour approximation algorithm:
    • none: no approximation, all contour coordinates are returned

    • simple: only the minimum coordinates required are returned

    • L1: Teh-Chin chain approximation algorithm (L1 variant)

    • KCOS: Teh-Chin chain approximation algorithm (KCOS variant)

  • retrieval ({"ext", "list", "tree", "ccomp", "flood"} str, optional) –

    contour retrieval procedure:
    • ext: retrieves only the extreme outer contours

    • list: retrieves all of the contours without establishing any hierarchical relationships

    • tree: retrieves all of the contours and reconstructs a full hierarchy of nested contours

    • ccomp: retrieves all of the contours and organizes them into a two-level hierarchy (parent and child)

    • flood: flood-fill algorithm

  • offset_coords (tuple, optional) – offset by which every contour point is shifted.

  • min_nodes (int, optional) – minimum number of coordinates

  • max_nodes (int, optional) – maximum number of coordinates

  • min_area (int, optional) – minimum contour area in pixels

  • max_area (int, optional) – maximum contour area in pixels

  • min_diameter (int, optional) – minimum diameter of boundary circle

  • max_diameter (int, optional) – maximum diameter of boundary circle

Returns:

annotation – phenopype annotation containing contours

Return type:

dict
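The filtering arguments act as simple pass-band checks on each candidate contour. A hypothetical sketch of that logic (the contours list, the keep helper, and the area values are made up for illustration and do not reflect phenopype’s internal data layout):

```python
# Hypothetical contours with precomputed node lists and areas.
contours = [
    {"coords": [(0, 0), (0, 10), (10, 10), (10, 0)], "area": 100},
    {"coords": [(0, 0), (1, 1), (0, 1)], "area": 0.5},
]

def keep(c, min_nodes=3, max_nodes=float("inf"),
         min_area=0, max_area=float("inf")):
    # A contour passes only if both its node count and area fall
    # inside the configured min/max bounds.
    return (min_nodes <= len(c["coords"]) <= max_nodes
            and min_area <= c["area"] <= max_area)

filtered = [c for c in contours if keep(c, min_area=10)]
```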

phenopype.core.segmentation.edit_contour(image, annotations, overlay_blend=0.2, overlay_line_width=1, overlay_colour_left='lime', overlay_colour_right='red', **kwargs)#

Edit contours with a “paintbrush”. The brush size can be controlled by pressing Tab and using the mouse wheel. Right-click removes and left-click adds areas to contours. Overlay colour, transparency (blend), and outline can be controlled.

Parameters:
  • image (ndarray) – input image

  • annotations (dict) – phenopype annotation containing contours

  • overlay_blend (float, optional) – transparency / colour-mixing of the contour overlay

  • overlay_line_width (int, optional) – add outline to the contours. useful when overlay_blend == 0

  • overlay_colour_left (str, optional) – overlay colour for left click (include). (for options see pp.colour)

  • overlay_colour_right (str, optional) – overlay colour for right click (exclude). (for options see pp.colour)

Returns:

annotations – phenopype annotation containing contours

Return type:

dict

phenopype.core.segmentation.mask_to_contour(annotations, include=True, box_margin=0, **kwargs)#

Converts a mask to a contour annotation, e.g. for the purpose of extracting information from this mask.

Parameters:
  • annotation (dict) – phenopype annotation containing masks

  • include (bool, optional) – include or exclude area inside contour

  • box_margin (int, optional) – margin that is added between the outer perimeter of the mask and the box

Returns:

annotation – phenopype annotation containing contours

Return type:

dict

phenopype.core.segmentation.morphology(image, kernel_size=5, shape='rect', operation='close', iterations=1, **kwargs)#

Performs advanced morphological transformations using erosion and dilation as basic operations. Provides different kernel shapes and a suite of operation types (read more about morphological operations here: https://docs.opencv.org/master/db/df6/tutorial_erosion_dilatation.html)

Parameters:
  • image (ndarray) – input image (binary)

  • kernel_size (int, optional) – size of the morphology kernel (has to be odd - even numbers will be ceiled)

  • shape ({"rect", "cross", "ellipse"} str, optional) – shape of the kernel

  • operation ({erode, dilate, open, close, gradient, tophat, blackhat, hitmiss} str, optional) –

    the morphology operation to be performed:
    • erode: remove pixels from the border

    • dilate: add pixels to the border

    • open: erosion followed by dilation

    • close: dilation followed by erosion

    • gradient: difference between dilation and erosion of an input image

    • tophat: difference between input image and opening of input image

    • blackhat: difference between input image and closing of input image

    • hitmiss: find patterns in binary images (read more here: https://docs.opencv.org/master/db/d06/tutorial_hitOrMiss.html)

  • iterations (int, optional) – number of times to run morphology operation

Returns:

image – processed binary image

Return type:

ndarray
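As a rough illustration of how “open” is built from the two basic operations, here is a minimal NumPy sketch of binary erosion and dilation with a rectangular kernel (a simplified stand-in for illustration, not phenopype’s OpenCV-backed implementation):

```python
import numpy as np

def erode(binary, k=3):
    # A pixel stays foreground only if its whole k x k neighbourhood
    # is foreground ("remove pixels from the border").
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return win.min(axis=(-2, -1))

def dilate(binary, k=3):
    # A pixel becomes foreground if any neighbour is foreground
    # ("add pixels to the border").
    pad = k // 2
    padded = np.pad(binary, pad, mode="constant")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return win.max(axis=(-2, -1))

def opening(binary, k=3):
    # "open" = erosion followed by dilation; removes small noise
    # while roughly preserving larger blobs.
    return dilate(erode(binary, k), k)

blob = np.zeros((7, 7), dtype=np.uint8)
blob[2:5, 2:5] = 255       # 3x3 foreground square
blob[0, 0] = 255           # single-pixel noise
cleaned = opening(blob)
```

The single-pixel speck is removed by the erosion step and is not restored by the dilation, while the 3 x 3 square survives the round trip.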

phenopype.core.segmentation.threshold(image, method='otsu', constant=1, blocksize=99, value=127, channel=None, mask=True, invert=False, **kwargs)#

Applies a threshold to create a binary image from a grayscale or a multichannel image (see phenopype.core.preprocessing.decompose_image for channel options).

Three types of thresholding algorithms are supported:
  • otsu: use Otsu algorithm to choose the optimal threshold value

  • adaptive: dynamic threshold values across image (uses arguments “blocksize” and “constant”)

  • binary: fixed threshold value (uses argument “value”)

Mask annotations can be supplied to include or exclude areas.

Parameters:
  • image (ndarray) – input image

  • method ({"otsu", "adaptive", "binary"} str, optional) – type of thresholding algorithm to be used

  • blocksize (int, optional) – Size of a pixel neighborhood that is used to calculate a threshold value for the pixel (has to be odd - even numbers will be ceiled; for “adaptive” method)

  • constant (int, optional) – value to subtract from binarization output (for “adaptive” method)

  • value ({between 0 and 255} int, optional) – thresholding value (for “binary” method)

  • channel ({"gray", "red", "green", "blue"} str, optional) – which channel of the image to use for thresholding

Returns:

image – binary image

Return type:

ndarray
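As a minimal illustration of the fixed-value “binary” method, here is a NumPy sketch (binary_threshold is a hypothetical helper for illustration, not phenopype’s implementation):

```python
import numpy as np

def binary_threshold(gray, value=127, invert=False):
    # Fixed-value ("binary") method: pixels above `value` become
    # foreground (255), everything else background (0).
    out = np.where(gray > value, 255, 0).astype(np.uint8)
    return 255 - out if invert else out

gray = np.array([[10, 200], [120, 130]], dtype=np.uint8)
binary = binary_threshold(gray, value=127)
```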

phenopype.core.segmentation.watershed(image, annotations, iterations=1, kernel_size=3, distance_cutoff=0.8, distance_mask=0, distance_type='l1', **kwargs)#

Performs non-parametric marker-based segmentation - useful if many detected contours are touching or overlapping with each other. The input should be a binary image that serves as the true background. Edges are iteratively eroded; the difference serves as markers.

Parameters:
  • image (ndarray) – input image

  • image_thresh (array) – binary image (e.g. from threshold)

  • kernel_size (int, optional) – size of the diff-kernel (has to be odd - even numbers will be ceiled)

  • iterations (int, optional) – number of times to apply diff-operation

  • distance_cutoff ({between 0 and 1} float, optional) – watershed distance transform cutoff (larger values discard more pixels)

  • distance_mask ({0, 3, 5} int, optional) – size of distance mask - not all sizes work with all distance types (will be coerced to 0)

  • distance_type ({"user", "l1", "l2", "C", "l12", "fair", "welsch", "huber"} str, optional) – distance transformation type

Returns:

image – binary image

Return type:

ndarray

Measurement#

phenopype.core.measurement.set_landmark(image, point_colour='default', point_size='auto', label=True, label_colour='default', label_size='auto', label_width='auto', **kwargs)#

Place landmarks. Note that modifying the appearance of the points will only be effective for the placement, not for subsequent drawing, visualization, and export.

Parameters:
  • image (ndarray) – input image

  • point_colour (str, optional) – landmark point colour (for options see pp.colour)

  • point_size (int, optional) – landmark point size in pixels

  • label (bool, optional) – add text label

  • label_colour (str, optional) – landmark label colour (for options see pp.colour)

  • label_size (int, optional) – landmark label font size (scaled to image)

  • label_width (int, optional) – landmark label font width (scaled to image)

Returns:

annotations – phenopype annotation containing landmarks

Return type:

dict

phenopype.core.measurement.set_polyline(image, line_width='auto', line_colour='default', **kwargs)#

Set points, draw a connected line between them, and measure its length. Note that modifying the appearance of the lines will only be effective for the placement, not for subsequent drawing, visualization, and export.

Parameters:
  • image (ndarray) – input image

  • line_width (int, optional) – width of polyline

  • line_colour (str, optional) – poly line colour (for options see pp.colour)

Returns:

annotations – phenopype annotation containing polylines

Return type:

dict

phenopype.core.measurement.detect_skeleton(annotations, thinning='zhangsuen', **kwargs)#

Applies a binary blob thinning operation to retrieve the topological skeleton (https://en.wikipedia.org/wiki/Topological_skeleton) of the input image, using the Zhang-Suen or Guo-Hall algorithm.

Parameters:
  • image (ndarray) – input image

  • annotation (dict) – phenopype annotation containing contours

  • thinning ({"zhangsuen", "guohall"} str, optional) – type of thinning algorithm to apply

Returns:

annotations – phenopype annotation containing skeleton coords

Return type:

dict

phenopype.core.measurement.compute_shape_features(annotations, features=['basic'], min_diameter=5, **kwargs)#

Collects a set of 41 shape descriptors from every contour. There are three sets of descriptors: basic shape descriptors, moments, and hu moments. Two additional features, contour area and diameter, are already provided by the detect_contour function. https://docs.opencv.org/3.4.9/d3/dc0/group__imgproc__shape.html

Of the basic shape descriptors, all 12 are translational invariants, 8 are rotation invariant (rect_height and rect_width are not) and 4 are also scale invariant (circularity, compactness, roundness, solidity). https://en.wikipedia.org/wiki/Shape_factor_(image_analysis_and_microscopy)

The moments set encompasses 10 raw spatial moments (some are translation and rotation invariants, but not all), 7 central moments (all translational invariant) and 7 central normalized moments (all translational and scale invariant). https://en.wikipedia.org/wiki/Image_moment

The 7 hu moments are derived of the central moments, and are all translation, scale and rotation invariant. http://www.sci.utah.edu/~gerig/CS7960-S2010/handouts/Hu.pdf

Basic shape descriptors:
  • circularity = 4 * np.pi * contour_area / contour_perimeter_length^2

  • compactness = √(4 * contour_area / pi) / contour_diameter

  • min_rect_max = minimum bounding rectangle major axis

  • min_rect_min = minimum bounding rectangle minor axis

  • perimeter_length = total length of contour perimeter

  • rect_height = height of the bounding rectangle (“caliper dim 1”)

  • rect_width = width of the bounding rectangle (“caliper dim 2”)

  • roundness = (4 * contour_area) / (pi * contour_perimeter_length^2)

  • solidity = contour_area / convex_hull_area

  • tri_area = area of minimum bounding triangle

Moments:
  • raw moments = m00, m10, m01, m20, m11, m02, m30, m21, m12, m03

  • central moments = mu20, mu11, mu02, mu30, mu21, mu12, mu03,

  • normalized central moments = nu20, nu11, nu02, nu30, nu21, nu12, nu03

Hu moments:
  • hu1 - hu7
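Plugging an ideal circle into the formulas above is a quick sanity check: circularity and compactness should both equal 1 for a perfect circle, and solidity is 1 because a circle is its own convex hull.

```python
import math

# For an ideal circle of radius r: area = pi*r^2, perimeter = 2*pi*r,
# and the enclosing diameter is 2*r.
r = 10.0
area = math.pi * r ** 2
perimeter = 2 * math.pi * r
diameter = 2 * r

circularity = 4 * math.pi * area / perimeter ** 2       # -> 1.0
compactness = math.sqrt(4 * area / math.pi) / diameter  # -> 1.0
solidity = area / area  # convex hull of a circle is the circle itself
```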

Parameters:
  • image (ndarray) – input image

  • features (["basic", "moments", "hu_moments"] list, optional) – type of shape features to extract

Returns:

annotations – phenopype annotation containing shape features

Return type:

dict

phenopype.core.measurement.compute_texture_features(image, annotations, features=['firstorder'], channel_names=['blue', 'green', 'red'], min_diameter=5, **kwargs)#

Collects 120 texture features using the pyradiomics feature extractor ( https://pyradiomics.readthedocs.io/en/latest/features.html ):

  • firstorder: First Order Statistics (19 features)

  • shape2d: Shape-based (2D) (16 features)

  • glcm: Gray Level Cooccurence Matrix (24 features)

  • gldm: Gray Level Dependence Matrix (14 features)

  • glrlm: Gray Level Run Length Matrix (16 features)

  • glszm: Gray Level Size Zone Matrix (16 features)

  • ngtdm: Neighbouring Gray Tone Difference Matrix (5 features)

Features are collected from every contour that is supplied along with the raw image, which, depending on the number of contours, may result in long computing time and very large dataframes.

The specified channels correspond to the channels that can be selected in phenopype.core.preprocessing.decompose_image.

Parameters:
  • image (ndarray) – input image

  • annotation (dict) – phenopype annotation containing contours

  • features (["firstorder", "shape", "glcm", "gldm", "glrlm", "glszm", "ngtdm"] list, optional) – type of texture features to extract

  • channel_names (list, optional) – image channels to extract texture features from. If none are given, features are extracted from all channels in the image

  • min_diameter (int, optional) – minimum diameter of the contour (shouldn’t be too small for sensible feature extraction)

Returns:

annotations – phenopype annotation containing texture features

Return type:

dict

Visualization#

phenopype.core.visualization.draw_comment(image, annotations, label_colour='default', label_size='auto', label_width='auto', background=True, background_colour='white', background_pad=10, background_border='black', font='simplex', **kwargs)#
Parameters:
  • image (ndarray) – image used as canvas

  • annotation (dict) – phenopype annotation containing a comment

  • line_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour line colour - default colour as specified in settings

  • line_width ({"auto", ... int > 0} int, optional) – contour line width - automatically scaled to image by default

  • label (bool, optional) – draw reference label

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour label colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – contour label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – contour label font thickness - automatically scaled to image by default

Returns:

image – canvas with comment

Return type:

ndarray

phenopype.core.visualization.draw_contour(image, annotations, fill=0.3, line_colour='default', line_width='auto', label=False, label_colour='default', label_size='auto', label_width='auto', offset_coords=None, bounding_box=False, bounding_box_ext=20, bounding_box_colour='default', bounding_box_line_width='auto', **kwargs)#

Draw contours and their labels onto a canvas. Can be filled or empty, offset coordinates can be supplied.

Parameters:
  • image (ndarray) – image used as canvas

  • annotation (dict) – phenopype annotation containing contours

  • offset_coords (tuple, optional) – offset coordinates, will be added to all contours

  • label (bool, optional) – draw contour label

  • fill (float, optional) – background transparency for contour fill (0=no fill).

  • level (int, optional) – the default is 3.

  • line_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour line colour - default colour as specified in settings

  • line_width ({"auto", ... int > 0} int, optional) – contour line width - automatically scaled to image by default

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour label colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – contour label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – contour label font thickness - automatically scaled to image by default

  • bounding_box (bool, optional) – draw bounding box around the contour

  • bounding_box_ext (int, optional) – value in pixels by which the bounding box should be extended

  • bounding_box_colour ({"green", "red", "blue", "black", "white"} str, optional) – bounding box line colour

  • bounding_box_line_width (int, optional) – bounding box line width

Returns:

image – canvas with contours

Return type:

ndarray

phenopype.core.visualization.draw_landmark(image, annotations, label=True, label_colour='default', label_size='auto', label_width='auto', offset=0, point_colour='default', point_size='auto', **kwargs)#

Draw landmarks into an image.

Parameters:
  • image (ndarray) – image used as canvas

  • annotation (dict) – phenopype annotation containing landmarks

  • label (bool, optional) – draw landmark label

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour label colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – contour label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – contour label font thickness - automatically scaled to image by default

  • offset (int, optional) – add offset (in pixels) to text location (to bottom-left corner of the text string)

  • point_colour ({"green", "red", "blue", "black", "white"} str, optional) – landmark point colour

  • point_size (int, optional) – landmark point size in pixels

Returns:

image – canvas with landmarks

Return type:

ndarray

phenopype.core.visualization.draw_mask(image, annotations, line_colour='default', line_width='auto', label=False, label_colour='default', label_size='auto', label_width='auto', **kwargs)#

Draw masks into an image. This function is also used to draw the perimeter of a created or detected reference card.

Parameters:
  • image (ndarray) – image used as canvas

  • annotation (dict) – phenopype annotation containing masks

  • label (bool, optional) – draw mask label

  • line_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour line colour - default colour as specified in settings

  • line_width ({"auto", ... int > 0} int, optional) – contour line width - automatically scaled to image by default

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour label colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – contour label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – contour label font thickness - automatically scaled to image by default

Returns:

image – canvas with masks

Return type:

ndarray

phenopype.core.visualization.draw_polyline(image, annotations, line_colour='default', line_width='auto', show_nodes=False, node_colour='default', node_size='auto', **kwargs)#

Draw polylines into an image.

Parameters:
  • image (ndarray) – image used as canvas

  • annotation (dict) – phenopype annotation containing lines

  • line_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour line colour - default colour as specified in settings

  • line_width ({"auto", ... int > 0} int, optional) – line width - automatically scaled to image by default. The default is “auto”.

  • show_nodes (bool, optional) – show nodes of polyline. The default is False.

  • node_colour (str, optional) – colour of node points. The default is “default”.

  • node_size (int, optional) – size of node points. The default is “auto”.

Returns:

image – canvas with lines

Return type:

ndarray

phenopype.core.visualization.draw_QRcode(image, annotations, line_colour='default', line_width='auto', label=False, label_colour='default', label_size='auto', label_width='auto', **kwargs)#
Parameters:
  • image (ndarray) – image used as canvas

  • annotation (dict) – phenopype annotation containing QR-code (comment)

  • line_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour line colour - default colour as specified in settings

  • line_width ({"auto", ... int > 0} int, optional) – contour line width - automatically scaled to image by default

  • label (bool, optional) – draw reference label

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour label colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – contour label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – contour label font thickness - automatically scaled to image by default

Returns:

canvas – canvas with QR-code

Return type:

ndarray

phenopype.core.visualization.draw_reference(image, annotations, line_colour='default', line_width='auto', label=True, label_colour='default', label_size='auto', label_width='auto', **kwargs)#
Parameters:
  • image (ndarray) – image used as canvas

  • annotation (dict) – phenopype annotation containing reference data

  • line_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour line colour - default colour as specified in settings

  • line_width ({"auto", ... int > 0} int, optional) – contour line width - automatically scaled to image by default

  • label (bool, optional) – draw reference label

  • label_colour ({"default", ... see phenopype.print_colours()} str, optional) – contour label colour - default colour as specified in settings

  • label_size ({"auto", ... int > 0} int, optional) – contour label font size - automatically scaled to image by default

  • label_width ({"auto", ... int > 0} int, optional) – contour label font thickness - automatically scaled to image by default

Returns:

canvas – canvas with reference drawn onto it

Return type:

ndarray

phenopype.core.visualization.select_canvas(image, canvas='raw', multi_channel=True, invert=False, **kwargs)#

Isolate a colour channel from an image or select canvas for the pype method.

Parameters:
  • image (ndarray) – image used as canvas

  • canvas ({"mod", "bin", "gray", "raw", "red", "green", "blue"} str, optional) – The type of canvas to be used for visual feedback. Some types require a function to be run first, e.g. “bin” needs a segmentation algorithm to be run first. Black/white images don’t have colour channels and are coerced to a 3D array by default.

  • multi_channel (bool, optional) – coerce returned array to multichannel (3-channel)

Returns:

canvas – canvas for drawing

Return type:

ndarray
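
The channel-isolation step behind select_canvas can be sketched with plain NumPy. This is an illustration of the concept, not phenopype’s internal code; it assumes BGR channel order (as used by OpenCV-based pipelines) and the helper name is hypothetical:

```python
import numpy as np

def isolate_channel(image, channel="red"):
    """Return one colour channel of a BGR image, coerced back to 3 channels."""
    idx = {"blue": 0, "green": 1, "red": 2}[channel]
    single = image[:, :, idx]               # 2D slice of the chosen channel
    return np.stack([single] * 3, axis=-1)  # coerce to a 3-channel "canvas"

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :, 2] = 200                          # fill the red channel (BGR order)
canvas = isolate_channel(img, "red")        # shape (4, 4, 3), all values 200
```

Coercing the single channel back to three identical channels is what allows coloured overlays to be drawn on a canvas that carries only single-channel information.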

Export#

phenopype.core.export.convert_annotation(annotations, annotation_type, annotation_id, annotation_type_new, annotation_id_new, overwrite=False, **kwargs)#

Convert coordinates from one annotation type to another. Currently, only conversion from contour to mask format is supported.

Parameters:
  • annotations (dict) – A phenopype annotation dictionary.

  • annotation_type (str | list of str) – If dict contains multiple annotation types, select one or more to load. None will load all types.

  • annotation_id (str | list of str) – If file contains multiple annotation IDs, select one or more to load. None will load all IDs within a type.

  • annotation_type_new (str) – target annotation type

  • annotation_id_new (str | list of str, optional) – target annotation id

  • overwrite (bool, optional) – if target exists, overwrite? The default is False.

Return type:

None.
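
The contour-to-mask conversion amounts to copying coordinates between entries of the annotation dictionary. A minimal sketch with a simplified schema (real phenopype annotation entries carry more fields, and the helper name is hypothetical):

```python
def contour_to_mask(annotations, contour_id, mask_id, label="mask1"):
    """Copy contour coordinates into a mask-style entry (simplified schema)."""
    coords = annotations["contour"][contour_id]["coords"]
    mask_entry = {"label": label, "include": True, "coords": coords}
    annotations.setdefault("mask", {})[mask_id] = mask_entry
    return annotations

ann = {"contour": {"a": {"coords": [(0, 0), (10, 0), (10, 10)]}}}
ann = contour_to_mask(ann, "a", "a")  # adds ann["mask"]["a"]
```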

phenopype.core.export.export_csv(annotations, dir_path, annotation_type=None, image_name=None, overwrite=False, **kwargs)#

Export annotations from JSON to CSV format.

Parameters:
  • annotations (dict) – A phenopype annotation dictionary.

  • dir_path (str) – To which folder should the csv file be exported. PYPE: Automatically set to the current image directory

  • annotation_type (str / list, optional) – Which annotation types should be exported - can be string or list of strings.

  • image_name (str) – Image name to be added as a column PYPE: Automatically adds the image name.

  • overwrite (bool, optional) – Should an existing csv file be overwritten. The default is False.

Return type:

None.
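
Conceptually, the export flattens the nested {type: {id: entry}} dictionary into rows, with the image name added as a column. A simplified sketch using the standard library (not phenopype’s actual implementation):

```python
import csv
import io

def annotations_to_csv(annotations, image_name):
    """Flatten a {type: {id: entry}} dict into CSV rows (simplified sketch)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["image_name", "annotation_type", "annotation_id", "value"])
    for ann_type, entries in annotations.items():
        for ann_id, entry in entries.items():
            writer.writerow([image_name, ann_type, ann_id, entry])
    return buf.getvalue()

csv_text = annotations_to_csv({"comment": {"a": "QR123"}}, "img01.jpg")
```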

phenopype.core.export.load_annotation(filepath, annotation_type=None, annotation_id=None, tag=None, **kwargs)#

Load a phenopype annotations file.

Parameters:
  • filepath (str) – Path to JSON file containing annotations

  • annotation_type (str | list of str, optional) – If file contains multiple annotation types, select one or more to load. None will load all types. The default is None.

  • annotation_id (str | list of str, optional) – If file contains multiple annotation IDs, select one or more to load. None will load all IDs within a type. The default is None.

Returns:

Loaded annotations.

Return type:

dict
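
Since the annotations file is plain JSON, the load-and-filter behaviour can be sketched as follows (simplified; the helper name is hypothetical, and a file object would normally replace the in-memory buffer):

```python
import io
import json

def load_filtered(fp, annotation_type=None):
    """Load annotations from JSON, keeping only the requested types."""
    data = json.load(fp)
    if annotation_type is None:
        return data                          # None loads all types
    if isinstance(annotation_type, str):
        annotation_type = [annotation_type]  # accept str or list of str
    return {t: v for t, v in data.items() if t in annotation_type}

raw = '{"mask": {"a": {}}, "contour": {"a": {}}}'
anns = load_filtered(io.StringIO(raw), annotation_type="mask")
```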

phenopype.core.export.save_annotation(annotations, dir_path, annotation_type=None, annotation_id=None, overwrite=False, **kwargs)#

Save a phenopype annotations file.

Parameters:
  • annotations (dict) –

    Annotation dictionary formatted by phenopype specifications:

    {
        annotation_type = {
            annotation_id = annotation,
            annotation_id = annotation,
            ...
            },
        annotation_type = {
            annotation_id = annotation,
            annotation_id = annotation,
            ...
            },
        ...
    }
    

  • annotation_id (str, optional) – String (“a”-“z”) specifying the annotation ID to be saved. None will save all IDs. The default is None.

  • dir_path (str, optional) – Path to folder where annotation should be saved. None will save the annotation in the current Python working directory. The default is None.

  • file_name (str, optional) – Filename for JSON file containing annotation. The default is “annotations.json”.

  • overwrite (bool, optional) –

    Overwrite options should file or annotation entry in file exist:

    • False = Neither file nor entry will be overwritten

    • True or “entry” = A single entry will be overwritten

    • ”file” = The whole file will be overwritten.

    The default is False.

Return type:

None.
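
The three overwrite modes can be made concrete with a small in-memory sketch (hypothetical helper, simplified schema; the real function also handles file I/O):

```python
def merge_annotation(existing, new, ann_type, ann_id, overwrite=False):
    """Apply save_annotation's overwrite semantics to an in-memory dict."""
    if overwrite == "file":
        return {ann_type: {ann_id: new}}   # "file": replace everything
    entries = existing.setdefault(ann_type, {})
    if ann_id in entries and overwrite not in (True, "entry"):
        return existing                    # False: keep the existing entry
    entries[ann_id] = new                  # True / "entry": overwrite one entry
    return existing

kept = merge_annotation({"mask": {"a": "old"}}, "new", "mask", "a", overwrite=False)
replaced = merge_annotation({"mask": {"a": "old"}}, "new", "mask", "a", overwrite=True)
```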

phenopype.core.export.save_ROI(image, annotations, dir_path, file_name, channel='raw', counter=True, prefix=None, suffix=None, rotate=False, rotate_padding=5, angle_apply=None, align='v', extension='png', background='original', canvas_max_dim=False, max_dim=False, padding=True, **kwargs)#

Save a region of interest (ROI), indicated by contour or mask coordinates, as a crop of the original image, optionally with a modified (e.g. white or transparent) background.

Parameters:
  • image (ndarray) – An image containing regions of interest (ROI).

  • annotations (dict) – A phenopype annotation dict containing one or more contour coordinate entries.

  • dir_path (str, optional) – Path to folder where annotation should be saved. None will save the annotation in the current Python working directory. The default is None.

  • file_name (str) – Name for ROI series (should reflect image content, not “ROI” or the like which is specified with prefix or suffix arguments). The contour index will be added as a numeric string at the end of the filename.

  • channel (str, optional) – Which channel to save. The default is “raw”.

  • counter (bool, optional) – Whether to append the contour index to the filename. The default is True.

  • prefix (str, optional) – Prefix to prepend to individual ROI filenames. The default is None.

  • suffix (str, optional) – Suffix to append to individual ROI filenames. The default is None.

  • extension (str, optional) – File extension for the saved ROI images. The default is “png”.

  • background (str, optional) – Sets the background. The default is “original”, keeping the background contained within the bounding rectangle. “transparent” will produce a png file with a transparent background. “white”, “black” or any other colour will produce a uniform background of that colour.

Return type:

None.
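
The core cropping step, taking the padded bounding rectangle of a set of contour coordinates, can be sketched with NumPy (illustrative only; the helper name and padding handling are assumptions, not phenopype’s internal code):

```python
import numpy as np

def crop_roi(image, coords, padding=5):
    """Crop the padded bounding rectangle of a set of (x, y) coordinates."""
    xs = [x for x, y in coords]
    ys = [y for x, y in coords]
    x0 = max(min(xs) - padding, 0)             # clip padding at image borders
    y0 = max(min(ys) - padding, 0)
    x1 = min(max(xs) + padding, image.shape[1])
    y1 = min(max(ys) + padding, image.shape[0])
    return image[y0:y1, x0:x1]                 # rows are y, columns are x

img = np.zeros((100, 100, 3), dtype=np.uint8)
roi = crop_roi(img, [(20, 30), (40, 60)], padding=5)  # shape (40, 30, 3)
```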

phenopype.core.export.save_canvas(image, dir_path, file_name='canvas', **kwargs)#

Save the canvas to a file.

Parameters:
  • image (ndarray) – A canvas to be saved.

  • file_name (str, optional) – Filename of the saved canvas. The default is “canvas”.

  • dir_path (str) – Path to directory to save canvas.

Return type:

None.