{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Tutorial 6: Size and colour reference\n", "\n", "Unless images are taken in a highly standardized environment, e.g. via a scanner or a microscope, variation will be introduced in terms of exposure or distance between camera and photographed object, zooming, etc. To compensate this variation among images within and across datasets, Phenopype contains some preprocessing tools that can automatically correct images. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "
\n", "

See also:

\n", "

\n", "\n", "* [phenopype docs: Project-API](https://www.phenopype.org/docs/api/project/)\n", "* [phenopype docs: preprocessing-API](https://www.phenopype.org/docs/api/core/#phenopype.core.preprocessing.detect_reference)\n", "\n", "

\n", "
\n", "
\n", " \n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load project \n", "\n", "We will use the project we created before in [Tutorial 5](tutorial_4.ipynb):" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--------------------------------------------\n", "Found existing project root directory - loading from:\n", "C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\n", "\n", "Project \"tutorial_project\" successfully loaded with 3 images\n", "--------------------------------------------\n" ] } ], "source": [ "import phenopype as pp\n", "import os\n", "\n", "os.chdir(r\"C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\")\n", "\n", "myproj = pp.Project(r\"tutorial_project\") " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First we will need to select and measure the pixel-ratio in a reference image, to which we want to match all images in the project. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Adding a reference to the project\n", "\n", "With the class method `add_reference` we can set a project specific scale by measuring the pixel-ratio in a reference image. The steps for this are:\n", "\n", "1. Click on two points with a known distance, e.g. on a piece of mm-paper that you put in the image, and hit `Enter`.\n", "2. Enter the length that will be used to calculate the pixel-ratio.\n", "3. Mark the boundaries of the reference card to create a template for automatic scale detection that is saved on the project's root directory (in the template folder).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", "\n", "
\n", "
\n", "\n", "
\n", "
\n", " \n", "**Fig 1** The reference image can be any image, but choose it carefully: if you plan on doing brightness and colour corrections, it should be in the middle of the distribution of all exposures and colours so corrections will not over-expose or over-saturate the images. \n", " \n", "
\n", "
\n", "\n", "\n", "We will use the image `stickleback_side.jpg` from the `image` folder in `tutorials`:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Reference image not saved, file already exists - use \"overwrite==True\" or chose different name.\n", "setting active global project reference to \"scale1\" for 0__stickle1 (active=True)\n", "setting active global project reference to \"scale1\" for 0__stickle2 (active=True)\n", "setting active global project reference to \"scale1\" for 0__stickle3 (active=True)\n" ] } ], "source": [ "myproj.add_reference(\n", " reference_image_path=r\"tutorials/data/stickleback_side.jpg\", \n", " reference_tag=\"scale1\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are now all set to automatically detect the reference card in our images." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Detecting a reference\n", "\n", "To detect the reference template in our images, we need the function `detect_reference` from the `preprocessing` module. To do so on the existing project, we will use the `edit_config` method that was demonstrated in [Tutorial 5](tutorial_5.ipynb#Modify-configurations-in-all-subdirectories). Again, we want to modify the config in two places: first in the preprocessing chunk, to perform the actual reference detection, and then in the visualization chunk, to show the outline of the detected reference:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "# phenopype quickstart template\n", "# -----------------------------\n", "# This template is intended to go with the phenopype quickstart materials \n", "# - for details see https://www.phenopype.org/docs/quickstart/ and refer to\n", "# Figure 2 in Luerig 2021 (https://doi.org/10.1111/2041-210X.13771) or \n", "# phenopype tutorial 3 (https://www.phenopype.org/docs/tutorials/tutorial_3).\n", "# For a better job of measuring individual plate area see the phenopype \n", "# gallery (https://www.phenopype.org/gallery/example_5/).\n", "\n", "config_info:\n", " config_name: pype_config_plates-v1.yaml\n", " date_created: '2022-05-06 15:04:33'\n", " date_last_modified:\n", " template_name: quickstart-template.yaml\n", " template_path: C:\\Users\\mluerig\\Downloads\\phenopype-quickstart-main\\quickstart-template.yaml\n", "processing_steps:\n", " - preprocessing:\n", " - detect_reference\n", " - create_mask:\n", " ANNOTATION: {type: mask, id: a, edit: false}\n", " tool: polygon\n", " label: plates\n", " - blur:\n", " kernel_size: 9\n", " - segmentation:\n", " - threshold:\n", " method: adaptive\n", " blocksize: 99\n", " constant: 5\n", " - detect_contour:\n", " ANNOTATION: {type: contour, id: a, edit: overwrite}\n", " retrieval: ext\n", " min_area: 150\n", " - measurement:\n", " - compute_shape_features:\n", " ANNOTATION: {type: shape_features, id: a, edit: overwrite}\n", " - visualization:\n", " - select_canvas:\n", " canvas: raw\n", " - draw_contour\n", " - draw_mask:\n", " label: true\n", " - export:\n", " - save_canvas\n", " - save_annotation:\n", " overwrite: true\n", "\n", "This is what the new config may look like (can differ beteeen files) - proceed?y\n", "New config saved for 0__stickle1\n", "New config saved for 0__stickle2\n", "New config saved for 0__stickle3\n" ] } ], "source": [ "## modify the \"preprocessing\" section\n", "\n", "target1 = \"\"\" - preprocessing:\"\"\"\n", 
"replacement1 = \"\"\" - preprocessing:\n", " - detect_reference\"\"\"\n", " \n", "myproj.edit_config(\n", " tag=\"plates-v1\",\n", " target=target1,\n", " replacement=replacement1,\n", ")" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "# phenopype quickstart template\n", "# -----------------------------\n", "# This template is intended to go with the phenopype quickstart materials \n", "# - for details see https://www.phenopype.org/docs/quickstart/ and refer to\n", "# Figure 2 in Luerig 2021 (https://doi.org/10.1111/2041-210X.13771) or \n", "# phenopype tutorial 3 (https://www.phenopype.org/docs/tutorials/tutorial_3).\n", "# For a better job of measuring individual plate area see the phenopype \n", "# gallery (https://www.phenopype.org/gallery/example_5/).\n", "\n", "config_info:\n", " config_name: pype_config_plates-v1.yaml\n", " date_created: '2022-05-06 15:04:33'\n", " date_last_modified:\n", " template_name: quickstart-template.yaml\n", " template_path: C:\\Users\\mluerig\\Downloads\\phenopype-quickstart-main\\quickstart-template.yaml\n", "processing_steps:\n", " - preprocessing:\n", " - detect_reference\n", " - create_mask:\n", " ANNOTATION: {type: mask, id: a, edit: false}\n", " tool: polygon\n", " label: plates\n", " - blur:\n", " kernel_size: 9\n", " - segmentation:\n", " - threshold:\n", " method: adaptive\n", " blocksize: 99\n", " constant: 5\n", " - detect_contour:\n", " ANNOTATION: {type: contour, id: a, edit: overwrite}\n", " retrieval: ext\n", " min_area: 150\n", " - measurement:\n", " - compute_shape_features:\n", " ANNOTATION: {type: shape_features, id: a, edit: overwrite}\n", " - visualization:\n", " - select_canvas:\n", " canvas: raw\n", " - draw_contour\n", " - draw_mask:\n", " label: true\n", " - draw_reference:\n", " line_colour: aqua\n", " line_width: 10\n", " scale: true\n", " - export:\n", " - save_canvas\n", " - save_annotation:\n", " overwrite: true\n", "\n", "This is what the new config may look like (can differ beteeen files) - proceed?y\n", "New config saved for 0__stickle1\n", "New config saved for 0__stickle2\n", "New config saved for 0__stickle3\n" ] } ], "source": [ "## \"visualization\" modification to add `draw_reference`:\n", "\n", "target2 = \"\"\"- draw_mask:\n", " label: true\"\"\"\n", "replacement2 = \"\"\"- draw_mask:\n", " label: true\n", " - draw_reference:\n", " line_colour: aqua\n", " line_width: 10\n", " scale: true\"\"\"\n", " \n", "myproj.edit_config(\n", " tag=\"plates-v1\",\n", " target=target2,\n", " replacement=replacement2,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we run our loop with the new `pype` configuration: " ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "- no annotation_type selected - returning all annotations\n", "\n", "AUTOLOAD\n", "- annotations loaded:\n", "{\n", "\"mask\": [\"a\"],\n", "\"contour\": [\"a\"],\n", "\"shape_features\": [\"a\"],\n", "\"reference\": [\"a\"]\n", "}\n", "- reference template image loaded from root directory\n", "Stage: add annotation control args\n", "Updating pype config: applying staged changes\n", "\n", "\n", "------------+++ new pype iteration 2022-05-06 15:08:46 +++--------------\n", "\n", "\n", "\n", "\n", "PREPROCESSING\n", "detect_reference\n", "- loaded existing annotation of type \"reference\" with ID \"a\": skipping (edit=False)\n", "create_mask\n", "- loaded 
existing annotation of type \"mask\" with ID \"a\": skipping (edit=False)\n", "blur\n", "\n", "\n", "SEGMENTATION\n", "threshold\n", "- multichannel image supplied, converting to grayscale\n", "- decompose image: using gray channel\n", "- including pixels from 1 drawn masks \n", "- excluding pixels from reference\n", "detect_contour\n", "- loaded existing annotation of type \"contour\" with ID \"a\": overwriting (edit=overwrite)\n", "- found 16 contours that match criteria\n", "\n", "\n", "MEASUREMENT\n", "compute_shape_features\n", "- loaded existing annotation of type \"shape_features\" with ID \"a\": overwriting (edit=overwrite)\n", "\n", "\n", "VISUALIZATION\n", "select_canvas\n", "- raw image\n", "draw_contour\n", "draw_mask\n", "draw_reference\n", "\n", "\n", "EXPORT\n", "save_canvas\n", "- image saved under C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\\data\\0__stickle1\\canvas_plates-v1.jpg (overwritten).\n", "save_annotation\n", "- loading existing annotation file\n", "- no annotation_type selected - exporting all annotations\n", "- updating annotations of type \"mask\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"contour\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"shape_features\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"reference\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "\n", "\n", "------------+++ finished pype iteration +++--------------\n", "-------(End with Ctrl+Enter or re-run with Enter)--------\n", "\n", "\n", "AUTOSHOW\n", "\n", "\n", "------------+++ new pype iteration 2022-05-06 15:08:52 +++--------------\n", "\n", "\n", "\n", "\n", "PREPROCESSING\n", "detect_reference\n", "- loaded existing annotation of type \"reference\" with ID \"a\": skipping (edit=False)\n", "create_mask\n", "- loaded existing annotation of type \"mask\" with ID \"a\": skipping (edit=False)\n", "blur\n", "\n", "\n", "SEGMENTATION\n", "threshold\n", "- multichannel image supplied, converting to grayscale\n", "- decompose image: using gray channel\n", "- including pixels from 1 drawn masks \n", "- excluding pixels from reference\n", "detect_contour\n", "- loaded existing annotation of type \"contour\" with ID \"a\": overwriting (edit=overwrite)\n", "- found 16 contours that match criteria\n", "\n", "\n", "MEASUREMENT\n", "compute_shape_features\n", "- loaded existing annotation of type \"shape_features\" with ID \"a\": overwriting (edit=overwrite)\n", "\n", "\n", "VISUALIZATION\n", "select_canvas\n", "- raw image\n", "draw_contour\n", "draw_mask\n", "draw_reference\n", "\n", "\n", "EXPORT\n", "save_canvas\n", "- image saved under C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\\data\\0__stickle1\\canvas_plates-v1.jpg (overwritten).\n", "save_annotation\n", "- loading existing annotation file\n", "- no annotation_type selected - exporting all annotations\n", "- updating annotations of type \"mask\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"contour\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"shape_features\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"reference\" with id \"a\" in \"annotations_plates-v1.json\" 
(overwrite=\"entry\")\n", "\n", "\n", "------------+++ finished pype iteration +++--------------\n", "-------(End with Ctrl+Enter or re-run with Enter)--------\n", "\n", "\n", "AUTOSHOW\n", "\n", "\n", "TERMINATE\n", "\n", "AUTOSAVE\n", "- nothing to autosave\n", "- no annotation_type selected - returning all annotations\n", "\n", "AUTOLOAD\n", "- annotations loaded:\n", "{\n", "\"mask\": [\"a\"],\n", "\"contour\": [\"a\"],\n", "\"shape_features\": [\"a\"]\n", "}\n", "- reference template image loaded from root directory\n", "Stage: add annotation control args\n", "Updating pype config: applying staged changes\n", "\n", "\n", "------------+++ new pype iteration 2022-05-06 15:08:54 +++--------------\n", "\n", "\n", "\n", "\n", "PREPROCESSING\n", "detect_reference\n", "---------------------------------------------------\n", "Reference card found with 226 keypoint matches:\n", "template image has 36.44 pixel per mm.\n", "current image has 35.21 pixel per mm.\n", "= 96.626 %% of template image.\n", "---------------------------------------------------\n", "create_mask\n", "- loaded existing annotation of type \"mask\" with ID \"a\": skipping (edit=False)\n", "blur\n", "\n", "\n", "SEGMENTATION\n", "threshold\n", "- multichannel image supplied, converting to grayscale\n", "- decompose image: using gray channel\n", "- including pixels from 1 drawn masks \n", "- excluding pixels from reference\n", "detect_contour\n", "- loaded existing annotation of type \"contour\" with ID \"a\": overwriting (edit=overwrite)\n", "- found 9 contours that match criteria\n", "\n", "\n", "MEASUREMENT\n", "compute_shape_features\n", "- loaded existing annotation of type \"shape_features\" with ID \"a\": overwriting (edit=overwrite)\n", "\n", "\n", "VISUALIZATION\n", "select_canvas\n", "- raw image\n", "draw_contour\n", "draw_mask\n", "draw_reference\n", "\n", "\n", "EXPORT\n", "save_canvas\n", "- image saved under C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\\data\\0__stickle2\\canvas_plates-v1.jpg (overwritten).\n", "save_annotation\n", "- loading existing annotation file\n", "- no annotation_type selected - exporting all annotations\n", "- updating annotations of type \"mask\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"contour\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"shape_features\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- writing annotations of type \"reference\" with id \"a\" to \"annotations_plates-v1.json\"\n", "\n", "\n", "------------+++ finished pype iteration +++--------------\n", "-------(End with Ctrl+Enter or re-run with Enter)--------\n", "\n", "\n", "AUTOSHOW\n", "\n", "\n", "TERMINATE\n", "\n", "AUTOSAVE\n", "- nothing to autosave\n", "- no annotation_type selected - returning all annotations\n", "\n", "AUTOLOAD\n", "- annotations loaded:\n", "{\n", "\"mask\": [\"a\"],\n", "\"contour\": [\"a\"],\n", "\"shape_features\": [\"a\"]\n", "}\n", "- reference template image loaded from root directory\n", "Stage: add annotation control args\n", "Updating pype config: applying staged changes\n", "\n", "\n", "------------+++ new pype iteration 2022-05-06 15:08:56 +++--------------\n", "\n", "\n", "\n", "\n", "PREPROCESSING\n", "detect_reference\n", "---------------------------------------------------\n", "Reference card found with 254 keypoint matches:\n", "template image has 36.44 pixel per mm.\n", "current image 
has 35.264 pixel per mm.\n", "= 96.772 %% of template image.\n", "---------------------------------------------------\n", "create_mask\n", "- loaded existing annotation of type \"mask\" with ID \"a\": skipping (edit=False)\n", "blur\n", "\n", "\n", "SEGMENTATION\n", "threshold\n", "- multichannel image supplied, converting to grayscale\n", "- decompose image: using gray channel\n", "- including pixels from 1 drawn masks \n", "- excluding pixels from reference\n", "detect_contour\n", "- loaded existing annotation of type \"contour\" with ID \"a\": overwriting (edit=overwrite)\n", "- found 10 contours that match criteria\n", "\n", "\n", "MEASUREMENT\n", "compute_shape_features\n", "- loaded existing annotation of type \"shape_features\" with ID \"a\": overwriting (edit=overwrite)\n", "\n", "\n", "VISUALIZATION\n", "select_canvas\n", "- raw image\n", "draw_contour\n", "draw_mask\n", "draw_reference\n", "\n", "\n", "EXPORT\n", "save_canvas\n", "- image saved under C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\\data\\0__stickle3\\canvas_plates-v1.jpg (overwritten).\n", "save_annotation\n", "- loading existing annotation file\n", "- no annotation_type selected - exporting all annotations\n", "- updating annotations of type \"mask\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"contour\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- updating annotations of type \"shape_features\" with id \"a\" in \"annotations_plates-v1.json\" (overwrite=\"entry\")\n", "- writing annotations of type \"reference\" with id \"a\" to \"annotations_plates-v1.json\"\n", "\n", "\n", "------------+++ finished pype iteration +++--------------\n", "-------(End with Ctrl+Enter or re-run with Enter)--------\n", "\n", "\n", "AUTOSHOW\n", "\n", "\n", "TERMINATE\n", "\n", "AUTOSAVE\n", "- nothing to autosave\n" ] } ], "source": [ "for path in myproj.dir_paths:\n", " pp.Pype(path, tag=\"plates-v1\")" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Created C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\\results\\canvas-v1\n", "Search string: ['canvas_plates-v1']\n", "Collected canvas_plates-v1.jpg from 0__stickle1\n", "0__stickle1_canvas_plates-v1.jpg saved under C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\\results\\canvas-v1\\0__stickle1_canvas_plates-v1.jpg.\n", "Collected canvas_plates-v1.jpg from 0__stickle2\n", "0__stickle2_canvas_plates-v1.jpg saved under C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\\results\\canvas-v1\\0__stickle2_canvas_plates-v1.jpg.\n", "Collected canvas_plates-v1.jpg from 0__stickle3\n", "0__stickle3_canvas_plates-v1.jpg saved under C:\\Users\\mluerig\\Downloads\\phenopype-tutorials-main\\tutorial_project\\results\\canvas-v1\\0__stickle3_canvas_plates-v1.jpg.\n" ] } ], "source": [ "myproj.collect_results(tag=\"plates-v1\", \n", " files=\"canvas\", # \n", " folder=\"canvas-v1\",\n", " overwrite=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": 
"ipython3", "version": "3.7.13" } }, "nbformat": 4, "nbformat_minor": 2 }