Cichlid teeth#

For a more detailed description see: https://www.phenopype.org/gallery/projects/cichlid-teeth/

Single tooth case#

These images vary considerably in their background, so the contrast between the tooth and its surroundings changes from image to image. This makes conventional signal processing difficult, but the SAM algorithm is very good at detecting edges, so it should be able to handle this variation. We need to add the FastSAM repo to the Python path before importing phenopype, so that the repo is found and the plugin is loaded properly.

## make the local FastSAM repo importable so the plugin can be loaded
import sys
sys.path.append(r'D:\git-repos\mluerig\FastSAM')

import phenopype as pp
import os

## my root directory - modify as needed
os.chdir(r"D:\science\packages\phenopype\phenopype-gallery_exec")

## my laptop has a small screen, so I use a smaller phenopype window
pp._config.window_max_dim = 800

Make phenopype project#

Create a phenopype project (remember, Project is used both to create new projects and to load existing ones). Separate phenopype projects can be useful, e.g., to process different batches of images from different field seasons, perspectives, etc. It makes sense to create them side by side in a subfolder, which I call “phenopype”. Thus, my research projects often have the following directory structure (just my way of working - this is entirely up to you):

my-project
    data                       # processed data (e.g., tables)
    data_raw                   # raw data (images, videos, etc.)
    phenopype                  # phenopype projects
    phenopype_templates        # phenopype .yaml config templates
    scripts                    # python, R, etc.
    [...]                      # manuscripts, figures, literature, etc.
proj = pp.Project(r"phenopype\cichlid-teeth-1")
## add tooth-images from the data folder, but exclude the images containing multiple teeth 
proj.add_files(image_dir = r"data_raw\cichlid-teeth", include=["depth", "Depth"], include_all=False, mode="link")
proj.add_config(template_path=r"phenopype_templates\cichlid-teeth_template1.yaml", tag="v1")
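
Before running anything, it can be worth checking that the filename filter above picked up the right images. This is just an optional check (not part of the original workflow), using the same dir_paths attribute that the processing loop below relies on:

## optional: verify how many images were linked and peek at the first few
print(len(proj.dir_paths))
for dirpath in proj.dir_paths[:5]:
    print(dirpath)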

We need to add the FastSAM model file that is saved in the GitHub repo:

proj.add_model(r"D:\git-repos\mluerig\FastSAM\FastSAM-x.pt")
## run image processing
for path in proj.dir_paths:
    pp.Pype(path, tag="v1")
proj.collect_results(files=["canvas", "shape"], tag="v1", aggregate_csv=True, overwrite=True)
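
As an optional sanity check (not part of the original workflow), you can load the aggregated shape table with pandas. The location below is an assumption - collect_results typically writes into a results folder inside the project directory, so adjust the path to wherever the aggregate CSV ends up on your system:

import pandas as pd
from pathlib import Path

## assumed location of the aggregated results - adjust as needed
results_dir = Path(r"phenopype\cichlid-teeth-1") / "results"
shape_csvs = sorted(results_dir.rglob("*shape*.csv"))
if shape_csvs:
    shapes = pd.read_csv(shape_csvs[0])
    print(shapes.head())
else:
    print("no shape CSV found - check the results folder location")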

Add alternative configuration#

Phenopype can also run the fastSAM model without box prompts, i.e., use the entire image. This can be useful when your images contain only a single object. First we need to add another configuration (make sure to supply a different tag, otherwise your old config will be overwritten).

proj.add_config(template_path=r"phenopype_templates\cichlid-teeth_template2.yaml", tag="v2")

Now run the config with interactive mode off (feedback=False):

for path in proj.dir_paths: 
    p = pp.Pype(
        image_path=path, 
        tag="v2", 
        feedback=False,             # turn off feedback for automatic mode 
        )
proj.collect_results(files=["canvas", "shape"], tag="v2", aggregate_csv=True, overwrite=True)
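
To compare the two runs, you could load both aggregate tables and look at, e.g., how many rows each contains. As above, the file locations and naming are assumptions and may need adjusting for your setup:

import pandas as pd
from pathlib import Path

## assumed location and naming of the aggregate tables - adjust as needed
results_dir = Path(r"phenopype\cichlid-teeth-1") / "results"
for tag in ["v1", "v2"]:
    candidates = sorted(results_dir.rglob(f"*shape*{tag}*.csv"))
    if candidates:
        df = pd.read_csv(candidates[0])
        print(tag, "->", len(df), "rows from", candidates[0].name)
    else:
        print(tag, "-> no aggregate CSV found")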