This repository accompanies the manuscript "MAPPER: An open-source, high-dimensional image analysis pipeline unmasks differential regulation of Drosophila wing features," developed by Nilay Kumar and Francisco Huizar in the Zartman lab at the University of Notre Dame. A preprint of the paper is available on bioRxiv. The bulk of the code was written by Nilay Kumar and co-developed by Francisco Huizar. Dr. Ramezan Paravi Torghabeh and Dr. Pavel Brodskiy provided guidance for code development. Experimental work and validation were carried out by Nilay Kumar, Dr. Maria Unger, Trent Robinett, Keity J. Farfan-Pira, and Dharsan Soundarrajan. This work was done within the Multicellular Systems Engineering Lab at the University of Notre Dame and the Laboratory of Growth Biology and Morphogenesis at the Center for Research and Advanced Studies of the National Polytechnic Institute (Cinvestav). Please direct any questions to the principal investigator, Dr. Jeremiah Zartman.

All code for the MAPPER application was written in MATLAB.

Open-source license agreement

†Kumar, N., †Huizar, F.J., Farfán-Pira, K.J., Brodskiy, P., Soundarrajan, D.S., Nahmad, M., Zartman, J.J. MAPPER: An open-source, high-dimensional image analysis pipeline unmasks differential regulation of Drosophila wing features. Frontiers in Genetics (2022). https://doi.org/10.3389/fgene.2022.869719. †These authors contributed equally.

The licensing statements below are reproduced verbatim from the Free and Open Source Software Auditing (FOSSA) team's webpage.

Users of this code must:

The LGPL license allows users of the licensed code to:

Instructions to run the application

Available ILASTIK pixel classification modules

Below you will find pre-trained pixel classification modules in ILASTIK for several wing images we have already processed. These modules are crucial for step five of the MAPPER user manual. Below each module link, you will find a representative image of the Drosophila wings that were used to train the module. You should download and use the ILASTIK module that most closely resembles the images you would like to process in lighting, background, brightness, contrast, and saturation. If none of the available ILASTIK modules closely resemble your images, the user manual provides detailed instructions on how to train your own ILASTIK module. NOTE: The number of channels in your images must match the number of channels in the training data for the ILASTIK module you choose (e.g., RGB images must be processed with an ILASTIK module trained on RGB images); a minimal channel check is sketched below.
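As a quick sanity check before processing a batch, the short MATLAB sketch below verifies that each image's channel count matches the channel count the chosen ILASTIK module was trained on. This is an illustrative snippet, not part of the MAPPER codebase; the folder name, file pattern, and expectedChannels value are placeholder assumptions you would adapt to your own data.

```matlab
% Minimal sketch: confirm each image's channel count matches the
% channel count the chosen ILASTIK module was trained on.
% 'wing_images', '*.tif', and expectedChannels are placeholders.
expectedChannels = 3;   % e.g., a module trained on RGB images
imageFiles = dir(fullfile('wing_images', '*.tif'));

for k = 1:numel(imageFiles)
    imgPath = fullfile(imageFiles(k).folder, imageFiles(k).name);
    img = imread(imgPath);
    nChannels = size(img, 3);   % grayscale images return 1 here
    if nChannels ~= expectedChannels
        warning('%s has %d channel(s); the module expects %d. Skipping.', ...
            imageFiles(k).name, nChannels, expectedChannels);
        continue
    end
    % Image is compatible; hand it off to MAPPER/ILASTIK as usual.
end
```

Because grayscale images load as M-by-N arrays, size(img, 3) evaluates to 1 for them, so the same check works for both single-channel and RGB data.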

Trained U-Net deep learning model

Supplementary File 2 R Notebook

Raw Data Sheets

Acknowledgements

We would like to thank the South Bend Medical Foundation for generous access to their Aperio slide scanner. We also thank Dr. Ramezan Paravi Torghabeh, Vijay Kumar Naidu Velagala, Dr. Megan Levis, and Dr. Qinfeng Wu for technical assistance and scientific discussions related to the project. The work in this manuscript was supported in part by NIH Grant R35GM124935, NSF award CBET-1553826, an NSF-Simons Pilot award through Northwestern University, the Notre Dame International Mexico Faculty Grant Program, and grant CB-014-01-236685 from the Consejo Nacional de Ciencia y Tecnología of Mexico.

Repository last updated: April 12, 2022, 11:00 AM EST