EuroSciPy 2024

Marianne Corvellec

Marianne Corvellec is a core developer of scikit-image, a popular Python library for scientific image processing, where she specializes in biomedical applications. Her technical interests include data science workflows, data visualization, and best practices, from testing to documentation. She holds a PhD in statistical physics from École normale supérieure de Lyon, France. Since 2013, she has been a regular speaker and contributor in the Python, Carpentries, and FLOSS communities.


Institute / Company

IGDORE / scikit-image

Homepage

https://orcid.org/0000-0002-1994-3581

GitHub / GitLab

https://github.com/mkcor


Sessions

08-26
16:00
90 min
Image analysis in Python with scikit-image
Marianne Corvellec, Lars Grüter, Stéfan van der Walt

Scientists are producing more and more images with telescopes, microscopes, MRI scanners, etc. They need automatable tools to measure what they've imaged and help them turn these images into knowledge. This tutorial covers the fundamentals of algorithmic image analysis, starting with how to think of images as NumPy arrays, moving on to basic image filtering, and finishing with a complete workflow: segmenting a 3D image into regions and making measurements on those regions.
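By way of illustration only (this sketch is not taken from the tutorial material, and it uses a 2D sample image rather than a 3D one), a scikit-image workflow along these lines might look as follows:

    # Minimal sketch: filter an image, segment it into regions,
    # and measure properties of those regions with scikit-image.
    from skimage import data, filters, measure

    image = data.coins()                           # sample image, a plain NumPy array
    smoothed = filters.gaussian(image, sigma=2)    # basic filtering (Gaussian blur)
    threshold = filters.threshold_otsu(smoothed)   # automatic threshold value
    binary = smoothed > threshold                  # foreground mask
    labels = measure.label(binary)                 # label connected regions
    regions = measure.regionprops(labels)          # per-region measurements

    for region in regions:
        print(region.label, region.area, region.centroid)

The same pattern (filter, threshold, label, measure) carries over to 3D images, since scikit-image functions generally accept N-dimensional arrays.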

Data Science and Visualisation
Room 6
08-28
13:20
30 min
The joys and pains of reproducing research: An experiment in bioimaging data analysis
Marianne Corvellec

The conversation about reproducibility usually focuses on how to make research workflows (more) reproducible. Here, we consider it from the opposite perspective and ask: How feasible is it, in practice, to reproduce research that is meant to be reproducible? Is it even done or attempted? We provide a detailed account of such an attempt: trying to reproduce segmentation results for 3D microscopy images of a developing mouse embryo. The original research is a monumental work of bioimaging and analysis at the single-cell level, published in Cell in 2018 along with all the necessary research artifacts. Did we succeed in this attempt? As we share the joys and pains of this journey, many questions arise. How exactly do reviewers assess reproducibility claims? Incentivizing reproducible research remains an open problem, since such research is so much more costly (in time) to produce. And how can we incentivize those who test reproducibility? Not only is it costly to set up computational environments and execute data-intensive scientific workflows, but the effort may not seem rewarding at first. In addition, there is a human factor: It is thorny to show authors that their publication does not hold up to its reproducibility claims.

Data Science and Visualisation
Room 6