MICCAI 2017 Tutorial

Date/Time/Place

Main Organizer

Co-organizers

Topic and Background

Biologists and physicians must be able to rely on the correctness of results obtained by automatic analysis of biomedical images. This, in turn, requires paying proper attention to quality control of the algorithms and software developed for such analysis. Both the medical image analysis and bioimage analysis communities are becoming increasingly aware of the strong need to benchmark image analysis methods in order to compare their performance and assess their suitability for specific applications. Reference benchmark datasets with ground truth (both simulated data and real data annotated by experts) have become publicly available, and challenges are being organized in association with well-known conferences such as ISBI and MICCAI.

This tutorial summarizes recent developments in this area, describes common ways of measuring algorithm performance, and provides guidelines and best practices for designing biomedical image analysis benchmarks and challenges, including proper dataset selection (training versus test sets, simulated versus real data), task description, and the definition of corresponding evaluation metrics that can be used to rank performance.
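As a simple illustration (not part of the tutorial materials themselves), the sketch below shows in Python how one commonly used segmentation evaluation metric, the Dice similarity coefficient, could be computed between a predicted mask and an expert ground-truth annotation; the function name and the toy data are hypothetical.

    import numpy as np

    def dice_coefficient(prediction, ground_truth):
        """Dice similarity coefficient between two binary segmentation masks.

        Both inputs are arrays of the same shape; nonzero elements are
        treated as foreground. Returns a value in [0, 1], where 1 means
        perfect overlap with the ground-truth annotation.
        """
        pred = np.asarray(prediction, dtype=bool)
        gt = np.asarray(ground_truth, dtype=bool)
        intersection = np.logical_and(pred, gt).sum()
        denominator = pred.sum() + gt.sum()
        if denominator == 0:
            # Both masks empty: define the overlap as perfect.
            return 1.0
        return 2.0 * intersection / denominator

    # Toy example: evaluate a hypothetical method on a small 2-D image.
    ground_truth = np.array([[0, 1, 1],
                             [0, 1, 1],
                             [0, 0, 0]])
    prediction   = np.array([[0, 1, 1],
                             [0, 1, 0],
                             [0, 0, 0]])
    print(dice_coefficient(prediction, ground_truth))  # ~0.857

Metrics such as this one can then be computed over a held-out test set and used to rank participating methods in a challenge.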

Proper benchmarking of image analysis algorithms and software makes life easier not only for future developers (who can learn the strengths and weaknesses of existing methods) but also for users (who can select the methods that best suit their particular needs). Reviewers, too, can better assess the usefulness of a newly developed analysis method when it is compared against the best-performing methods for a particular task, on the same type of data, using standard metrics.

Preliminary Program

Audience

This tutorial is open to everyone with an interest in designing benchmarks or challenges for biomedical image analysis, and/or using benchmarks and challenges for validating algorithms.

The goal is to increase awareness of the importance of benchmarking and proper validation of algorithms. Evaluating your method on public benchmarks makes it easier to convince reviewers of its novelty, for example by using several benchmarks to demonstrate that it generalizes. Organizing a challenge is a great way to serve the community and to learn more about the strengths and weaknesses of the various methods, which helps others build on previous work and can lead to more novel solutions, both in algorithm development and in solving the task at hand.

Registration

Participants are kindly requested to register via the MICCAI registration system (item Workshop Day, choice September 14). Afterwards, it is necessary to log in to the Jujama portal (using the username and password sent shortly after MICCAI registration) and register for this specific tutorial by adding it to your personal agenda (menu item Agenda).

References

Copyright Notice

The banner image ("measure a thousand times, cut once" by Avrene) is released under a Creative Commons license.