Rules of the TIGER challenge

This page details the rules of the TIGER challenge.

ENTRY INTO THIS CHALLENGE CONSTITUTES YOUR ACCEPTANCE OF THESE OFFICIAL RULES.

One account per participant

You cannot use multiple grand-challenge accounts to participate in the TIGER challenge; only one account per participant is allowed.

No private sharing outside teams

Privately sharing code or data outside of your team is not permitted. Sharing code is allowed only if it is made available to all participants, for example on the forum of the TIGER challenge.

Teams

Participants are allowed to form teams. Participants in a team may not make individual submissions outside of their team, and no participant may be part of more than one team.

Submission Limits

The maximum number of submissions per week is two. After each submission, leaderboard 1 will be updated with the new results. However, only a fixed number of new submissions will be executed each week to update leaderboard 2; see the Evaluation section for details.

Competition Timeline

  • January 11, 2022: Release of TIGER training set
  • February 4, 2022: Leaderboard 1 opens on the experimental test set
  • March 10, 2022: Leaderboard 2 opens on the experimental test set
  • June 3, 2022, 23:59 CEST: Leaderboard 1 closes
  • June 10, 2022, 23:59 CEST: End of the challenge
  • July 2022: Results on the final test set and awards announced (exact date TBA)

Use of pre-trained models

Models pre-trained on ImageNet or other natural-image datasets are allowed. Models pre-trained on digital pathology data are not allowed, even if they were pre-trained on publicly available data.
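
As an illustration of what this rule permits, the sketch below loads an ImageNet-pretrained backbone. This is a hypothetical example assuming PyTorch and torchvision (it is not an official baseline), and any fine-tuning of such a model must use only the official TIGER training dataset.

    # Hypothetical example of an allowed starting point: a backbone
    # pre-trained on ImageNet (natural images) via torchvision.
    # Weights pre-trained on digital pathology data would NOT be allowed.
    import torch
    import torchvision

    weights = torchvision.models.ResNet50_Weights.IMAGENET1K_V2
    model = torchvision.models.resnet50(weights=weights)

    # Replace the classification head for a downstream TIGER task;
    # the number of output classes here is a placeholder.
    model.fc = torch.nn.Linear(model.fc.in_features, 3)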

Data

  • All training data annotations (for TCGA-BRCA, RUMC, and JB slides) are released under a CC BY-NC 4.0 license.
  • Training slides from RUMC and JB are also released under a CC BY-NC 4.0 license.
  • Training slides from TCGA-BRCA are shared in the same format as the slides from RUMC and JB (i.e., multiresolution TIF files at a maximum resolution of 0.5 um/px); a sketch of how such a file can be read follows this list. The same rights that apply to the original TCGA-BRCA slides apply to the shared slides.
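
For reference, here is a minimal sketch of opening one of these multiresolution TIF files. It assumes the openslide-python package and uses a placeholder file name; it is not an official loading script.

    # Minimal sketch of reading a multiresolution TIF training slide.
    # Assumes openslide-python; the file name is a placeholder.
    from openslide import OpenSlide

    slide = OpenSlide("tiger_training_slide.tif")

    # Inspect the resolution pyramid.
    print(slide.level_count)       # number of pyramid levels
    print(slide.level_dimensions)  # (width, height) at each level

    # Read a 512 x 512 patch at the highest resolution (level 0, 0.5 um/px).
    patch = slide.read_region(location=(0, 0), level=0, size=(512, 512))
    patch = patch.convert("RGB")   # read_region returns an RGBA PIL image
    slide.close()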

Use of data and manual annotations

Models entering the TIGER challenge must be trained solely on the official TIGER training dataset. The use of additional digital pathology data, whether from public or private datasets, is not allowed.

The use of additional manual annotations on the training data, beyond those provided by the challenge organizers, is not allowed.

Prize and open-source solutions

The three top-performing solutions on each leaderboard will be awarded prizes. These algorithms must improve upon the performance of the baseline models provided by the organizers and must be accompanied by a short article describing the methodology. Based on the final results, the authors of the top-performing algorithms will receive AWS credits as follows:

Leaderboard 1 (computer vision performance):

  • 1st place: $1,000
  • 2nd place: $1,000
  • 3rd place: $1,000

Leaderboard 2 (prognostic value):

  • 1st place: $5,000
  • 2nd place: $3,000
  • 3rd place: $2,000

Awards can be combined across leaderboards: for example, if the same algorithm achieves 1st place on both leaderboard 1 and leaderboard 2, the authors will be awarded $5,000 + $1,000 = $6,000 in AWS credits. To be eligible for an award, the authors of a winning solution must release it as a code repository on GitHub under a permissive open-source license. Full details and a template for how to structure the code will be provided.

Submitted algorithms and authorship

We plan to invite at least three top solutions from each leaderboard, selected on the basis of their final performance, methodology, and the write-up provided by the authors, for inclusion in a peer-reviewed article about the challenge and its (extended) results. For this article, we will perform additional experiments, including experiments on additional (external) datasets.

By submitting an Algorithm, participants authorize the challenge organizers to use the submitted algorithm in collaborative future research, for example additional validation on external datasets, as part of a scientific paper resulting from this challenge. Such a collaboration will be established via an agreement between the participants and the challenge organizers. Algorithms without such an agreement will not be eligible for inclusion in the scientific publication derived from the results of TIGER.