CLEAR item#58

“Sharing a ready-to-use system. (Please note this item is “not essential” but “recommended.”) An easy-to-use tool (e.g., standalone executable applications, notebooks, websites, virtual machines, etc.) can be created and shared with or without source code that is based on the model created. The main aim is to be able to test or validate the model by other research groups. With this approach, users even without experience in machine learning or coding can also test the proposed models.” [1] (from the article by Kocak et al.; licensed under CC BY 4.0)

Reporting examples for CLEAR item#58

Example#1. “The radiomics-based preoperative-Fistula Risk Score, which uses only preoperative CT features, is a new and promising radiomics-based score that has the potential to be integrated with hospital CT report systems and improve patient counselling before surgery. The model with underlying code is readily available via www.pancreascalculator.com and https://github.com/PHAIR-Consortium/POPF-predictor.” [2] (from the article by Ingwersen et al.; licensed under CC BY 4.0)

Example#2. “We shared the code which we used for experiments in the COCO data set. Available from: https://github.com/VisionAI-USF/COCO_Size_Decoding.” [3] (from the article by Cherezov et al.; licensed under CC BY 4.0)

Example#3. “The framework, as well as code for this analysis, are publicly available under https://github.com/pwoznicki/AutoRadiomics.” [4] (from the article by Woznicki et al.; licensed under CC BY 4.0)

Explanation and elaboration of CLEAR item#58

Data sharing and open science are important for several reasons. First, they allow the proposed work to be reproduced and the model to be externally validated on an independent dataset [5]. They also empower healthcare professionals with limited data science expertise, maximizing opportunities and moving the field forward [6]. This item, however, takes open science a step further by considering the sharing of ready-to-use tools, that is, tools that can make predictions out of the box. Although such tools are rarely provided in artificial intelligence-related studies in the literature (and their availability has not been evaluated for radiomics studies in particular), they offer a great opportunity to externally verify the results or models of a radiomics study without significant setup effort by the users, as sketched below.
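To make this concrete, a ready-to-use tool can be as simple as a small web interface wrapped around the trained model. The sketch below is a minimal illustration, not a method from any of the cited studies: it assumes a scikit-learn classifier saved with joblib under the hypothetical file name radiomics_model.joblib and three illustrative radiomic features, and it uses the Gradio library so that other research groups can enter feature values and obtain a prediction in a browser without writing any code.

import joblib
import numpy as np
import gradio as gr

# Load the trained classifier shared alongside the study.
# "radiomics_model.joblib" is a hypothetical file name for illustration.
model = joblib.load("radiomics_model.joblib")

def predict(first_order_mean: float, glcm_contrast: float, shape_sphericity: float) -> str:
    # Assemble a single case from the entered feature values and
    # return the predicted probability of the positive class.
    features = np.array([[first_order_mean, glcm_contrast, shape_sphericity]])
    probability = model.predict_proba(features)[0, 1]
    return f"Predicted probability: {probability:.2f}"

# A simple browser interface: one numeric input field per radiomic feature.
demo = gr.Interface(
    fn=predict,
    inputs=[
        gr.Number(label="First-order mean"),
        gr.Number(label="GLCM contrast"),
        gr.Number(label="Shape sphericity"),
    ],
    outputs=gr.Textbox(label="Model output"),
    title="Radiomics model demo (illustrative)",
)

if __name__ == "__main__":
    demo.launch()  # serves the tool as a local web page

Such a script can be shared with the study's code repository or hosted on a public website (as done with www.pancreascalculator.com in Example#1 above), allowing the model to be tested without recreating the full radiomics pipeline.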

References

  1. Kocak B, Baessler B, Bakas S, et al (2023) CheckList for EvaluAtion of Radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII. Insights Imaging 14:75. https://doi.org/10.1186/s13244-023-01415-8
  2. Ingwersen EW, Bereska JI, Balduzzi A, et al (2023) Radiomics preoperative-Fistula Risk Score (RAD-FRS) for pancreatoduodenectomy: development and external validation. BJS Open 7:zrad100. https://doi.org/10.1093/bjsopen/zrad100
  3. Cherezov D, Paul R, Fetisov N, et al (2020) Lung Nodule Sizes Are Encoded When Scaling CT Image for CNN’s. Tomography 6:209–215. https://doi.org/10.18383/j.tom.2019.00024
  2. Woznicki P, Laqua F, Bley T, Baeßler B (2022) AutoRadiomics: A Framework for Reproducible Radiomics Research. Front Radiol 2:919133. https://doi.org/10.3389/fradi.2022.919133
  5. Akinci D’Antonoli T, Cuocolo R, Baessler B, Pinto dos Santos D (2023) Towards reproducible radiomics research: introduction of a database for radiomics studies. Eur Radiol. https://doi.org/10.1007/s00330-023-10095-3
  6. Waring J, Lindvall C, Umeton R (2020) Automated machine learning: Review of the state-of-the-art and opportunities for healthcare. Artif Intell Med 104:101822. https://doi.org/10.1016/j.artmed.2020.101822
