CLEAR item#56

“Sharing source code for modeling. Share the modeling scripts. Code scripts should include sufficient information to replicate the presented analysis (e.g., to train and test pipeline), with all dependencies and relevant comments to easily understand and build upon the method. Even if the actual input dataset used cannot be shared, in situations where a similar dataset is available publicly, it should be used to share an example workflow with all pre- and post-processing steps included. Specify the reason in case the source code is not available.” [1] (from the article by Kocak et al.; licensed under CC BY 4.0)

Reporting examples for CLEAR item#56

Example#1. “Code used for analysis can be accessed at https://github.com/martonkolossvary/radiomics_ex-vivo_src.” [2] (from the article by Kolossváry et al.; licensed under CC BY 4.0)

Example#2. “The used source codes are available at GitHub (https://github.com/tt1107/wangradiology).” [3] (from the article by Wang et al.; licensed under CC BY 4.0)

Example#3. “The script of the model development and validation is available at GitHub (https://github.com/xby947/RF-Model-development.git) to improve the reproducibility of this research.” [4] (from the article by Li et al.; licensed under CC BY 4.0)

Explanation and elaboration of CLEAR item#56

Item #56 in the CLEAR checklist underscores the importance of sharing the source code used for modeling in radiomics studies. This practice is crucial for transparency, replicability, and the advancement of scientific research. Code scripts should contain sufficient detail, including dependencies and explanatory comments, to replicate the presented analysis. The examples above demonstrate good practice: the authors shared their modeling scripts via open-access repositories such as GitHub, allowing other researchers to access, verify, and build upon the methods, thereby facilitating replication of the results and fostering further research. However, to our knowledge, no published paper explicitly states the reason when source code is withheld. There also appears to be a gap in studies that demonstrate their pipeline on a publicly available dataset when the actual input data cannot be shared; doing so would illustrate the full workflow, including pre- and post-processing steps, and make the methods accessible and verifiable.
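To make the recommendation concrete, the following is a minimal sketch of what a shareable train-and-test script could look like. It is not code from any of the cited studies; the dataset (scikit-learn's built-in breast cancer dataset, used here as a public stand-in for private radiomic features) and the model choice are illustrative assumptions. The key practices it demonstrates are a fixed random seed, an end-to-end pipeline, and explanatory comments.

```python
# Illustrative reproducible modeling script (NOT from the cited studies).
# Assumed dependency: scikit-learn (version should be pinned, e.g., in a
# requirements.txt, so others can recreate the environment).
from sklearn.datasets import load_breast_cancer  # public stand-in dataset
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

SEED = 42  # fix the seed so the split and results can be replicated exactly

# Load a publicly available dataset in place of the private feature table
X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set BEFORE any preprocessing, to avoid information leakage
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=SEED
)

# Pre-processing (feature scaling) and the classifier are combined in one
# pipeline, so the transform fitted on the training set is applied
# identically at test time
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Report a discrimination metric on the held-out test set
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```

A script like this, together with a pinned dependency list and a short README, lets readers rerun the entire workflow even when the original imaging data cannot be released.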

References

  1. Kocak B, Baessler B, Bakas S, et al (2023) CheckList for EvaluAtion of Radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII. Insights Imaging 14:75. https://doi.org/10.1186/s13244-023-01415-8
  2. Kolossváry M, Karády J, Kikuchi Y, et al (2019) Radiomics versus Visual and Histogram-based Assessment to Identify Atheromatous Lesions at Coronary CT Angiography: An ex Vivo Study. Radiology 293:89–96. https://doi.org/10.1148/radiol.2019190407
  3. Wang T, She Y, Yang Y, et al (2022) Radiomics for Survival Risk Stratification of Clinical and Pathologic Stage IA Pure-Solid Non–Small Cell Lung Cancer. Radiology 302:425–434. https://doi.org/10.1148/radiol.2021210109
  4. Li M, Qin H, Yu X, et al (2023) Preoperative prediction of Lauren classification in gastric cancer: a radiomics model based on dual-energy CT iodine map. Insights Imaging 14:125. https://doi.org/10.1186/s13244-023-01477-8
