METRICS items and conditions
Table 1. Overview of METRICS items and conditions. Adapted from [1] (licensed under CC BY).
| Category | Item | Description |
|---|---|---|
| Study Design | Item 1 | Adherence to radiomics and/or machine learning-specific checklists or guidelines |
| | Item 2 | Eligibility criteria that describe a representative study population |
| | Item 3 | High-quality reference standard with a clear definition |
| Imaging Data | Item 4 | Multi-center |
| | Item 5 | Clinical translatability of the imaging data source for radiomics analysis |
| | Item 6 | Imaging protocol with acquisition parameters |
| | Item 7 | The interval between imaging used and reference standard |
| Segmentation | Cond. 1 | Does the study include segmentation? |
| | Cond. 2 | Does the study include fully automated segmentation? |
| | Item 8 | Transparent description of segmentation methodology |
| | Item 9 | Formal evaluation of fully automated segmentation |
| | Item 10 | Test set segmentation masks produced by a single reader or automated tool |
| Image Processing & Feature Extraction | Cond. 3 | Does the study include hand-crafted feature extraction? |
| | Item 11 | Appropriate use of image preprocessing techniques with transparent description |
| | Item 12 | Use of standardized feature extraction software |
| | Item 13 | Transparent reporting of feature extraction parameters |
| Feature Processing | Cond. 4 | Does the study include tabular data? |
| | Cond. 5 | Does the study include end-to-end deep learning? |
| | Item 14 | Removal of non-robust features |
| | Item 15 | Removal of redundant features |
| | Item 16 | Appropriateness of dimensionality compared to data size |
| | Item 17 | Robustness assessment of end-to-end deep learning pipelines |
| Preparation for Modeling | Item 18 | Proper data partitioning process |
| | Item 19 | Handling of confounding factors |
| Metrics and Comparison | Item 20 | Use of appropriate performance evaluation metrics for task |
| | Item 21 | Consideration of uncertainty |
| | Item 22 | Calibration assessment |
| | Item 23 | Use of uni-parametric imaging or proof of its inferiority |
| | Item 24 | Comparison with a non-radiomic approach or proof of added clinical value |
| | Item 25 | Comparison with simple or classical statistical models |
| Testing | Item 26 | Internal testing |
| | Item 27 | External testing |
| Open Science | Item 28 | Data availability |
| | Item 29 | Code availability |
| | Item 30 | Model availability |
The fully functional online METRICS tool can be accessed here.
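The five conditions in Table 1 determine which items apply to a given study: for example, the segmentation items are only scored when the study actually includes segmentation. The Python sketch below illustrates this gating logic; the specific condition-to-item mapping shown here is an illustrative assumption, and the authoritative rules are those defined in [1] and implemented in the online METRICS tool.

```python
# Illustrative sketch of how METRICS conditions could gate item applicability.
# The condition-to-item mapping below is an ASSUMPTION for demonstration only;
# the authoritative rules are defined in [1] and the online METRICS tool.

def applicable_items(conditions: dict[str, bool]) -> set[int]:
    """Return the set of applicable item numbers (1-30) for a study,
    given its answers to the five METRICS conditions."""
    items = set(range(1, 31))

    if not conditions["cond1_segmentation"]:
        items -= {8, 9, 10}            # no segmentation: drop segmentation items
    elif not conditions["cond2_fully_automated"]:
        items.discard(9)               # no automated-segmentation evaluation

    if not conditions["cond3_handcrafted_features"]:
        items -= {11, 12, 13}          # no hand-crafted feature extraction items

    if not conditions["cond4_tabular_data"]:
        items -= {14, 15, 16}          # no tabular feature-processing items

    if not conditions["cond5_end_to_end_dl"]:
        items.discard(17)              # no end-to-end deep learning robustness item

    return items


# Example: a study with manual segmentation and hand-crafted features only.
study = {
    "cond1_segmentation": True,
    "cond2_fully_automated": False,
    "cond3_handcrafted_features": True,
    "cond4_tabular_data": True,
    "cond5_end_to_end_dl": False,
}
print(sorted(applicable_items(study)))  # items 9 and 17 are excluded
```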
Weights of categories and items
Figure 1. “Weights of METRICS categories and items. Each category has a different color and those colors are matched between right and left panels” [1] (licensed under CC BY).
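The weights in Figure 1 determine how much each item contributes to the overall METRICS score, which is reported as a percentage of the total weight of the applicable items [1]; items that a condition renders inapplicable drop out of both the numerator and the denominator. Below is a minimal sketch of this weighted-proportion idea, using hypothetical placeholder weights; the actual weights are those shown in Figure 1 and given in [1].

```python
# Minimal sketch of the METRICS score as a weighted proportion of satisfied,
# applicable items. The weights below are HYPOTHETICAL placeholders; the
# actual item weights are given in Figure 1 and in [1].

HYPOTHETICAL_WEIGHTS = {1: 2.5, 2: 3.0, 3: 4.0, 18: 5.0, 26: 4.5, 27: 6.0}

def metrics_score(satisfied: set[int], applicable: set[int],
                  weights: dict[int, float]) -> float:
    """Percentage score: achieved weight over total applicable weight."""
    total = sum(weights[i] for i in applicable if i in weights)
    achieved = sum(weights[i] for i in satisfied & applicable if i in weights)
    return 100.0 * achieved / total if total else 0.0


# Example: of the six weighted items above, all apply and four are satisfied.
score = metrics_score({1, 2, 18, 26}, {1, 2, 3, 18, 26, 27}, HYPOTHETICAL_WEIGHTS)
print(f"METRICS score: {score:.1f}%")  # 60.0%
```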
References
- Kocak B, Akinci D’Antonoli T, Mercaldo N, et al (2024) METhodological RadiomICs Score (METRICS): a quality scoring tool for radiomics research endorsed by EuSoMII. Insights Imaging 15:8. https://doi.org/10.1186/s13244-023-01572-w