This checklist is relevant to studies reporting prediction models (both diagnostic and prognostic) and is based on the TRIPOD statement.
Complete this checklist by entering the page numbers from your manuscript where readers will find each of the items listed below.
Your article may not currently address all the items on the checklist. Please modify your text to include the missing information. If you are certain that an item does not apply, please write "n/a" and provide a short explanation.
Download your completed checklist and include it as an extra file when you submit to a journal.
Identify the study as developing and/or validating a multivariable prediction model, the target population, and the outcome to be predicted.
Provide a summary of objectives, study design, setting, participants, sample size, predictors, outcome, statistical analysis, results, and conclusions.
Explain the medical context (including whether diagnostic or prognostic) and rationale for developing or validating the multivariable prediction model, including references to existing models.
Specify the objectives, including whether the study describes the development or validation of the model or both.
Source of data
Describe the study design or source of data (e.g., randomized trial, cohort, or registry data), separately for the development and validation data sets, if applicable.
Specify the key study dates, including start of accrual; end of accrual; and, if applicable, end of follow-up.
Specify key elements of the study setting (e.g., primary care, secondary care, general population) including number and location of centres.
Describe eligibility criteria for participants.
Give details of treatments received, if relevant.
Clearly define the outcome that is predicted by the prediction model, including how and when assessed.
Report any actions to blind assessment of the outcome to be predicted.
Clearly define all predictors used in developing or validating the multivariable prediction model, including how and when they were measured.
Report any actions to blind assessment of predictors for the outcome and other predictors.
Explain how the study size was arrived at.
Describe how missing data were handled (e.g., complete-case analysis, single imputation, multiple imputation) with details of any imputation method.
Statistical analysis methods
If you are developing a prediction model, describe how predictors were handled in the analyses.
If you are developing a prediction model, specify type of model, all model-building procedures (including any predictor selection), and method for internal validation.
If you are validating a prediction model, describe how the predictions were calculated.
Specify all measures used to assess model performance and, if relevant, to compare multiple models.
If you are validating a prediction model, describe any model updating (e.g., recalibration) arising from the validation, if done.
Provide details on how risk groups were created, if done.
Development vs. validation
For validation, identify any differences from the development data in setting, eligibility criteria, outcome, and predictors.
Describe the flow of participants through the study, including the number of participants with and without the outcome and, if applicable, a summary of the follow-up time. A diagram may be helpful.
Describe the characteristics of the participants (basic demographics, clinical features, available predictors), including the number of participants with missing data for predictors and outcome.
For validation, show a comparison with the development data of the distribution of important variables (demographics, predictors, and outcome).
If developing a model, specify the number of participants and outcome events in each analysis.
If developing a model, report the unadjusted association, if calculated, between each candidate predictor and outcome.
If developing a model, present the full prediction model to allow predictions for individuals (i.e., all regression coefficients, and model intercept or baseline survival at a given time point).
If developing a prediction model, explain how to use it.
Report performance measures (with CIs) for the prediction model.
If validating a model, report the results from any model updating, if done (i.e., model specification, model performance).
Discuss any limitations of the study (such as nonrepresentative sample, few events per predictor, missing data).
For validation, discuss the results with reference to performance in the development data, and any other validation data.
Give an overall interpretation of the results, considering objectives, limitations, results from similar studies, and other relevant evidence.
Discuss the potential clinical use of the model and implications for future research.
Provide information about the availability of supplementary resources, such as study protocol, Web calculator, and data sets.
Give the source of funding and the role of the funders for the present study.
To acknowledge this checklist in your methods, please state "We used the TRIPOD checklist when writing our report [citation]". Then cite this checklist as Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): The TRIPOD statement.
The TRIPOD checklist is distributed under the terms of the Creative Commons Attribution License CC-BY.
Prediction models are developed to aid health care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making.