Guidance for reporting a randomised trial of a social or psychological intervention

This advice is an extension of CONSORT with modified and additional items for reporting trials of social and psychological interventions.


Title and Abstract

1a Title

Identification as a randomised trial in the title.

The ability to identify a report of a randomised trial in an electronic database depends to a large extent on how it was indexed. Indexers may not classify a report as a randomised trial if the authors do not explicitly report this information. To help ensure that a study is appropriately indexed and easily identified, authors should use the word “randomised” in the title to indicate that the participants were randomly assigned to their comparison groups.

Example/s:

Using problem-solving therapy to reduce depressive symptom severity among older adult methadone clients: A randomized clinical trial

Impact of a social-emotional and character development program on school-level indicators of academic achievement, absenteeism, and disciplinary outcomes: A matched-pair, cluster-randomized, controlled trial.

1b Abstract

Structured summary of trial design, methods, results, and conclusions. Refer to CONSORT extension for social and psychological intervention trial abstracts (https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-018-2735-z/tables/3).

Abstracts are the most widely read section of manuscripts, and they are used for indexing reports in electronic databases. Authors should follow the CONSORT Extension for Abstracts, which provides detailed advice for structured journal article abstracts and for conference abstracts. We have tailored the CONSORT Extension for Abstracts for social and psychological intervention trials with relevant items for the objective, trial design, participants, and interventions from the CONSORT-SPI 2018 checklist.

Introduction

2a Background and objectives

Scientific background and explanation of rationale.

A structured introduction should describe the rationale for the trial and how the trial contributes to what is known. In particular, the introduction should describe the targeted problem or issue and what is already known about the intervention, ideally by referencing systematic reviews.

Example/s:

Social and psychological intervention trials

National data suggest that 15% to 25% of women will be the victim of an attempted or completed rape during their lifetime. Research suggests college women are at greater risk for sexual victimization than women in the general population. The mental health consequences of sexual assault are serious. Women who are victims of sexual violence have higher and more severe rates of posttraumatic stress disorder (PTSD) than survivors of accidents and natural disasters. In addition to PTSD, there are many other insidious effects of sexual violence, which include psychological distress, physical distress, interpersonal problems, and increased risk for sexual revictimization.

Specific to cluster randomised trials

Sampling was based on a cluster randomized approach with schools, rather than individuals or classes, as the randomization units in order to minimize possible contamination or spillover effects between treatment conditions.

2b Background and objectives

Specific objectives or hypotheses; if pre-specified, how the intervention was hypothesised to work.

The objectives summarise the research questions, including any hypotheses about the expected magnitude and direction of intervention effects. For social and psychological interventions that have multiple units of intervention and multiple outcome assessments (e.g. individuals, groups, places), authors should specify to whom or to what each objective and hypothesis applies.

If pre-specified, how the intervention was hypothesised to work

Describing how the interventions in all groups (i.e. all experimental and comparator groups) were expected to affect outcomes provides important information about the theory underlying the interventions. For each intervention evaluated, authors should describe the 'mechanism of action', also known as the 'theory of change', 'programme theory', or 'causal pathway'. Authors should state how interventions were thought to affect outcomes prior to the trial, and whether the hypothesised mechanisms of action were specified a priori, ideally with reference to the trial registration and protocol. Specifically, authors should report: how the components of each intervention were expected to influence modifiable psychological and social processes, how influencing these processes was thought to affect the outcomes of interest, the role of context, facilitators of and barriers to intervention implementation, and potential adverse events or unintended consequences. Graphical depictions—such as a logic model or analytic framework—may be useful.

Example/s:

Specific objectives or hypotheses: social and psychological intervention trials

We thus hypothesize that sexual safety among HIV-positive men would be facilitated by self-efficacy and skills for enhancing social support and coping with HIV, modulating negative affect, enhancing HIV disclosure, and enhancing information and motivation specifically around sexuality…. We [also] hypothesized that overall unprotected anal intercourse [UAI] would lessen only moderately, whereas transmission risk—UAI that may transmit HIV to uninfected partners—would show significant intervention effects.

Specific objectives or hypotheses: Cluster randomised trials

The central question we addressed is, relative to the “business-as-usual” control condition, what is the effect of assignment to receive the Open Court Reading professional development and curricular materials on the spring literacy achievement outcomes of elementary school classrooms?

How the intervention is hypothesised to work: social and psychological intervention trials

Parent–infant interaction that is attuned and in which the parent is able to ‘read’ the child's communicative signals promotes positive social and communicative development in all children. Infants at-risk for autism often show ‘weak’ or distorted communicative signals which parents can struggle to recognise and respond to accurately. The iBASIS intervention was designed specifically to reverse such disrupted patterns of early parent–infant interaction, with the hypothesis that there would be consequent positive effects on other infant developmental markers and emerging prodromal autism symptoms…. iBASIS-Video Interaction for Promoting Positive Parenting (iBASIS-VIPP) uses video-feedback to help parents understand and adapt to their infant's individual communication style to promote optimal social and communicative development.

Methods

3a Trial design

Description of trial design (such as parallel, factorial) including allocation ratio. If the unit of random assignment is not the individual, please refer to CONSORT for Cluster Randomised Trials (https://www.equator-network.org/reporting-guidelines/consort-cluster/).

Unambiguous details about trial design help readers assess the suitability of trial methods for addressing trial objectives, and clear and transparent reporting of all design features of a trial facilitates reproducibility and replication. Authors should explain their choice of design (especially if it is not an individually randomised, two-group parallel trial); state the allocation ratio and its rationale; and indicate whether the trial was designed to assess the superiority, equivalence, or noninferiority of the interventions.

Randomising at the cluster level (e.g. schools) has important implications for trial design, analysis, inference, and reporting. Because many social and psychological interventions are cluster randomised, authors should follow the CONSORT Extension to Cluster Randomised Trials. Authors should also report the unit of randomisation, which might be social units (e.g. families), organisations (e.g. schools, prisons), or places (e.g. neighbourhoods), and specify the unit of each analysis, especially when the unit of analysis is not the unit of randomisation (e.g. randomising at the cluster level and analysing outcomes assessed at the individual level).

Example/s:

Example 1: social and psychological interventions

We employed a 2 (intervention: CBT (cognitive-behavioural therapy) vs. GHE (general health education)) x 3 (time: end of counseling at 2 weeks, follow-ups at 3 and 6 months) mixed factorial design. Participants … were randomly assigned to receive either CBT or GHE at a 1:1 ratio (n = 77 per intervention group).

Example 2: cluster randomised trials

We conducted a stratified randomized pretest–posttest controlled design study by enrolling passive recreation areas (PRAs) within public parks in 3 annual waves. After completion of the pretest assessment, parks were randomized by an independent biostatistician in an unequal 1:3 allocation ratio to treatment (shaded) versus control (unshaded) stratified by city, wave, and pretest use of the study PRA.

3b Trial design

Important changes to methods after trial commencement (such as eligibility criteria), with reasons.

Deviations from planned trial methods are common, and not necessarily associated with flawed or biased research. Changes from the planned methods are important for understanding and interpreting trial results. A trial report should refer to a trial registration (Item 23) and protocol (Item 24) developed in advance of assigning the first participant, and to a pre-specified statistical analysis plan. The report should summarise all amendments to the protocol and statistical analysis plan, when they were made, and the rationale for each amendment. Because selective outcome reporting is pervasive, authors should state any changes to the outcome definitions during the trial.

Example/s:

Originally, each teacher had three seventh-grade classes participating in the study. Each class was randomly assigned to receive one of the treatments so that each teacher taught all three treatments. The purpose of this restricted random assignment was to reduce the potential confounding effect of instructional methods. However, after the school year started, Teacher C lost one of her seventh-grade classes (the third treatment) because of a change of assignment to teach eighth-grade science.

4a Participants

Eligibility criteria for participants. When applicable, eligibility criteria for settings and those delivering the interventions.

Authors should describe how participants (i.e. individuals, groups, or places) were recruited and who was eligible to enter the trial. Readers need this information to understand who could have entered the trial and the generalisability of findings. Authors should describe all inclusion and exclusion criteria used to determine eligibility, as well as the methods used to screen and assess participants to determine their eligibility.

In addition to the eligibility criteria that apply to individuals, social and psychological intervention trials often have eligibility criteria for the settings where participants will be recruited and interventions delivered, as well as intervention providers. Authors should describe these criteria to help readers compare the trial context with other contexts in which interventions might be used.

Example/s:

Example 1: social and psychological interventions

Families were referred to the program by schools (30%), community-based agencies (22%), health care clinics (21%), self (16%), or public social services (12%). Participants were screened according to the criteria listed above and recruited during the 4-year period (1997–2001). Of the 302 families screened, 216 met the eligibility criteria…. The targeted families resided in Baltimore’s Westside Empowerment Zone (i.e., federally designated as an area of extreme poverty, unemployment, and general economic distress) and had at least one child between the ages of 5 and 11. Eligibility included (a) a concern by the referring person that at least 1 of 19 neglect subtypes (e.g., unsafe housing conditions, inadequate supervision, inadequate/delayed health care) was occurring at a low level but not at a level that Child Protective Services (CPS) would accept for investigation; (b) at least two additional risk factors for neglect related to the child (e.g., behavioral problem; physical, developmental, or learning disability; more than three children) or the caregiver/family (e.g., unemployment/overemployment, mental health problem, drug or alcohol problem, domestic violence, homelessness); (c) no current CPS involvement; and (d) caregiver expressed willingness to participate in the FC program.

Example 2: cluster randomised trials

Eligible schools were officially registered, had at least four classrooms and 120 students, were located in close proximity to other schools (i.e., ∼10 km or one hour walking), were in a secure zone at the time of school recruitment (e.g., no movement of armed groups), accessible by motorbike, and presumably not receiving support similar to OPEQ by other private, local, or international agencies.

Example 3: Eligibility criteria for settings and those delivering the interventions

The following study inclusion criteria were used to control for any further variability in classroom characteristics across program type: (1) teachers with a bachelor’s degree or an associate’s degree and working towards a bachelor’s; (2) programs with moderate to high quality as measured by the NC (North Carolina) star-rating system (3–5 stars out of 5 stars total), (3) use of the Creative Curriculum (a state-approved MAF (More at Four Pre-Kindergarten Program) curriculum and the predominant curriculum used by MAF classrooms), (4) classroom enrollment of at least four Latino ELL (English Language Learner) children, but not to exceed 85% of total enrollment, and (5) use of English as the primary language of instruction.

4b Participants

Settings and locations of intervention delivery and where the data were collected.

Information about settings and locations of intervention delivery and data collection is essential for understanding trial context. Important details might include the geographic location, day and time of trial activities, space required, and features of the inner setting (e.g. implementing organisation) and outer setting (e.g. external context and environment) that might influence implementation. Authors should refer to the mechanism of action when deciding what information about setting and location to report.

Example/s:

Our study site is St. Louis County, Missouri. We chose St. Louis County because it is part of a large metropolitan area with significant crime problems. In areas of the county patrolled by St Louis County Police Department (SLCPD), the 2012 violent crime rate was 244.6 per 100,000 while the property crime rate was 2063.8. Covering more than 500 square miles, with over 1 million residents, the county is the 34th largest in the U.S. and contains 17% of the state’s population, although the SLCPD provides primary police services to just over 400,000, including to more than 90 municipalities that contract for services. The SLCPD employs just over 800 sworn and 240 civilian personnel and is an internationally accredited, full-service department. Officers have fairly stable geographic assignments, ensuring some continuity across the treatment period.

5. Interventions

The interventions for each group with sufficient details to allow replication, including how and when they were actually administered.

Complete and transparent information about the content and delivery of all interventions in all groups (experimental and comparator) is vital for understanding, replicating, and synthesising intervention effects. Essential information includes: naming the interventions, what was actually delivered (e.g. materials and procedures), who provided the interventions, how, where, when, and how much. Details about providers should include their professional qualifications and education, expertise or competence with the interventions or area in general, and training and supervision for delivering the interventions. Tables or diagrams showing the sequence of intervention activities, such as the participant timeline recommended in the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) 2013 Statement, are often useful. Authors should avoid the sole use of labels such as ‘treatment as usual’ or ‘standard care’ because they are not uniform across time and place.

Example/s:

Example 1: social and psychological interventions

The integrated 12-Step facilitation (iTSF) intervention consisted of one 60–75-minute individual session, followed by eight weekly, 90-minute iTSF group sessions (n = 2-5 per group). Immediately prior to the fifth group therapy session, participants completed a second 30–50-minute individual ‘booster’ session. iTSF employed a ‘Socratic’ therapeutic questioning style to promote adolescent attention, verbal engagement and participation in discussion of topics, and therapists also used a variety of TSF strategies. At the end of each group, participants identified a sober activity goal for the week and reported on it at the beginning of the session the following week. Six sessions were based around a recovery-related topic, and two sessions invited members of 12-Step organizations to share their recovery story and experience. One therapist had a master’s degree in social work and was a certified Licensed Alcohol and Drug Counselor (LADC)-I with more than 5 years of experience. The other was in a clinical psychology doctoral training program, with several years of supervised clinical experience in substance use disorder (SUD) and mental health treatment. Both therapists had specific experience in cognitive behavioral therapy, 12-Step philosophy and principles and group-based interventions for individuals with SUD particularly with adolescents. Additionally, prior to beginning the study, therapists each attended five 12-Step meetings to familiarize themselves with the meeting format.

Example 2: cluster randomised trials

In the parent and youth (PY) group, parents received the Familias: Preparando la Nueva Generación (FPNG) parenting curriculum, and youth received the youth-centered substance-use prevention program, keepin’ it REAL (kiR). In the youth group, youth received kiR, and parents did not receive any curriculum. In the control group, parents and youth received treatment-as-usual with respect to curricula offered at the schools.

5a Interventions

Extent to which interventions were actually delivered by providers and taken up by participants as planned.

Frequently, interventions are not implemented as planned. Authors should describe the actual delivery by providers and uptake by participants of interventions for all groups, including methods used to ensure or assess whether the interventions were delivered by providers and taken up by participants as intended. Quantitative or qualitative process evaluations may be used to assess what providers actually did (e.g. recording and coding sessions), the amount of an intervention that participants received (e.g. recording the number of sessions attended), and contamination across intervention groups. Authors should distinguish planned systematic adaptations (e.g. tailoring) from modifications that were not anticipated in the trial protocol. When this information cannot be included in a single manuscript, authors should use online supplements, additional reports, and data repositories to provide this information.

Example/s:

Treatment adherence and competence were monitored through weekly supervision and supervisor review of audio-recorded sessions. … In addition, four iTSF sessions (three group and one individual) were selected at random and rated by two independent doctoral level clinicians on three dimensions: adherence to protocol, skill level and frequency and extensiveness of skills used. On average, adherence was rated at 96.4%, and skill level as 6.4 – where 6 = ‘very good’ and 7 = ‘excellent’. Frequency and extensiveness was rated as 3.7 – where 3 = ‘adequately’ and 4 = ‘extensively’…. Of 59 participants who completed a baseline assessment, four never received any treatment, two of whom actively withdrew and two of whom were unable to be contacted; by the mid-treatment assessment, three more individuals withdrew consent; by end of treatment, two more participants withdrew consent. The study took place from July 2013 to October 2015.

5b Interventions

Where other informational materials about delivering the interventions can be accessed.

Authors should indicate where readers can find sufficient information to replicate the interventions, such as intervention protocols, training manuals, or other materials (e.g. worksheets and websites). For example, new online platforms such as the Open Science Framework allow researchers to share some or all of their study materials freely (https://osf.io).

Example/s:

For more details, see: Kelly J. F., Yeterian J. D., Cristello J. C., Kaminer Y., Kahler C., Timko C. Developing and testing twelve-step facilitation for adolescents with substance use disorder: manual development and preliminary outcomes. Subst Abuse 2016; 10: 55–64.

5c Interventions

When applicable, how intervention providers were assigned to each group.

Some trials assign specific providers to different conditions to prevent expertise and allegiance from confounding the results. Authors should report whether the same people delivered the experimental and comparator interventions, whether providers were nested within intervention groups, and the number of participants assigned to each provider.

Example/s:

The two study therapists treated patients in both treatment conditions to avoid the problem of differential therapist effects. Individual motivational enhancement therapy (MET) sessions were divided equally across the two therapists and therapists co-led all groups in both conditions.

6a Outcomes

Completely defined pre-specified outcomes, including how and when they were assessed.

All outcomes should be defined in sufficient detail for others to reproduce the results using the trial data. An outcome definition includes: (1) the domain (e.g. depression), (2) the measure (e.g. the Beck Depression Inventory II Cognitive subscale), (3) the specific metric (e.g. a value at a time point, a change from baseline), (4) the method of aggregation (e.g. mean, proportion), and (5) the time point (e.g. 3 months post-intervention). In addition, authors should report the methods and persons used to collect outcome data, properties of measures or references to previous reports with this information, methods used to enhance measurement quality (e.g. training of outcome assessors), and any differences in outcome assessment between trial groups. Authors also should indicate where readers can access materials used to measure outcomes. When a trial includes a measure (e.g. a questionnaire) that is not available publicly, authors should provide a copy (e.g. through an online repository or as an online supplement).
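
To make these five elements concrete, an outcome definition can be written down as a single structured record before the trial begins. The sketch below is purely illustrative (the field names and example values are ours, not part of CONSORT-SPI):

    # Illustrative only: one way to record a fully specified outcome.
    from dataclasses import dataclass

    @dataclass
    class OutcomeDefinition:
        domain: str        # (1) construct assessed, e.g. depression
        measure: str       # (2) instrument used
        metric: str        # (3) metric analysed, e.g. change from baseline
        aggregation: str   # (4) method of aggregation across participants
        time_point: str    # (5) when the outcome is assessed

    primary_outcome = OutcomeDefinition(
        domain="depression",
        measure="Beck Depression Inventory II Cognitive subscale",
        metric="change from baseline",
        aggregation="group mean",
        time_point="3 months post-intervention",
    )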

Example/s:

Example 1: social and psychological interventions

Irritable bowel syndrome symptom severity was measured using the Irritable Bowel Syndrome Severity Scoring System, which measures the severity of pain, distension, bowel dysfunction and quality of life/global well-being. Assessments were administered at four time-points: baseline (pretreatment), post-treatment (2 months) and at 3 and 6 months post-treatment. A decrease of 50 points on this scale has been identified as a clinically significant change in symptom severity.

Example 2: cluster randomised trials

Viewsheds incorporate GIS methods to digitize the actual line-of-sight of CCTV cameras, which more accurately reflects camera coverage than traditional units of analysis, such as aggregate geographies (i.e., neighborhoods or police beats) and circular buffers drawn around camera sites. Researchers viewed the live feeds of all CCTV cameras in Newark and digitized the viewshed of each site within a GIS, … [which] created 75 separate CCTV schemes from the 146 individual viewsheds…. Catchment zones were created for each of the CCTV schemes [and served as the unit of measurement].

6b Outcomes

Any changes to trial outcomes after the trial commenced, with reasons.

All outcomes assessed should be reported. If the reported outcomes differ from those in the trial registration (Item 23) or protocol (Item 24), authors should state which outcomes were added and which were removed. To allow readers to assess the risk of bias from outcome switching, authors should also identify any changes to level of importance (e.g. primary or secondary). Authors should provide the rationale for any changes made and state whether these were done before or after collecting the data.

Example/s:

We attempted to obtain collateral reports on the participants’ alcohol and drug use at the 12 month follow-up…. Because of the relatively low rate of apparent under reporting by participants relative to collaterals, and the small percentage of participants for whom collateral data were available, participant self-reports were used in the analyses of alcohol use.

7a Sample size

How sample size was determined.

Authors should indicate the intended sample size for the trial and how it was determined, including whether the sample size was determined a priori using a sample size calculation or due to practical constraints. If an a priori sample size calculation was conducted, authors should report the effect estimate used for the sample size calculation and why it was chosen (e.g. the smallest effect size of interest, from a meta-analysis of previous trials). If an a priori sample size calculation was not performed, authors should not present a post hoc calculation, but rather the genuine reason for the sample size (e.g. limitations in time or funding) and the actual power to detect an effect for each result (Item 17).

Example/s:

Example 1: social and psychological interventions

Power calculations were performed using SAS PROC POWER for the primary aims of evaluating the equivalence of the UP and SDPs and evaluating the efficacy of the UP and SDPs relative to a benchmark WLC and were based on conventional target values of power = 0.80 and α = .05. With an allocation ratio of 2:1 for active treatment to WLC groups, results of the power calculations indicated that a sample size of 91 individuals per active treatment group provided adequate power for the analyses of both equivalence and superiority…. The equivalence margin of 0.75 Anxiety Disorders Interview Schedule clinical severity rating (ADIS CSR) units was selected based on available meta-analytic reviews of cognitive behavioral therapy outcome studies and recommendations for selecting a priori equivalence limits.

Example 2: cluster randomised trials

For a medium effect size (d = 0.5, power = 0.80, α = .05), a sample size of 64 per group was needed to test for differences in means between two groups. For cluster RCTs, the sample size was adjusted according to the design effect (design effect = 1 + [cluster size −1] × ICC). As the design effect was 1.23 for parent and child outcomes and 2.77 for teacher reports, the required sample sizes were 158 and 433, respectively.
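
As a worked illustration of the calculation quoted in Example 2, the sketch below reproduces the unadjusted per-group sample size for d = 0.5, power = 0.80, and α = .05, and then applies the stated design-effect formula. It assumes Python with statsmodels installed; the cluster size and ICC at the end are hypothetical placeholders, not values from the example:

    # Minimal sketch: per-group sample size for a two-group comparison,
    # then inflation by the design effect for cluster randomisation.
    from math import ceil
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                              power=0.80, alpha=0.05)
    print(ceil(n_per_group))                  # 64, matching Example 2

    m, icc = 10, 0.03                         # hypothetical cluster size and ICC
    design_effect = 1 + (m - 1) * icc         # formula quoted in Example 2
    print(ceil(n_per_group * design_effect))  # cluster-adjusted n per group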

7b Sample size

When applicable, explanation of any interim analyses and stopping guidelines.

Multiple statistical analyses can lead to false-positive results, especially when using stopping guidelines based on statistical significance. Any interim analyses should be described, including which analyses were conducted (i.e. the outcomes and methods of analysis), when they were conducted, and why (particularly whether they were pre-specified). Authors should also describe the reasons for stopping the trial, including any procedures used to determine whether the trial would be stopped early (e.g. regular meetings of a data safety monitoring board).

Example/s:

An independent data monitoring committee reviewed unblinded data for safety after the first 1,000 women in the study had given birth. In response to a lower than anticipated attrition rate, we stopped recruitment when 1,748 had been randomly assigned.

8a Randomization - Sequence generation

Method used to generate the random allocation sequence.

In a randomised trial, participants are assigned to groups by chance using processes designed to be unpredictable. Authors should describe the method used to generate the allocation sequence (e.g. a computer-generated random number sequence), so that readers may assess whether the process was truly random. Authors should not use the term ‘random’ to describe sequences that are deterministic (e.g. alternation, order of recruitment, date of birth).
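
As a simple illustration of a truly random, computer-generated sequence, the sketch below uses Python (the seed and group labels are illustrative; in practice the statistician keeps the seed and the sequence concealed, see Items 9 and 10):

    # Minimal sketch: computer-generated simple (unrestricted) randomisation.
    import numpy as np

    rng = np.random.default_rng(seed=20180501)   # seed held by the statistician
    allocation = rng.choice(["intervention", "control"], size=100)

Unlike the deterministic schemes cautioned against above, every assignment here is unpredictable; restricted schemes such as blocking are covered in Item 8b.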

Example/s:

Random allocation was managed by the study statistician using computer-generated random numbers.

8b Randomization - Sequence generation

Type of randomization; details of any restriction (such as blocking and block size).

Some trials restrict randomisation to balance groups in size or important characteristics. Blocking restricts randomisation by grouping participants into 'blocks' and by assigning participants using a random sequence within each block. When blocking is used, authors should describe how the blocks were generated, the size of the blocks, whether and how block size varied, and if trial staff became aware of the block size. Stratification restricts randomisation by creating multiple random allocation sequences based on site or characteristics thought to modify intervention effects. When stratification is used, authors should report why it was used and describe the variables used for stratification, including cut-off values for categories within each stratum. When minimisation is used, authors should report the variables used for minimisation and include the statistical code. When there are no restrictions on randomisation, authors should state that they used ‘simple randomisation’.
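
As an illustration of one restricted scheme, the sketch below generates a permuted-block sequence in Python with a fixed block size of 4 and a 1:1 ratio (all values are illustrative; varying the block size makes it harder for staff to anticipate assignments at the end of a block):

    # Minimal sketch: permuted-block randomisation, block size 4, ratio 1:1.
    import numpy as np

    rng = np.random.default_rng(seed=42)
    block = ["A", "A", "B", "B"]          # two assignments per arm per block
    sequence = []
    for _ in range(25):                   # 25 blocks -> 100 assignments
        sequence.extend(rng.permutation(block))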

Example/s:

Example 1: social and psychological interventions

A blocked randomization scheme (blocks of 30) was used to yield balanced allocation of participants to treatment groups. Using a computerized adaptive minimization procedure, subjects were matched on suicide attempts or nonsuicidal self-injuries; psychiatric hospitalizations; history of suicide attempts and/or nonsuicidal self-injury; age; and a negative prognostic indicator of depression or a comorbid condition.

Example 2: cluster randomised trials

School Report Card data were used to stratify schools into strata ranked on an index based on (a) [individual] demographic variables; (b) characteristics of the student populations; and (c) indicators of student behavior and performance outcomes. Schools were matched on index score, resulting in 19 strata. Matched pairs were randomly selected from within strata, with one school of each pair randomly assigned to either intervention or control.

9. Randomization - Allocation concealment mechanism

Mechanism used to implement the random allocation sequence, describing any steps taken to conceal the sequence until interventions were assigned.

In addition to generating a truly random sequence (Item 8a), researchers should conceal the sequence to prevent foreknowledge of the intervention assignment by persons enrolling and assigning participants. Otherwise, recruitment and allocation could be affected by knowledge of the next assignment. Authors should report whether and how allocation was concealed. When allocation was concealed, authors should describe the mechanism and how this mechanism was monitored to avoid tampering or subversion (e.g. centralised or 'third-party' assignment, automated assignment system, sequentially numbered identical containers, sealed opaque envelopes). While masking (blinding) is not always possible, allocation concealment is always possible.

Example/s:

Example 1: social and psychological interventions

The randomization list was transferred to a sequence of brown envelopes by writing the sequence of treatment names on the inside of the envelopes, which were then sealed. The sequence of envelopes was then ‘cut’ by taking approximately the first half of the envelopes and placing them at the end of the sequence so that no person involved in the trial would know the starting point of the randomization sequence and to preserve allocation concealment. The envelopes were then numbered.

Example 2: cluster randomised trials

Schools agreeing to participate were stratified by percentage of children receiving free school meals (dichotomised at the median) and were randomly allocated within stratum to treatment arm. One researcher generated the allocation schedule using the Stats Direct computer program, the research unit co-ordinator allocated the schools to treatment arm blind to the identity of each school, and a second researcher enrolled schools.

10. Randomization - Implementation

Who generated the allocation sequence, who enrolled participants, and who assigned participants to interventions.

In many individually randomised trials, staff who generate and conceal the random sequence are different from the staff involved in implementing the sequence. This can prevent tampering or subversion. Other procedures may be used to ensure true randomisation in trials in which participants (e.g. groups, places) are recruited and then randomised at the same time. Authors should indicate who carried out each procedure (i.e. generating the random sequence, enrolling participants, and assigning participants to interventions) and the methods used to protect the sequence.

Example/s:

Example 1: social and psychological interventions

A study nurse telephoned a person at a randomization center who did not know the identities of the potential couples. The study nurse read the names from a list in the order in which they had been assessed. Couples were randomly allocated by means of computer-generated random numbers. Every randomization result appeared in the program after the participant's name was written, and the person executing the randomization confirmed the process with her initials. This ensured that neither the study nurse nor the person doing the randomization could influence the result.

Example 2: cluster randomised trials

General practices were the unit of randomization and determined the patients’ group status. GPs were randomised by coin toss after they gave their written informed consent. 136 GPs (15.9%) gave written informed consent to participate and agreed to adhere to the DelpHi trial protocol…. GPs assessed the eligibility of patients (≥70 years, living at home) and systematically screened patients who met the inclusion criteria…. All persons eligible for the study will be screened for cognitive impairment. People who met the inclusion criteria and provided their written informed consent to participate were included.

11a Awareness of assignment

Who was aware after assignment to interventions (for example, participants, providers, those assessing outcomes), and how any masking was done.

Masking (blinding) refers to withholding information about assigned interventions post-randomisation from those involved in the trial. Masking can reduce threats to internal validity arising from an awareness of the intervention assignment by those who could be influenced by this knowledge. Authors should state whether and how (a) participants, (b) providers, (c) data collectors, and (d) data analysts were kept unaware of intervention assignment. If masking was not done (e.g. because it was not possible), authors should describe the methods, if any, used to assess performance and expectancy biases (e.g. masking trial hypotheses, measuring participant expectations). Although masking of providers and participants is often not possible, masking outcome assessors is usually possible, even for outcomes assessed through interviews or observations. If examined, authors should report the extent to which outcome assessors remained masked to participants’ intervention status.

Example/s:

Although participants and clinicians delivering the treatment could not be blinded to treatment assignment, assessors and clinicians conducting outcome assessments were blinded. In addition, participants were instructed at their follow-up assessment interviews not to reveal their treatment assignment.

11b Awareness of assignment

If relevant, description of the similarity of interventions.

Particularly because masking providers and participants is impossible in many social and psychological intervention trials, authors should describe any differences between interventions delivered to each group that could lead to differences in the performance and expectations of providers and participants. Important details include differences in intervention components and acceptability, co-interventions (or adjunctive interventions) that might be available to some groups and not others, and contextual differences between groups (e.g. differences in place of delivery).

Example/s:

The attention control condition was led by the same interventionists who led the hypnosis intervention sessions. However, the interventionists did not lead the attention control patients in imagery, relaxation, or even simple discussion. Rather the interventionists allowed patients to direct the flow of the conversation and provided supportive and empathic comments according to standardized procedures.

12a Analytical methods

Statistical methods used to compare group outcomes. How missing data were handled, with details of any imputation method.

Complete statistical reporting allows the reader to understand the results and to reproduce analyses. For each outcome, authors should describe the methods of analysis, including any transformations of the data (with their rationale) and any adjustment for covariates, and whether the methods of analysis were chosen a priori or decided after data were collected. In the United States, trials funded by the National Institutes of Health must deposit a statistical analysis plan on www.ClinicalTrials.gov with their results; authors with other funding sources should ascertain whether there are similar requirements. For cluster randomised trials, authors should state whether the unit analysed differs from the unit of assignment and, if applicable, the analytical methods used to account for differences between the unit of assignment, the level of intervention, and the unit of analysis. To facilitate full reproducibility, authors should report the software used to run analyses and provide the exact statistical code.
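
For instance, when schools are randomised but outcomes are analysed at the individual level, a multilevel model is one common way to account for clustering. The sketch below is a minimal illustration assuming Python with pandas and statsmodels; the file and variable names are hypothetical:

    # Minimal sketch: individual-level outcomes with school-level clustering.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("trial_data.csv")    # columns: outcome, arm, school
    model = smf.mixedlm("outcome ~ arm", data=df, groups=df["school"])
    print(model.fit().summary())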

Missing data are common in trials of social and psychological interventions for many reasons, such as participant discontinuation, missed visits, and participant failure to complete all items or measures (even for participants who have not discontinued the trial). Authors should report the amount of missing data, evidence regarding the reasons for missingness, and assumptions underlying judgements about missingness (e.g. missing at random). For each outcome, authors should describe the analysis population (i.e. participants who were eligible to be included in the analysis) and the methods for handling missing data, including procedures to account for missing participants (i.e. participants who withdrew from the trial, did not complete an assessment, or otherwise did not provide data) and procedures to account for missing data items (i.e. questions that were not completed on a questionnaire). Imputation methods, which aim to estimate missing data based on other data in the dataset, can influence trial results. When imputation is used, authors should describe the variables used for imputation, the number of imputations performed, the software procedures for executing the imputations, and the results of any sensitivity analyses conducted to test assumptions about missing data. For example, it is often helpful to report results without imputation to help readers evaluate the consequences of imputing data.
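
The sketch below illustrates one such sensitivity analysis, assuming Python with statsmodels: a multiple-imputation estimate reported alongside a complete-case estimate. The variable names are hypothetical, and the treatment indicator is assumed to be coded numerically:

    # Minimal sketch: multiple imputation by chained equations (MICE),
    # with a complete-case analysis reported for comparison.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.imputation import mice

    df = pd.read_csv("trial_data.csv")           # 'outcome' has missing values
    imp = mice.MICEData(df[["outcome", "arm", "baseline"]])
    pooled = mice.MICE("outcome ~ arm + baseline", sm.OLS, imp).fit(
        n_burnin=10, n_imputations=20)           # pooled over 20 imputed datasets
    print(pooled.summary())

    # Complete-case estimate, reported alongside for comparison.
    complete_case = sm.OLS.from_formula(
        "outcome ~ arm + baseline", data=df.dropna()).fit()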

Example/s:

Example 1: social and psychological interventions

Linear random-effects models (hierarchical regression models) were implemented with random intercepts and slopes. These models estimate main effects for change from baseline to each assessment at 6, 12, and 18 months, main effect for the treatment, and interactions between the visit and treatment indicator variables. For each of the primary and secondary outcomes, separate intent-to-treat tests and estimates (with 95% CIs) of randomized group contrasts at 6, 12, and 18 months were obtained from the estimates of the respective time × treatment interactions. Potential confounding variables were evaluated by assessing whether baseline factors imbalanced between the treatment groups were related to outcome. Age was found to be a confounding variable and was controlled in all analyses.

Example 2: cluster randomised trials

Because schools were matched into pairs prior to randomization, the data presented here are nested: Children are nested in schools, and schools are nested in their matched pairs. To accommodate these design features, we calculated estimates of intervention impact on change in the primary child outcomes from preintervention baseline (fall 2004, Wave 1) to the first follow-up (spring 2005, Wave 2) using a series of two-level hierarchical linear models with random effects in HLM 6.02. In these models, Level 1 (child) included the preintervention baseline of the dependent variable and the child-level baseline covariates, and Level 2 (school) included a dummy variable indicating intervention condition as well as eight dummy variables representing the school pair matches.

Example 3: How missing data were handled

For continuous outcomes, we used the multiple imputation procedure from SAS; because the data were non-monotonically missing, we used the Markov Chain Monte Carlo procedure. We used all available data regarding demographic status, psychosocial variables, unprotected anal intercourse, and transmission risk partners to impute missing values on risk outcomes. Missing data correction for binary measures used the previous wave value.

12b Analytical methods

Methods for additional analyses, such as subgroup analyses, adjusted analyses, and process evaluations.

In addition to analysing impacts on primary and secondary outcomes, trials often include additional analyses, such as subgroup analyses and mediation analyses to investigate processes of change. All analyses should be reported at the same level of detail. Authors should indicate which subgroup analyses were specified a priori in the trial registration or protocol (Items 23 and 24), how subgroups were constructed, and distinguish confirmatory analyses from exploratory analyses. For adjusted analyses, authors should report the statistical procedures and covariates used and the rationale for these. Additionally, qualitative analyses may be used to investigate processes of change, implementation processes, contextual influences, and unanticipated outcomes. Authors should indicate whether such analyses were undertaken or are planned (and where they are or will be reported if so). Authors should report methods and results of qualitative analyses according to reporting standards for primary qualitative research.

Example/s:

Example 1

We examined effect moderators using multiple regression. In Step 1, baseline conduct problem score was entered, followed by intervention status and moderator variable. In Step 2, the interaction term (Potential Moderator x Intervention Status) was introduced. We examined mediators by assessing associations between change in putative mediator, change in outcome, and intervention status; conducting hierarchical multiple regressions; and assessing significance using the Sobel test.

Example 2

The process evaluation was based on an inductive thematic analysis. Recordings were transcribed, coded, and analyzed by two researchers. Each researcher drew on the other to assess rater reliability and interpretation. Each session observed was reported separately first, using a grounded theory approach with no observational protocol. Themes emerged through consideration of the sessions and of archival materials.

Results

13a Participant flow diagram (strongly recommended)

For each group, the numbers randomly assigned, receiving intended treatment, and analysed for the outcomes. Where possible, the number approached, screened, and eligible prior to random assignment, with reasons for non-enrolment.

Attrition after randomisation can affect internal validity (i.e. by introducing selection bias), and attrition before or after randomisation can affect generalisability. Authors should report available information about the total number of participants at each stage of the trial, with reasons for non-enrolment (i.e. before randomisation) or discontinuation (i.e. after randomisation). Key stages typically include: approaching participants, screening for potential eligibility, assessment to confirm eligibility, random assignment, intervention receipt, and outcome assessment. As there may be delays between each stage (e.g. between randomisation and initiation of the intervention), authors should include a flow diagram to describe trial attrition in relation to each of these key stages.

13b Participant flow

For each group, losses and exclusions after randomization, together with reasons.

Authors should report participant attrition and data exclusion by the research team for each randomised group at each follow-up point. Authors should distinguish between the number of participants who deviate from the intervention protocol but continue to receive an intervention, who discontinue an intervention but continue to provide outcome data, who discontinue the trial altogether, and who are excluded by the investigators. Authors should provide reasons for each loss (e.g. lost contact, died) and exclusion (e.g. excluded by the investigators because of poor adherence to intervention protocol), and indicate the number of persons who discontinued for unknown reasons.

Example/s:

Two of the 502 BMI (brief motivational intervention) participants were administratively dropped from the study when it was discovered after randomization that they began working at the university survey research center collecting data for the study.

14a Recruitment

Dates defining the periods of recruitment and follow-up.

The dates of a trial and its activities provide readers with information about the historical context of the trial. The SPIRIT 2013 Statement includes a table that authors can use to provide a complete schedule of trial activities, including recruitment practices, pre-randomisation assessments, periods of intervention delivery, a schedule of post-randomisation assessments, and when the trial was stopped. In the description, authors should define baseline assessment and follow-up times relative to randomisation. For example, by itself, ‘4-week follow-up’ is ambiguous: it could mean 4 weeks after randomisation or 4 weeks after the end of an intervention.

Example/s:

Recruitment occurred from April 2006 to January 2008 using radio, web-based, and newspaper advertisements for a smoking cessation intervention consisting of group therapy plus nicotine patch…. Smoking was assessed at 1 week-, 4 weeks- (end of behavioral treatment), 16 weeks-, and 26 weeks-post assigned quit date.

14b Recruitment

Why the trial ended or was stopped.

Authors should state why the trial was stopped. Trials might be stopped for reasons decided a priori (e.g. sample size reached and predetermined follow-up period completed) or in response to the results. For trials stopped early in response to interim analyses (Item 7b), authors should state the reason for stopping (e.g. for safety or futility) and whether the stopping rule was decided a priori. If applicable, authors should describe other reasons for stopping, such as implementation challenges (e.g. could not recruit enough participants) or extrinsic factors (e.g. a natural disaster). Authors should indicate whether there are plans to continue collecting outcome data (e.g. long-term follow-up).

Example/s:

Recruitment was stopped once the study sample size was achieved because of study timelines and budget constraints, and the study was stopped once the 1-year follow-up assessments were completed.

15. Baseline data

A table showing baseline characteristics for each group. Include socioeconomic variables where applicable.

Authors should provide a table summarising all data collected at baseline, with descriptive statistics for each randomised group. This table should include all important characteristics measured at baseline, including pre-intervention data on trial outcomes, and potential prognostic variables. Authors should pay particular attention to topic-specific information related to socioeconomic and other inequalities. For continuous variables, authors should report the average value and its variance (e.g. mean and standard deviation). For categorical variables, authors should report the numerator and denominator for each category. Authors should not use standard errors and confidence intervals for baseline data because these are inferential (rather than descriptive): inferential statistics assess the probability that observed differences occurred by chance, and all baseline differences in randomised trials occur by chance.
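
As a small illustration of assembling such a table, the sketch below computes descriptive statistics by randomised group in Python with pandas (file and column names are hypothetical). Note that, per the guidance above, it reports only descriptive statistics, not inferential ones:

    # Minimal sketch: baseline descriptives by randomised group.
    import pandas as pd

    df = pd.read_csv("trial_data.csv")
    continuous = df.groupby("arm")[["age", "baseline_score"]].agg(["mean", "std"])
    categorical = df.groupby("arm")["sex"].value_counts()  # numerators per category
    print(continuous)
    print(categorical)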

16. Numbers analysed

For each group, number included in each analysis and whether the analysis was by original assigned groups.

While a flow diagram is helpful for indicating the number of participants at each trial stage, the number of participants included in each analysis often differs across outcomes and analyses. Authors should report the number of participants per intervention group for each analysis, so readers can interpret the results and perform secondary analyses of the data. For each outcome, authors should also identify the analysis population and the method used for handling missing data (Item 12a).

Example/s:

Example 1

See Table 6 for number of participants for each analysis. The primary analyses used all available follow-up data and compared participants in their randomized groups, irrespective of the intervention they received. The sensitivity of the primary analyses was assessed including baseline school attendance, using a per protocol analysis (excluding three participants in the psycho-education group, two of whom did not fulfil criteria for CFS (chronic fatigue syndrome) and one who received 13 sessions of CBT (cognitive behavioural therapy)) and multiple imputation as an alternative method for handling missing data.

Example 2

See Table 6. Importantly for our analyses, allocation was not associated with attrition. Results from complete case analyses (CCA; not tabled) were also carried out and did not differ markedly from those reported here. All models were carried out on the intent-to-treat basis and estimated controlling for student sex and baseline values of the evaluated outcome.

17a Outcomes and estimation

For each outcome, results for each group, and the estimated effect size and its precision (such as 95% confidence interval). Indicate availability of trial data.

For each outcome in a trial, authors should report summary results for all analyses, including results for each trial group and the contrast between groups, the estimated magnitude of the difference (effect size), the precision or uncertainty of the estimate (e.g. 95% confidence interval or CI), and the number of people included in the analysis in each group. The p value does not describe the precision of an effect estimate, and authors should report precision even if the difference between groups is not statistically significant. For categorical outcomes, summary results for each analysis should include the number of participants with the event of interest. The effect size can be expressed as the risk ratio, odds ratio, or risk difference and its precision (e.g. 95% CI). For continuous outcomes, summary results for each analysis should include the average value and its variance (e.g. mean and standard error). The effect size is usually expressed as the mean difference and its precision (e.g. 95% CI). Summary results are often more clearly presented in a table rather than narratively in text.
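
As a simple illustration of reporting an effect size with its precision, the sketch below computes a mean difference and a t-based 95% confidence interval for a continuous outcome in Python (the data are simulated purely for illustration):

    # Minimal sketch: mean difference with a 95% confidence interval.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    treat = rng.normal(10.0, 4.0, size=80)     # simulated outcome scores
    control = rng.normal(8.5, 4.0, size=80)

    diff = treat.mean() - control.mean()
    se = np.sqrt(treat.var(ddof=1) / len(treat)
                 + control.var(ddof=1) / len(control))
    t_crit = stats.t.ppf(0.975, len(treat) + len(control) - 2)
    print(f"mean difference {diff:.2f}, "
          f"95% CI {diff - t_crit * se:.2f} to {diff + t_crit * se:.2f}")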

As part of the growing open-science movement, triallists are increasingly expected to maintain their datasets, linked via trial registrations and posted in trusted online repositories (see http://www.bitss.org/resource-tag/data-repository/), to facilitate reproducibility of reported analyses and future secondary data analyses. Data sharing is also associated with higher citations. Authors should indicate whether and how to obtain trial datasets, including any metadata and analytic code needed to replicate the reported analyses. Any legal or ethical restrictions on making the trial data available should be described.

Example/s:

Example 1: social and psychological interventions

See Table 2 for adjusted and unadjusted summary results for each study group and the estimated effect size on continuous and dichotomous outcomes.

Example 2: cluster randomised trials

See Table 2 for unadjusted summary results for each study group and the estimated effect size for each outcome. The average cluster size (number of participants in each preschool) was eight and the intracluster correlation (ρ) was 0.07. The design effect was thus 1.49.

Example 3: Availability of trial data

Data Availability: All relevant data are within the paper and its Supporting Information files.

17b Outcomes and estimation

For binary outcomes, presentation of both absolute and relative effect sizes is recommended.

By themselves, neither relative measures nor absolute measures provide comprehensive information about intervention effects. Authors should report relative effect sizes (e.g. risk ratios) to express the strength of effects and absolute effect sizes (e.g. risk differences) to indicate actual differences in events between interventions.
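
As a worked illustration, the sketch below computes both kinds of effect size from a 2×2 table in Python, with a log-scale 95% confidence interval for the risk ratio (the counts are illustrative):

    # Minimal sketch: absolute and relative effect sizes for a binary outcome.
    import math

    events_t, n_t = 30, 150    # events / randomised, intervention group
    events_c, n_c = 45, 150    # events / randomised, control group

    risk_t, risk_c = events_t / n_t, events_c / n_c
    risk_difference = risk_t - risk_c     # absolute effect
    risk_ratio = risk_t / risk_c          # relative effect

    # 95% CI for the risk ratio on the log scale
    se_log_rr = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(risk_ratio) - 1.96 * se_log_rr)
    hi = math.exp(math.log(risk_ratio) + 1.96 * se_log_rr)
    print(f"RD {risk_difference:.3f}; RR {risk_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")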

Example/s:

Based on observed data, 24.3% (N=27/111) of the patients in the CBT (cognitive behavioural therapy) condition and 21.3% (N=26/122) in the psychodynamic therapy condition met the remission criterion at the posttreatment assessment…. At the posttreatment assessment, the odds ratio was 0.82 (95% CI=0.45–1.50), indicating that remission rates did not differ significantly.

18. Ancillary analyses

Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratory.

Authors should report the results for each additional analysis described in the methods (Item 12b), indicating the number of analyses performed for each outcome, which analyses were pre-specified, and which analyses were not pre-specified. When evaluating effects for subgroups, authors should report interaction effects or other appropriate tests for heterogeneity between groups, including the estimated difference in the intervention effect between each subgroup with confidence intervals. Comparing tests for the significance of change within subgroups is not an appropriate basis for evaluating differences between subgroups. If reporting adjusted analyses, authors should provide unadjusted results as well. Authors reporting any results from qualitative data analyses should follow reporting standards for qualitative research, though adequately reporting these findings will likely require more than one journal article.
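
To illustrate the recommended approach, the sketch below tests a subgroup effect through an interaction term rather than by comparing within-subgroup significance tests. It assumes Python with pandas and statsmodels; the variable names are hypothetical:

    # Minimal sketch: subgroup analysis via an interaction term.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("trial_data.csv")
    model = smf.ols("outcome ~ arm * subgroup + baseline", data=df).fit()
    # The arm:subgroup coefficient (with its confidence interval) estimates
    # how the intervention effect differs between subgroups.
    print(model.summary())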

Example/s:

Example 1

There were no significant moderator effects for single parenthood, very low income, teen parenthood, and baseline level of observed child deviant behavior. Child gender, depression, and age were significant moderators, interacting with intervention status to predict conduct problem outcome. The intervention produced better conduct problem outcomes for boys, children of more depressed mothers, and younger children. Mediator analyses found change in positive parenting skill predicted change in conduct problems.

Example 2

There was an openness and willingness from police officers to talk about their work in an accessible, engaging way. Young people were generally very well behaved and engaged in the sessions. Some facilitators were adept at holding information generated by the group and returning to it at relevant later points. The best facilitators did so in neutral, non-judgmental ways. However, balancing the quantity of material with the quality of interaction was the biggest practical challenge observed. Supporting materials were difficult to implement systematically or consistently.

19. Harms

All important harms or unintended effects in each group (for specific guidance, see CONSORT for harms).

Social and psychological interventions have the potential to produce unintended effects, both harmful and beneficial. These may be identified in the protocol and relate to the theory of how the interventions are hypothesised to work (Item 2b), or they may be unexpected events that were not pre-specified for assessment. Harms may include indirect effects such as increased inequalities at the level of groups or places that result from the intervention. When reporting quantitative data on unintended effects, authors should indicate how they were defined and measured, and the frequency of each event per trial group. Authors should report all results from qualitative investigations that identify possible unintended effects because this information may help readers make informed decisions about using interventions in future research and practice.

Example/s:

Regarding major incidents, two suicide attempts, two accidental deaths related to psychotic symptoms, a serious fight where the patients sustained serious injuries and three patients initiating substance abuse were recorded in the control group. In the family intervention group minor incidents were detected such as starting sexual relationships with the risk of HIV infection, vagrancy, bouts of alcohol consumption, and aggressivity.

Discussion

20. Limitations

Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses.

Authors should provide a balanced discussion of the strengths and limitations of the trial and its results. Authors should consider issues related to risks of bias, precision of effect estimates, the use of multiple outcomes and analyses, and whether the intervention was delivered and taken up as planned.

Example/s:

As described previously, statistical conclusion validity in the argumentation analysis was limited by not having a specific argumentation pretest covariate in the model, and instead a knowledge/reasoning pretest value for each student was used because a correlation was expected. Our claims about retention are similarly tempered (no argumentation pretest or no knowledge/reasoning retention measure). Other limitations of this study include the small sample size (58 students) and the short length of the intervention (10 hours of instruction, 4 hours of testing), yet the fact that we found significant and consistent differences despite these limitations speaks to the strength of the effect. Despite the teacher in this study having many years' experience teaching both traditional and inquiry-based materials, he is undoubtedly more of an advocate of an inquiry-based approach. However, we believe the benefits of controlling variables by having the same teacher in both sections outweighed the potential bias created by a teacher being more comfortable in one approach than the other, and findings such as the comparable levels of student engagement shown in Table 4 suggest that the treatments were not strongly teacher-biased.

21. Generalisability

Generalisability (external validity, applicability) of the trial findings.

Authors should address generalisability, or the extent to which the authors believe that trial results can be expected in other situations. Authors should explain how statements about generalisability relate to the trial design and execution. Key factors to consider discussing include: recruitment practices, eligibility criteria, sample characteristics, facilitators and barriers to intervention implementation, the choice of comparator, what outcomes were assessed and how, length of follow-up, and setting characteristics.

Example/s:

Example 1: social and psychological interventions

Although the ethnic makeup of the sample roughly approximated that of the county in which our research center was situated (Middlesex County, Massachusetts), the sample included fewer African Americans, more biracial children, and participants with higher parental education and socioeconomic status than the general Middlesex County community. Further studies are needed to examine the protocol’s efficacy in samples with greater socioeconomic and ethnic diversity, as well as its effectiveness in community mental health settings. In addition, our criteria allowed for the exclusion of children judged too uncooperative or distractible to take part in the treatment (two children) or children deemed too clinically severe to wait 6 months to receive treatment, based on severe mood disorder, severe social isolation, severe impairment in school function or attendance, or severe OCD (obsessive compulsive disorder) (a total of seven children). These criteria generally excluded children who clinically would not be administered CBT (cognitive behavioral therapy) for anxiety disorders as their first treatment (i.e., they might be offered such treatment after their other symptoms were addressed). Therefore, study results can be generalized only to children whose anxiety disorders are not so severe as to cause school refusal or severe social isolation. In other regards, however, the sample appeared representative of clinical samples, with high comorbidity of anxiety disorders and with 69% in the borderline or clinical range on the CBCL (child behavior checklist) Internalizing scale. In addition, our extensive intake assessment battery, which required a total of four parent and/or child visits prior to randomization, deterred as many as 1 in 4 potential participants and may have selected for families who were especially motivated to take part in treatment.

Example 2: cluster randomised trials

In contrast to the lack of change shown in adolescents receiving creative workshops in Uganda, this intervention [in Indonesia], which includes structured creative activities as well as trauma-focused activities, did show effects on psychosocial well-being. It could therefore be considered a preliminary argument that increased structured interventions, which include trauma-focused activities, more effectively target PTSD symptoms. Corroboration of this argument can be found in the high effect sizes of group CBT implemented in violence-affected schools in Los Angeles. However, previously mentioned qualitative research has shown the importance of addressing wider social problems caused by war, rather than purely focusing on PTSD complaints. In addition, specialized mental health professionals to implement CBT are usually unavailable in low-income settings. To resolve this tension, we propose that in complex emergencies, interventionists use a public health framework to tailor interventions to an appropriate population and referral level, based on investigated local needs, severity of complaints, available resources, and feasible and cost-effective interventions, while recognizing the importance of the social-ecological context. On the basis of these findings, the classroom-based intervention then qualifies as an appropriate intervention to target larger groups of children (especially girls) at risk, when stress-related symptoms are relevant.

22. Interpretation

Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence.

Authors should provide a brief interpretation of findings in light of the trial’s objectives or hypotheses. Authors may wish to discuss plausible alternative explanations for results other than differences in effects between interventions. Authors should contextualise results and identify the additional knowledge gained by discussing how the trial adds to the results of other relevant literature, including references to previous trials and systematic reviews. If theory was used to inform intervention development or evaluation (Item 2b), authors should discuss how the results of the trial compare with previous theories about how the interventions would work. Authors should consider describing the practical significance of findings; the potential implications of findings for theory, practice, and policy; and specific areas of future research to address gaps in current knowledge. Authors should avoid distorted presentation or 'spin' when discussing trial findings.

Example/s:

The results from this trial differed from previous CBT (cognitive behavioural therapy) trials in two key areas. Only one patient (3%) did not complete the treatment. Previous IBS (irritable bowel syndrome) studies suggest that drop-out rates from CBT can be as high as 40%. This may be because traditional CBT requires a substantial time commitment from patients. The most common reasons for dropping out are being unable to take time off work or childcare commitments. Having fewer sessions and sessions on the telephone may make the therapy more widely available. In addition, presenting treatment as self-management of a chronic condition rather than as a psychological therapy may be more acceptable to IBS patients. The treatment effects for symptom severity in this study are larger than those reported in many other CBT trials. This may be because of differences in the patient cohorts. As our study did not rely on GP (general practitioner) referral we may have accessed a cohort that seldom gets offered therapeutic intervention or perhaps even gets diagnosed. This is important, as our results suggest that treatment effects may be greater if patients are less disabled by their symptoms and less depressed. There is certainly evidence that depression in IBS is related to poorer treatment outcome. This study indicates that early intervention and diagnosis may not only make treatment more effective but also prevent the illness becoming more chronic and refractory to treatment.

Important information

23. Registration

Registration number and name of trial registry.

Trial registration is the posting of a minimum information set in a public database, including: eligibility criteria, all outcomes, intervention protocols, and planned analyses. Trial registration aids systematic reviews and meta-analyses, and responds to decades-long calls to prevent reporting biases. Trial registration is now required for all trials published by journals that endorse the International Committee of Medical Journal Editors guidelines and for all trials funded by the National Institutes of Health in the United States as well as the Medical Research Council and National Institute for Health Research in the UK.

Trials should be registered prospectively, before beginning enrolment, normally on a publicly accessible website managed by a registry conforming to established standards. Authors should report the name of the trial registry, the unique identification number for the trial provided by that registry, and the stage at which the trial was registered. If authors did not register their trial, they should report this and the reason for not registering. Registries used in clinical medicine (e.g. www.ClinicalTrials.gov) are suitable for social and psychological intervention trials with health outcomes, and several registries exist specifically for social and psychological interventions (http://www.bitss.org/resource-tag/registry/).

Example/s:

We registered the study with the American Economic Association (AEARCTR-0000742).

24. Protocol

Where the full trial protocol can be accessed, if available.

Details about trial design should be described in a publicly accessible protocol (e.g. published manuscript, report in a repository) that includes a record of all amendments made after the trial began. Authors should report where the trial protocol can be accessed. Guidance on developing and reporting protocols has recently been published. Authors of social and psychological intervention trials who face difficulty finding a journal that publishes trial protocols could search for journals supporting the Registered Reports format (https://cos.io/rr/) or upload their trial protocols to relevant preprint servers such as PsyArXiv (https://psyarxiv.com/) and SocArXiv (https://osf.io/preprints/socarxiv).

Example/s:

The full trial protocol is available in the Supplement: https://jamanetwork.com/data/Journals/PSYCH/935708/YOI160049supp1_prod.pdf.

25. Declaration of interests

Sources of funding and other support, role of funders. Declaration of any other potential interests.

Information about trial funding and support is important in helping readers to identify potential conflicts of interest. Authors should identify and describe all sources of monetary or material support for the trial, including salary support for trial investigators and resources provided or donated for any phase of the trial (e.g. space, intervention materials, assessment tools). Authors should report the name of the persons or entities supported, the name of the funder, and the award number. They should also specifically state if these sources had any role in the design, conduct, analysis, and reporting of the trial, and the nature of any involvement or influence. If funders had no involvement or influence, authors should specifically report this.

In addition to financial interests, it is important that authors declare any other potential interests that may be perceived to influence the design, conduct, analysis, or reporting of the trial following established criteria. Examples include allegiance to or professional training in evaluated interventions. Authors should err on the side of caution in declaring potential interests. If authors do not have any financial, professional, personal, or other potential interests to declare, they should declare this explicitly.

Example/s:

Example 1

This work was supported by a research grant from the National Institute on Drug Abuse (R01 DA015183-05) with cofunding from the National Cancer Institute, the National Institute of Child Health and Human Development, the National Institute of Mental Health, and the Center for Substance Abuse Prevention. The funders had no role in the design, conduct, analysis and reporting of the trial.

Example 2

The contributions of GP in reviewing the creative process instrument a few times are gratefully acknowledged. GP authored a few books such as Evidence based teaching—A practical approach (2006), Teaching today: A practical guide (2004) and How to be better at creativity (1996). He works for the Learning Skills Development Agency as a consultant on their Raising Quality and Achievement programme, assisting Action Research Development Projects in colleges, and assisting the Quality Improvement Team. He is a visiting examiner for the Institute of Education at London University. His experience includes physics teaching, managing teacher training, Inclusive Learning Facilitator, being a staff development officer, and managing college lesson observation.

26a Stakeholder involvement

Any involvement of the intervention developer in the design, conduct, analysis, or reporting of the trial.

Intervention developers are often authors of trial reports. Because involvement of intervention developers in trials may be associated with larger effect sizes, authors should report whether intervention developers were involved in designing the trial, delivering the intervention, assessing the outcomes, or interpreting the data. Authors should also disclose close collaborations with the intervention developers (e.g. being a former student of the developer, serving on an advisory or consultancy board related to the intervention), and any legal or intellectual rights related to the interventions, especially if these could lead to future financial interests.

Example/s:

The evaluation was conducted by the same team of people that designed the intervention. In order to promote objectivity of the evaluation we submitted the trial protocol for publication before the study began and submitted statistical analysis plans with prespecified primary outcomes before data were inspected, and these plans were scrutinized by an independent data-monitoring committee. As with all new projects, it is likely that the intervention was managed and implemented with greater expertise and enthusiasm by the project team than could be expected in subsequent iterations and scaling up of the intervention.

26b Stakeholder involvement

Other stakeholder involvement in trial design, conduct, or analyses.

Researchers are increasingly called to consult or collaborate with those who have a direct interest in the results of trials, such as providers, clients, and payers. Stakeholders may be involved in designing trials (e.g. choosing outcomes), delivering interventions, or interpreting trial results. Stakeholder involvement may help to better ensure the acceptability, implementability, and sustainability of interventions as they move from research to real-world settings. When applicable, authors should describe which stakeholders were involved, how they were recruited, and how they were involved in various stages of the trial. Authors may find reporting standards on public involvement in research useful.

Example/s:

The SKCHH [Seattle–King County Healthy Homes] project was designed as a community-based participatory research project with overall sponsorship by Seattle Partners for Healthy Communities, an Urban Research Center funded by the U.S. Centers for Disease Control and Prevention. Seattle Partners is a multidisciplinary partnership of community agencies, community activists, public health professionals, academics, and health providers that supports community-based participatory research addressing social determinants of health…. Both the Seattle Partners Board and the steering committee sought to assure that the project benefited all participants. This led to the staggered intervention design with low- and high-intensity groups. This design assured that low-intensity group participants initially received some immediate benefit (including interventions known to be useful, such as bedding encasements) while ultimately receiving all the benefits accorded the high-intensity group. While this design may have reduced the study’s power to demonstrate an effect of the high-intensity intervention relative to a “pure” control group receiving no intervention, we felt such a design was not ethical.

26c Stakeholder involvement

Incentives offered as part of the trial.

Incentives offered to participants, providers, organisations, and others involved in a trial can influence recruitment, engagement with the interventions, and quality of intervention delivery. Incentives include monetary compensation, gifts (e.g. meals, transportation, access to services), academic credit, and coercion (e.g. prison diversion). When incentives are used, authors should make clear at what trial stage and for what purpose incentives are offered, and what these incentives entail. Authors also should state whether incentives differ by trial group, such as compensation for participants receiving the experimental rather than the comparator interventions.

Example/s:

As part of an incentive package to each participating school, all teachers received free OCR program materials and professional development supports throughout the year. Control schools from five districts received a core math program (Everyday Mathematics) and related professional development at no charge, whereas two districts received a cash incentive of $5,000 per school in each year of the study. In this sense, the treatment and control schools received a similar amount of resources, with treatment schools receiving additional supports for their literacy programs and control schools receiving additional supports for their core math or other educational programs. Additionally, the study provided incentives to participants. Teachers received $15 for completing a survey in the fall and spring of each year. In addition, teachers received up to $45 for completing an interview and for allowing researchers to observe their classrooms up to three times per year. A staff person from each school volunteered as a school liaison and received a $500 stipend per year for coordinating research activities at each school.

To acknowledge this checklist in your methods, please state "We used the CONSORT-SPI checklist when writing our report [citation]". Then cite this checklist as Montgomery P, Grant S, Mayo-Wilson E, Macdonald G, Michie S, Hopewell S, Moher D; on behalf of the CONSORT-SPI Group. Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension. Trials. 2018;19:407.

The CONSORT-SPI checklist is distributed under the terms of the Creative Commons Attribution License CC-BY