ii. Secondary outcomes: Did the intervention change the rate of occurrence of one or more other outcomes? For example, did patients with open fractures in the intervention group report better or worse outcomes than those in the control group? Secondary outcomes should be as discrete as possible but may be qualitative.
2. Prospective cohort comparison study (nonrandomized controlled trial):
a. Gathers data prospectively on similar patients who receive two or more different treatments, with treatment allocation determined by factors other than randomization.
b. Less controlled than a randomized trial and at risk of selection bias (e.g., surgeon preference, patient desire).
Fig. 5.1 Diagram of different types of study design.
B. Observational
1. Descriptive:
a. Retrospective case series:
i. Report of a group of patients with a similar condition and/or treatment without any comparison group.
ii. Often represents a report of a single individual’s or institution’s experience.
iii. May be beneficial if reporting a group of patients with bad outcomes, in that it can help guide physicians away from dangerous interventions (e.g., Kirschner wire fixation of femur fractures results in 100% nonunion rates and 95% infection rates).
iv. Limited value if reporting a group of patients with good outcomes, in that it does not provide evidence that the intervention is better or worse than other commonly accepted interventions.
2. Analytical:
a. Prospective cohort study:
i. Gathers data prospectively on a novel treatment without a concurrent comparison group.
ii. Patients are identified based on exposure (e.g., femur fracture) and followed over time to determine who develops a particular outcome of interest (e.g., infection, nonunion).
iii. Prospective cohort study with historical controls. Data collected are analyzed and compared to data already in existence at a given institution or to historical reports in the literature.
b. Retrospective cohort comparison studies:
i. Data already exist at the time of study development.
ii. Normally entail medical record review (and radiographic review if applicable).
iii. Two or more different treatments are then compared, using the existing data, with respect to the development of the outcome(s) of interest.
iv. Disadvantage—if data points do not exist, then a potentially important question may not be answered.
c. Case-control study:
i. Retrospective study that determines if an exposure is associated with an outcome.
ii. Patients with a specific outcome or disease, such as arthritis (“the cases”), are compared to patients without that outcome (“the controls”), and the prevalence of potential risk factor(s), such as obesity, is compared between the two groups.
iii. Better for rare outcomes, because smaller numbers of patients are needed; results are typically reported as odds ratios (a worked sketch follows this list).
d. Cross-sectional study (exposure and outcome are assessed at a single point in time).
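As a rough illustration of how the case-control design above is analyzed, the following sketch computes an odds ratio from a hypothetical 2 x 2 table; the arthritis/obesity framing and all counts are invented for illustration only.

    # Hypothetical case-control data: arthritis ("cases") vs. no arthritis ("controls"),
    # with obesity as the potential risk factor; all counts are invented.
    obese_cases, nonobese_cases = 40, 60        # exposure status among the cases
    obese_controls, nonobese_controls = 20, 80  # exposure status among the controls

    # Odds of the exposure within each group
    odds_in_cases = obese_cases / nonobese_cases
    odds_in_controls = obese_controls / nonobese_controls

    # Odds ratio: how much more common the exposure is among cases than controls
    odds_ratio = odds_in_cases / odds_in_controls
    print(f"Odds ratio = {odds_ratio:.2f}")  # about 2.7 with these invented counts

With these made-up counts, the odds of obesity are roughly 2.7 times higher among the cases, which is the kind of association a case-control study is designed to detect.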
III. Levels of Evidence
A. Types of studies
1. Diagnostic—investigates a diagnostic test/protocol.
2. Prognostic—investigates a characteristic of patients and its effect on disease outcomes.
3. Therapeutic—most common in orthopaedics; investigates the results of a treatment.
4. Economic—generally related to cost/value propositions.
B. Retrospective versus prospective
1. A retrospective study has the study question formulated AFTER data acquisition.
2. A prospective study has the study question formulated PRIOR to acquisition of any data.
C. Levels (for diagnostic, prognostic, and therapeutic studies)
1. Level I—randomized controlled trials, inception cohort studies, testing of previously developed diagnostic tests.
2. Level II—prospective cohort (comparative) studies, development of diagnostic criteria (with rigorous reference standards and blinding), observational studies with a dramatic effect.
3. Level III—case-control studies, retrospective cohort (comparative) study, diagnostic studies without consistently applied reference standards.
4. Level IV—case series, patient series with historical control group, diagnostic studies with a poor reference standard.
5. Level V—expert opinion (mechanism-based reasoning).
6. Systematic reviews/meta-analyses—level is determined based upon quality of evidence reviewed.
a. These manuscripts are studies of the results of at least two previously published studies.
b. Level I—review of randomized controlled studies (homogeneity of studies is necessary).
c. Level II—review of cohort studies (or of randomized controlled studies with heterogeneous [inconsistent] results).
d. Level III—review of case-control studies.
IV. Basic Statistical Interpretation
A. Definitions
1. Null hypothesis:
a. States that, in a population, two interventions (or an intervention and no intervention) will result in no difference in outcomes.
b. Often presented in the negative (i.e., an intervention being studied will NOT affect the outcome).
2. Alternative hypothesis:
a. In a population, an intervention will result in a difference in outcome.
b. Often presented in the positive (i.e., an intervention being studied WILL affect the outcome).
3. P-value:
a. The probability, assuming the null hypothesis is true, of observing a difference at least as large as the one found in the study by chance alone.
b. A threshold of < 0.05 is often used for statistical significance (i.e., if the null hypothesis were true, a difference this large would be expected by chance less than 5% of the time); a worked sketch follows below.
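As a minimal sketch of how a p-value is obtained for a two-group comparison, the example below runs Fisher's exact test (scipy.stats.fisher_exact) on an invented 2 x 2 table; the infection outcome and the counts are assumptions chosen only for illustration.

    from scipy.stats import fisher_exact

    # Invented comparison: infections vs. no infections in each group.
    intervention = [4, 46]  # 4 infections among 50 intervention patients
    control = [14, 36]      # 14 infections among 50 control patients

    # The p-value is the probability, assuming the null hypothesis (no true
    # difference) is correct, of seeing a difference at least this extreme
    # by chance alone.
    _, p_value = fisher_exact([intervention, control])
    print(f"p = {p_value:.3f}")  # below 0.05 here, so the null hypothesis is rejected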
4. Power:
a. The probability that a study will detect an effect of a given size if one truly exists; a trial should be large enough to detect a statistically significant effect, if it exists, and to give reasonable confidence that no effect exists if none is detected by the trial.
b. The calculation is based on existing data (such as previously published results) or on stated assumptions.
c. Authors need to determine a minimum clinically important difference (MCID) in order to perform this calculation (a sample-size sketch follows below).
d. Underpowered studies may not be clinically relevant, even if the p-value indicates statistical significance (a larger sample size may change the results).
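A minimal sketch of an a priori sample-size (power) calculation for comparing two proportions is shown below, using the standard normal-approximation formula; the baseline event rate, MCID, alpha, and power values are assumptions chosen only for illustration.

    from math import ceil
    from scipy.stats import norm

    # Assumed inputs (illustrative only)
    p_control = 0.20   # expected event rate with the standard treatment
    mcid = 0.10        # minimum clinically important difference to detect
    p_treatment = p_control - mcid
    alpha = 0.05       # two-sided significance threshold
    power = 0.80       # desired probability of detecting the MCID if it truly exists

    # Normal-approximation formula for two independent proportions
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    n_per_group = (z_alpha + z_beta) ** 2 * variance / mcid ** 2

    print(f"Patients needed per group: {ceil(n_per_group)}")  # about 197 with these assumptions

Enrolling substantially fewer patients than this would leave the study underpowered for the chosen MCID.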
5. Fragile p-value:
a. Beware when one group in a comparison study has zero events.
i. Were there no events because there never will be events, or were there no events because the sample size was not large enough? (See the sketch at the end of this section.)
ii.
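To illustrate the zero-event concern raised in this item, the sketch below reruns Fisher's exact test after adding a single event to the zero-event group; all counts are invented and serve only to show why such results deserve caution.

    from scipy.stats import fisher_exact

    # Invented counts: 0 infections in 20 treated patients vs. 5 in 20 controls.
    _, p_zero_events = fisher_exact([[0, 20], [5, 15]])

    # The same comparison after a single infection is added to the treated group.
    _, p_one_event = fisher_exact([[1, 19], [5, 15]])

    print(f"0 vs 5 events: p = {p_zero_events:.3f}")  # just below 0.05
    print(f"1 vs 5 events: p = {p_one_event:.3f}")    # well above 0.05
    # One additional event moves the result across the 0.05 threshold,
    # which is why a zero-event group warrants extra scrutiny.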