Experimental and observational studies
The statistical analysis of a dataset always depends on how the data have been collected. An experimental study, e.g. a randomised clinical trial, can be designed so that validity problems (selection bias, misclassification bias, confounding bias) are prevented, for example by concealed treatment allocation, randomisation of patients to treatment, and masking of the treatment. The statistical analysis can then focus entirely on precision issues such as estimating sample size and statistical power, testing null hypotheses, and estimating effect sizes. Such interventions are, however, impossible in an observational study. Validity issues must instead be addressed in the statistical analysis, usually by adjustments, but this can succeed only when the analyst knows what to adjust for and the necessary data are available in the dataset, which is generally not the case.
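As an illustration of the precision calculations mentioned above, the required sample size for a two-group trial can be computed from a postulated effect size, significance level, and power. A minimal sketch using statsmodels (the 0.5 standardised effect size, 5% significance level, and 80% power are arbitrary assumptions chosen for illustration):

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for a two-sample t-test, given:
#   effect_size: postulated standardised difference (Cohen's d), assumed 0.5
#   alpha: two-sided significance level, assumed 0.05
#   power: desired power, assumed 0.80
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)

print(f"required sample size per group: {n_per_group:.1f}")
```

Note how the calculation concerns precision only; it presupposes that the trial design has already dealt with validity.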
One consequence of these differences in study design is that results from experimental studies are considered more accurate and reliable than results from observational studies. Another consequence is that, even with the same statistical methods, different analysis strategies may be necessary. For example, regression analysis is often used in an experimental study to account for randomisation stratification factors and for the baseline value when estimating change from baseline, whereas the purpose of regression analysis in an observational study is usually to adjust estimated effect sizes for confounding by association with competing risk factors.
Common mistakes in manuscripts include using trial terminology (e.g. primary and secondary outcomes, intention-to-treat) in observational studies, where these terms have no relevant definition, and using observational analysis strategies (confounding adjustments) in randomised trials, where confounding has already been prevented by randomisation.