The “New” Statistics: effect sizes and meta-analytic methods


  • Instructor(s): Christina Bergmann, Radboud University Nijmegen
  • Also available as advanced: No

“Classical” statistics, most prominently the use of p-values as an indicator of whether a finding is “significant” (often taken to mean interesting), has recently come under fire. An ongoing discussion is thus whether the focus should shift away from p-values and towards effect sizes and meta-analyses.

But what are effect sizes, how can a single researcher benefit from meta-analyses, and (how) are they better than p-values? In this tutorial we will first discuss why p-values have been criticized, including practices that diminish their informativeness. Then we will go over the logic and interpretation of effect sizes. More specifically, we will discuss how effect sizes and meta-analyses can be useful for two main purposes: (1) For theory building and evaluation, meta-analyses can aggregate over diverse studies and pinpoint when abilities emerge. (2) Meta-analytic methods can inform practical decisions during study design, such as deciding a priori how many participants are necessary for a sufficiently powered study and which method to choose (both uses are sketched below). The importance of sample size decisions in particular is often underestimated: flexibility in data collection vastly increases the risk of false positives (significant findings in the absence of an effect), while samples that are too small lead to a higher risk of false negatives (null results in the presence of an effect). In sum, this tutorial will help with interpreting completed studies and improve the planning of future experiments.
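
As a rough illustration of these two uses, the sketch below pools hypothetical effect sizes from four studies with a fixed-effect, inverse-variance weighted average, and then asks how many participants per group would be needed to detect the pooled effect with adequate power. The numbers are invented, the power calculation assumes a two-group t-test via statsmodels' TTestIndPower, and none of this is taken from the tutorial materials themselves.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical Cohen's d values from four studies and their sampling variances
# (in practice these would come from a meta-analysis or reported means and SDs).
d = np.array([0.55, 0.30, 0.42, 0.18])
var = np.array([0.04, 0.02, 0.05, 0.03])

# (1) Fixed-effect meta-analytic aggregation: inverse-variance weighting.
w = 1.0 / var
d_pooled = np.sum(w * d) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
print(f"pooled d = {d_pooled:.2f} (SE = {se_pooled:.2f})")

# (2) A priori sample size: participants per group needed to detect the pooled
# effect with 80% power at alpha = .05 in a two-sided independent-samples t-test.
n_per_group = TTestIndPower().solve_power(
    effect_size=d_pooled, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"required n per group: {int(np.ceil(n_per_group))}")
```

The same pooled estimate could of course feed a random-effects model or a different test family; the point is simply that a meta-analytically grounded effect size, rather than a hopeful guess, drives the sample size decision.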

Further reading:
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Med, 2(8), e124.
