What does "significant" mean? Or: are there alternatives to the "alternative" hypothesis?
Abstract
The paper points out the methodological shortcomings of experimental approaches aimed at verifying the "significance" of the alternative versus the null hypothesis when they are used in applied research. The difference between the statistical "significance" and the practical (e.g., clinical) "meaningfulness" of results is underlined when: 1) the number of cases is small (the problem of "power"); 2) many intervening variables cannot be controlled (the problem of "error variance"); 3) generalization is difficult (the problem of probabilistic inference); 4) the complexity of the object of research requires an epistemology different from one based on reductionism and linear causality. Some alternative methods for verifying hypotheses are proposed: the use of "effect sizes" and cumulative analyses, also for single-case studies; action research, with its specific procedures; and qualitative approaches. In conclusion, to take complexity into account when testing hypotheses, we need a multiplicity of methods suited not only to reaching "significant" results but also to allowing "meaningfulness" to emerge.
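The contrast the abstract draws between statistical "significance" and practical "meaningfulness" can be illustrated numerically. The sketch below (an illustration added here, not from the paper; the means, SD, and sample sizes are hypothetical) computes Cohen's d and a two-sample t-statistic, showing that the same negligible effect is non-significant with a small sample but "highly significant" with a very large one:

```python
import math

def cohens_d(mean1, mean2, sd_pooled):
    """Standardized effect size: the difference in means in SD units."""
    return (mean1 - mean2) / sd_pooled

def t_statistic(mean1, mean2, sd_pooled, n_per_group):
    """Two-sample t-statistic for equal group sizes and a pooled SD."""
    standard_error = sd_pooled * math.sqrt(2.0 / n_per_group)
    return (mean1 - mean2) / standard_error

# A tiny effect (d = 0.05, conventionally "negligible") ...
d = cohens_d(100.5, 100.0, 10.0)              # 0.05

# ... is far from significant with n = 20 per group ...
t_small = t_statistic(100.5, 100.0, 10.0, 20)     # ~0.16

# ... but crosses the conventional significance threshold (|t| > ~1.96)
# with n = 10,000 per group, although its practical meaning is unchanged.
t_large = t_statistic(100.5, 100.0, 10.0, 10000)  # ~3.54
```

The effect size stays constant while the t-statistic grows with the sample size, which is why the abstract recommends reporting effect sizes rather than relying on significance tests alone.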
Keywords
- methodology
- significance
- power
- effect size
- generalization