Check the statistical significance of the result you are analyzing. The result may come from an analysis of variance (ANOVA), a regression, or another statistical procedure. You need a statistically significant association, such as between income and level of education, to give only one example, before calculating an effect size to determine the strength, or practical significance, of that association.
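As a rough illustration of this step, here is a minimal sketch of a two-sample t statistic with a pooled variance, using only Python's standard library. The function name, the groups, and the scores are hypothetical; in practice your statistics package reports this test (and its p-value) for you.

```python
import math
import statistics

def pooled_t_statistic(a, b):
    """Two-sample t statistic under an equal-variance assumption."""
    na, nb = len(a), len(b)
    var_a = statistics.variance(a)  # sample variance of group a
    var_b = statistics.variance(b)  # sample variance of group b
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))  # standard error of the mean difference
    return (statistics.mean(a) - statistics.mean(b)) / se

experimental = [5, 6, 7, 8, 9]  # hypothetical scores
control = [1, 2, 3, 4, 5]
t = pooled_t_statistic(experimental, control)
# Compare t against a t distribution with len(a) + len(b) - 2 degrees of freedom.
```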
Locate the mean values of the two groups or samples you are studying. You'll need these to calculate an effect size. The two groups are usually referred to as the experimental or intervention group and the control group. The results of your statistical procedure will display the mean values in the descriptive statistics section or table.
Compute the pooled standard deviation of the two groups. Despite the name, this is not the standard deviation of the two groups merged into a single sample: it is the square root of the weighted average of the two groups' variances, with each variance weighted by its degrees of freedom (sample size minus one). A standard deviation shows the level of spread in a distribution of values or scores.
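The pooled standard deviation described above can be sketched in a few lines of standard-library Python. The function name and the sample scores are hypothetical, used only to show the arithmetic:

```python
import math
import statistics

def pooled_sd(a, b):
    """Pooled SD: square root of the df-weighted average of the sample variances."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return math.sqrt(sp2)

experimental = [5, 6, 7, 8, 9]  # hypothetical scores; sample variance 2.5
control = [1, 2, 3, 4, 5]       # hypothetical scores; sample variance 2.5
sp = pooled_sd(experimental, control)  # sqrt(2.5), about 1.58
```

Because both hypothetical groups happen to have the same variance here, the pooled value equals that common variance; with unequal variances it falls between them.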
Calculate your effect size. One of the best-known measures of effect size is "Cohen's d", named for the American statistician and psychologist Jacob Cohen. To calculate the value of "d", you subtract the mean of the control group from the mean of the experimental group and divide by the pooled standard deviation.
Interpret the result of your calculation in Step 4. Note that "d" is not bounded by 1: a value above 1 simply means the group means differ by more than one pooled standard deviation, and a negative value means the control group scored higher. Interpretations vary, but by Cohen's widely cited benchmarks, a "d" value of 0.2 indicates only a small effect; 0.5, a medium effect; and 0.8 or greater, a large effect. The "d" score indicates the practical significance of the association you are exploring.
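The conventional benchmarks can be written as a small helper. The cutoffs below are Cohen's rules of thumb, not hard rules, and the function name is hypothetical:

```python
def interpret_d(d):
    """Label the magnitude of Cohen's d using the conventional benchmarks."""
    magnitude = abs(d)  # sign only reflects which group scored higher
    if magnitude >= 0.8:
        return "large"
    if magnitude >= 0.5:
        return "medium"
    if magnitude >= 0.2:
        return "small"
    return "negligible"

interpret_d(2.53)  # → "large"
interpret_d(0.3)   # → "small"
```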