Master Thesis Lab

Statistics & Methods Centre - Elementary statistics



Comparison of means: t-test
The t-test is used in many ways in statistics. The most common uses are (1) comparing one mean with a known mean, (2) testing whether two means differ, and (3) testing whether the means of matched pairs are equal. The two-sample version is also called Student's t-test (equal variances) or Welch's t-test (unequal variances). It is applicable to means from one sample, two independent samples, and paired samples. See analysis of variance for comparing more than two means. The t-test is also used in other contexts, for instance to test whether a correlation differs from 0.
Basic reading
Moore & McCabe, Chapter 7: Inference for Distributions.
Field, Chapter 7: Comparing two means.
Advanced reading
Moore & McCabe, Chapter 7: Inference for Distributions.
Software
SPSS => Analyze => Compare Means => One-Sample t Test
SPSS => Analyze => Compare Means => Independent-Samples t Test
SPSS => Analyze => Compare Means => Paired-Samples t Test
Annotated output independent sample t-test - BGSU
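For readers who want to check results outside SPSS, the same three tests can be run in Python with SciPy. Below is a minimal sketch; the data are simulated and the group names and values are illustrative assumptions, not part of the lab materials.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(1)
  men   = rng.normal(7.5, 1.0, 30)   # hypothetical scores, group 1
  women = rng.normal(8.5, 1.0, 30)   # hypothetical scores, group 2

  # (1) One-sample t-test: compare one mean with a known value (here 8.0)
  t1, p1 = stats.ttest_1samp(men, popmean=8.0)

  # (2) Independent-samples t-test: Student's version (equal variances assumed)
  t2, p2 = stats.ttest_ind(men, women, equal_var=True)
  # Welch's version when equal variances cannot be assumed
  t2w, p2w = stats.ttest_ind(men, women, equal_var=False)

  # (3) Paired-samples t-test: matched pairs (e.g. pre/post scores of the same cases)
  pre, post = men, men + rng.normal(0.3, 0.5, 30)
  t3, p3 = stats.ttest_rel(pre, post)

  print(f"one-sample:  t = {t1:.2f}, p = {p1:.3f}")
  print(f"independent: t = {t2:.2f}, p = {p2:.3f} (Welch: t = {t2w:.2f}, p = {p2w:.3f})")
  print(f"paired:      t = {t3:.2f}, p = {p3:.3f}")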
Reporting t-tests in publications
The results of a t-test are reported with the appropriate degrees of freedom between brackets, the value of t in two decimals, the descriptive level of significance (p), and the size of the groups. Also report effect sizes; these are not available in SPSS, so see Field for details or use the effect size calculator below.
Example: A statistically significant effect for sex was observed, t(58) = 7.76, p < .001, in particular men scored lower than women (Mmen = 7.5, SD = 1.00, N = 30; Mwomen = 8.5, SD = 1.00, N = 30). The effect size measured with Cohen's d was 1.00.
Never report significance levels as p = .0000; write p < .0001 instead.

Power: Power calculator
Effect size, independent samples: Effect size calculator
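As an alternative to the calculator, Cohen's d for two independent groups is simply the mean difference divided by the pooled standard deviation. A minimal Python sketch (the group data are assumed and purely illustrative):

  import numpy as np

  def cohens_d(x, y):
      # Cohen's d for two independent groups: mean difference divided by pooled SD
      nx, ny = len(x), len(y)
      pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
      return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

  # With the reporting example above (M = 8.5 vs 7.5, SD = 1.00 in both groups):
  # d = (8.5 - 7.5) / 1.00 = 1.00, i.e. a large effect.

  # Power (optional sketch, assumes the statsmodels package is installed):
  # from statsmodels.stats.power import TTestIndPower
  # TTestIndPower().solve_power(effect_size=1.0, alpha=0.05, power=0.8)  # required n per group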



Correlation
The (Pearson) correlation coefficient is a measure of the strength of the linear relationship between two interval (numeric) variables. Other correlation coefficients exist to measure the relationship between two ordinal variables, such as Spearman's rank correlation coefficient. The extreme values of the correlation coefficient are 1 and -1 (perfect relationship); the value 0 indicates no relationship. A t-test is used to test whether a sample Pearson correlation differs from 0.
Basic reading
Moore & McCabe, Chapter 2.2 - 2.4x:
Field, Chapter 4: Correlation
Advanced reading
Moore & McCabe, Chapter 2.2 - 2.4:.
Software
SPSS => Analyze => Correlate => Bivariate Pearson correlation
SPSS => Analyze => Correlate => Bivariate Spearman rank correlation
SPSS => Analyze => Correlate => Kendall's nonparametric correlation coefficient
Annotated output correlation - UCLA
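The three coefficients in the SPSS menu above have direct SciPy equivalents. A minimal sketch with simulated, illustrative variables (the variable names and values are assumptions):

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(2)
  length = rng.normal(175, 10, 200)               # simulated heights (cm)
  weight = 0.9 * length + rng.normal(0, 12, 200)  # simulated weights, linearly related

  r, p = stats.pearsonr(length, weight)           # Pearson (interval variables)
  rho, p_rho = stats.spearmanr(length, weight)    # Spearman rank correlation
  tau, p_tau = stats.kendalltau(length, weight)   # Kendall's tau
  print(f"r = {r:.2f} (p = {p:.4f}), rho = {rho:.2f}, tau = {tau:.2f}")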
Reporting correlations in publications
The correlation is reported with the appropriate degrees of freedom between brackets (N - 2 for the Pearson and Spearman correlations), the value of r in two decimals, and the descriptive level of significance (p). The SPSS test is two-sided.
Example: A statistically significant correlation between length and weight was found (r(1998) = .58, p < .0001); in particular, taller persons tended to be heavier and vice versa. As an effect size, such a correlation is considered large according to Cohen's (1992) criteria.
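The p-value in such a report comes from a t-test of r against 0 with N - 2 degrees of freedom. A minimal sketch of that calculation, reusing the r and N from the example purely for illustration:

  import numpy as np
  from scipy import stats

  r, n = 0.58, 2000
  t = r * np.sqrt((n - 2) / (1 - r**2))   # t-statistic for H0: rho = 0
  p = 2 * stats.t.sf(abs(t), df=n - 2)    # two-sided p-value
  print(f"t({n - 2}) = {t:.2f}, p = {p:.2g}")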

Power: Power calculator
Effect size: The correlation itself is an effect-size measure.



Chi-square tests
The (Pearson) chi-square coefficient is primarily used with one or two categorical variables. The coefficient is a measure of the difference between observed and expected frequencies.

One categorical variable: With one categorical variable the test assesses whether the observed frequencies can reasonably come from a known distribution (or model). In other words, the observed values are compared with the values expected under this known distribution. In such cases the test is primarily used for model testing (goodness of fit).

Two categorical variables: With two categorical variables the expected values usually are the values under the null hypothesis that there is no relationship between the two variables. The chi-square coefficient for two variables is therefore a measure of relationship (test of independence).

The chi-square coefficient is tested by comparing it with the chi-square distribution given the degrees of freedom. Other coefficients to measure the relationship between two variables in two-way contingency tables exist as well (for a list, see for instance the SPSS output with Crosstabs).
Note that, if possible, exact p-values are preferred over the standard (asymptotic) ones.
Basic reading
Moore & McCabe, Chapter 9.2.
Field, Chapter 16.1-16.4.
Software
One categorical variable: SPSS => Analyze => Nonparametric Tests => Chi-Square
Two categorical variables: SPSS => Analyze => Descriptive Statistics => Crosstabs => (1) Exact => Exact (2) Statistics => Chi-square (also check Phi and Cramér's V to assess the effect size)
Annotated output chi-square test
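Both variants of the chi-square test, and Cramér's V as an effect size, can also be computed in Python with SciPy. A minimal sketch with made-up counts (the frequencies are illustrative assumptions):

  import numpy as np
  from scipy import stats

  # One categorical variable: goodness of fit against a known (here uniform) distribution
  observed = np.array([18, 22, 20, 40])
  expected = np.array([25, 25, 25, 25])
  chi2_gof, p_gof = stats.chisquare(observed, f_exp=expected)

  # Two categorical variables: test of independence on a 2x2 contingency table
  table = np.array([[30, 10],
                    [20, 40]])
  chi2, p, df, exp_counts = stats.chi2_contingency(table)

  # Effect size: Cramer's V = sqrt(chi2 / (N * (min(rows, cols) - 1)))
  n = table.sum()
  cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
  print(f"chi2({df}) = {chi2:.2f}, p = {p:.4f}, Cramer's V = {cramers_v:.2f}")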
Reporting chi-square-tests in publications
When reporting the result of a chi-square test, always give the chi-square value, the number of degrees of freedom, and the (exact) p-value. Also give an interpretation of the outcome in terms of the frequencies, via odds ratios, percentages, type of association, etc.
Example: There was a significant association between variable A and variable B, χ2(3) = 12.2, p < .001. The interpretation is that ..... (It is hard to give a general formulation, because there are many designs in contingency tables.) Examples in Field, Section 16.4.6; Pallant, Section ; Tabachnick & Fidell, Section.

Power:
The chi-square test for one categorical variable has low power unless N is large.
Effect size: Often measured with Cramér's V and/or odds ratios - see e.g. Field, Section 16.4.5.



Non-parametric statistics
In several, mostly elementary, situations in which the assumptions of parametric tests cannot be met, one may resort to non-parametric tests instead of parametric tests such as the t-test, the Pearson correlation test, analysis of variance, etc. In such situations the power of the non-parametric (distribution-free) tests is often as good as that of the parametric ones, or better. It is often a good idea to run both types of tests when they are available and compare the resulting p-values: if the values are roughly the same there is little to worry about; if they differ, something needs to be sorted out.

Unfortunately, appropriate non-parametric counterparts are not available for every parametric technique (see, however, the method selection charts for comparable tests).
Basic reading
Moore & McCabe, Chapter x:
Field, Chapter 13
Advanced reading
Siegel, S. & Castellan, N.J. (1988). Nonparametric statistics for the behavioral sciences (2nd edition). New York: McGraw Hill.
Software (Test for equality of distributions)
SPSS => Analyze => Nonparametric tests =>
  • Test whether one sample follows a specified distribution: 1-Sample Kolmogorov-Smirnov (K-S) test
  • Test equality of two distributions: 2 independent samples - Mann-Whitney (Wilcoxon rank-sum) test
  • Test equality of K distributions: K independent samples - Kruskal-Wallis (one-way ANOVA by ranks) test
  • Test equality of two distributions: 2 related samples - Wilcoxon signed-rank test
  • Test equality of K distributions: K related samples - Friedman test
Annotated output non-parametric tests - UCLA
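The tests in the list above are all available in SciPy as well, which also makes it easy to compare a non-parametric p-value with its parametric counterpart. A minimal sketch with simulated, skewed data; the values are illustrative only:

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(3)
  g1 = rng.exponential(1.0, 25)          # three skewed samples of equal size
  g2 = rng.exponential(1.0, 25) + 0.5
  g3 = rng.exponential(1.0, 25) + 1.0

  ks, p_ks = stats.kstest(g1, "norm")            # 1-sample Kolmogorov-Smirnov (vs standard normal)
  u, p_u = stats.mannwhitneyu(g1, g2)            # 2 independent samples: Mann-Whitney U
  t, p_t = stats.ttest_ind(g1, g2)               # parametric counterpart, for comparing p-values
  h, p_h = stats.kruskal(g1, g2, g3)             # K independent samples: Kruskal-Wallis
  w, p_w = stats.wilcoxon(g1, g2)                # 2 related samples: Wilcoxon signed-rank
                                                 # (g1 and g2 treated as paired only for illustration)
  f, p_f = stats.friedmanchisquare(g1, g2, g3)   # K related samples: Friedman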
