Fundamental stuff to know...
In research, statistical tests don’t prove that the alternative hypothesis is true. Instead, we use statistical analysis to provide evidence for rejecting, or failing to reject, the null hypothesis.
The p-value is a measure of the strength of evidence against the null hypothesis. Alpha (α) is the cutoff for the p-value, chosen before running the test. In my research, α = .05, which is standard for most dissertations.
- when p-value < α (in my case, α = .05), reject the null hypothesis
- when p-value ≥ α, we fail to reject the null hypothesis (see the sketch below)
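To make the decision rule concrete, here’s a minimal Python sketch. The two groups, their scores, and the choice of an independent-samples t-test are all assumptions for illustration, not the actual test from my research:

```python
# A minimal sketch of the p-value vs. alpha decision rule.
# The group data below are hypothetical, made up for illustration.
from scipy import stats

alpha = 0.05  # cutoff chosen before looking at the data

group_a = [2.1, 2.5, 3.0, 2.8, 2.3, 2.9]  # hypothetical scores
group_b = [3.2, 3.6, 2.9, 3.8, 3.4, 3.1]  # hypothetical scores

# Independent-samples t-test: the null hypothesis is "no difference in means"
t_stat, p_value = stats.ttest_ind(group_a, group_b)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```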
When you draw conclusions about a population based on a sample, there is always a chance of error. You hope your sample represents the population, but our world is not perfect...damnit.
- Type I error = α = the probability of rejecting a null hypothesis that is true. You see an effect when there really isn’t one. It’s a false positive: we reject a true null hypothesis (simulated below).
- Type II error = β = the probability of failing to reject a null hypothesis that is false. It’s a false negative: the effect is there, but we don’t find it.
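Here’s a quick simulation of what a Type I error rate looks like in practice: both groups are drawn from the same population, so the null hypothesis is true and every rejection is a false positive. The normal distribution, sample size, and trial count are arbitrary choices for the sketch:

```python
# Simulating the Type I error rate: both groups come from the SAME
# distribution, so the null is true and any rejection is a false positive.
# With alpha = .05 we expect roughly 5% of tests to reject anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_trials = 10_000

false_positives = 0
for _ in range(n_trials):
    a = rng.normal(loc=0, scale=1, size=30)  # null is true:
    b = rng.normal(loc=0, scale=1, size=30)  # identical populations
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"False positive rate: {false_positives / n_trials:.3f}")  # ~0.05
```

With α = .05, the printed rate lands near 0.05: about 1 in 20 tests rejects a null that is actually true.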
You can reduce the likelihood of committing a Type I error by making alpha smaller. You can reduce the likelihood of committing a Type II error by increasing the sample size (demonstrated in the power sketch below).
The power of the hypothesis test = 1 − β (one minus the probability of committing a Type II error) = the probability of rejecting a null hypothesis when it should be rejected. Power represents the likelihood of detecting an effect that is real.
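To tie power, β, and sample size together, here’s a simulation sketch: the two groups now really do differ (a hypothetical effect of half a standard deviation), so power is simply the fraction of trials that correctly reject the null. The effect size and the two sample sizes are made up for illustration:

```python
# Estimating power = 1 - beta by simulation: the groups REALLY differ by
# 0.5 standard deviations, so power is the fraction of trials that
# correctly reject the null. Two sample sizes show power rising
# (and beta falling) as n grows; both ns are arbitrary for the sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha = 0.05
n_trials = 5_000

for n in (20, 80):
    rejections = 0
    for _ in range(n_trials):
        a = rng.normal(loc=0.0, scale=1.0, size=n)
        b = rng.normal(loc=0.5, scale=1.0, size=n)  # a true effect exists
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1
    power = rejections / n_trials
    print(f"n = {n}: estimated power = {power:.3f}, beta = {1 - power:.3f}")
```

Running this, power climbs substantially as n goes from 20 to 80, which is exactly why increasing the sample size reduces the likelihood of committing a Type II error.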