When conducting a hypothesis test, there are four possible outcomes. Rejecting a false null hypothesis constitutes a correct decision. Retaining a true null hypothesis is also a correct decision. However, if the researcher rejects a true null hypothesis, he makes a type I error. It is a type II error if the researcher retains a false null hypothesis.

The null hypothesis states that there is no effect, contradicting the research hypothesis. When H0 is true, the hypothesized sampling distribution qualifies as the true sampling distribution. However, when a randomly selected sample mean originates from the rejection region just by chance, H0 is rejected and the researcher has made a type I error. The probability of a type I error equals α, the level of significance, and the probability of a correct decision equals 1 − α. Type I errors are often called false alarms because decisions may be made, money may be spent, or further research may be prompted when none of these is truly warranted.

When H0 is false, an incorrect decision, or type II error, is called a miss because the effect goes undetected. The probability of a type II error is β. Whenever the effect is large, the probability of a correct decision is high and equals 1 − β. On the other hand, when the effect is small, the probability of a correct decision is lower and the probability of a type II error increases.

One way to increase the probability of detecting a false null hypothesis is to increase sample size. This is true because increasing sample size causes a reduction in the standard error of the mean. An extremely large sample size will thus produce a very sensitive hypothesis test. This is not always desirable, because the test would detect even a small effect that has no practical importance. The power of a hypothesis test equals the probability of detecting an effect, that is, of rejecting a false null hypothesis; it equals 1 − β. To determine an appropriate sample size, the researcher must decide (1) what is the smallest effect that merits detection and (2) what is an appropriate detection rate. When these two questions have been answered, the researcher determines sample size by consulting power curves or tables.
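The relationships among α, β, the effect size, and the sample size can be made concrete with a short calculation. The sketch below, a minimal illustration assuming a one-tailed z test with known population standard deviation, computes power as 1 − β and shows that a larger sample size shrinks the standard error and raises power; the function name `z_test_power` and the specific numbers are illustrative, not from the text.

```python
from statistics import NormalDist

def z_test_power(effect, sigma, n, alpha=0.05):
    """Power of a one-tailed z test: probability of rejecting a false H0.

    effect: true difference between the actual and hypothesized means
    sigma:  population standard deviation (assumed known)
    n:      sample size
    alpha:  probability of a type I error (the level of significance)
    """
    z = NormalDist()
    se = sigma / n ** 0.5            # standard error shrinks as n grows
    crit = z.inv_cdf(1 - alpha)      # critical z marking the rejection region
    # Under the true distribution, the test statistic is centered at effect/se,
    # so a miss occurs when it still falls short of the critical value.
    beta = z.cdf(crit - effect / se)  # probability of a type II error (miss)
    return 1 - beta                   # power = 1 - beta

# When the effect is zero, H0 is true and "power" reduces to alpha,
# the false-alarm rate; as n grows, power against a real effect rises.
for n in (25, 100, 400):
    print(n, round(z_test_power(effect=2, sigma=15, n=n), 3))
```

Note that with `effect=0` the function returns exactly `alpha`: when there is no effect, the only way to reject H0 is by chance, which is the type I error described above.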
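The two planning questions, the smallest effect worth detecting and the desired detection rate, pin down the required sample size. As a hedged sketch of what the power curves or tables encode, the standard closed-form result for a one-tailed z test with known σ is n = ((z_α + z_β)·σ / effect)²; the function `required_n` and its inputs are illustrative assumptions, not values from the text.

```python
import math
from statistics import NormalDist

def required_n(effect, sigma, alpha=0.05, power=0.80):
    """Smallest n for a one-tailed z test to detect `effect` with given power.

    effect: smallest effect that merits detection
    power:  desired detection rate (1 - beta)
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)   # critical value for the rejection region
    z_beta = z.inv_cdf(power)        # z corresponding to the detection rate
    n = ((z_alpha + z_beta) * sigma / effect) ** 2
    return math.ceil(n)              # round up to the next whole subject
```

For example, detecting an effect of 2 points against σ = 15 with 80% power demands a few hundred observations, while halving the smallest effect of interest roughly quadruples the required n, which is why the researcher must decide deliberately how small an effect is worth detecting.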