
Question:

The standard deviation (or, as it is usually called, the standard error) of the sampling distribution for the sample mean, x̄, is equal to the standard deviation of the population from which the sample was selected, divided by the square root of the sample size. That is, σ_x̄ = σ/√n.

a. As the sample size is increased, what happens to the standard error of x̄? Why is this property considered important?

b. Suppose a sample statistic has a standard error that is not a function of the sample size. In other words, the standard error remains constant as n changes. What would this imply about the statistic as an estimator of a population parameter?

c. Suppose another unbiased estimator (call it A) of the population mean is a sample statistic with a standard error equal to σ_A = σ/∛n (that is, σ divided by the cube root of n). Which of the sample statistics, x̄ or A, is preferable as an estimator of the population mean? Why?

d. Suppose that the population standard deviation σ is equal to 10 and that the sample size is 64. Calculate the standard errors of x̄ and A. Assuming that the sampling distribution of A is approximately normal, interpret the standard errors. Why is the assumption of (approximate) normality unnecessary for the sampling distribution of x̄?
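As a quick numerical check (a sketch in Python; the variable names are illustrative, not from the text), the standard errors in part d can be computed directly from σ = 10 and n = 64, and varying n illustrates the shrinking standard error asked about in part a:

```python
import math

sigma = 10.0  # population standard deviation (given in part d)
n = 64        # sample size (given in part d)

se_xbar = sigma / math.sqrt(n)   # standard error of x-bar: sigma / sqrt(n)
se_A = sigma / n ** (1 / 3)      # standard error of A: sigma / cube root of n

print(f"SE of x-bar: {se_xbar}")    # 10 / 8 = 1.25
print(f"SE of A:     {se_A:.2f}")   # 10 / 4 = 2.50

# Part a: the standard error of x-bar shrinks as n grows
for size in (4, 16, 64, 256):
    print(size, sigma / math.sqrt(size))
```

Since 1.25 < 2.50, x̄ has the smaller standard error at n = 64, consistent with the comparison asked for in part c.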


Related Book:

Statistics For Business And Economics

ISBN: 9781292413396

14th Global Edition

Authors: James McClave, P. Benson, Terry Sincich
