Econometrics 1st Edition Bruce Hansen - Solutions
Consider the model Y = X′β + e with E[e | Z] = 0, with Y scalar and X and Z each a k-vector. You have a random sample (Yᵢ, Xᵢ, Zᵢ : i = 1, ..., n). (a) Assume that X is exogenous in the sense that E[e | Z, X] = 0. Is the IV estimator β̂_iv unbiased? (b) Continuing to assume that X is exogenous, find …
Suppose that price and quantity are determined by the intersection of the linear demand and supply curves: Demand: Q = a₀ + a₁P + a₂Y + e₁; Supply: Q = b₀ + b₁P + b₂W + e₂, where income (Y) and wage (W) are determined outside the market. In this model are the parameters identified?
Take the linear model Y = X′β + e with E[e | X] = 0 where X and β are 1×1. (a) Show that E[Xe] = 0 and E[X²e] = 0. Is Z = (X, X²)′ a valid instrument for estimation of β? (b) Define the 2SLS estimator of β using Z as an instrument for X. How does this differ from OLS?
For Theorem 12.3 establish that V̂_β →p V_β.
In the structural model Y = X′β + e with X = Γ′Z + u and Γ ℓ×k, ℓ ≥ k, we claim that a necessary condition for β to be identified (can be recovered from the reduced form) is rank(Γ) = k. Explain why this is true. That is, show that if rank(Γ) < k then β is not identified.
The reduced form between the regressors X and instruments Z takes the form X = Γ′Z + u where X is k×1, Z is ℓ×1, and Γ is ℓ×k. The parameter Γ is defined by the population moment condition E[Zu′] = 0. Show that the method of moments estimator for Γ is Γ̂ = (Z′Z)⁻¹(Z′X).
Take the linear model Y = X′β + e. Let the OLS estimator for β be β̂ with OLS residuals êᵢ. Let the IV estimator for β using some instrument Z be β̃ with IV residuals ẽᵢ = Yᵢ − Xᵢ′β̃. If X is indeed endogenous, will IV “fit” better than OLS in the sense that ∑ᵢ₌₁ⁿ ẽᵢ² < ∑ᵢ₌₁ⁿ êᵢ²?
Take the linear model Y = X′β + e with E[e | X] = 0. Suppose σ²(x) = E[e² | X = x] is known. Show that the GLS estimator of β can be written as an IV estimator using some instrument Z. (Find an expression for Z.)
Consider the single equation model Y = Zβ + e where Y and Z are both real-valued (1×1). Let β̂ denote the IV estimator of β using as an instrument a dummy variable D (takes only the values 0 and 1). Find a simple expression for the IV estimator in this context.
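The dummy-instrument case lends itself to a quick numerical check. The sketch below (Python with simulated data; the DGP, coefficient values, and sample size are illustrative assumptions, not part of the exercise) computes the generic just-identified IV formula and confirms that with a 0/1 instrument it reduces to a ratio of subsample means:

```python
import numpy as np

# Simulated data under an assumed DGP: Y = Z*beta + e with beta = 2,
# and a regressor Z correlated with the dummy instrument D.
rng = np.random.default_rng(0)
n = 10_000
D = rng.integers(0, 2, size=n).astype(float)   # binary instrument
Z = 1.0 + 0.5 * D + rng.normal(size=n)         # regressor
Y = 2.0 * Z + rng.normal(size=n)

# Generic just-identified IV formula: beta_iv = (D'Z)^{-1} (D'Y)
beta_iv = (D @ Y) / (D @ Z)

# With a 0/1 instrument, D'Y and D'Z are sums over the D = 1 subsample,
# so the IV estimator is a ratio of subsample means.
beta_ratio = Y[D == 1].mean() / Z[D == 1].mean()

print(beta_iv, beta_ratio)   # identical up to floating point
```

The same arithmetic applies to any binary instrument; only the simulated numbers are assumptions.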
The observations are i.i.d., (Y₁ᵢ, Y₂ᵢ, Xᵢ : i = 1, ..., n). The dependent variables Y₁ and Y₂ are real-valued. The regressor X is a k-vector. The model is the two-equation system Y₁ = X′β₁ + e₁, E[Xe₁] = 0; Y₂ = X′β₂ + e₂, E[Xe₂] = 0. (a) What are the appropriate estimators β̂₁ and β̂₂ for β₁ …
Take the model Y = π′β + e, π = E[X | Z] = Γ′Z, E[e | Z] = 0, where Y is scalar, X is a k-vector and Z is an ℓ-vector. β and π are k×1 and Γ is ℓ×k. The sample is (Yᵢ, Xᵢ, Zᵢ : i = 1, ..., n) with πᵢ unobserved. Consider the estimator β̂ for β by OLS of Y on π̂ = Γ̂′Z where Γ̂ is the OLS …
Prove Theorem 11.6.
Prove Theorem 11.5. Hint: First, show that it is sufficient to show that E[X′X](E[X′Σ⁻¹X])⁻¹E[X′X] ≤ E[X′ΣX]. Second, rewrite this equation using the transformations U = Σ^(−1/2)X and V = Σ^(1/2)X, and then apply the matrix Cauchy–Schwarz inequality (B.33).
Prove Theorem 11.4.
Show that (11.17) follows from the steps described.
Show that (11.16) follows from the steps described.
Prove Theorem 11.3.
Prove Theorem 11.2.
Show (11.14) when the regressors are common across equations, X_j = X, and the errors are conditionally homoskedastic (11.8).
Show (11.13) when the regressors are common across equations, X_j = X.
Prove Theorem 11.1.
Show (11.12) when the regressors are common across equations, X_j = X, and the errors are conditionally homoskedastic (11.8).
Show (11.11) when the regressors are common across equations, X_j = X.
Show (11.10) when the errors are conditionally homoskedastic (11.8).
In Exercise 4.26 you extended the work from Duflo, Dupas, and Kremer (2011). Repeat that regression, now calculating the standard error by cluster bootstrap. Report a BCa confidence interval for each coefficient.
In Exercise 7.28 you estimated a wage regression with the cps09mar dataset and the subsample of white male Hispanics. Further restrict the sample to those never-married and living in the Midwest region. (This sample has 99 observations.) As in subquestion (b) let θ be the ratio of the return to one …
In Exercise 9.27 you estimated the Mankiw, Romer, and Weil (1992) unrestricted regression. Let θ be the sum of the second, third, and fourth coefficients. (a) Estimate the regression by unrestricted least squares and report standard errors calculated by asymptotic, jackknife and the bootstrap. (b) …
In Exercise 9.26 you estimated a cost function for 145 electric companies and tested the restriction θ = β₃ + β₄ + β₅ = 1. (a) Estimate the regression by unrestricted least squares and report standard errors calculated by asymptotic, jackknife and the bootstrap. (b) Estimate θ = β₃ + β₄ …
The model is Y = X′β + e with E[Xe] ≠ 0. We know that in this case the least squares estimator may be biased for the parameter β. We also know that the nonparametric BC percentile interval is (generally) a good method for confidence interval construction in the presence of bias. Explain whether …
The RESET specification test for nonlinearity in a random sample (due to Ramsey (1969)) is the following. The null hypothesis is a linear regression Y = X′β + e with E[e | X] = 0. The parameter β is estimated by OLS, yielding predicted values Ŷᵢ. Then a second-stage least squares regression is …
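The two-stage structure of the RESET test can be sketched in code. In the snippet below (simulated data under a true linear null; adding the squared and cubed fitted values and using the homoskedastic Wald form are common conventions assumed here, not taken from the truncated exercise text), the statistic is compared with a χ²(2) critical value:

```python
import numpy as np

# Assumed DGP: a correct linear model, so the RESET null holds.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)

# First stage: OLS fitted values
beta = np.linalg.lstsq(X, y, rcond=None)[0]
yhat = X @ beta

# Second stage: augment the regression with powers of the fitted values
Z = np.column_stack([X, yhat**2, yhat**3])
gamma = np.linalg.lstsq(Z, y, rcond=None)[0]
e = y - Z @ gamma

# Homoskedastic Wald statistic for the two added coefficients
V = (e @ e / (n - Z.shape[1])) * np.linalg.inv(Z.T @ Z)
g = gamma[-2:]
W = g @ np.linalg.inv(V[-2:, -2:]) @ g
print(W)   # compare with the chi2(2) 5% critical value, 5.99
```

Only the DGP and the number of added powers are my choices; the test logic (OLS, augment with powers of Ŷ, Wald test on the added coefficients) follows the exercise's description.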
The model is i.i.d. data, i = 1, ..., n, Y = X′β + e and E[e | X] = 0. Does the presence of conditional heteroskedasticity invalidate the application of the nonparametric bootstrap? Explain.
Take the model Y = X₁β₁ + X₂β₂ + e with i.i.d. observations, E[Xe] = 0, and scalar X₁ and X₂. Describe how you would construct the percentile-t bootstrap confidence interval for θ = β₁/β₂.
Take the model Y = X₁β₁ + X₂β₂ + e with E[Xe] = 0 and scalar X₁ and X₂. The parameter of interest is θ = β₁β₂. Show how to construct a confidence interval for θ using the following three methods. (a) Asymptotic Theory. (b) Percentile Bootstrap. (c) Percentile-t Bootstrap. Your answer should …
Suppose a Ph.D. student has a sample (Yᵢ, Xᵢ, Zᵢ : i = 1, ..., n) and estimates by OLS the equation Y = Zα + X′β + e where α is the coefficient of interest. She is interested in testing H₀: α = 0 against H₁: α ≠ 0. She obtains α̂ = 2.0 with standard error s(α̂) = 1.0 so the value of …
The model is Y = X₁′β₁ + X₂′β₂ + e with E[Xe] = 0, and both X₁ and X₂ k×1. Describe how to test H₀: β₁ = β₂ against H₁: β₁ ≠ β₂ using the nonparametric bootstrap.
The model is Y = X₁′β₁ + X₂′β₂ + e with E[Xe] = 0 and X₂ scalar. Describe how to test H₀: β₂ = 0 against H₁: β₂ ≠ 0 using the nonparametric bootstrap.
Take the model Y = X′β + e with E[Xe] = 0. Describe the bootstrap percentile confidence interval for σ² = E[e²].
The observed data is {Yᵢ, Xᵢ} ∈ ℝ×ℝᵏ, k > 1, i = 1, ..., n. Take the model Y = X′β + e with E[Xe] = 0. (a) Write down an estimator for μ₃ = E[e³]. (b) Explain how to use the percentile method to construct a 90% confidence interval for μ₃ in this specific model.
Consider the model Y = X′β + e with E[e | X] = 0, Y scalar, and X a k-vector. You have a random sample (Yᵢ, Xᵢ : i = 1, ..., n). You are interested in estimating the regression function m(x) = E[Y | X = x] at a fixed vector x and constructing a 95% confidence interval. (a) Write down the standard …
Take the normal regression model Y = X′β + e with e | X ~ N(0, σ²), where we know the MLE equals the least squares estimators β̂ and σ̂². (a) Describe the parametric regression bootstrap for this model. Show that the conditional distribution of the bootstrap observations is Yᵢ* | Fₙ ~ N(X′ …
Suppose that in an application, θ̂ = 1.2 and s(θ̂) = 0.2. Using the nonparametric bootstrap, 1000 samples are generated from the bootstrap distribution, and θ̂* is calculated on each sample. The θ̂* are sorted, and the 0.025 and 0.975 quantiles of the θ̂* are 0.75 and 1.3, …
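For orientation on the percentile method that the last several exercises refer to: the 95% percentile interval is nothing more than the 2.5% and 97.5% empirical quantiles of the bootstrap estimates. A minimal sketch (the draws below are simulated stand-ins, not a real bootstrap; only θ̂ = 1.2 and s(θ̂) = 0.2 come from the exercise):

```python
import numpy as np

# Stand-in "bootstrap" draws centered at theta_hat with spread s(theta_hat).
rng = np.random.default_rng(2)
theta_hat = 1.2
boot = theta_hat + 0.2 * rng.normal(size=1000)

# Percentile interval: empirical quantiles of the sorted bootstrap estimates.
lo, hi = np.quantile(boot, [0.025, 0.975])
print(lo, hi)
```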
You want to test H₀: θ = 0 against H₁: θ > 0. The test for H₀ is to reject if Tₙ = θ̂/s(θ̂) > c where c is picked so that the Type I error is α. You do this as follows. Using the nonparametric bootstrap, you generate bootstrap samples, calculate the estimates θ̂* on these samples and then …
Show that if the percentile-t interval for β is [L, U] then the percentile-t interval for a + bβ is [a + bL, a + bU].
Take p* as defined in (10.22) for the BC percentile interval. Show that it is invariant to replacing θ with g(θ) for any strictly monotonically increasing transformation g(θ). Does this extend to z₀* as defined in (10.23)?
Consider the following bootstrap procedure for a regression of Y on X. Let β̂ denote the OLS estimator and êᵢ = Yᵢ − Xᵢ′β̂ the OLS residuals. (a) Draw a random vector (X*, e*) from the pair {(Xᵢ, êᵢ) : i = 1, ..., n}. That is, draw a random integer i′ from [1, 2, ..., n], and set X* = Xᵢ …
Let Yᵢ be i.i.d., μ = E[Y] > 0, and θ = μ⁻¹. Let μ̂ = Ȳₙ be the sample mean and θ̂ = μ̂⁻¹. (a) Is θ̂ unbiased for θ? (b) If θ̂ is biased, can you determine the direction of the bias E[θ̂ − θ] (up or down)? (c) Is the percentile interval appropriate in this context for confidence …
Prove Theorem 10.8.
Prove Theorem 10.7.
Prove Theorem 10.6.
Consider the following bootstrap procedure. Using the nonparametric bootstrap, generate bootstrap samples, calculate the estimate θ̂* on these samples, and then calculate T* = (θ̂* − θ̂)/s(θ̂), where s(θ̂) is the standard error in the original data. Let q*_{α/2} and q*_{1−α/2} denote the …
Show that if the percentile interval for β is [L, U] then the percentile interval for a + cβ is [a + cL, a + cU].
Show that if the bootstrap estimator of variance of β̂ is V̂_boot(β̂), then the bootstrap estimator of variance of θ̂ = a + Cβ̂ is V̂_boot(θ̂) = C V̂_boot(β̂) C′.
A two-step estimator such as (12.49) is β̂ = (∑ᵢ₌₁ⁿ ŴᵢŴᵢ′)⁻¹(∑ᵢ₌₁ⁿ ŴᵢYᵢ), where Ŵᵢ = Â′Zᵢ and Â = (Z′Z)⁻¹Z′X. Describe how to construct the jackknife estimator of variance of β̂.
Show that if the jackknife estimator of variance of β̂ is V̂_jack(β̂), then the jackknife estimator of variance of θ̂ = a + Cβ̂ is V̂_jack(θ̂) = C V̂_jack(β̂) C′.
Find the jackknife estimator of variance of the estimator μ̂_r = n⁻¹∑ᵢ₌₁ⁿ Yᵢ^r for μ_r = E[Yᵢ^r].
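The delete-one jackknife in this exercise is easy to compute directly. A sketch (simulated data; the exponential distribution and the choice r = 2 are arbitrary illustrations), which also checks the standard fact that for a sample mean the jackknife variance equals s²/n with the n − 1 divisor:

```python
import numpy as np

# Simulated sample and moment order (both assumptions for illustration).
rng = np.random.default_rng(3)
Y = rng.exponential(size=200)
r = 2

def mu_r(y, r):
    # The estimator from the exercise: the sample mean of Y**r.
    return np.mean(y ** r)

n = len(Y)
# Leave-one-out estimates and the jackknife variance formula.
loo = np.array([mu_r(np.delete(Y, i), r) for i in range(n)])
v_jack = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

# Since mu_r_hat is a sample mean (of Y**r), the jackknife variance
# reduces algebraically to the usual s^2/n.
v_direct = np.var(Y ** r, ddof=1) / n
print(v_jack, v_direct)
```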
Using the cps09mar dataset and the subsamples of non-Hispanic Black individuals (race code = 2) and white individuals (race code = 1), test the hypothesis that the return to education is common across groups. (a) Allow the return to education to vary across the four groups (white male, white female, …
Using the cps09mar dataset and the subsample of non-Hispanic Black individuals (race code = 2), test the hypothesis that marriage status does not affect mean wages. (a) Take the regression reported in Table 4.1. Which variables will need to be omitted to estimate a regression for this subsample? (b) …
In Section 8.12 we reported estimates from Mankiw, Romer and Weil (1992). We reported estimation both by unrestricted least squares and by constrained estimation, imposing the constraint that three coefficients (the 2nd, 3rd, and 4th) sum to zero as implied by the Solow growth theory. Using …
In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. Nerlove was interested in estimating a cost function C = f(Q, PL, PF, PK), where the variables are listed in the table below. His data set Nerlove1963 is on the textbook website. C: Total Cost; Q: Output; PL: Unit …
The data set Invest1993 on the textbook website contains data on 1962 U.S. firms extracted from Compustat, assembled by Bronwyn Hall, and used in Hall and Hall (1993). The variables we use in this exercise are in the table below. The flow variables are annual sums. The stock variables are beginning …
Do a Monte Carlo simulation. Take the model Y = α + Xβ + e with E[Xe] = 0 where the parameter of interest is θ = exp(β). Your data generating process (DGP) for the simulation is: X is U[0,1], e ~ N(0,1) independent of X, and n = 50. Set α = 0 and β = 1. Generate B = 1000 independent …
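The Monte Carlo design here is fully specified (X ~ U[0,1], e ~ N(0,1), n = 50, α = 0, β = 1, θ = exp(β), B = 1000), so it can be sketched directly; only the seed and the summaries reported at the end are my choices:

```python
import numpy as np

# DGP and design as stated in the exercise.
rng = np.random.default_rng(4)
B, n = 1000, 50
theta_hats = np.empty(B)
for b in range(B):
    x = rng.uniform(size=n)
    e = rng.normal(size=n)
    y = 0.0 + 1.0 * x + e                      # alpha = 0, beta = 1
    X = np.column_stack([np.ones(n), x])
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0][1]
    theta_hats[b] = np.exp(beta_hat)           # theta = exp(beta)

# Illustrative summaries: simulated bias and spread of theta_hat.
print(theta_hats.mean() - np.exp(1.0), theta_hats.std())
```

Because exp is convex, Jensen's inequality suggests θ̂ is biased upward in this design, which the simulated bias reflects.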
Let T be a test statistic such that under H₀, T →d χ²₃. Since P[χ²₃ > 7.815] = 0.05, an asymptotic 5% test of H₀ rejects when T > 7.815. An econometrician is interested in the Type I error of this test when n = 100 and the data structure is well specified. She performs the following Monte …
You have a random sample from the model Y = Xβ₁ + X²β₂ + e with E[e | X] = 0 where Y is wages (dollars per hour) and X is age. Describe how you would test the hypothesis that the expected wage for a 40-year-old worker is $20 an hour.
Take the model Y = X₁β₁ + X₂β₂ + X₃β₃ + X₄β₄ + e with E[Xe] = 0. Describe how to test H₀: β₁β₂ = β₃β₄ against H₁: β₁β₂ ≠ β₃β₄.
You are reading a paper, and it reports the results from two nested OLS regressions: Yᵢ = X₁ᵢ′β̃₁ + ẽᵢ and Yᵢ = X₁ᵢ′β̂₁ + X₂ᵢ′β̂₂ + êᵢ. Some summary statistics are reported. Short regression: R² = .20, ∑ᵢ₌₁ⁿ ẽᵢ² = 106, # of coefficients = 5. Long regression: R² = .26, ∑ᵢ₌₁ⁿ êᵢ² = 100, # of …
An economist estimates Y = X₁′β₁ + X₂β₂ + e by least squares and tests the hypothesis H₀: β₂ = 0 against H₁: β₂ ≠ 0. Assume β₁ ∈ ℝᵏ and β₂ ∈ ℝ. She obtains a Wald statistic W = 0.34. The sample size is n = 500. (a) What is the correct degrees of freedom for the χ² distribution to …
The observed data is {Yᵢ, Xᵢ, Zᵢ} ∈ ℝ×ℝᵏ×ℝ^ℓ, k > 1 and ℓ > 1, i = 1, ..., n. An econometrician first estimates Yᵢ = Xᵢ′β̂ + êᵢ by least squares. The econometrician next regresses the residual êᵢ on Zᵢ, which can be written as êᵢ = Zᵢ′γ̃ + ũᵢ. (a) Define the population parameter γ …
You have two regressors X₁ and X₂ and estimate a regression with all quadratic terms included: Y = α + β₁X₁ + β₂X₂ + β₃X₁² + β₄X₂² + β₅X₁X₂ + e. One of your advisors asks: Can we exclude the variable X₂ from this regression? How do you translate this question into a statistical test? When …
Consider two alternative regression models: Y = X₁′β₁ + e₁, E[X₁e₁] = 0 (9.21); Y = X₂′β₂ + e₂, E[X₂e₂] = 0 (9.22), where X₁ and X₂ have at least some different regressors. (For example, (9.21) is a wage regression on geographic variables and (9.22) is a wage regression on personal appearance …
You are at a seminar where a colleague presents a simulation study of a test of a hypothesis H₀ with nominal size 5%. Based on B = 100 simulation replications under H₀, the estimated size is 7%. Your colleague says: “Unfortunately the test over-rejects.” (a) Do you agree or disagree with your …
Take the model Y = X′β + e with E[Xe] = 0 and parameter of interest θ = R′β with R k×1. Let β̂ be the least squares estimator and V̂_β̂ its variance estimator. (a) Write down Ĉ, the 95% asymptotic confidence interval for θ, in terms of β̂, V̂_β̂, R, and z = 1.96 (the 97.5% quantile of …
A common view is that “if the sample size is large enough, any hypothesis will be rejected.” What does this mean? Interpret and comment.
A researcher estimates a regression and computes a test of H₀ against H₁ and finds a p-value of p = 0.08, or “not significant”. She says: “I need more data. If I had a larger sample the test will have more power and then the test will reject.” Is this interpretation correct?
Consider a regression such as Table 4.1 where both experience and its square are included. A researcher wants to test the hypothesis that experience does not affect mean wages and does this by computing the t-statistic for experience. Is this the correct approach? If not, what is the appropriate …
In Exercise 7.8 you showed that √n(σ̂² − σ²) →d N(0, V) as n → ∞ for some V. Let V̂ be an estimator of V. (a) Using this result construct a t-statistic for H₀: σ² = 1 against H₁: σ² ≠ 1. (b) Using the Delta Method find the asymptotic distribution of √n(σ̂ − σ). (c) Use the previous …
Suppose a researcher uses one dataset to test a specific hypothesis H₀ against H₁ and finds that he can reject H₀. A second researcher gathers a similar but independent dataset, uses similar methods, and finds that she cannot reject H₀. How should we (as interested professionals) interpret these …
You want to test H₀: β₂ = 0 against H₁: β₂ ≠ 0 in the model Y = X₁′β₁ + X₂′β₂ + e with E[Xe] = 0. You read a paper which estimates the model Y = X₁′γ̂₁ + (X₂ − X₁)′γ̂₂ + u and reports a test of H₀: γ₂ = 0 against H₁: γ₂ ≠ 0. Is this related to the test you wanted to …
Take the model Y = Xβ₁ + X²β₂ + e with E[e | X] = 0 where Y is wages (dollars per hour) and X is age. Describe how you would test the hypothesis that the expected wage for a 40-year-old worker is $20 an hour.
Suppose a researcher wants to know which of a set of 20 regressors has an effect on a variable testscore. He regresses testscore on the 20 regressors and reports the results. One of the 20 regressors (studytime) has a large t-ratio (about 2.5), while the other t-ratios are insignificant (smaller …
Take the linear model Y = X₁′β₁ + X₂′β₂ + e with E[Xe] = 0 where both X₁ and X₂ are q×1. Show how to test the hypothesis H₀: β₁ = β₂ against H₁: β₁ ≠ β₂.
Let W be a Wald statistic for H₀: θ = 0 versus H₁: θ ≠ 0, where θ is q×1. Since W →d χ²_q under H₀, someone suggests the test “Reject H₀ if W < c₁ or W > c₂,” where c₁ is the α/2 quantile of χ²_q and c₂ is the 1−α/2 quantile of χ²_q. (a) Show that the asymptotic size of the test …
Let T be a t-statistic for H₀: θ = 0 versus H₁: θ ≠ 0. Since |T| →d |Z| under H₀, someone suggests the test “Reject H₀ if |T| < c₁ or |T| > c₂,” where c₁ is the α/2 quantile of |Z| and c₂ is the 1−α/2 quantile of |Z|. (a) Show that the asymptotic size of the test is α. (b) Is this a …
You have two independent samples (Y₁ᵢ, X₁ᵢ) and (Y₂ᵢ, X₂ᵢ), both with sample sizes n, which satisfy Y₁ = X₁′β₁ + e₁ and Y₂ = X₂′β₂ + e₂, where E[X₁e₁] = 0 and E[X₂e₂] = 0. Let β̂₁ and β̂₂ be the OLS estimators of β₁ ∈ ℝᵏ and β₂ ∈ ℝᵏ. (a) Find the asymptotic distribution of √n((β̂₂ …
Prove that if an additional regressor X_{k+1} is added to X, Theil's adjusted R̄² increases if and only if |T_{k+1}| > 1, where T_{k+1} = β̂_{k+1}/s(β̂_{k+1}) is the t-ratio for β̂_{k+1} and s(β̂_{k+1}) = (s²[(X′X)⁻¹]_{k+1,k+1})^{1/2} is the homoskedasticity-formula standard error.
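Theil's result can be spot-checked numerically (not a proof, just one draw). The sketch below uses simulated data (DGP and candidate regressor are my choices) and the homoskedastic-formula t-ratio the exercise specifies, and verifies that adjusted R̄² rises exactly when |T_{k+1}| > 1:

```python
import numpy as np

# Assumed DGP: intercept plus one regressor; x_new is a candidate addition.
rng = np.random.default_rng(5)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
x_new = rng.normal(size=n)

def adj_r2(X, y):
    # Theil's adjusted R-bar^2: 1 - (SSR/(n-k)) / (SST/(n-1)).
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    n, k = X.shape
    return 1 - (e @ e / (n - k)) / np.var(y, ddof=1)

# Long regression and the homoskedastic-formula t-ratio for x_new.
Xb = np.column_stack([X, x_new])
b = np.linalg.lstsq(Xb, y, rcond=None)[0]
e = y - Xb @ b
s2 = e @ e / (n - Xb.shape[1])
t = b[-1] / np.sqrt(s2 * np.linalg.inv(Xb.T @ Xb)[-1, -1])

print(adj_r2(Xb, y) > adj_r2(X, y), abs(t) > 1)   # the two answers agree
```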
Take the linear model Y = X₁β₁ + X₂β₂ + e with E[Xe] = 0. Consider the restriction β₁/β₂ = 2. (a) Find an explicit expression for the CLS estimator β̃ = (β̃₁, β̃₂) of β = (β₁, β₂) under the restriction. Your answer should be specific to the restriction. It should not be a generic formula …
Take the linear model with restrictions Y = X′β + e with E[Xe] = 0 and R′β = c. Consider three estimators for β: • β̂, the unconstrained least squares estimator; • β̃, the constrained least squares estimator; • β̄, the constrained efficient minimum distance estimator. For the three estimators …
Take the model Y = m(X) + e, m(x) = β₀ + β₁x + β₂x² + ⋯ + β_p x^p, E[X^j e] = 0 for j = 0, ..., p, g(x) = (d/dx)m(x), with i.i.d. observations (Yᵢ, Xᵢ), i = 1, ..., n. The order of the polynomial p is known. (a) How should we interpret the function m(x) given the projection assumption? How should …
Use the cps09mar dataset and the subsample of white male Hispanics. (a) Estimate the regression log(wage) = β₁ education + β₂ experience + β₃ experience²/100 + β₄ married₁ + β₅ married₂ + β₆ married₃ + β₇ widowed + β₈ divorced + β₉ separated + β₁₀, where married₁, married₂, and married₃ are the first …
Suppose you have two independent samples, each with n observations, which satisfy the models Y₁ = X₁′β₁ + e₁ with E[X₁e₁] = 0 and Y₂ = X₂′β₂ + e₂ with E[X₂e₂] = 0, where β₁ and β₂ are both k×1. You estimate β₁ and β₂ by OLS on each sample, with consistent asymptotic covariance matrix …
Verify (8.32), (8.33), and (8.34).
Verify (8.29), (8.30), and (8.31).
Prove (8.27). Hint: Use (8.26).
Verify that (8.26) is V_β(W) with W = V_β⁻¹.
Prove Theorem 8.8. (Hint: Use that CLS is a special case of Theorem 8.7.)
Prove Theorem 8.7.
Prove Theorem 8.6.
Verify (8.22), (8.23), and that the minimum distance estimator β̃_md with Ŵ = Q̂_XX equals the CLS estimator.
Prove Theorem 8.4. That is, show E[s²_cls | X] = σ² under the assumptions of the homoskedastic regression model and (8.1).
Prove Theorem 8.3.
Prove Theorem 8.2, that is, E[β̃_cls | X] = β, under the assumptions of the linear regression model and (8.1). (Hint: Use Theorem 8.1.)
Showing 900–1000 of 4105