No .Rmd file needed, just the code.
Note that for parts [a-i] of this question, your answer should be R code that generates the appropriate answers (part [j] requires a written answer). Use Rmarkdown and submit your .Rmd script (HTML optional); note that there will be penalties for scripts that fail to compile. Please note that you do not need to repeat code for each part (i.e., if you write a single block of code that generates the answers for some or all of the parts, that is fine), but do please label the output that answers each question!

For parts [a-d], consider a single coin flip experiment, a random variable X that returns the 'number of tails' of the flip, and assume that the true probability distribution Pr(X) is within the set of distributions described by the Bernoulli distribution indexed by parameter p ∈ [0, 1], such that the true distribution is X ~ Bern(p) for a specific value of p.

a. Assume that the true value of the parameter is p = 0.2 (i.e., an unfair coin). Write code that produces 100 distinct i.i.d. samples of the random variable X, each with n = 10 observations (= experimental trials); then, for each sample, calculate the Maximum Likelihood Estimator $\hat{p} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and, using the function 'hist()', plot a histogram of these 100 estimator values. Note that you do not have to simulate the original flip outcomes for each sample (e.g., [H, H, T, ...]), just the random vectors where each of the n = 10 elements corresponds to a value of the random variable X (i.e., [1, 1, 0, ...]); you are welcome to use the function 'rbinom()' for this purpose. Also note that if this were a typical estimation experiment, you would not know the true value of p and you would only have ONE sample of size n = 10 (in this problem, you are simulating many possible experiments to give yourself a sense of the possibilities you could obtain for your one experiment and a sense of what the probability distribution Pr(p̂) of the estimator p̂ looks like when n = 10 and p = 0.2).

b. Repeat the exercise in part [a] but assume that p = 0.5 (all other aspects the same as in [a], and the notes apply).

c. Repeat the exercise in part [a] but assume that p = 0.5 and n = 100 (all other aspects the same as in [a], and the notes apply).

d. Consider the following specific sample x_i that could have been generated as one of the 100 samples you generated in part [b]: x_i = [0, 0, 0, 1, 1, 1, 0, 1, 1, 1]. Calculate the likelihood that p = 0.5 given this sample AND calculate the likelihood that p = 0.6 given this sample.
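One way to organize the code for parts [a-d] is sketched below; it is a minimal sketch of one possible approach, not the required one. The helper name simulate_phat and the seed value are arbitrary choices made here, and the coding 1 = tails follows the definition of X above; rbinom() with size = 1 draws the Bernoulli observations and dbinom() is used for the part [d] likelihoods.

```r
# Sketch for parts [a-d]; helper name and seed are arbitrary choices.
set.seed(1)

# Simulate n_samples i.i.d. samples of size n and return the MLE p-hat for each sample
simulate_phat <- function(n, p, n_samples = 100) {
  replicate(n_samples, mean(rbinom(n, size = 1, prob = p)))
}

# [a] p = 0.2, n = 10
phat_a <- simulate_phat(n = 10, p = 0.2)
hist(phat_a, main = "[a] MLE p-hat, n = 10, p = 0.2")

# [b] p = 0.5, n = 10
phat_b <- simulate_phat(n = 10, p = 0.5)
hist(phat_b, main = "[b] MLE p-hat, n = 10, p = 0.5")

# [c] p = 0.5, n = 100
phat_c <- simulate_phat(n = 100, p = 0.5)
hist(phat_c, main = "[c] MLE p-hat, n = 100, p = 0.5")

# [d] likelihood of the given sample under p = 0.5 and under p = 0.6
x_d <- c(0, 0, 0, 1, 1, 1, 0, 1, 1, 1)
L_05 <- prod(dbinom(x_d, size = 1, prob = 0.5))  # likelihood at p = 0.5
L_06 <- prod(dbinom(x_d, size = 1, prob = 0.6))  # likelihood at p = 0.6
L_05
L_06
```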
For parts [e-i], consider a 'measuring heights in the US' experiment, the reals as the sample space, a continuous random variable X that represents heights that have been scaled (i.e., still defined on the reals), and assume that the true probability distribution Pr(X) is within the set of distributions described by the normal distribution indexed by parameters μ ∈ (−∞, ∞), σ² ∈ [0, ∞), such that the true distribution is X ~ N(μ, σ²) for a specific parameter pair (μ, σ²).

e. Assume that the true values of the parameters are μ = 0, σ² = 1. Write code that produces 100 distinct i.i.d. samples of the random variable X, each with n = 10 observations (= experimental trials); for each sample, calculate the Maximum Likelihood Estimators $\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \hat{\mu})^2$, and then use the function 'hist()' to plot two histograms, one for each of these two sets of 100 estimator values. Note that you just have to simulate the random sample vectors where each of the n = 10 elements corresponds to a value of the random variable X; you are welcome to use the function 'rnorm()' for this purpose. Again, also note that if this were a typical estimation experiment, you would not know the true values of μ, σ² and you would only have ONE sample of size n = 10 (in this problem, you are simulating many possible experiments to give yourself a sense of the possibilities you could obtain for your one experiment and a sense of what the probability distributions Pr(μ̂) and Pr(σ̂²) of the estimators look like when μ = 0, σ² = 1; finally, note that you could plot the joint probability distribution of these two estimators, but that is not required to answer the question).

f. Repeat the exercise in part [e] but assume that μ = 1, σ² = 2 (all other aspects the same as in [e], and the notes apply).

g. Repeat the exercise in part [e] but assume that μ = 1, σ² = 2 and n = 100 (all other aspects the same as in [e], and the notes apply).

h. Repeat the exercise in part [g] but, instead of using the MLE for σ², calculate the unbiased estimator $\hat{\sigma}^2_{unbiased} = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \hat{\mu})^2$ (all other aspects the same as in [g], and the notes apply).

i. Consider the following specific sample x_i that could have been generated as one of the 100 samples you generated in part [f]: x_i = [3.5870548, 3.5762441, −0.9188696, 1.4244196, 1.4955349, 4.2148045, 2.8042784, 1.3732288, 2.0563001, 2.1006008]. Calculate the likelihood that μ = 1, σ² = 2 given this sample AND calculate the likelihood that μ = 2.17136, σ² = 1.977358 given this sample.

j. If you did part [d] correctly, the likelihood you calculated for p = 0.6 was higher than for the correct parameter value for the experiment, p = 0.5; similarly, for part [i], the likelihood you calculated for μ = 2.17136, σ² = 1.977358 was higher than for the correct parameter values μ = 1, σ² = 2. Explain why this is the case.
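A similar sketch for parts [e-i] follows, again only one possible approach under stated assumptions: the helper name simulate_normal_mles and the seed are made up here, rnorm() draws the samples, the variance MLE divides by n, and part [h] rescales the part [g] MLEs by n/(n−1) rather than recomputing from scratch (recomputing each sample's variance with var(), which divides by n−1, would also work). The part [i] sample vector is copied from the statement above.

```r
# Sketch for parts [e-i]; helper name and seed are arbitrary choices.
set.seed(2)

# Simulate n_samples i.i.d. samples of size n from N(mu, sigma^2) and
# return the MLEs mu-hat and sigma^2-hat (the divide-by-n version) for each sample
simulate_normal_mles <- function(n, mu, sigma2, n_samples = 100) {
  est <- replicate(n_samples, {
    x <- rnorm(n, mean = mu, sd = sqrt(sigma2))
    mu_hat <- mean(x)
    sigma2_hat <- mean((x - mu_hat)^2)  # MLE uses 1/n, not 1/(n-1)
    c(mu_hat = mu_hat, sigma2_hat = sigma2_hat)
  })
  t(est)  # 100 x 2 matrix with columns mu_hat, sigma2_hat
}

# [e] mu = 0, sigma^2 = 1, n = 10
est_e <- simulate_normal_mles(n = 10, mu = 0, sigma2 = 1)
hist(est_e[, "mu_hat"],     main = "[e] MLE mu-hat")
hist(est_e[, "sigma2_hat"], main = "[e] MLE sigma^2-hat")

# [f] mu = 1, sigma^2 = 2, n = 10
est_f <- simulate_normal_mles(n = 10, mu = 1, sigma2 = 2)
hist(est_f[, "mu_hat"],     main = "[f] MLE mu-hat")
hist(est_f[, "sigma2_hat"], main = "[f] MLE sigma^2-hat")

# [g] mu = 1, sigma^2 = 2, n = 100
est_g <- simulate_normal_mles(n = 100, mu = 1, sigma2 = 2)
hist(est_g[, "mu_hat"],     main = "[g] MLE mu-hat")
hist(est_g[, "sigma2_hat"], main = "[g] MLE sigma^2-hat")

# [h] as in [g] but with the unbiased variance estimator (divide by n-1);
# obtained here by rescaling the MLE values from [g] by n/(n-1)
n_g <- 100
var_h_unbiased <- est_g[, "sigma2_hat"] * n_g / (n_g - 1)
hist(var_h_unbiased, main = "[h] unbiased sigma^2-hat")

# [i] likelihood of the given sample under the two parameter pairs
x_i <- c(3.5870548, 3.5762441, -0.9188696, 1.4244196, 1.4955349,
         4.2148045, 2.8042784, 1.3732288, 2.0563001, 2.1006008)
L_true <- prod(dnorm(x_i, mean = 1,       sd = sqrt(2)))         # mu = 1, sigma^2 = 2
L_alt  <- prod(dnorm(x_i, mean = 2.17136, sd = sqrt(1.977358)))  # mu = 2.17136, sigma^2 = 1.977358
L_true
L_alt
```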