Question
1. You start looking at studies of the incidence of prosocial behavior/inclinations in the US. You are given estimates of the prevalence of this behavior based on two different studies - one based on a representative sample derived with probability-based sampling and one based on a representative sample derived from an opt-in panel, weighted and adjusted to have a composition mirroring the population of interest. The samples are approximately the same size. Do you expect the estimates of average prosocial behavior derived from the two studies to differ substantially? Why or why not? You may want to bring in the concepts of random error and systematic error. (A simulation sketch illustrating the two kinds of error follows below.)
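A small simulation can make the contrast concrete. The sketch below is only illustrative: the 1-5 prosocial scores, the assumption that more prosocial people are more likely to volunteer for the opt-in panel, and the sample sizes are all invented for the example. Repeated probability samples scatter around the true mean (random error), while repeated opt-in samples scatter around a shifted value (systematic error).

```python
# Illustrative sketch (assumed population and self-selection mechanism): contrast
# random error in a probability sample with systematic error in an opt-in panel.
import random
import statistics

random.seed(42)

POP_SIZE = 100_000
SAMPLE_SIZE = 1_000
N_REPS = 200

# Hypothetical population: prosocial score on a 1-5 scale, true mean about 3.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(POP_SIZE)]
true_mean = statistics.mean(population)

def probability_sample_mean():
    """Simple random sample: estimates vary from draw to draw but are unbiased."""
    return statistics.mean(random.sample(population, SAMPLE_SIZE))

def opt_in_sample_mean():
    """Opt-in panel: assume more prosocial people are more likely to volunteer,
    so estimates are shifted upward no matter how often the study is repeated."""
    weights = [score ** 2 for score in population]   # assumed self-selection mechanism
    return statistics.mean(random.choices(population, weights=weights, k=SAMPLE_SIZE))

prob_means = [probability_sample_mean() for _ in range(N_REPS)]
optin_means = [opt_in_sample_mean() for _ in range(N_REPS)]

print(f"true mean:               {true_mean:.3f}")
print(f"probability sample mean: {statistics.mean(prob_means):.3f} "
      f"(sd across repetitions {statistics.stdev(prob_means):.3f})")
print(f"opt-in panel mean:       {statistics.mean(optin_means):.3f} "
      f"(sd across repetitions {statistics.stdev(optin_means):.3f})")
```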
2. You are told the survey item measuring prosocial behavior read: "Do you agree or disagree with the statement: one should always help others, even if doing so entails personal risk of possible harm", followed by the response options "agree completely", "agree somewhat", "neither agree nor disagree", "disagree somewhat", and "disagree completely". You are told the 1,000 respondents taking the survey were tired and not always paying close attention to the question. In particular, you think that some of them answered this question without really thinking, picking a response at random. Will this lead to measurement error, and if so, what kind? Would having more respondents help address the problems it gives rise to? Why or why not? (See the sketch below.)
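As a rough illustration, the sketch below assumes attentive respondents would answer "agree somewhat" or "agree completely" (coded 4 or 5) and that 30% of respondents pick at random from all five options; both numbers are made up. Because random answers center on the scale midpoint, the estimated average is pulled toward 3, and adding respondents shrinks the spread of the estimate but not that pull.

```python
# Illustrative sketch (assumed parameters): random responding by tired respondents.
import random
import statistics

random.seed(7)

ATTENTIVE_SCORES = [4, 5]     # assumed: attentive respondents agree (somewhat/completely)
INATTENTIVE_SHARE = 0.3       # assumed share answering without thinking

def survey_mean(n_respondents):
    """Mean of one simulated survey mixing attentive and purely random responses."""
    answers = []
    for _ in range(n_respondents):
        if random.random() < INATTENTIVE_SHARE:
            answers.append(random.choice([1, 2, 3, 4, 5]))   # random responding
        else:
            answers.append(random.choice(ATTENTIVE_SCORES))  # attentive response
    return statistics.mean(answers)

for n in (1_000, 4_000, 16_000):
    estimates = [survey_mean(n) for _ in range(100)]
    print(f"n={n:>6}: mean of estimates {statistics.mean(estimates):.3f}, "
          f"spread (sd) {statistics.stdev(estimates):.3f}")
# The spread shrinks as n grows, but every estimate sits below the attentive-only mean of 4.5.
```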
3. Thinking about the survey question, you figure out that some or many people may have answered "agree" (either completely or somewhat) because they feel pressure to appear nice, not because they truly agree. What kind of measurement error would this lead to in the statistic you are trying to measure, average prosocial tendencies? What are some ways in which it can be addressed? (A sketch of the effect follows below.)
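The sketch below models the pressure to appear nice as some respondents reporting one scale point more agreement than they actually feel; the true attitude distribution and the 40% share who inflate their answers are assumed for illustration. The shift runs in one direction, so the reported average stays above the true average regardless of how many respondents are surveyed.

```python
# Illustrative sketch (assumed attitudes and inflation share): social-desirability
# pressure as a systematic upward shift in reported agreement.
import random
import statistics

random.seed(11)

TRUE_SCORES = [2, 3, 4]     # assumed true attitudes on the 1-5 scale
PRESSURE_SHARE = 0.4        # assumed share who inflate their answer to look nice

def reported_mean(n_respondents):
    """Mean reported score when some respondents overstate their agreement."""
    reports = []
    for _ in range(n_respondents):
        score = random.choice(TRUE_SCORES)
        if random.random() < PRESSURE_SHARE:
            score = min(score + 1, 5)   # one step toward "agree completely"
        reports.append(score)
    return statistics.mean(reports)

true_mean = statistics.mean(TRUE_SCORES)
for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: reported mean {reported_mean(n):.3f} vs true mean {true_mean:.3f}")
```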
4. A professor gave their research assistants very precise instructions on how to classify the tweets of politicians as "laden with conspiracy theories" or not. You do not have access to the instructions. The assistants followed the instructions, and most of them tended to agree in their decisions to classify a tweet as either conspiracy-oriented or not. Does it follow that the measure the professor came up with is reliable, that it is valid, both, or neither? Discuss. (A toy example follows below.)
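A toy example of why inter-coder agreement is not the whole story: suppose, purely hypothetically, the instructions amount to a keyword rule such as "label a tweet conspiracy-laden if it contains the word 'hoax'". Coders applying the same rule will agree almost perfectly with one another, yet the rule can still miss or mislabel tweets relative to what they actually say.

```python
# Illustrative sketch (invented tweets, invented keyword rule): high inter-coder
# agreement (reliability) can coexist with poor agreement with the tweets' assumed
# true status (a validity problem).
tweets = [
    # (text, assumed true label: is it actually conspiracy-laden?)
    ("the election was a hoax run by shadowy elites", True),
    ("climate change is a hoax invented for profit", True),
    ("calling this bill a hoax is political theater", False),   # "hoax" used rhetorically
    ("vaccines cause the disease, wake up people", True),        # no keyword, missed
    ("great town hall tonight, thanks everyone", False),
]

def code_by_instructions(text):
    """Both coders apply the same (hypothetical) keyword rule from the instructions."""
    return "hoax" in text

coder_a = [code_by_instructions(text) for text, _ in tweets]
coder_b = [code_by_instructions(text) for text, _ in tweets]   # same rule, same decisions
truth = [label for _, label in tweets]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(tweets)
accuracy = sum(a == t for a, t in zip(coder_a, truth)) / len(tweets)

print(f"inter-coder agreement (reliability): {agreement:.0%}")
print(f"agreement with assumed true status (validity check): {accuracy:.0%}")
```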
5. Recall the measure of a person's tendency to behave in a prosocial manner proposed in question 2: "Do you agree or disagree with the statement, one should always help others, even if doing so entails personal risk of harm?" Thinking about face validity, does this measure have face validity? Why or why not? For this question, ignore any concerns about social desirability (assume it away as a problem) and speak only to face validity.