Questions and Answers of Measurement Theory in Action
EXERCISE 16.2: IDENTIFYING RESPONSE BIASES IN ATTITUDE ITEMS OBJECTIVE: To practice identifying response biases in attitude items. The “Geoscience Attitudes.sav” data set (see Appendix B) asks
EXERCISE 16.3: DEVELOPING TEST MATERIALS AND PROCEDURES THAT WILL (HOPEFULLY) REDUCE RESPONSE BIAS IN PARTICIPANTS OBJECTIVE: To gain practice in developing strategies to reduce biases in responding
1. Would you expect to find different forms of response biases in the different populations under study? If so, which biases would you see as most prominent in each of the three scenarios?
2. What strategies would you suggest to prevent response biases in each of the three scenarios?
3. What strategies would you suggest to identify response biases in each of the three scenarios once the data have been collected?
1. Choose variables that are relatively uncorrelated with each other to minimize collinearity.
2. Report adjusted R² and cross-validated R² when appropriate so that the reader can judge the stability of prediction (a formula sketch follows this list).
5. If in question 3 you added a fifth predictor to the original regression equation, what characteristics would you want from this predictor?
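The adjusted R² mentioned in guideline 2 above is a standard shrinkage correction for the number of predictors; for a regression with n cases and k predictors it is

$$R^2_{\text{adj}} = 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1}.$$

Cross-validated R² is estimated either empirically (deriving the weights in one sample and applying them in another) or with a shrinkage formula such as the one sketched after Exercise 17.2 below.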
1. What percentage of variance in graduate GPA is being explained by the current entrance criteria?
2. Is there any reason to expect that letters of recommendation would have a low criterion-related validity, even before conducting the statistical analysis? Explain.
3. Which of the current entrance criteria likely has the greatest criterion-related validity? How can you tell from the given information?
4. Would it be appropriate to conclude that undergraduate GPA is unrelated to graduate GPA in the psychology master’s program at SESU? Why or why not? (Hint: You may want to review the issues we
5. Would it be appropriate to conclude that GRE scores will act as a better predictor of graduate GPA for future graduate students at SESU than the current entrance criteria? Explain.
1. What type of criterion-related validity study does Connor plan on conducting?
2. What is the criterion-related validity of the current selection system for production workers at MiniCorp?
3. Is Connor’s plan to attempt to explain nearly 100% of the reliable variance in job performance feasible? Explain.
4. What practical concerns might Connor encounter even if he did find that a selection battery of six or more tests was useful in predicting job performance for production workers?
5. How should Connor go about attempting to identify additional useful predictors of job performance for production workers?
6. What minimum sample size would be recommended for conducting a criterion-related validity study with three predictors? Six predictors?
7. Given the number of production workers at MiniCorp, what method of criterion-related validity should Connor consider using?
EXERCISE 17.1: DETECTING VALID PREDICTORS (REVISITED) OBJECTIVE: To reexamine the validity of predictors in a data set using multiple regression. PROLOGUE: Exercise 8.2 examined a number of possible
EXERCISE 17.2: PREDICTING THE WORK MOTIVATION OF VOLUNTEERS OBJECTIVE: To examine the validity and cross-validity of a set of predictors. PROLOGUE: Because volunteers are unpaid, the work motivation
1. Examine the correlations between each of the possible predictors of work motivation. On the whole, how highly related to one another are these predictors? What is the range of magnitude of
2. Examine the correlations of work motivation with each of the possible predictors.Which predictors seem most highly related to work motivation?
3. Conduct a multiple regression analysis to determine the validity of the set of predictors for the criterion of work motivation. What is the magnitude of the multiple correlation coefficient (R)?
4. Which predictors have significant regression weights?
5. Compute the estimated population cross-validity of the entire set of predictors (a computational sketch follows this list). How does this compare to the initial validity estimate?
6. Had you obtained the same regression results based on a sample of only 50 volunteers, what would be the estimated population cross-validity of this set of predictors? How does this new estimate of
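Items 5 and 6 above call for an estimated population cross-validity at two different sample sizes. A minimal sketch of that computation follows; the R² value, the number of predictors, and the use of Browne’s (1975) shrinkage formula are assumptions made here for illustration, so substitute your own regression output and whichever formula the module prescribes.

# Illustrative sketch: shrinkage estimates for a multiple correlation.
# The R^2, k, and the choice of Browne's (1975) formula are assumptions, not the module's mandate.

def adjusted_r2(r2, n, k):
    """Wherry-type adjusted R^2 (estimate of the squared population multiple correlation)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def browne_cross_validity(r2, n, k):
    """Estimated population cross-validity (squared), Browne (1975)."""
    rho2 = adjusted_r2(r2, n, k)              # adjusted estimate of rho^2
    numerator = (n - k - 3) * rho2**2 + rho2
    denominator = (n - 2 * k - 2) * rho2 + k
    return numerator / denominator

# Hypothetical values for illustration only (not taken from the volunteer data set):
r2, k = 0.30, 4
for n in (200, 50):                            # compare a larger sample with the n = 50 case in item 6
    print(n, round(adjusted_r2(r2, n, k), 3), round(browne_cross_validity(r2, n, k), 3))

Note how the cross-validity estimate shrinks much more sharply at the smaller sample size, which is the point of item 6.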
EXERCISE 17.3: PREDICTING SCORES USING THE REGRESSION EQUATION OBJECTIVE: To compare the accuracy of predicted criterion scores to actual criterion scores. PROLOGUE: Use the data set discussed in
1. Conduct a multiple regression analysis using all nine predictors. Choose method equals “enter.” Write out the unstandardized regression equation that would result if all nine predictors were
2. Conduct a second multiple regression analysis, this time using only those predictors that had significant regression weights in the previous equation. Write out the unstandardized regression
3. While the standard error of estimate provides an average level of prediction accuracy, we can examine the accuracy of prediction for a single individual who is included in the data set by
4. Repeat the procedure in item 3 by randomly selecting two more volunteers in the data set and computing their predicted work motivation scores using both regression equations (a worked sketch follows this list). a. Do you consistently
5. Are you surprised by either the accuracy or the inaccuracy of prediction when using the regression equations? Explain.
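Items 3 and 4 above amount to plugging one volunteer’s predictor values into the unstandardized regression equation and comparing the predicted score with the actual one. The sketch below is purely illustrative: the intercept, weights, predictor names, and the volunteer’s values are hypothetical placeholders rather than output from the volunteer data set.

# Minimal sketch: predicted vs. actual criterion score for one case.
# The intercept, weights, and this volunteer's values are hypothetical placeholders.
intercept = 1.25
weights = {"autonomy": 0.40, "feedback": 0.22, "tenure": -0.05}
volunteer = {"autonomy": 3.8, "feedback": 4.1, "tenure": 6.0}

predicted = intercept + sum(weights[p] * volunteer[p] for p in weights)
actual = 4.2                                  # hypothetical observed work motivation score
residual = actual - predicted                 # positive = the equation under-predicts this person

print(f"predicted = {predicted:.2f}, actual = {actual:.2f}, residual = {residual:.2f}")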
1. Use EFA early on in the development of your test. EFA can be helpful in understanding the factor structure of your test, as well as helping weed out bad items.
2. Unless you have a good reason to believe that your underlying constructs are uncorrelated, you should choose oblique rotations.
3. When using EFA to evaluate items, discard items that have high loadings on multiple factors (i.e., cross-loading items), as well as items that have low communality and high uniqueness (a code sketch follows this list).
4. Make sure to document all of the details of your EFA analysis when reporting the results. Document the rotation and estimation techniques, as well as your criteria used to determine the number of
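Guidelines 2 and 3 above can be scripted. The sketch below assumes the Python factor_analyzer package, a three-factor solution, and a pandas DataFrame of item responses; the file name and the cutoffs (.40 primary loading, .30 secondary loading, .30 communality) are common rules of thumb assumed here, not requirements of this module.

# Sketch: EFA with an oblique rotation, flagging cross-loading and low-communality items.
# Assumes the factor_analyzer package; file name and cutoffs are illustrative choices.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("geoscience_items.csv")   # hypothetical export of the item responses

fa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="minres")  # oblique rotation
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
communalities = pd.Series(fa.get_communalities(), index=items.columns)

for item in items.columns:
    sorted_abs = loadings.loc[item].abs().sort_values(ascending=False)
    cross_loads = sorted_abs.iloc[0] >= 0.40 and sorted_abs.iloc[1] >= 0.30  # loads on 2+ factors
    low_h2 = communalities[item] < 0.30                                      # mostly unique variance
    if cross_loads or low_h2:
        print(f"review {item}: loadings={sorted_abs.round(2).to_dict()}, h2={communalities[item]:.2f}")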
2. Under what conditions might you choose to use PCA? EFA?
4. What would you do if the expected dimensionality of your scale were very different from the results suggested by your factor analysis?
6. In conducting an EFA, what would you do if a factor in the rotated factor(or pattern) matrix were composed of items that seem to have nothing in common from a rational or theoretical standpoint?
1. The tryout sample is a crucial element in the test development process. Discuss the appropriateness of the following characteristics of Andra’s sample for the development of this scale. a.
2. Given the difficulty of obtaining a sizeable sample of graduate students, how could Andra obtain an appropriate sample?
3. Andra made a number of decisions in conducting the factor analysis. For each of the following decisions, discuss whether Andra’s choice was the most appropriate option. a. Choosing exploratory
4. If, following some modification of the analysis, Andra continued to find little support for her expected four factors, how would you suggest that she proceed?
1. Would Chin’s parents serve as appropriate interpreters of the accuracy of the translation of the Marital Satisfaction Index? Why or why not?
2. How could EFA be used to determine whether equivalency exists across the translation and original version? a. What types of EFA-based statistics would you examine to determine whether there were
3. Suppose that an EFA found that there were differences across the language translations. How would you act on these differences?
4. Besides EFA, what other statistical methods could you use to
EXERCISE 18.1: CONDUCTING AN EFA OBJECTIVE: To conduct and interpret an EFA using SPSS. PROLOGUE: The SPSS data file “Geoscience Attitudes.sav” (see Appendix B) contains undergraduate responses to
1. Interpret the findings of the factor analysis by completing the following: a. Is the sample size in the data set sufficiently large to conduct a factor analysis of the 13 items? Explain. b. How many
2. Although the EFA has suggested possible subscales within the geoscience attitude survey, these subscales may not have high internal consistency (a Cronbach’s alpha sketch follows below). Again using the data set “Geoscience
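Item 2 above concerns the internal consistency of each EFA-suggested subscale. A minimal Cronbach’s alpha sketch follows; the DataFrame and the example column names are hypothetical stand-ins for whichever items load on a given factor.

# Minimal sketch: Cronbach's alpha for one subscale.
# `subscale` is a pandas DataFrame (cases x items) holding the items that load on one factor.
import pandas as pd

def cronbach_alpha(subscale: pd.DataFrame) -> float:
    k = subscale.shape[1]                               # number of items in the subscale
    item_variances = subscale.var(axis=0, ddof=1).sum() # sum of the item variances
    total_variance = subscale.sum(axis=1).var(ddof=1)   # variance of the subscale totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Example call with hypothetical item names from the first factor:
# print(cronbach_alpha(items[["att01", "att04", "att07", "att11"]]))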
EXERCISE 18.2: REPRODUCING COMMUNALITIES AND EIGENVALUES OBJECTIVE: To aid understanding of how exploratory factor analytic techniques compute extracted communalities and eigenvalues. PROLOGUE: As
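For reference, the quantities Exercise 18.2 asks you to reproduce follow directly from the factor loading matrix of an orthogonal solution: variable i’s extracted communality is the sum of its squared loadings across the m retained factors, and factor j’s eigenvalue (sum of squared loadings) is the column sum over the p variables:

$$h_i^{2} = \sum_{j=1}^{m} \lambda_{ij}^{2}, \qquad \text{SSL}_j = \sum_{i=1}^{p} \lambda_{ij}^{2}.$$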
EXERCISE 18.3: EVALUATION OF EFA IN THE LITERATURE OBJECTIVE: As mentioned throughout this module, there are lots of decisions that need to be made when conducting an EFA. In this exercise, we want
1. What was the rationale given for conducting the EFA?
2. What extraction/rotation method was used? Was there a rationale given for the choice of this method?
3. How did the researchers determine the number of appropriate factors to extract?
4. What criteria were used to discard items?
5. How did the EFA help the researchers better understand the construct?
6. What unresolved questions do you have about the procedures conducted by the researchers?
1. Use CFA methods after you have developed a good theoretical and empirical understanding of your scale via scale development and exploratory practices.
2. Compare a variety of GOF (goodness-of-fit) indexes when judging whether your model fits the data (two common GOF indexes are sketched after this list).
3. Use modification indexes to gather insight into your scale, and make modifications with caution, focusing on whether the modifications make theoretical sense. Use the indexes to gain insight into
4. If major modifications were made, estimate your modified model on a new data set to avoid capitalization on chance.
5. Consider using item bundles to improve model fit and to reduce the number of parameters to estimate.
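The GOF indexes mentioned in guideline 2 above come in many forms; two of the most commonly reported, in one common parameterization (programs differ slightly in the exact formulas), are RMSEA and CFI, where M denotes the fitted model and B the baseline (independence) model:

$$\text{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\ 0)}{df_M\,(N - 1)}}, \qquad \text{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\ 0)}{\max(\chi^2_B - df_B,\ \chi^2_M - df_M,\ 0)}.$$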
5. How should modification indexes be used in revising a model? What are the dangers in using them?
7. How can CFA be used to support validity evidence for a scale?
1. What concerns about the interpretation of an MTMM matrix are better addressed by CFA than by the guidelines of Campbell and Fiske (1959)?
2. In what ways are the four CFA models proposed by Widaman (1985) similar? In what ways do these models differ from one another?
3. What are correlated uniquenesses?
4. What might cause a CFA model to be poorly defined?
5. What methods can be used to evaluate alternative CFA models? Are some methods more appropriate than others?
6. How can researchers who use advanced statistical analyses communicate with those who are less statistically savvy?
1. What is the role of EFA with CFA?
2. Which of the following types of equivalence (configural, metric, and scalar) is most important?
3. How does the CFA approach to testing equivalence compare to other methods of testing cross-cultural equivalence mentioned in Module 11?
4. How do you untangle whether differences are due to poor translations versus true cross-cultural differences?
5. How can you make sure that your results estimated in one sample will generalize to other samples?
6. How can you test generalizability across cultures if you have small samples for at least one culture?
EXERCISE 19.1: TUTORIAL IN STRUCTURAL EQUATIONS MODELING OBJECTIVE: To provide a brief introduction to common SEM programs. LISREL is one of the most popular SEM programs for conducting CFA. The
1. Describe the underlying models being tested in each article. Determine the rationale that the author(s) used to create the models. Did they rely on previous research to justify their model, or did
2. What fit indexes and tests of model fit did the authors use?
3. Did the authors use modification indexes to revise their model? If so, how did they justify or explain their use?
4. Did the authors use item bundles? If so, how did they create those bundles?
5. How did the use of CFA help the authors to better understand their scale?
1. Consider estimating IRT item statistics when you have a roughly unidimensional test with a sample size of 250 or more.
2. Conduct an exploratory factor analysis to show that your test is roughly unidimensional before proceeding with the IRT estimation.
3. Estimate several IRT models to determine which one best fits your data. If your model misfits for many items, consider a less restrictive model.
4. Eliminate items that contribute low information in the ranges of θ in which you wish to discriminate.
5. Choose items that span the range of θ so that you have a test that discriminates across a wide spectrum (see the sketch after this list).
6. If you have a small sample size, consider a more restrictive model. If you have a large sample size, choose a less restrictive model.
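Guidelines 4 and 5 above refer to item information across the θ range. The sketch below illustrates the point with a 2-PL item response function and its information function; the a (discrimination) and b (difficulty) values are hypothetical, and the same logic extends to 1-PL and 3-PL models.

# Minimal sketch: 2-PL item response function and item information across theta.
# The a (discrimination) and b (difficulty) parameters below are hypothetical.
import numpy as np

def irf_2pl(theta, a, b):
    """Probability of a keyed response under the 2-PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Item information I(theta) = a^2 * P * (1 - P) for the 2-PL model."""
    p = irf_2pl(theta, a, b)
    return a**2 * p * (1 - p)

theta = np.linspace(-3, 3, 7)                          # a coarse grid across the trait range
item_params = {"item_1": (1.6, -0.5), "item_2": (0.4, 1.0)}   # (a, b), hypothetical
for name, (a, b) in item_params.items():
    print(name, np.round(info_2pl(theta, a, b), 2))
# item_2's low discrimination yields little information anywhere on theta,
# making it a candidate for removal under guideline 4.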
1. What are the major advantages of IRT over CTT-IA?
3. When might it be preferable to use CTT-IA instead of IRT?
1. What would be the advantages and disadvantages of using CTT-IA in this situation instead of IRT?
2. What would be the advantages and disadvantages of using the 1-PL IRT model? The 2-PL IRT model? The 3-PL IRT model?
3. Where should Elena start to “get up to speed” with the IRT procedures?
4. Should the four high schools be analyzed separately or together?
5. What should Elena and Professor Koshino be focusing on in their IRT computer printouts?
6. What advantages are there to examining the IRFs in this situation?
1. Does it appear that IRT is a viable option for creating a written exam (CAT or paper-and-pencil) in this situation?
2. Given the changing nature of the job of parole agent, should Dr. Agars be using questions from prior civil service exams for the selection of new parole agents?