
Question


Collecting and Analyzing Diagnostic Data at Alegent Health (Application 7.1)

The two applications in Chapter 4 described the entry and contracting processes at the Alegent Health (AH) organization. As a result of a recent merger and the hiring of a new CEO and chief innovation officer (CIO), the organization had implemented a series of large-group interventions, known as decision accelerators (DAs), to generate innovative strategies in the six clinical service areas of women's and children's services, oncology, behavioral health, neuroscience, orthopedics, and cardiology. Alegent Health then hired two OD researchers to evaluate its change progress. The evaluation was intended to help AH understand what had changed, what had been learned, the impact of those changes, and how they might extend those changes and learnings into the future. The diagnostic phase involved the collection and analysis of unobtrusive, interview, and survey data.

Unobtrusive Measures

Immediately following each DA, the Right Track office (a group set up to manage the DA experience) compiled a report listing participant names and affiliations, an agenda, instructions and elapsed times for each activity and process, photographs of different activities and all small-group outputs, and nearly verbatim transcripts of the large-group report outs, activity debriefings, and discussions. These reports were analyzed to understand the process and outcomes associated with each DA. The researchers created a coding scheme and process to capture the characteristics of the participants, the nature of the process, and a description of the DA outputs. Two coders analyzed the data to ensure the reliability of the analysis.

First, the results suggested that the DAs varied in their composition. For example, some DAs were composed of higher percentages of physicians or community members than other DAs. Second, some DAs were more "intense" than others, as indicated by the amount of debate over decisions or issues, the number of different stakeholders who participated in the debates and discussions, and the extent to which the DA's activities deviated from the preset agenda. Finally, some DAs produced comprehensive visions and strategies for the clinical area, while others produced visions that were more narrowly focused.

Interview Measures

A second data set consisted of interviews with various stakeholder groups. Initial interviews were conducted with executives and physicians about (1) the context of change at Alegent, including organization history, strategy, and recent changes; (2) their reflections on the DA process; and (3) clinical area implementation progress. The researchers conducted a second round of interviews with people who were closely connected with the implementation of each clinical service area strategy. They were asked questions about the clarity of action plans, the level of involvement of different people, and implementation progress. Finally, a third set of interviews was conducted with a sample of staff nurses who had not participated in the original DAs or been directly involved in implementation activities, such as steering committees or design teams. Each set of interview data was content analyzed for key themes and perspectives. A few of the summary results from the initial interviews are presented here.

When asked, "How clear were the action plans coming out of the DA?", the executives were evenly split in their beliefs that the action plans were clear as opposed to the plans being essentially absent. Executives were also asked, "What is going well or not so well in implementation of the different service line strategies?" About 20% of executives believed that the strategies were aligned with the mission/vision of the health system and that the DAs had provided a clear vision to work toward. However, more than half of the executives expressed concern that the organization lacked a real change capability. Executives were also concerned about being overwhelmed by change, insufficient communication, and the need to involve stakeholders more.

When asked, "What would you list as the 'high points' or 'best success stories' of the DA process?" and "What have been some of the least successful activities/concerns?", the answers were more positive than negative. Nearly all of the interviewees noted the improved relationships with physicians, and more than a third of executives said there had been some good learnings on how to increase the speed of decision making. Both of these results reflected cultural changes in the organization that were among the purposes for conducting the DAs. On the negative side, a small percentage of executives noted the continued difficulties associated with coordinating the operations of a multihospital system.

Another area of interview data concerned executive perceptions of how the DA might evolve in the future. There was a strong generic belief that the DA needed to evolve to fit the changed organizational conditions and a widespread perception that this should include a more explicit focus on execution, better change governance, and better follow-up and communication.

In addition to these initial interview results, data from the second round of implementation interviews were used to develop six case studies, one for each clinical service area. They described the initial DA event and the subsequent decisions, activities, and events for the 18 months following the formation of the clinical strategies. Importantly, the case studies listed the organizational changes that most people agreed had been implemented in the first 18 months. Each case study was given to the VP in charge of the clinical area for validation.

Survey Measures

The researchers also collected two sets of survey data. The first survey, administered during the initial round of executive and physician interviews, asked them to rate several dimensions of clinical area strategy and progress. The second survey was administered to people who attended a "review DA" for three of the six clinical areas. It too measured perceptions of clinical strategy and progress.

The survey data were organized into three categories and analyzed by a statistical program. The first category measured five dimensions of strategy for each clinical area: comprehensiveness, innovativeness, aggressiveness, congruence with Alegent's strategy, and business focus. Both executives and managers rated the clinical strategies highest on comprehensiveness and lowest on congruence with Alegent's mission. Executives also rated the strategies lower on innovativeness. In all dimensions and for each clinical area, managers rated the five dimensions higher than executives did.

The second category measured how well the implementation process was being managed. Executives "somewhat agreed" that the clinical area strategies were associated with a clear action plan; however, there was considerable variance, suggesting that some clinical areas had better action plans than others. Similarly, managers "somewhat agreed" that change governance systems existed and that change was coordinated.

The third category assessed implementation success. As with the strategy dimensions, managers rated overall implementation progress higher than executives did, but both groups were somewhat guarded (between neutral and agree) in their responses. Managers were asked a more detailed set of questions about implementation. There was more agreement that the clinical strategies were the "right thing to do" and had helped to "build social capital" in the organization, but they were neutral with respect to whether "people feel involved" in the change.

The researchers also used observation and unobtrusive measures. The analysis used a combination of qualitative and quantitative techniques. What do you see as the strengths and weaknesses of the data collection and analysis process?

PART 2 The Process of Organization Development

Quantitative Tools

Methods for analyzing quantitative data range from simple descriptive statistics of items or scales from standard instruments to more sophisticated, multivariate analysis of the underlying instrument properties and relationships among measured variables. The most common quantitative tools are means, standard deviations, and frequency distributions; scattergrams and correlation coefficients; and difference tests. These measures are routinely produced by most statistical computer software packages; therefore, the mathematical calculations are not discussed here.

Means, Standard Deviations, and Frequency Distributions

One of the most economical and straightforward ways to summarize quantitative data is to compute a mean and standard deviation for each item or variable measured. These represent the respondents' average score and the spread or variability of the responses, respectively. These two numbers easily can be compared across different measures or subgroups. For example, Table 7.3 shows the means and standard deviations for six questions asked of 100 employees concerning the value of different kinds of organizational rewards.
Based on the 5-point scale ranging from 1 (very low value) to 5 (very high value), the data suggest that challenging work and respect from peers are the two most highly valued rewards. Monetary rewards, such as pay and fringe benefits, are not highly valued.

But the mean can be a misleading statistic. It only describes the average value and thus provides no information on the distribution of the responses. Different patterns of responses can produce the same mean score. Therefore, it is important to use the standard deviation along with the frequency distribution to gain a clearer understanding of the data. The frequency distribution is a graphical method for displaying data that shows the number of times a particular response was given. For example, the data in Table 7.3 suggest that both pay and praise from the supervisor are equally valued, with a mean of 4.0. However, the standard deviations for these two measures are very different: 0.71 and 1.55, respectively. Table 7.4 shows the frequency distributions of the responses to the questions about pay and praise from the supervisor. Employees' responses to the value of pay are distributed toward the higher end of the scale, with no one rating it of low or very low value. In contrast, responses about the value of praise from the supervisor fall into two distinct groupings: Twenty-five employees felt that supervisor praise has a low or very low value, whereas 75 people rated it high or very high. Although both rewards have the same mean value, their standard deviations and frequency distributions suggest different interpretations of the data.

In general, when the standard deviation for a set of data is high, there is considerable disagreement over the issue posed by the question. If the standard deviation is small, the data are similar on a particular measure.
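The pay-versus-praise comparison can be reproduced with a short script. This is a minimal sketch using only the Python standard library: the response counts come from Table 7.4, and the population standard deviation is used because it matches the values reported in Table 7.3.

```python
from statistics import mean, pstdev
from collections import Counter

# Reconstruct the 100 individual responses from the Table 7.4 frequencies.
pay    = [3] * 25 + [4] * 50 + [5] * 25
praise = [1] * 15 + [2] * 10 + [4] * 10 + [5] * 65

for name, scores in [("Pay", pay), ("Praise from supervisor", praise)]:
    counts = Counter(scores)
    # Pay: mean=4.00, sd=0.71; Praise: mean=4.00, sd=1.55
    print(f"{name}: mean={mean(scores):.2f}, sd={pstdev(scores):.2f}")
    for value in range(1, 6):
        # Text frequency distribution; each X = five people (as in Table 7.4).
        print(f"  ({value}): {'X' * (counts[value] // 5)}")
```

Both items print a mean of 4.00, but the very different standard deviations and the bimodal praise histogram make the point of the passage: the mean alone hides the shape of the responses.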
In the example described above, there is disagreement over the value of supervisory praise (some people think it is important, but others do not), but there is fairly good agreement that pay is a reward with high value.

[Table 7.3] Descriptive Statistics of Value of Organizational Rewards

ORGANIZATIONAL REWARDS     MEAN    STANDARD DEVIATION
Challenging work            4.6          0.76
Respect from peers          4.4          0.81
Pay                         4.0          0.71
Praise from supervisor      4.0          1.55
Promotion                   3.3          0.95
Fringe benefits             2.7          1.14

Number of respondents = 100
1 = very low value; 5 = very high value

CHAPTER 7 Collecting and Analyzing Diagnostic Information

[Table 7.4] Frequency Distributions of Responses to "Pay" and "Praise from Supervisor" Items

Pay (Mean = 4.0)
RESPONSE               NUMBER CHECKING EACH RESPONSE    GRAPH*
(1) Very low value        0
(2) Low value             0
(3) Moderate value       25                              XXXXX
(4) High value           50                              XXXXXXXXXX
(5) Very high value      25                              XXXXX

Praise from Supervisor (Mean = 4.0)
RESPONSE               NUMBER CHECKING EACH RESPONSE    GRAPH*
(1) Very low value       15                              XXX
(2) Low value            10                              XX
(3) Moderate value        0
(4) High value           10                              XX
(5) Very high value      65                              XXXXXXXXXXXXX

*Each X = five people checking the response.

Scattergrams and Correlation Coefficients

In addition to describing data, quantitative techniques also permit OD consultants to make inferences about the relationships between variables. Scattergrams and correlation coefficients are measures of the strength of a relationship between two variables. For example, suppose the problem being faced by an organization is increased conflict between the manufacturing department and the engineering design department. During the data collection phase, information about the number of conflicts and change orders per month over the past year is collected. The data are shown in Table 7.5 and plotted in a scattergram in Figure 7.3. A scattergram is a diagram that visually displays the relationship between two variables.
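The change-order/conflict relationship can be sketched in a few lines of Python. The monthly figures below are hypothetical stand-ins, since Table 7.5 itself is not reproduced in this excerpt; the correlation function is a plain implementation of the Pearson coefficient.

```python
from statistics import mean

# Hypothetical monthly counts standing in for Table 7.5:
# twelve (change orders, conflicts) pairs with a positive association.
change_orders = [5, 8, 3, 10, 6, 12, 4, 9, 7, 11, 2, 8]
conflicts     = [2, 4, 1,  6, 3,  7, 2, 5, 3,  6, 1, 4]

def pearson_r(x, y):
    """Correlation coefficient: covariance scaled so r falls in -1.0 .. +1.0."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(change_orders, conflicts)
print(f"r = {r:.2f}")  # close to +1.0: a strong positive relationship
```

With these made-up data, r comes out well above 0.9, the kind of strong positive relationship the Figure 7.3 discussion describes; a "shotgun" scatter would instead yield an r near 0.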
It is constructed by locating each case (person or event) at the intersection of its values on the two variables. The resulting pattern can range from a tight, sloped band of points, indicating a strong relationship, to a widely dispersed "shotgun" pattern wherein no relationship between the two variables is apparent. In the example shown in Figure 7.3, an apparently strong positive relationship exists between the number of change orders and the number of conflicts between the engineering design department and the manufacturing department. This suggests that change orders may contribute to the observed conflict between the two departments.

The correlation coefficient is simply a number that summarizes the data in a scattergram. Its value ranges between +1.0 and -1.0. A correlation coefficient of +1.0 means that there is a perfectly positive relationship between two variables, whereas a correlation of -1.0 signifies a perfectly negative relationship. A correlation of 0 implies a "shotgun" scattergram, where there is no relationship between the two variables.

Difference Tests

The final technique for analyzing quantitative data is the difference test. It can be used to compare a sample group against some standard or norm to determine whether the group is above or below that standard. It also can be used to determine whether two samples are significantly different from each other. In the first case, such comparisons provide a broader context for understanding the meaning of diagnostic data. They serve as a basis for determining "how good is good or how bad is bad." Many standardized questionnaires have standardized scores based on the responses of large groups of people. It is critical, however, to choose a comparison group that is similar to the organization being diagnosed. For example, if 100 engineers take a standardized attitude survey, it makes little sense to compare their scores against standard scores representing married males from across the country.
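The two-sample use of the difference test can be sketched as follows. The satisfaction scores are made up for illustration, and the difference score is computed as an unequal-variance t-score from each group's mean, sample standard deviation, and size.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical job-satisfaction scores (1-5 scale) for two departments;
# the numbers are illustrative, not taken from the text.
accounting = [4, 5, 3, 4, 4, 5, 4, 3, 4, 5]
sales      = [3, 2, 4, 3, 3, 2, 3, 4, 2, 3]

def t_score(a, b):
    """Difference score: mean difference divided by the combined standard
    error of the two group means."""
    na, nb = len(a), len(b)
    se = sqrt(stdev(a) ** 2 / na + stdev(b) ** 2 / nb)
    return (mean(a) - mean(b)) / se

t = t_score(accounting, sales)
print(f"t = {t:.2f}")  # a large |t| suggests the two groups really differ
```

As the passage notes, the larger this score is relative to the groups' variability and sample sizes, the more likely the satisfaction difference is real rather than noise; standard statistical texts cover the assumptions behind interpreting it.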
On the other hand, if industry-specific data are available, a comparison of sales per employee (as a measure of productivity) against the industry average would be valid and useful.

The second use of difference tests involves assessing whether two or more groups differ from one another on a particular variable, such as job satisfaction or absenteeism. For example, job satisfaction differences between an accounting department and a sales department can be determined with this tool. Given that each group took the same questionnaire, their means and standard deviations can be used to compute a difference score (t-score or z-score) indicating whether the two groups are statistically different. The larger the difference score relative to the sample size and standard deviation for each group, the more likely it is that one group is more satisfied than the other.

Difference tests also can be used to determine whether a group has changed its score on job satisfaction or some other variable over time. The same questionnaire can be given to the same group at two points in time. Based on the group's means and standard deviations at each point in time, a difference score can be calculated. The larger the score, the more likely it is that the group actually changed its job satisfaction level. The calculation of difference scores can be very helpful for diagnosis but requires the OD practitioner to make certain assumptions about how the data were collected. These assumptions are discussed in most standard statistical texts, and OD practitioners should consult them before calculating difference scores for purposes of diagnosis or evaluation.16

NOTES

1. S. Mohrman, T. Cummings, and E. Lawler III, "Creating Useful Knowledge with Organizations: Relationship and Process Issues," in Producing Useful Knowledge for Organizations, eds. R. Kilmann and K. Thomas (New York: Praeger, 1983): 613-24; C. Argyris, R. Putnam, and D. Smith, eds., Action Science (San Francisco: Jossey-Bass, 1985); E. Lawler III, A. Mohrman, S. Mohrman, G. Ledford Jr., and T. Cummings, Doing Research That Is Useful for Theory and Practice (San Francisco: Jossey-Bass, 1985).
2. D. Nadler, Feedback and Organization Development: Using Data-Based Methods (Reading, Mass.: Addison-Wesley, 1977): 110-14.
3. W. Nielsen, N. Nykodym, and D. Brown, "Ethics and Organizational Change," Asia Pacific Journal of Human Resources 29 (1991).
4. Nadler, Feedback, 105-7.
5. W. Wymer and J. Carsten, "Alternative Ways to Gather Opinion," HR Magazine (April 1992): 71-78.
6. Examples of basic resource books on survey methodology include W. Saris and I. Gallhofer, Design, Evaluation, and Analysis for Survey Research (New York: Wiley-Interscience, 2007); L. Rea and R. Parker, Designing and Conducting Survey Research: A Comprehensive Guide (San Francisco: Jossey-Bass, 2005); S. Seashore, E. Lawler III, P. Mirvis, and C. Cammann, Assessing Organizational Change (New York: Wiley-Interscience, 1983); J. Van Maanen and J. Dabbs, Varieties of Qualitative Research (Beverly Hills, Calif.: Sage Publications, 1983); and E. Lawler III, D. Nadler, and C. Cammann, Organizational Assessment: Perspectives on the Measurement of Organizational Behavior and the Quality of Worklife (New York: Wiley-Interscience, 1980).
7. J. Taylor and D. Bowers, Survey of Organizations: A Machine-Scored Standardized Questionnaire Instrument (Ann Arbor: Institute for Social Research, University of Michigan, 1972); C. Cammann, M. Fichman, G. Jenkins, and J. Klesh, "Assessing the Attitudes and Perceptions of Organizational Members," in Assessing Organizational Change: A Guide to Methods, Measures, and Practices, eds. S. Seashore, E. Lawler III, P. Mirvis, and C. Cammann (New York: Wiley-Interscience, 1983): 71-138.
8. M. Weisbord, "Organizational Diagnosis: Six Places to Look for Trouble with or without a Theory," Group and Organization Studies 1 (1976): 430-37; R. Preziosi, "Organizational Diagnosis Questionnaire," in The 1980 Handbook for Group Facilitators, ed. J. Pfeiffer (San Diego: University Associates, 1980); W. Dyer, Team Building: Issues and Alternatives (Reading, Mass.: Addison-Wesley, 1977); J. Hackman and G. Oldham, Work Redesign (Reading, Mass.: Addison-Wesley, 1980); K. Cameron and R. Quinn, Diagnosing and Changing Organizational Culture (Reading, Mass.: Addison-Wesley, 1999).
9. J. Fordyce and R. Weil, Managing WITH People, 2d ed. (Reading, Mass.: Addison-Wesley, 1979); W. Wells, "Group Interviewing," in Handbook of Marketing Research, ed. R. Ferber (New York: McGraw-Hill, 1977); R. Krueger, Focus Groups: A Practical Guide for Applied Research, 2d ed. (Thousand Oaks, Calif.: Sage Publications, 1994).
10. S. Lohr, Sampling: Design and Analysis (Pacific Grove, Calif.: Duxbury Press, 1999).
11. W. Deming, Sampling Design (New York: John Wiley & Sons, 1960); L. Kish, Survey Sampling (New York: John Wiley & Sons, 1995).
12. K. Krippendorf, Content Analysis: An Introduction to Its Methodology, 2d ed. (Thousand Oaks, Calif.: Sage Publications, 2003).
13. K. Lewin, Field Theory in Social Science (New York: Harper & Row, 1951).
14. A simple explanation of quantitative issues in OD can be found in S. Wagner, N. Martin, and C. Hammond, "A Brief Primer on Quantitative Measurement for the OD Professional," OD Practitioner 34 (2002): 53-57. More sophisticated methods of quantitative analysis are found in the following sources: W. Hays, Statistics (New York: Holt, Rinehart, & Winston, 1963); J. Nunnally and I. Bernstein, Psychometric Theory, 3d ed. (New York: McGraw-Hill, 1994); F. Kerlinger, Foundations of Behavioral Research, 2d ed. (New York: Holt, Rinehart, & Winston, 1973); J. Cohen and P. Cohen, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 2d ed. (Hillsdale, N.J.: Lawrence Erlbaum Associates, 1983); E. Pedhazur, Multiple Regression in Behavioral Research (New York: Harcourt Brace, 1997).
15. A. Armenakis and H. Field, "The Development of Organizational Diagnostic Norms: An Application of Client Involvement," Consultation 6 (Spring 1987): 20-31.
16. Cohen and Cohen, Applied Multiple Regression.

Week 4 Assignment 1: Read chapter 7 - Designing Interventions. After reading chapter 7, write a one-page summary of the four intervention theories discussed in the chapter. You will need to give an overview of interventions when answering the questions.
