
Question

...
1 Approved Answer

1. What are the two predominant methods for survey data collection?

The two predominant methods for survey data collection are mail surveys and online surveys.

2. Young (1996: 55) highlights the decline of mail survey methods in management accounting after a period in excess of 25 years during which it had been the pre-eminent research method in the discipline. He attributes the decline to three major factors: what are they?

The first factor is a growing interest in alternative forms of research, which can provide richer data sources.

The second factor is the increasing difficulty of having mail survey studies published in major refereed journals.

The third factor is doubt about the usefulness of survey research in accounting, since it has failed to yield a cohesive body of knowledge concerning accounting and control practices despite 25 years of trying to do so.

3. Young analyzed mail survey studies published in major journals over the period 1985-94 and identified a number of common difficulties. List the seven common difficulties he described.

The first common difficulty is small target populations (averaging only 207).

The second common difficulty is small numbers of respondents (averaging only 146).

The third common difficulty is few studies using follow-up procedures to increase the sample size.

The fourth common difficulty is the absence of analysis of non-response bias.

The fifth common difficulty is the absence of studies using both subjective and objective measures of performance.

The sixth common difficulty is the absence of the use of sampling procedures, and

The seventh common difficulty is failure to collect both factual and psychological data within the same study, making it impossible to link practices with behavioral variables.

4. Young identifies seven improvement opportunities: what are they?

1. Research programs to establish a framework for research. There remains the pressing need to develop a coherent body of knowledge in management accounting, to match those in finance and financial accounting. Opportunistic research to date has limited developments in this regard, so 'budget impact' remains the major area in the discipline that has been extensively researched.

2. Sampling methods leading to more powerful theory testing. Random sampling is usually not practicable, so convenience sampling predominates. Such criticisms apply to accounting research generally and make the application of standard statistical testing fraught with danger.

3. The use of Dillman-type methods to achieve larger sample sizes. Follow-up procedures (Dillman, 1978, 2007 suggests sending two reminders) and sponsorship by participating organizations to increase response will incur extra costs and a lack of research control. If we are to guarantee anonymity in the conduct of the survey, as ethical guidelines require, then the survey response should not include the name or position (or perhaps company) of the respondent, to prevent any link between a particular individual and a specific survey response. This will increase the costs associated with follow-up reminders since letters will be sent to both respondents and non-respondents (often much to the annoyance of the former). There is the temptation to adopt unethical numbering systems, color coding, or even invisible ink, to identify participants while appearing to deliver anonymity, but the use of a dual-response mechanism (anonymous questionnaire plus named postcard) usually satisfies both cost and ethical concerns. In one of the relatively rare examples of recent use of mail surveys Tucker and Parker (2014) report the adoption of Dillman-type methods in examining the research-practice gap in management accounting; their research design incorporated pre-contact with potential participants, mail-outs with attractive cover letters, well-prepared surveys, follow-ups, reminders and phone-calls to encourage a high response rate.

4. Addressing the issue of non-response bias. The absence of any reference to non-response in many published papers, or the ubiquitous footnote to the effect that non-response was not considered a problem, are of concern. They suggest that no serious attempt has been made to examine the issue.

5. Moving away from outmoded survey instruments. It is still common to see papers published in 2019 using standard instruments like those developed by Mahoney et al. (1963) for self-reported performance, Milani (1975) for budgetary participation, and MacDonald (1970) for tolerance of ambiguity. Their age suggests that it must be possible to generate more relevant current instruments, but the incentives to do so are slim. Any new instrument is vulnerable and needs extensive testing, and there are no corroborative studies using the same instrument. We observe the classic trade-off between reliability and construct validity, the choice of a well-accepted and reliable instrument that may only approximately capture the construct of interest. Thus, Merchant (in Brownell, 1995: 149) uses the LBDQ (leader behavior description questionnaire) adapted by Aiken and Hage (1968) from Halpin's instrument developed in the 1950s. The LBDQ measures two dimensions of leadership (consideration and task orientation), while Merchant (1985) uses it to measure 'detail orientation', even though he subsequently suggests that this particular instrument may have been less than optimum. Heiman et al. (2010) provide an intriguing insight into the problems with development of new instruments in their analysis of alternatives to traditional scales for the measurement of 'tolerance of ambiguity'.

6. The development of surveys on the basis of improved organizational knowledge. If the survey instrument does not correspond with the 'language' of the firms involved, then response will be limited because of the perceived irrelevance of the survey.

7. Moving away from subjective self-reported measures to more objective evaluations. Young observes an almost total absence of the arguably more objective superior ratings. However, other authors have suggested that such criticism of self-rated performance is overstated, and that superior ratings are just as likely to be in error because of the range of subordinates under the control of a single supervisor.

5. A subsequent study involving Young over the 20-year period (1982-2001) assessed the quality of survey design in management accounting research according to the five key elements suggested by Diamond (2000). What are they?

The first key element is purpose and design of the survey

The second is population definition and sampling

The third is survey questions and other research method issues

The fourth is accuracy of data entry, and

The fifth is disclosure and reporting.

6. A number of fundamental questions need to be answered at the design stage. List and summarize the seven design and planning questions.

The first question is What sort of survey are we contemplating? The requirements of the research question and the impact of cost differentials, for example, will both be important in determining whether a conventional mail survey is appropriate, or if surveys should be conducted by telephone, email, Skype or using online platforms to provide superior and/or more cost-effective outcomes. Mail and online questionnaires should each allow a large enough sample to reduce sampling error to acceptable levels, at considerably lower costs than either telephone or face-to-face interviews. In addition, online and mail surveys provide no opportunity for interviewer bias, a potentially serious problem in both face-to-face and telephone interviews. Anonymity and confidentiality continue to be issues affecting email and internet-based studies, especially the former.

The second question is What sort of respondent are we targeting? It will make a great deal of difference at the planning stage, depending on whether we are targeting the population in general or a very specific proportion - for instance, particular professional groupings, or even CEOs. The narrower the grouping, the more essential it is that we have up-to-date mailing details of the individuals who are to be contacted. If we wish to contact very specific members of the population (e.g. sets of twins for environmental studies where we wish to eliminate the impact of heredity), we may have to advertise for participants.

The third question is What questions do we want answers to? It may appear obvious, but it helps in this respect if we have carefully specified research question(s) and hypotheses to direct expected responses. Too often in research papers and dissertations it appears that the survey has been conducted first, perhaps because of the opportunistic availability of access, without the research question really having been thought through. This quickly becomes apparent when, once the research questions are finally developed, key questions that should have been asked turn out never to have been asked. Roberts (1999) suggests that best practice in the development of instruments and questionnaires dictates that an extensive review of related instruments is undertaken first, and that where instruments need to be purpose-built or adapted, pilot testing is required to address issues of relevance and wording. It cannot be overemphasised: the theory and literature drive the research question and hypotheses; the survey instrument is merely the vehicle that we employ to test theory and hypotheses. There must be a direct and transparent link between the three.

The fourth question is What response categories are we contemplating? For example, are we asking for opinions, judgements or knowledge? Are we setting questions which are closed (requiring yes/no, Likert scale responses or tick-a-box type answers) or open (allowing a considered narrative response)? We need to address these issues early on or they can come back to haunt us. If we are expecting a narrative response, for instance, then we must provide the respondent with enough room to give it; if we conduct a mass survey, we need an efficient coding system to deal with all the closed questions; if we are asking for knowledge, then questions must refer to items that we can reasonably assume a respondent to know without having to search or look up the details. One of the most serious criticisms of survey research (e.g. Chua, 1996) is that the questions asked are often so complex that the survey questionnaire ceases to be the most appropriate method of data collection.
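
To make the coding requirement for closed questions concrete, a minimal sketch in Python follows; the question labels and five-point scale are hypothetical illustrations, not part of any particular instrument:

```python
# A minimal sketch of a coding scheme for closed questions.
# The question labels and scale below are hypothetical.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_response(raw: str):
    """Translate a raw closed-question answer into its numeric code.

    Returns None for blank or unrecognized answers so that
    non-response can be distinguished from a valid code later.
    """
    return LIKERT_CODES.get(raw.strip().lower())

# Example: coding one respondent's answers to three closed items.
answers = {"q1": "Agree", "q2": "strongly disagree", "q3": ""}
coded = {q: code_response(a) for q, a in answers.items()}
print(coded)  # {'q1': 4, 'q2': 1, 'q3': None}
```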

The fifth question is What sequence of questions should we pursue? There are varying opinions as to whether the easiest and shortest questions should be at the beginning of the questionnaire or at the end. Some authors (e.g. Parker, 1992) suggest that short and easy questions should be used at the beginning, leading to the meatiest questions in the middle of the survey, followed by relatively shorter and easier questions towards the end in order to encourage completion of the whole survey document; others (e.g. Bryman, 2001: 117) suggest that early questions should clearly be relevant to the research topic and should not address personal issues like age, experience and educational background. At this stage we must also consider whether there will be an order effect; that is, would we have generated different answers by ordering the questions differently? If we think that this is likely, we should re-run the survey with a smaller pilot audience to determine whether or not our fears are justified. One major advantage of online surveys is that a potential 'ordering effect' bias can easily be addressed by scrambling the responses and testing the outcomes for significant differences.
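
The pilot re-run described above can be analyzed very simply: administer two versions of the instrument with the question order scrambled and test whether answers to the same item differ. A minimal sketch, using an independent t-test of our own choosing and placeholder scores rather than real pilot data:

```python
# A minimal sketch of an order-effect check: compare answers to the
# same item under two question orderings. The scores are placeholders.
import random
from scipy import stats

questions = ["q1", "q2", "q3", "q4"]
version_b = random.sample(questions, k=len(questions))  # scrambled order

# Pilot responses to the same item under each ordering (1-5 Likert codes).
scores_version_a = [4, 3, 5, 4, 2, 4, 3, 5]
scores_version_b = [3, 3, 4, 2, 3, 3, 2, 4]

t_stat, p_value = stats.ttest_ind(scores_version_a, scores_version_b)
if p_value < 0.05:
    print(f"Possible order effect (p = {p_value:.3f}): keep multiple versions.")
else:
    print(f"No significant order effect detected (p = {p_value:.3f}).")
```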

The sixth question is What form should the layout of the survey instrument take? Most authors concur that the survey should not be too long, but that, more importantly, it should be made interesting and relevant to the target audience. Long questionnaires are more cost-effective, but only if they are actually returned! For mail surveys the optimum length depends on the format of the survey instrument (e.g. the desire to leave white space, or the requirement to provide gaps for narrative inputs) but should not normally be greater than four pages for the general population. Specialist groups may tolerate something slightly longer. For both mail and online surveys it is essential that we maintain the interest and motivation of the typical respondent, suggesting that they should be able to complete the instrument in less than 20 minutes. Online we are just one click away from a non-response, so a well-designed instrument maintaining both flow and interest is paramount.

The seventh question is How do we select the sample? This is important and a weakness in many papers, where the issues seem to have been brushed aside - probably because they have not been adequately addressed in the first place. One key consideration is whether we know the size of the population and its constituent items. In many accounting research projects the answer to this question is clearly 'no'. As a result scientific methods of sample selection are precluded, and we need to appeal to the opportunistic or convenience samples so common in the literature - even though they may be 'dressed up' to look like something more systematic. If we have a known population, we will probably have a readily available sampling frame (a stock exchange yearbook, for example, for companies, or an electoral register for individuals). We can then sample randomly from this population (perhaps using a random number generator) or choose every nth item to deliver the required sample size, or stratify the population according to its characteristics in order to ensure that we deliver a representative sample. There are mathematical formulae for calculating the required sample size to deliver the necessary statistical accuracy of estimates, but it is usually easier to return to the research question and hypotheses. We should be able to specify the tests we intend to perform, and the number of ways that the data will be split; we can identify all the cells of the analysis and we would like at least ten (20 is better) items in each to give us confidence in conducting the intended statistical tests. If we cannot adequately resolve order issues at the pilot stage of the survey, then the requirement for multiple versions of the final instrument will expand the required sample size by the same multiple. For example, we do not want to be in the position where we want to test for gender effects, say, with data on individuals, and then find we have too few females in the sample (which happens more often than it should with accounting data). Similarly, when testing for industry effects with company data, we may have too few retail representatives, though with some countries (e.g. Australia and New Zealand) the populations themselves may not be big enough to yield the samples required to conduct tests of all desirable relationships.
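
As an illustration of these sampling choices, the sketch below assumes a hypothetical sampling frame of 1,000 companies; the per-cell counts and the response-rate figure are the rules of thumb quoted above:

```python
# A minimal sketch of the sampling options described above, using a
# hypothetical sampling frame; all names and numbers are illustrative.
import math
import random

frame = [f"company_{i}" for i in range(1, 1001)]  # known population of 1,000
target_n = 100

# (a) Simple random sampling via a random number generator.
random_sample = random.sample(frame, k=target_n)

# (b) Systematic sampling: every nth item after a random start.
step = len(frame) // target_n
start = random.randrange(step)
systematic_sample = frame[start::step][:target_n]

# (c) Required sample size from the planned analysis: identify all
# cells of the intended cross-tabulation and aim for at least 10
# (preferably 20) observations in each.
cells = 2 * 3  # e.g. gender (2) x industry grouping (3)
min_n, preferred_n = 10 * cells, 20 * cells
print(f"Need at least {min_n} usable responses; {preferred_n} is better.")

# Response rates below 25% are common in accounting research, so the
# mail-out must be grossed up accordingly.
expected_response_rate = 0.25
mail_out = math.ceil(preferred_n / expected_response_rate)
print(f"Mail out roughly {mail_out} questionnaires.")
```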

7. In pilot testing, a number of important issues arise which must be addressed satisfactorily. What are the issues?

1. The questions must be clear, simple and easily understood.

2. The questions and covering letter should be targeted towards the respondent's viewpoint so that they are clearly relevant to the target audience - any jargon or industrial terminology employed should be technology-specific so as to improve response rates.

3. The choice of words must be careful, avoiding slang, abbreviations and any terms with potentially ambiguous meanings.

4. There should be no 'double-barreled' questions because if more than one answer appears to be sought, confusion will result.

5. Double negatives should be avoided, as they will frequently be misunderstood.

6. However, wording reversals should be employed to prevent respondents from unthinkingly 'ticking the right-hand box', say, without paying due regard to the precise meaning of the question; such techniques may help to flag the use of 'robot' responses on online convenience sampling platforms (a sketch of such re-scoring follows this list).

7. Respondents must have the knowledge and skills to equip them to answer the questions. This is currently an issue in auditing research which is targeted at subordinates in accounting firms - the levels of complexity in some surveys place unrealistic demands on those responding to the survey.

8. Those questions which are incapable of producing reliable answers must be eliminated. Thus, questioning individuals in social research about their sexual behavior, gambling habits, drug or alcohol abuse is unlikely to produce accurate responses. In accounting research, questions relating to fraudulent practices, dysfunctional behavior, income and even precise position in the hierarchy may elicit misleading answers. Similarly, questions relating to religion may cause difficulties in cross-cultural research.

9. Attention to the time taken by pilot respondents in completing the survey should give an early indication of whether individual questions, or whole sections, need to be pruned.
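
Point 6 above refers to wording reversals. A minimal sketch of how reverse-worded Likert items might be re-scored, and how uniform 'straightlining' answers can be flagged, follows; the item names, scale and threshold are hypothetical:

```python
# A minimal sketch of re-scoring reverse-worded Likert items and
# flagging suspiciously uniform response patterns; item names, the
# five-point scale and the flagging rule are hypothetical.
SCALE_MAX = 5
REVERSED_ITEMS = {"q2", "q5"}  # items worded in the opposite direction

def rescore(raw):
    """Flip reverse-worded items so all items point the same way."""
    return {
        item: (SCALE_MAX + 1 - score) if item in REVERSED_ITEMS else score
        for item, score in raw.items()
    }

def looks_like_straightlining(raw):
    """Identical raw answers across several items, despite some being
    reverse-worded, suggest box-ticking without reading."""
    return len(raw) > 2 and len(set(raw.values())) == 1

respondent = {"q1": 4, "q2": 4, "q3": 4, "q4": 4, "q5": 4}
print(rescore(respondent))                    # q2 and q5 become 2
print(looks_like_straightlining(respondent))  # True: flag for review
```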

8. A number of further considerations arise when researchers focus in more detail on the collection of data. List and explain the six considerations.

A relevant and up-to-date mailing list is essential. Use of an existing mailing list may require sponsorship by a host organization (e.g. one of the professional accounting bodies). The 'cost' of sponsorship may be recognition of the host organization in any publication and/or some loss of control in the conduct of the survey in that the host organization handles completions and returns so that there are no guarantees as to exactly who has completed the survey. Alternatively, the purchase of a reputable database may be required. This can be expensive for a narrowly focused mailing list. The development of a mailing list of one's own from 'scratch' is extremely time-consuming and labor intensive. It is also an ongoing involvement because vigilance must be exercised in the maintenance of the mailing list. There is nothing likely to cause more consternation among survey recipients than if one of the named targets is already deceased. Such problems have advanced the use of mass online convenience samples as an alternative, but with a questionable impact on the representativeness of responses.

The survey should target specific named respondents. There is a wealth of evidence which suggests that surveys that are addressed to the 'occupant' or the 'manager' or some other unnamed individual are those most likely to be consigned to the waste bin. The research literature (e.g. Dillman, 1978, 2007) suggests that surveys should be targeted by both name and position, and that if there are any doubts in these respects they should be confirmed in advance by telephone prior to delivery of the survey. Dillman further suggests the use of a clear covering letter, ideally on headed notepaper and with a handwritten signature from a recognized dignitary; the letter should provide unambiguous instructions, a guarantee of confidentiality, and a demonstration of the importance of the survey and its relevance to the respondent. Such practices could be simulated through electronic means, though imperfectly. Merchant (1985) customizes his research instruments by varying the technological terms to fit the target audience. In so doing he ensures the relevance of the survey to the recipient, increasing the response rate to 95%, but jeopardizes reliability through variable instruments. Edwards et al. (2002) noted the following factors which significantly increased their response rate to a mail survey:

use of a brown envelope (rather than white)

provision of monetary incentives for completion

a short questionnaire, and

recorded delivery of mail-out and responses.

All of these suggestions are consistent with creating an impression that the survey is important, and is deserving of urgent attention from the respondent; however, their adoption must be balanced against both cost and complexity - with respect to incentives, and the inclusion of additional variables into the research!

How do we record the answers? This should be established early to make the most of the media employed. If we are dealing with a mail survey, then manual methods will predominate. However, if we have verbal (i.e. interview or telephone response) or written responses (i.e. narrative answers in a manual survey, or email, online or internet responses), then opportunities exist to conduct a detailed qualitative analysis of the narrative through the content analysis of the text, even though, for interviews, this may have to be transcribed from tape recordings.
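
Where narrative responses are available, even a very simple frequency count can start the content analysis. A minimal sketch, with a hypothetical theme list and transcript:

```python
# A minimal sketch of a simple content analysis: counting occurrences
# of coded themes in a transcribed narrative answer. The theme list
# and the sample transcript are hypothetical.
from collections import Counter
import re

THEMES = {"budget", "participation", "control", "performance"}

transcript = (
    "We set the budget jointly; participation in budget targets "
    "improves perceived control over performance."
)

words = re.findall(r"[a-z]+", transcript.lower())
theme_counts = Counter(w for w in words if w in THEMES)
print(theme_counts)  # e.g. Counter({'budget': 2, 'participation': 1, ...})
```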

Feedback to respondents? The offer of aggregated results to respondents may provide an incentive, which encourages completion of the survey. This is often more successful than the offer of prizes or nominal rewards for return of the survey. In any event, a letter of thanks to respondents is good manners, and may elicit increased willingness for further involvement. For example, it may encourage respondents to make themselves available for follow-up interviews, to provide both clarification and detail. If response is still the major objective, then stamped addressed envelopes, preferably with real stamps rather than bar codes, are preferable. Dillman (1978) recommends the sending of two follow-up reminders to elicit further responses, as well as a careful monitoring of holiday or busy business periods so that these may be avoided for the survey distribution (e.g. avoidance of company surveys close to the financial year-end, the end of the tax year, Christmas or Easter).

Organization. It is better to think ahead. At the planning stage we should be aware of the coding necessary for closed answers, and also of the methods of analysis to be employed. Ideally, the responses should readily be transferable to spreadsheets for manipulation, or to SPSS input, to facilitate more detailed analysis.
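
As a minimal sketch of this forward planning, the snippet below assembles coded closed-question responses into a pandas DataFrame and writes a flat CSV that spreadsheets or SPSS can import; the column names and values are hypothetical:

```python
# A minimal sketch of organizing coded responses for later analysis;
# the respondent numbers, item names and values are hypothetical.
import pandas as pd

records = [
    {"respondent": 1, "q1": 4, "q2": 2, "q3": 5},
    {"respondent": 2, "q1": 3, "q2": 1, "q3": 4},
    {"respondent": 3, "q1": 5, "q2": 3, "q3": 5},
]
df = pd.DataFrame.from_records(records).set_index("respondent")

# A flat CSV transfers cleanly to spreadsheets or to SPSS via its
# text-import facility.
df.to_csv("survey_responses.csv")
print(df.describe())  # quick sanity check before detailed analysis
```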

Non-response problems. The biggest concern in survey research is lack of response. If respondents are unrepresentative and response rates are extremely low, then doubts will arise about the validity of the findings and the potential for biases being introduced. Response rates of less than 25% are common in accounting research; the question that is difficult to answer is whether respondents differ significantly from non-respondents. Non-response is only a problem if we can demonstrate that there are systematic differences between respondents and non-respondents, and that such differences will impact on the findings. This latter condition may be difficult to demonstrate. Merchant (1985) reports the use of a 'postcard' method both to guarantee the anonymity of respondents and to distinguish between respondents and non-respondents. Participants are asked to complete the questionnaire and a separate postcard, which are independently mailed back to the researcher. Assuming that respondents do indeed return both items, then the identities of non-respondents will be known and their characteristics can be compared with those of the known respondents. In the absence of such a device we are forced to estimate the characteristics of non-respondents by proxying them from those respondents who were the last to respond after the final reminder. The implication is that these last-minute, almost reluctant, respondents will resemble those who did not bother to respond at all. Such a technique would need to be adapted for differing time constraints if it were to be applied to an estimate of non-response in online surveys.
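
The last-respondents proxy described above is often operationalized as a 'wave analysis': compare early respondents with those who replied only after the final reminder on some observable characteristic. A minimal sketch with placeholder data:

```python
# A minimal sketch of a wave analysis for non-response bias: late
# respondents (after the final reminder) proxy for non-respondents.
# The firm-size figures below are illustrative placeholders.
from scipy import stats

early_wave = [120, 95, 180, 140, 110, 160, 130]  # firm size, early respondents
late_wave = [60, 85, 70, 90, 55, 75]             # replied after final reminder

t_stat, p_value = stats.ttest_ind(early_wave, late_wave, equal_var=False)
if p_value < 0.05:
    print(f"Waves differ (p = {p_value:.3f}): possible non-response bias.")
else:
    print(f"No significant wave difference (p = {p_value:.3f}).")
```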

9. Andrews (1984) specifies three kinds of measurement errors. What are they?

Andrews (1984) specifies three kinds of measurement error: bias, random errors and correlated (or systematic) errors.
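
Andrews's typology can be summarized in a simple additive measurement model; the notation below is our own shorthand for exposition, not Andrews's formulation:

```latex
% A sketch (our own shorthand, not Andrews's notation) of the implied
% additive measurement model for an observed survey answer:
%   observed = true score + constant bias + random error + correlated error
x_{ij} = \tau_{ij} + \beta_j + \varepsilon_{ij} + \gamma_{m(j)}
% x_{ij}:   respondent i's observed answer to item j
% \tau_{ij}: the true score being measured
% \beta_j:   a constant (bias) shift affecting every response to item j
% \varepsilon_{ij}: random error, with E[\varepsilon_{ij}] = 0
% \gamma_{m(j)}: systematic error shared by all items using method m
```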

10. A number of interview formats are common in the accounting literature and are addressed in our textbook. List and explain the three interview formats.

The first interview format is the structured interview. This is the format which most closely resembles that of the self-completion questionnaire. Opportunities for interviewer bias are restricted by seeking a common context: the same questions, in the same order, with the same cues and prompts permitted, and all within a specific, closed-question framework. The use of closed questions makes the coding of answers easier and has advantages for the subsequent analysis. Closed questions also eliminate the opportunities for error associated with open questions, as well as the chance of 'missed' questions where order differences are permitted. But closed questions also sacrifice the comparative advantage of the interview method by failing to include the flexibility and richness of response offered by open-ended questions. In the accounting literature, Lowe and Shaw (1968), Onsi (1973) and Marginson and Ogden (2005) provide examples of the adoption of a structured interview approach.

The second interview format is the semi-structured interview. This format allows a series of questions to be asked, but in no fixed order. Additional questions may also be asked, as the interviewer sees fit, to examine associated issues that arise in the course of the interview. Lillis (1999) provides an example of the use of the semi-structured approach.

The third interview format is the unstructured interview. This format commences with a series of topics for discussion, rather than specific questions to be asked. It may develop into a directed conversation, with the interviewer able to adopt a 'free-wheeling' approach, as long as the required topics are all covered. The actual words and phrases used may therefore vary significantly between interviews, but this approach may put interviewees at their ease sufficiently to induce them to make disclosures that would not have emerged under different conditions. The unstructured interview approach is illustrated by Merchant (1985) and Malina and Selto (2001).
