
Ongoing Research

Quality Issues in Interview Surveys – Some Contributions

Lilli Japec
p. 26-42

Abstract

This article is the introduction to a thesis with the same title (completed at the Department of Statistics of the University of Stockholm), which deals with some of the issues associated with quality in interview surveys. The interviewer has many tasks, and some of them are extremely error-prone. The author describes some general themes and how they are usually handled. She also suggests new approaches and ideas for further work, both on gaining cooperation and on administering the question-answer process well. Finally, she evaluates a specific procedure for reducing nonresponse errors in interview surveys in terms of costs and error reduction.



Introduction

According to Eurostat, survey quality consists of six dimensions: the relevance of the statistical concepts, the accuracy of estimates, the timeliness and punctuality in disseminating results, the accessibility and clarity of the information, comparability, and coherence. In interview surveys all these dimensions apply; interviewers mainly affect the accuracy component, but to some extent also timeliness. The effect on the latter, however, is more indirect, since it is the responsibility of management to estimate the time needed to complete a survey in the prescribed manner and to monitor the procedure to ensure that this estimate holds. The accuracy of statistics can be viewed in terms of total survey error, which is the combined effect of the various specific sources of error associated with the survey process. These sources can be viewed as operations performed in a certain sequence: specifying a research problem, defining a target population, designing a sampling scheme, developing a measurement instrument, choosing a data collection mode, and choosing a set of data processing operations such as data capture, editing, coding, weighting, and disclosure avoidance. When the sources are viewed like this, in terms of survey operations and their execution, each operation adds to an estimate's mean squared error (MSE). According to Biemer and Lyberg (2003), a simple decomposition of the MSE for an estimate of, say, an average could be:

\[
\begin{aligned}
\mathrm{MSE} &= \mathrm{Bias}^2 + \mathrm{Variance} \\
&= \bigl(B_{\text{specification}} + B_{\text{frame}} + B_{\text{nonresponse}} + B_{\text{measurement}} + B_{\text{data processing}}\bigr)^2 \\
&\quad + \mathrm{Var}_{\text{sampling}} + \mathrm{Var}_{\text{measurement}} + \mathrm{Var}_{\text{data processing}}
\end{aligned}
\]

That is, each survey operation contributes to the bias, to the variance, or sometimes to both. Ideally we would like to design a survey and our processes so that the MSE is as small as possible given a certain budget. To estimate the MSE components, however, special evaluation studies are needed, and these often come at a large cost. Therefore data quality indicators are often used as proxies for estimates of MSE components. For example, nonresponse rates are often used as proxies for nonresponse bias. As a consequence, in many interview surveys a large proportion of the survey budget is allocated to reducing nonresponse rates through elaborate strategies with many call attempts.
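
To make the decomposition concrete, here is a minimal sketch in Python; every component value below is an invented assumption, chosen only to show how the squared total bias and the variance terms combine into an MSE.

```python
# Hypothetical illustration of the MSE decomposition above.
# All component values are invented for demonstration only.

biases = {
    "specification": 0.002,
    "frame": 0.001,
    "nonresponse": 0.010,   # often the dominant bias term
    "measurement": 0.004,
    "data_processing": 0.001,
}
variances = {
    "sampling": 0.0004,
    "measurement": 0.0001,
    "data_processing": 0.00002,
}

total_bias = sum(biases.values())
mse = total_bias**2 + sum(variances.values())
print(f"Total bias: {total_bias:.4f}")
print(f"MSE: {mse:.6f}")
```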

Many actors are involved in the design and execution of survey operations and can contribute to the total survey error, among them managers, interviewers, respondents, and survey methodologists. The actor effect cannot be illustrated in the same straightforward way as the sequence of survey operations. Still, viewing total survey error through the various actors' influence is very important for guidance on tradeoffs in survey planning. In interview surveys, interviewers are very important actors. They are involved in gaining respondent cooperation, in charge of the question-answer process, “training” the respondent (Groves et al. 2004; Fowler 1991), and sometimes even involved in frame construction through being in charge of listing units that should be included in the frame. Some sampling designs demand that the interviewer select a respondent from a sampling unit comprising more than one potential respondent. Such schemes include the Kish method and “birthday” methods (Oldendick et al. 1988). Thus, there is an interviewer influence on the total survey error that is best handled by installing work processes that come close to the ideal.
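
For concreteness, here is a minimal sketch of a within-household selection rule in the spirit of the “last birthday” method mentioned above (the Kish method instead uses pre-assigned selection tables). The household data structure and field layout are invented for this illustration.

```python
from datetime import date

def last_birthday_respondent(household, today=None):
    """Select the member whose birthday occurred most recently.

    `household` is a list of (name, birth_month, birth_day) tuples;
    this structure is invented purely for illustration.
    """
    today = today or date.today()

    def days_since_birthday(month, day):
        # Clamp the day to 28 to sidestep Feb 29 and month-length
        # issues (a simplification acceptable in a sketch).
        this_year = date(today.year, month, min(day, 28))
        if this_year > today:
            this_year = this_year.replace(year=today.year - 1)
        return (today - this_year).days

    return min(household, key=lambda m: days_since_birthday(m[1], m[2]))

household = [("A", 3, 14), ("B", 11, 2), ("C", 7, 30)]
print(last_birthday_respondent(household, today=date(2006, 5, 1)))
```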

When trying to improve survey quality, different perspectives must be kept in mind. First, an assessment of the relative importance of error sources must be made. Second, such an assessment calls for studies of error rates and error structures, and for knowledge about quality indicators and actual quality. Third, developing processes that are stable, predictable, and successively approach the ideal seems a reasonable complement to measuring error sizes that reflect an MSE decomposition.

General descriptions of survey quality along these varying lines are to be found in Hansen et al. (1964); Groves (1989); Biemer and Trewin (1997); Biemer and Stokes (1991); Biemer and Lyberg (2003); Groves et al. (2004); and Morganstein and Marker (1997).

This thesis deals with some of the issues associated with quality in interview surveys. The interviewer has many tasks, and some of them are extremely error-prone. I describe some general themes and how they are usually handled. I also suggest some new approaches and ideas for further work, both on gaining cooperation and on administering the question-answer process well. I also evaluate a specific procedure to reduce nonresponse errors in interview surveys in terms of costs and error reduction.

Interviewers and Survey Quality

Characteristics of the Survey Interview

The interview mode can be face-to-face or telephone, with or without computer assistance. Face-to-face interviewing is characterized by great flexibility and offers good opportunities to convert refusals and to assist respondents in the question-answer process. On the downside there are high costs, a situation that is difficult to control, and the risk of social desirability bias. The latter problem, however, can be mitigated with CASI, computer-assisted self-interviewing: when sensitive questions are to be answered, the computer is turned over to the respondent, thereby reducing the risk of social desirability bias. Telephone interviewing is cheaper but less flexible than face-to-face interviewing. When telephone interviewing is performed from call centers, the opportunities for quality control are much better than in the case of face-to-face interviews. The computer-assisted variants of interviewing decrease the risk of certain types of errors, such as skip errors. Computer-assisted interviewing also allows a certain amount of on-line editing, which moves some of the editing to a more cost-effective upstream location.

Two Schools of Interviewing

There are two main schools of interviewing, standardized and conversational. The idea behind standardization is that all respondents should receive the same stimuli so that each interviewer's influence on the response is minimized. In practice this is very difficult to achieve, and the resulting rigidity in the interaction between interviewer and respondent can make the interview situation unnatural. In standardized interviewing, comprehension is left to the respondent and only scripted probes are allowed. Conversational interviewing aims for answers that are as accurate as possible. This means that the interviewer can change the question wording if that is necessary for the respondent to understand the question. Nonscripted probes are also allowed, with the same justification. These schools have been discussed extensively in the literature (Beatty 1995; Suchman and Jordan 1990), and it is not clear whether one is superior to the other. The conversational school, however, requires better-trained interviewers who thoroughly understand the concepts that underlie the questions, and also larger survey budgets because of the need for better interviewer training and longer interviews. In practice an interview contains elements of both schools. Conrad and Schober (2000) have shown that conversational interviewing can be efficient when respondents' situations do not map clearly onto a response category. Other studies show that deviations from standardized scripts are not always harmful (Dykema et al. 1997).

Interviewing is a very complex survey operation since it involves a cognitive interaction between the interviewer and the respondent. The operation is described in many books and articles (Gubrium and Holstein 2001; Schaeffer 1991; Turner et al. 1998; Sykes and Collins 1992; Cannell et al. 1977; Houtkoop-Steenstra 2000; Dykema et al. 1997; Schober and Conrad 1997; De Leeuw and Collins 1997; Lyberg and Kasprzyk 1991). These publications cover issues such as the anatomy of survey interviews, how interviewer and respondent characteristics can best be matched, methods that can change interviewer behavior, comparisons between different interview modes, links between behavior coding and actual data quality, the importance of interviewer expectations and experience, and interaction in general. The survey literature discussing different aspects of the interview process is very extensive, partly because so many disciplines are involved in developing interviewing techniques.

Interviewer Effects

Some types of surveys could not be carried out in any mode other than interviewing. For example, a mail survey with many questions, equivalent to an interview lasting about one hour, would probably have such a high nonresponse rate that it would not be viewed as trustworthy. Interviewers are very valuable for securing participation, and response rates in interview surveys are typically much higher than in other data collection modes. When choosing the data collection mode, trade-offs must be made, for example between nonresponse errors and interviewer errors.

Interviewers, however, contribute to the total survey error in a number of ways. Interviewers vary in their ability to gain cooperation, so their contributions to the nonresponse bias will vary. Gaining cooperation comprises two steps: first, tracking and finding a sampled unit; second, getting participation. During the first step, noncontacts are handled by repeated call attempts. The call scheduling literature (Kulka and Weeks 1988; Greenberg and Stokes 1990) tells us how to administer a given number of call attempts over time. Finding someone at home is more difficult today than it was 30–40 years ago (Weeks et al. 1980), and it has become a very costly process. During the second step the interviewer must gain cooperation from the unit. This can be made easier using efficient “doorstep” strategies (Campanelli et al. 1997; Morton-Williams 1993) or tailoring strategies based on compliance principles (Cialdini 1990; Groves and Couper 1998). “Doorstep” should be interpreted in a wide sense: in telephone surveys, too, the very first seconds of the interaction are crucial and serve as the doorstep to overcome in order to gain cooperation.

Interviewers also affect the respondents and their answers. Respondents interviewed by the same interviewer tend to give more similar responses than respondents interviewed by different interviewers. This clustering effect within interviewer assignments is measured by the so-called interviewer variance (Kish 1962; Groves 1989; O’Muircheartaigh and Campanelli 1999). There are many reasons for this clustering. For example, interviewers use different probing techniques, reword poorly designed questions differently, and may use different strategies for accepting a “don't know” answer. These differences between interviewers arise because standardization is not working properly. This type of personal interviewer bias causes an intra-interviewer covariance term that contributes to the variance of the estimate. Also, respondents may choose not to give an honest answer, or not to go through the entire cognitive process needed to answer a question, when the characteristics of the specific interviewer-respondent interaction are not optimal. There are also other examples of error that can occur in data collection performed by interviewers, such as listing population members, tracking sample members, selecting a specific respondent from a sampling unit, and correctly recording answers. However, the main interviewer effects are related to nonresponse and measurement error.

To estimate interviewer variance, an interpenetrated sample design is needed. This is achieved by randomizing the assignment of interviewers to subsamples of respondents (Mahalanobis 1946). There are several slightly different measurement models describing interviewer effects (Cochran 1968; Fellegi 1974; Hansen et al. 1961; Kish 1962). The differences between the models lie in the assumptions made about the effects and the model components.

Using the Kish model, the actual increase in the variance of the estimates due to between-interviewer variance is a function of the average interviewer workload m and of ρint, where the latter is the ratio of the between-interviewer variance to the sum of the between- and within-interviewer variance. The actual increase in variance due to interviewers is obtained by multiplying the standard variance estimate by (m-1)ρint. Values of ρint are usually so small (see Groves 1989) that most studies find them not significantly different from zero. But small values of ρint combined with moderate workloads can generate considerable increases in the variance of estimates. Traditional variance formulas do not take the interviewer contribution into account. Essentially, what ρint reflects is the interviewer-specific influence on each assignment, which resembles a cluster effect in cluster sampling. Through his or her unique habits, style, and approach, each interviewer affects the respondents in his or her assignment in uniform ways, resulting in between-interviewer variance.
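
A minimal numeric sketch of the variance inflation implied by the Kish model follows; the values of m and ρint are assumptions chosen only to illustrate how a small ρint can still inflate variance noticeably.

```python
# Variance inflation due to interviewers under the Kish model.
# The total multiplier on the standard variance estimate is
# deff = 1 + (m - 1) * rho_int, where m is the average workload and
# rho_int the intra-interviewer correlation. Values are invented.

def interviewer_deff(m, rho_int):
    """Multiplier applied to the standard variance estimate."""
    return 1 + (m - 1) * rho_int

for m in (10, 30, 50):
    for rho in (0.005, 0.01, 0.02):
        print(f"workload m={m:2d}, rho_int={rho:.3f}: "
              f"deff={interviewer_deff(m, rho):.2f}")
```

With, say, m = 50 and ρint = 0.02, the variance nearly doubles (deff ≈ 1.98), even though ρint would rarely be judged significantly different from zero.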

There is also a vast literature on interviewer expectations, preferences, attitudes, confidence, demographic characteristics, style, experience, accent, and voice (Dijkstra and Van der Zouwen 1987; Oksenberg and Cannell 1988; Rice 1929; O’Muircheartaigh 1997) and the corresponding effects on data quality. Some of the findings can be turned into robust design principles, but other findings are inconclusive.

Quality Control

Most survey organizations have some type of quality assurance system in place. There are two main forms of quality control. One set of methods, used on a continuing basis, involves, for example, monitoring interviews either live or tape-recorded, reinterviewing a certain portion of each interviewer's workload (Forsman and Schreiner 1991; Biemer and Forsman 1992), keeping track of response rates by interviewer, and scan editing of paper questionnaires filled in by interviewers. The second type of quality control involves more elaborate methods, such as mock interviews, observations in the field, and special studies aimed at estimating ρint, which usually can be performed only on an ad hoc basis. The introduction of computer-assisted methods has opened up possibilities for using technology in quality control. For instance, it is possible to use keystroke files and trace files (Couper 1998) to check how interviewers have used various keys to go back and forth in the instrument. On-line editing is also possible as part of a quality program. Last but not least, training is an important quality control tool. The literature on training is not very rich, and it is not clear how interviewers can best be trained in the various tasks I have listed. Groves and McGonagle (2001) and Fowler and Mangione (1985) are important references in this field.

By and large, it is safe to say that interviewer errors are generally underappreciated; if you look for them, you will find them (Fowler 1991). Interviewing is probably the most complicated survey operation of them all, and one must bear in mind that enumerator error was the most important reason why the U.S. Census Bureau changed its main data collection procedure from interviewing to mail in the 1960 U.S. Census of Population and Housing.

Some Recurring Issues in Interview Surveys

Despite the extensive literature on interviewers, interview surveys, and related issues, the field is by no means over-researched. Many lines of development need to be explored more fully.

The issue of interviewers' ability to gain cooperation relies to some extent on the use of the compliance principles laid out by Cialdini (1984) and Groves and Couper (1998). It is not clear, however, how efficient these principles are, either one by one or as a set. Reciprocation appears very powerful, as manifested by the extensive literature on incentives (Singer et al. 1999). It is less clear how efficient some of the other principles are. For instance, principles such as authority, social validation, or liking might work differently for different subpopulations or in different cultures; in some countries, referring to authority might even be counter-productive. There is a need for studies that can evaluate the relative importance of the compliance principles as a means of getting participation.

Quality control efforts need to take advantage of recent thinking on process quality. For instance, when analyzing nonresponse rates for individual interviewers it is important to use tools from the theory of statistical process control (Ryan 2000). One such tool is the control chart, where individual nonresponse rates are plotted, making it possible to distinguish between common and special cause variation. Special cause variation is a sign of poor individual interviewer performance and should be eliminated. Remaining common cause variation is due to system characteristics such as bad questions or questionnaires, interviewer working conditions, training procedures, and feedback operations. If the survey manager is not pleased with the amount of common cause variation, then a process improvement that can decrease this variation is necessary. Some quality control strategies do not distinguish between these main forms of variation, resulting in “ghost-chasing” and erroneous decisions in the assessment of interviewers. The same tool can be applied to data obtained from monitoring and observations of the interview process. Thus, deviations from agreed interview procedures might be due to the system rather than to individual interviewers' skills. This thesis contains examples of the application of control charts to interview processes, which I have not found elsewhere.
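
As a sketch of how such a control chart might be set up for interviewer nonresponse rates, here is a p-chart with 3-sigma limits; the assignment data and the assumption of a common underlying rate are purely illustrative, not taken from the thesis.

```python
# Illustrative p-chart for interviewer nonresponse rates. Rates outside
# the control limits signal special cause variation (individual
# performance); rates inside reflect common cause (system) variation.
# All data are invented for this sketch.

import math

# (interviewer id, assigned cases, nonresponding cases) -- invented
assignments = [("A", 80, 12), ("B", 75, 14), ("C", 90, 35), ("D", 85, 15)]

total_cases = sum(n for _, n, _ in assignments)
p_bar = sum(d for _, _, d in assignments) / total_cases  # overall rate

for interviewer, n, d in assignments:
    rate = d / n
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl, ucl = max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma
    flag = "common cause" if lcl <= rate <= ucl else "special cause"
    print(f"{interviewer}: rate={rate:.2f}, "
          f"limits=({lcl:.2f}, {ucl:.2f}) -> {flag}")
```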

Some existing models of the response and interview processes tend to neglect the role of the interviewer. For instance, phenomena such as comprehension, information retrieval, satisficing, sloppy cognitive work, burden, and degree of interest in the topic among respondents undoubtedly have their counterparts among interviewers. In this thesis the interview process is described from the interviewer's perspective, and it is found that many respondent behaviors are matched by similar interviewer behaviors, although the effects might differ. I develop the concept of interviewer burden to demonstrate that interviewing can be burdensome, that interviewer burden can have adverse effects on data quality, and that it is possible to develop work procedures that both decrease interviewer burden and increase data quality. Interviewers are multi-tasking, and we must find methods that can make life easier for them. Interviewer burden is a concept that has evolved during recent years, but much of the discussion concerns physical aspects of burden (such as carrying laptops) or decreasing workload (such as asking fewer questions). I develop the concept in more detail and identify a number of burden components, some of a more cognitive nature. There is a need for an interviewer-side concept that matches the established concept of respondent burden.

Many of the current practices for reducing nonresponse rates, such as elaborate fieldwork strategies, incentives, advance letters, and refusal conversion strategies, need to be evaluated in terms of their effects on nonresponse bias. Many of these methods are costly and could perhaps be used more efficiently than they are today. We evaluate one such method used in the Swedish Labor Force Survey: the fieldwork strategy. The MSE perspective is important, and there is a tendency to focus on the data quality indicators that are easy to measure, such as response rates. We should strive to develop new indicators that are perhaps more difficult to measure but give us a better idea of the effects on data quality.

Summary of Papers

The thesis consists of four papers. Paper I is an overview of some of the research conducted during recent decades; it sets the stage for the more detailed papers that follow. Paper II deals with the interview process, concentrating on the concept of interviewer burden and its implications for interview surveys. Paper III takes the concept of interviewer burden one step further: using data from the Swedish part of the European Social Survey, it is possible to speculate about the effects of interviewer burden on data quality. Paper IV describes the effects of field efforts on nonresponse bias and cost in the Swedish Labor Force Survey, which is an interview survey. In this way two of the main problems with interview surveys are covered: measurement error and nonresponse.

Paper I: Research Findings in Interview Surveys – Some Implications and Research Needs

This paper gives a brief overview of the state of the art in methodological research related to interview surveys and discusses some implications for survey practice. I review the literature on strategies to get participation in surveys, including call scheduling and refusal conversion strategies. I also cover the literature on interviewer effects that can occur during the interview, methods for quality assurance in interview surveys, and interviewer training. Furthermore, methods used to study quality aspects of interview surveys are described. The brevity of the overview should be emphasized: the discussion is designed to highlight areas where more research is needed rather than to provide a complete account of what is going on in the field.

Paper II: The Interview Process and the Concept of Interviewer Burden

In order to reduce survey errors caused by respondents, theories have been developed to help us understand why respondents behave the way they do. Reducing respondent burden, both cognitive burden and other types of burden, is viewed as being of the utmost importance. Interviewer errors have also been recognized and described in the survey methodology literature. This literature, however, has not addressed to any great extent the questions of why interviewer errors occur or why interviewers behave the way they do. I believe that in order to reduce interviewer errors we need to understand the mechanisms behind these errors. In this paper I therefore address these issues.

The interview process is modeled and interviewers' cognitive processes are addressed. This model differs from earlier models in being more overarching: it combines interviewer and respondent cognitive processes within a social context. The interviewer's tasks during an interview are to get participation, pick up cues and probe, ask the questions, and record the answers. In order to perform a good-quality interview, the interviewer needs to understand the questions, understand the respondent's problem with a question, retrieve definitions and concepts from memory, formulate probes, understand the response provided by the respondent, make a judgment, and then record the answer. The interviewer's access to concepts and definitions is also reflected in the model. The model developed by Beatty and Herrmann (2002) regarding respondent knowledge is adapted to fit interviewer knowledge of concepts and definitions.

Interviewer knowledge can be in one of the following four cognitive states in relation to a certain question:

  • Available: the concepts and definitions can be retrieved with minimal effort;

  • Accessible: the concepts and definitions can be retrieved with some effort;

  • Generatable: the concepts and definitions are not exactly known, but may be guessed using other information in memory from previous surveys;

  • Not available: the requested concepts and definitions are not known.

The accuracy of this step is affected by, for example, the number of concepts and definitions the interviewer has to remember, the cues the instructions provide, and how long ago the interviewer last had to recall the concepts and definitions. The first cognitive state, where concepts and definitions are easily available, is the least demanding for the interviewer. The other states are probably more demanding, and interviewers are likely to handle them very differently. For example, if a definition is not exactly known (generatable), the interviewer might recall a similar problem from an earlier survey or interview and adopt the same definition for the new one. Some interviewers might choose this strategy while others might record a “don't know” answer. In any case, interviewer burden increases as the interviewer moves from the first to the last cognitive state.

In addition, the interviewer is expected to be able to handle the survey instrument during the interaction with the respondent. This task can vary in complexity according to mode. For instance, if the mode is computer-assisted, the computer can cause problems for the interviewer that affect the interaction between interviewer and respondent. The interviewer is also expected to follow social norms, adequately represent the survey organization, and be flexible in various ways. There is no doubt that the interviewing task can be multifaceted and burdensome, and I believe this has not been sufficiently recognized in the survey literature. Interviewer burden is sometimes referred to, but is then often restricted to parts of the survey task or simply to the number of assigned cases.

The discussion in this paper focuses on interviewer burden with the components administration and survey design, respondent and interviewer characteristics, social environment, and tasks. I define the concept of interviewer burden in a model that also includes its effects. One such effect is interviewer satisficing, the phenomenon that occurs when the interviewer does not expend the effort necessary to complete an interview as prescribed. Satisficing has been discussed in the survey methodology literature, but only in relation to respondents.

I use data from two interviewer surveys that we have conducted to illustrate some aspects of interviewer burden. From interviewers' own reports it is clear that interviewer burden exists and that interviewers develop their own strategies to deal with it. All these strategies will affect data quality.

Paper III: Interviewer Burden and Its Effects on Data Quality in the Swedish Part of the European Social Survey (ESS)

In this paper some components of interviewer burden are explored further. I also study the effects of interviewer burden components and strategies on data quality in the Swedish part of the European Social Survey (ESS). To model effects due to respondents and to interviewers, multilevel regression analysis and the software MLwiN were used. I replicate the Hox and de Leeuw (2002) study, which found that interviewers with a positive attitude towards refusal conversion have higher response rates than other interviewers.
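
The paper's analyses were done in MLwiN; purely as an illustration of the kind of two-level model involved (respondents nested within interviewers), here is a sketch in Python with statsmodels. The synthetic data, effect sizes, and variable names are all invented assumptions, not the paper's actual specification.

```python
# Sketch of a two-level model: respondents nested within interviewers,
# with a random intercept per interviewer. Data are synthetic.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_interviewers, per_interviewer = 40, 25

rows = []
for i in range(n_interviewers):
    n_surveys = rng.integers(1, 6)   # concurrent surveys (burden proxy)
    u_i = rng.normal(0, 2)           # interviewer-level random effect
    for _ in range(per_interviewer):
        length = 30 + 1.5 * n_surveys + u_i + rng.normal(0, 5)
        rows.append({"interviewer_id": i, "n_surveys": n_surveys,
                     "interview_length": length})
df = pd.DataFrame(rows)

# Interview length modeled on the interviewer's concurrent survey load,
# with interviewer-level variation captured by the random intercept.
model = smf.mixedlm("interview_length ~ n_surveys", df,
                    groups=df["interviewer_id"])
print(model.fit().summary())
```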

We carried out a survey among Statistics Sweden's interviewers, asking them what situations they find stressful, their opinions about the surveys they work on, the strategies they use, and their attitudes. Besides the data from the interviewer survey, data from four additional sources were used in the analysis: data from the Swedish part of the ESS, interviewer data about each interview carried out in the ESS, data about sampled persons' characteristics from the register of the total population, and data from the interviewer time reporting system.

The analysis revealed that the number of surveys an interviewer works on during the same time period affects the length of the interview, the probing frequency, and the “don't know” frequency in a survey. Working on both face-to-face and telephone surveys during the same period can be stressful, and this was found to affect the noncontact rate. These results imply that we should not burden interviewers with too many surveys and that we should not mix surveys with different modes too much.

Interviewer interest affects the response rate, refusal rate, interview length, and “don't know” frequency. This suggests that increasing interviewers' interest and motivation is important. Another implication of the study results is that we should monitor interview time, since it can tell us something about data quality. Extremely short interviews are most likely susceptible to either respondent or interviewer satisficing.

I also found that interviewers with a positive attitude towards persuasion tend to have lower refusal rates and higher response rates than other interviewers. This finding is consistent with the Hox and de Leeuw (2002) study.

Furthermore, it was found that interviewer strategies affect both refusal and noncontact rates. Interviewers who avoid asking “why” after an initial refusal tend to have a lower refusal rate than other interviewers. One reason for this could be that a reluctant sampled person feels cornered by the interviewer: if the sampled person first answers the question why and provides arguments against participating, it is hard for him or her to back down and agree to participate. Interviewers who always leave a message in face-to-face surveys if the sampled person is not at home tend to have a higher noncontact rate than other interviewers. This could be an effect of, for example, interviewers simply waiting for the sampled person to get in touch.

The data analysis also revealed that interviewer burden, as I define it, affects data quality. For example, interviewers with a heavy burden tend to have lower response rates, higher refusal and noncontact rates, and shorter interview times, and they probe less.

Paper IV: Effects of Field Efforts on Nonresponse Bias and Costs in the Swedish Labor Force Survey

In this paper the focus is on fieldwork strategies and nonresponse bias. The nonresponse rate has traditionally been seen as an indicator of nonresponse bias. In interview surveys, a common way to reduce nonresponse rates is to make more call attempts at different times of the day or to persuade reluctant persons to participate. These methods are successful in increasing response rates. In recent years, however, there has been a shift in focus from reducing the nonresponse rate to reducing nonresponse bias. Some studies illustrate this, for example Keeter et al. (2000) and Lynn et al. (2002); their focus is not only on response rates but also on effects on survey estimates.

We study the effects of fieldwork on nonresponse bias in the Swedish Labor Force Survey (LFS). Under the current strategy, 12 call attempts should be made for each sampled person, and at least two of these should be made during a weekend. A key survey estimate in the LFS is the proportion of persons employed in Sweden. This variable is highly correlated with information submitted by employers to the tax board about their employees, which is available in the employment register for the year 2001 (SREG 2001). To evaluate whether the current strategy reduces the nonresponse bias, we estimate the proportion employed according to SREG for the LFS sample. Since SREG data are available for both respondents and nonrespondents in the LFS, we can compare estimates of the proportion employed based on the entire sample, i.e., both respondents and nonrespondents, with estimates based on respondents only. To evaluate the effect of each call attempt on the estimates, we make 12 estimates, one for each call attempt.
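
A minimal sketch of the comparison described above: for each call-attempt cutoff, estimate the register-based employment proportion among the respondents reached by then and compare it with the full-sample benchmark. The data-generating model and all probabilities are invented; in this toy version the bias is roughly constant across cutoffs rather than following the actual LFS pattern.

```python
# Sketch of the call-attempt analysis described above. All data invented.
import random

random.seed(1)
sample = []
for _ in range(5000):
    employed = random.random() < 0.75
    # In this toy model employed persons respond more readily,
    # which induces a nonresponse bias.
    responds = random.random() < (0.85 if employed else 0.70)
    attempt = random.randint(1, 12) if responds else None
    sample.append((employed, attempt))

# Register status is known for the whole sample, so the full-sample
# proportion serves as the benchmark, as in the SREG comparison.
benchmark = sum(emp for emp, _ in sample) / len(sample)
print(f"benchmark (full sample): {benchmark:.3f}")

for k in range(1, 13):
    reached = [emp for emp, attempt in sample
               if attempt is not None and attempt <= k]
    est = sum(reached) / len(reached)
    print(f"after {k:2d} call attempts: estimate={est:.3f}, "
          f"bias={est - benchmark:+.3f}")
```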

The generalized regression estimator (GREG) is applied, using strong auxiliary information in the estimation procedure. The purpose of using auxiliary information in the estimation is to reduce the nonresponse bias. Since strong auxiliary information is not always available in surveys, we also carry out the same analysis assuming that a simple random sample was drawn and that no auxiliary information is available. Furthermore, we study noncontacts and refusals to address the question of whether either group of nonrespondents causes bias. Finally, we estimate the cost of the current fieldwork strategy compared with that of less elaborate strategies.
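
As a loose illustration of how auxiliary information corrects a respondent-only estimate, the sketch below uses poststratification, the simplest special case of GREG with a single categorical auxiliary variable; this is a simplification, not the LFS's actual estimator, and all counts are invented.

```python
# Poststratified estimate: the simplest special case of GREG with one
# categorical auxiliary variable. Counts are invented for illustration.

# Known population counts per auxiliary class (e.g., register status)
population_counts = {"reg_employed": 3900, "reg_not_employed": 1100}
N = sum(population_counts.values())

# Respondents: (auxiliary class, survey-measured employment) -- invented
respondents = (
    [("reg_employed", True)] * 2700
    + [("reg_employed", False)] * 150
    + [("reg_not_employed", True)] * 120
    + [("reg_not_employed", False)] * 530
)

# Naive respondent mean, ignoring auxiliary information
naive = sum(y for _, y in respondents) / len(respondents)

# Poststratified estimate: weight each class mean by its known
# population share, correcting the respondents' skewed composition.
est = 0.0
for cls, n_cls in population_counts.items():
    ys = [y for c, y in respondents if c == cls]
    est += (n_cls / N) * (sum(ys) / len(ys))

print(f"naive estimate:          {naive:.3f}")
print(f"poststratified estimate: {est:.3f}")
```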

The figure below shows the proportion of persons employed according to SREG 2001. We can see that the current strategy overestimates the proportion by about two percentage points and that the nonresponse bias is very stable after the fourth call attempt.

Figure: Proportion of persons employed according to SREG 2001

The analysis found that with the current strategy the nonresponse bias is only reduced for a few population subgroups, that it remains stable after the fourth or fifth call attempt for most subgroups, and that it even increases for some. This result holds whether or not we use auxiliary information in the estimation procedure. If we use strong auxiliary information, however, we reduce the error level dramatically. The largest nonresponse bias after the twelfth call attempt with auxiliary information is around 4 percent (for non-Swedish Nordic citizens); without the auxiliary information, the nonresponse bias for this same group is almost 12 percent.

It was also found that, when we assume a simple random sample design and use no auxiliary information in the estimation procedure, it is the noncontacts rather than the refusals that contribute most to the nonresponse bias. However, if we use the GREG estimator with strong auxiliary information, we are able to reduce the bias for the noncontacts considerably (by 2 to 14 percentage points). The remaining nonresponse bias is caused to the same extent by noncontacts and refusals.

A less elaborate fieldwork strategy, with four call attempts instead of twelve, would reduce the monthly cost of the LFS. The resources could be used more efficiently by allocating them to increasing response in the population subgroups that contribute most to the nonresponse bias.


Bibliography

Beatty, P. (1995). Understanding the Standardized/Non-Standardized Interviewing Controversy. Journal of Official Statistics, 11, 147-160.

Beatty, P. and Herrmann, D. (2002). To Answer or Not to Answer: Decision Processes Related to Survey Item Nonresponse. In Survey Nonresponse, R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little (eds). New York, Wiley, 71-85.

Biemer, P.P. and Forsman, G. (1992). On the Quality of Reinterview Data with Applications to the Current Population Survey. Journal of the American Statistical Association, 87, 420, 915-923.

Biemer, P.P. and Lyberg, L. (2003). Introduction to Survey Quality. New York: Wiley.

Biemer, P. P. and Stokes, L. (1991). Approaches to the Modeling of Measurement Errors. In P. P. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, and S. Sudman (eds), Measurement Errors in Surveys, New York: Wiley and Sons, 487-517.

Biemer, P. and Trewin, D. (1997). A Review of Measurement Error Effects on the Analysis of Survey Data. In L. Lyberg, P. Biemer, M. Collins, E. De Leeuw, C. Dippo, N. Schwarz, and D. Trewin (eds), Survey Measurement and Process Quality, New York: Wiley and Sons, 603-632.

Campanelli, P., Sturgis, P., and Purdon, S. (1997). Can You Hear Me Knocking: An Investigation into the Impact of Interviewers on Survey Response Rates. Social & Community Planning Research, London, UK.

Cannell, C. F., Oksenberg, L., and Converse, J. M. (1977). Striving for Response Accuracy: Experiments in New Interviewing Techniques. Journal of Marketing Research, XIV, 306-315.

Cialdini, R.B. (1984). Influence: The New Psychology of Modern Persuasion. New York: Quill.

Cialdini, R.B. (1990). Deriving Psychological Concepts Relevant to Survey Participation from the Literatures on Compliance, Helping, and Persuasion. Paper presented at the First International Workshop on Household Survey Nonresponse, Stockholm.

Cochran, W. G. (1968). Errors of Measurement in Statistics. Technometrics, 10, 55-83.

Conrad, F. G. and Schober, M. F. (2000). Clarifying Question Meaning in a Household Telephone Survey. Public Opinion Quarterly, 64, 1-28.

Couper, M.P. (1998). Measuring Survey Quality in a CASIC Environment. American Statistical Association, Proceedings of the Survey Research Methods Section, 41-49.

De Leeuw, E. and Collins, M. (1997). Data Collection Methods and Survey Quality: An Overview. In L. Lyberg, P. Biemer, M. Collins, E. De Leeuw, C. Dippo, N. Schwarz, and D. Trewin (eds), Survey Measurement and Process Quality, 199-220, New York: Wiley and Sons.

Dijkstra, W. and van der Zouwen, J. (1987). Styles of Interviewing and the Social Context of the Survey-Interview. In H. Hippler, N. Schwarz, and S. Sudman (eds), Social Information Processing and Survey Methodology, 200-211, New York: Springer Verlag.

Dykema, J., Lepkowski, J. M., and Blixt, S. (1997). The Effect of Interviewer and Respondent Behavior on Data Quality: Analysis of Interaction Coding in a Validation Study. In L. E. Lyberg, P. Biemer, M. Collins, E. de Leeuw, C. Dippo, N. Schwarz, and D. Trewin (eds), Survey Measurement and Process Quality, New York: Wiley, 287-310.

Fellegi, I. (1974). An Improved Method of Estimating the Correlated Response Variance. Journal of the American Statistical Association, 69, 496-501.

Forsman, G. and Schreiner, I. (1991). The Design and Analysis of Reinterview: An Overview. In P. Biemer, R. Groves, L. Lyberg, N. Mathiowetz, and S. Sudman (eds), Measurement Errors in Surveys, New York: Wiley, 279-302.

Fowler, F.J. Jr (1991). Reducing Interviewer-Related Error Through Interviewer Training, Supervision, and Other Means. In P. Biemer, R. Groves, L. Lyberg, N. Mathiowetz, and S. Sudman (eds), Measurement Errors in Surveys, New York: Wiley, 259-278.

Fowler, F.J. and Mangione, T.W. (1985). The Value of Interviewer Training and Supervision. Final Report to the National Center for Health Services Research, Grant #3-R18-HS04189.

Greenberg, B.S. and Stokes, S.L. (1990). Developing an Optimal Call Scheduling Strategy for a Telephone Survey. Journal of Official Statistics, 6, 421-435.

Groves, R.M. (1989). Survey Errors and Survey Costs. New York: Wiley.

Groves, R.M. and Couper, M.P. (1998). Nonresponse in Household Interview Surveys. New York: Wiley.

Groves, R.M. and McGonagle, K.A. (2001). A Theory-Guided Interviewer Training Protocol Regarding Survey Participation. Journal of Official Statistics, 17, 249-265.

Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., and Tourangeau, R. (2004). Survey Methodology. New York: Wiley.

Gubrium, J. F. and Holstein, J. A. (eds) (2001). Handbook of Interview Research. Sage.

Hansen, M., Hurwitz, W., and Bershad, M. (1961). Measurement Errors in Censuses and Surveys. Bulletin of the International Statistical Institute, 38, 2, 359-374.

Hansen, M., Hurwitz, W.N., and Pritzker, L. (1964). The Estimation and Interpretation of Gross Differences and the Simple Response Variance. In Contributions to Statistics, presented to professor P.C. Mahalanobis on the occasion of his 70th birthday, 111-136.

Houtkoop-Steenstra, H. (2000). Interaction and the Standardized Survey Interview: The Living Questionnaire. Cambridge: Cambridge University Press.

Hox, J. and de Leeuw, E. (2002). The Influence of Interviewers’ Attitude and Behavior on Household Survey Nonresponse: An International Comparison. In Survey Nonresponse, R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little (eds). New York, Wiley, 103-120.

Keeter, S., Kohut, A., Groves, R. M., and Presser, S. (2000). Consequences of Reducing Nonresponse in a National Telephone Survey. Public Opinion Quarterly, 64, 125-148.

Kish, L. (1962). Studies of Interviewer Variance for Attitudinal Variables. Journal of the American Statistical Association, 57, 92-115.

Kulka, R.A. and Weeks, M.F. (1988). Toward the Development of Optimal Calling Protocols for Telephone Surveys: A Conditional Probabilities Approach. Journal of Official Statistics, 4, 319-332.

Lyberg, L. and Kasprzyk, D. (1991). Data Collection Methods and Measurement Error. In P. Biemer, R. Groves, L. Lyberg, N. Mathiowetz, and S. Sudman (eds), Measurement Errors in Surveys, 237-257, New York: Wiley and Sons.

Lynn, P., Clarke, P., Martin, J., and Sturgis, P. (2002). The Effects of Extended Interviewer Efforts on Nonresponse Bias. In Survey Nonresponse, R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little (eds), New York, Wiley, 135-148.

Mahalanobis, P.C. (1946). Recent Experiments in Statistical Sampling in the Indian Statistical Institute. Journal of the Royal Statistical Society, 109, 325-378.

Morganstein, D. and Marker, D. (1997). Continuous Quality Improvement in Statistical Agencies. In L.E. Lyberg, P. Biemer, M. Collins, E. de Leeuw, C. Dippo, N. Schwarz, and D. Trewin (eds), Survey Measurement and Process Quality, New York: Wiley, 475-500.

Morton-Williams, J. (1993). Interviewer Approaches. Aldershot, Dartmouth Publishing Company Limited.

Oksenberg, L. and Cannell, C. (1988). Effects of Interviewer Vocal Characteristics on Nonresponse. In Telephone Survey Methodology, R. M. Groves, P. Biemer, L. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds), New York, Wiley, 257-269.

Oldendick, R.W., Bishop, G. F., Sorenson, S.B., and Tuchfarber, A.J. (1988). A Comparison of the Kish and Last Birthday Methods of Respondent Selection in Telephone Surveys. Journal of Official Statistics, 4, 4, 307-318.

O’Muircheartaigh, C. (1997). Measurement Errors in Surveys: A Historical Perspective. In L. Lyberg, P. Biemer, M. Collins, E. De Leeuw, C. Dippo, N. Schwarz, and D. Trewin (eds), Survey Measurement and Process Quality, 1-25, New York: Wiley and Sons.

O’Muircheartaigh, C. and Campanelli, P. (1999). A Multilevel Exploration of the Role of Interviewers in Survey Nonresponse. Journal of the Royal Statistical Society, Series A, 162, 437-446.

Rice, S.A. (1929). Contagious Bias in the Interview. American Journal of Sociology, 35, 420-423.

Ryan, T.P. (2000). Statistical Methods for Quality Improvement. New York: Wiley.

Schaeffer, N.C. (1991). Conversation with a Purpose or Conversation? Interaction in the Standardized Interview. In P.P. Biemer, R.M. Groves, L.E. Lyberg, N.A. Mathiowetz and S. Sudman (eds), Measurement Errors in Surveys. New York: Wiley, 367-392.

Schober, M.F. and Conrad, F.G. (1997). Does Conversational Interviewing Reduce Measurement Error? Public Opinion Quarterly, 61, 576-602.

Singer, E., Van Hoewyk, J., and Gebler, N. (1999). The Effect of Incentives on Response Rates in Interviewer-Mediated Surveys. Journal of Official Statistics, 15, 217-230.

Suchman, L. and Jordan, B. (1990). Interactional Troubles in Face-to-Face Survey Interviews. Journal of the American Statistical Association, 85, 232-241.

Sykes, W. and Collins, M. (1992). Anatomy of the Survey Interview. Journal of Official Statistics, 8, 277-291.

Turner, C.F., Forsyth, B.H., O’Reilly, J.M., Cooley, P.C., Smith, T.K., Rogers, S.M., and Miller, H.G. (1998). Automated Self-Interviewing and the Survey Measurement of Sensitive Behaviors. In M.P. Couper, R.P. Baker, J. Bethlehem, C.Z.F. Clark, J. Martin, W.L. Nicholls, II, and J.M. O’Reilly (eds), Computer Assisted Survey Information Collection, 455-473, New York: Wiley.

Weeks, M., Jones, B.L., Folsom, R.E., and Benrud, C.H. (1980). Optimal Times to Contact Sample Households. Public Opinion Quarterly, 44, 101-114.


References

Bibliographical reference

Lilli Japec, “Quality Issues in Interview Surveys – Some Contributions”, Bulletin de méthodologie sociologique, 90 | 2006, 26-42.

Electronic reference

Lilli Japec, “Quality Issues in Interview Surveys – Some Contributions”, Bulletin de méthodologie sociologique [Online], 90 | 2006, online since 20 March 2008. URL: http://journals.openedition.org/bms/654


About the author

Lilli Japec

Statistiska centralbyrån-Statistics Sweden, lilli.japec@scb.se


Copyright

The text and other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
