Measuring Customer Satisfaction:

More on Corporate Surveys as Practice

 

Judith M. Tanur

State University of New York at Stony Brook

 

Brigitte Jordan

Xerox Palo Alto Research Center and Institute for Research on Learning

 

Paper Prepared for Presentation at the Annual Conference of the American

Association for Public Opinion Research, May 17, 1996, Salt Lake City, Utah

 

Published in the 1996 Proceedings of the Section on Survey Research Methods.

American Statistical Association, vol.2, pp. 917-922, Washington, D.C.

 

 

I. INTRODUCTION

 

            A team from the Institute for Research on Learning (IRL) and Xerox Palo Alto Research Center (Xerox PARC) carried out a holistic, system-wide study ("systemic assessment") of one of the business divisions of a Fortune 50 company (Aronson et al., 1995). Because the corporation's priorities include satisfaction of its customers and employees, the team took as part of its mission an investigation of the functioning of the major surveys the corporation uses to measure customer and employee satisfaction and hence included a survey methodologist among its members. In the course of that investigation, which involved extensive ethnographic fieldwork, we found several interesting issues relating specifically to customer satisfaction. This paper briefly describes the division we studied, discusses the surveys used there, and lays out an agenda of research for addressing the issues we have identified. We expect to find a field site and a corporate partner that will permit us to begin our study of these issues in the near future.

 

            The literature on customer satisfaction is voluminous; Peterson and Wilson (1992, p. 61) estimate that more than 15,000 academic and trade articles had appeared in the preceding two decades. We have not yet carried out anything approaching a thorough review of these publications, but we have found that most often customer and employee satisfaction are measured by surveys (McNeal and Lamb, 1979), and much of that literature deals with the validity and reliability of those surveys. For example, one major puzzle is that the distribution of "satisfaction scores" resulting from most surveys (whether of customer satisfaction or of satisfaction in more general life domains) is highly negatively skewed, with the modal response often being in the response category that denotes the highest degree of satisfaction (Peterson and Wilson, 1992). The questions we wish to raise here and to investigate in the field are somewhat less technical.

 

II. THE DIVISION AND ITS CUSTOMER SATISFACTION SURVEYS

 

            The division we studied is an outsourcing business that handles customers' copying, printing, and document networking needs. It conducts its business in two sorts of locations: "Centers" in major locations across the country that process a variety of small and large orders, and "Facilities Management" sites or FMs, installations the division runs at customer premises. FMs are staffed by division employees and attempt to take care of the entire range of customers' document services needs. This variety of kinds of work gave rise to a similar variety of customer satisfaction surveys; indeed, there are three separate instruments sent to the division's Center Services and FM customers.

 

1. The Center Services Survey

 

            First, there is the Center Services Survey, a job-by-job instrument. This brief form is to be filled out by someone who has recently received a completed job back from a Center. Sampling is done via job tickets and carried out bi-monthly. Responses are aggregated over a quarter and reported to each Center. The Center Services Survey asks for levels of satisfaction on

 

            ease of arranging a job

            sales representative assistance

            customer support representative assistance

            completion of job to specifications

            quality of the job

            meeting the deadline

            accuracy of job invoice

            understandability of invoice

            price for value received

 

as well as an overall satisfaction question "considering your most recent experience" and a question on satisfaction with problem resolution if there has been a problem in the last six months. Each of these questions is answerable on a 5-point scale that ranges from "very satisfied" through "satisfied", "neither satisfied nor dissatisfied", and "dissatisfied", to "very dissatisfied". In addition, there is an explicit "don't know/not applicable" option offered. Two other global questions, on likelihood of continuing to use the Center and likelihood of recommending the Center to others, are answerable on 5-point scales ranging from "definitely" through "probably", "undecided", and "probably not", to "definitely not". Finally, there is a question about the existence of a currently unresolved problem.

 

2. The Facilities Management Survey

 

            Next there is the Facilities Management Survey, done semi-annually and attempting a census of all FM sites. The FM Survey questionnaire is considerably more intricate than that for the Center Services Survey. It is divided into seven sections. The first four (FM Service Quality, FM Sales Support, Your Operators, and FM Billing and Administrative Support) each present a list of desirable attributes of service and ask the respondent to identify and rank the three most important to him/her. (The respondent may also write in other attributes.) Then the respondent is asked to rate the quality of each of the attributes (whether of greatest importance to him/her or not) on a 4-point scale ranging from "excellent" through "good" and "fair" to "poor", with an explicit "don't know/not applicable" option. The short FM Backup Services section asks respondents to rate the importance of this service, note if they have ever used it, and then rate its quality on the same 4-point scale. The Problem Resolution Support section is to be answered only by those who have experienced a significant problem with the FM in the preceding six months. They are asked to choose a description of the problem and then rate their overall satisfaction with problem handling as well as with the speed and outcome of the resolution of their problem. This rating is done on the usual 5-point scale of satisfaction, with an explicit "don't know/not applicable" option. The final section is on overall satisfaction. The explicit overall satisfaction question uses the same 5-point satisfaction scale as does the Center Services Survey, while the questions on likelihood of renewing the FM contract and recommending FM to other organizations are answerable on the same 5-point likelihood scale as is used in the Center Services Survey.

 

3. The Competitive Benchmarking Survey

 

            Finally, there is a Competitive Benchmarking Survey, comparing the division to its competitors. Done by an outside vendor, this survey uses a sample of the division's FM sites as a comparison with a census of sites reputed to be serviced by competitors. The list for the census of competitors' accounts is developed from information supplied by the division's sales department. The sample of FM sites is drawn from the same list as is the FM Survey. The Competitive Benchmarking Survey is carried out annually. The first questions in its questionnaire deal with identification of the FM company servicing the customer and the process that led to the customer's choice of this particular supplier. The next section asks, for each of a menu of services, whether it is provided, whether the respondent wishes it were provided, and how important it is to the respondent. The respondent is then asked to rate his/her satisfaction on the usual 5-point scale. S/he is instructed to use the "not applicable" option if the service is not provided; there is no provision for a "don't know" response. Similar importance and satisfaction questions are asked about FM performance, FM personnel, and billing. Overall satisfaction and likelihood to renew and recommend are asked in the same way as on the FM Survey.

 

4. Reporting Results

 

            For the reporting of the results for most questions on these surveys, a Satisfaction Index (SI) is calculated. Each respondent to such a question is given a numerical score calculated as Very Satisfied = 10, Satisfied = 7.5, Neither = 5, Dissatisfied = 2.5, Very Dissatisfied = 0. These scores are then averaged over respondents and the resulting average taken as the SI. The SI is then interpreted according to the original satisfaction scale (Very Satisfied = 10, etc.).
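
            To make the arithmetic concrete, here is a minimal sketch in Python. It is our own illustration, not the division's tabulation software, and it assumes (since the reports do not say) that "don't know/not applicable" responses are simply excluded from the average:

    # Scoring of the 5-point satisfaction scale, as described in the text.
    SCORES = {
        "very satisfied": 10.0,
        "satisfied": 7.5,
        "neither": 5.0,
        "dissatisfied": 2.5,
        "very dissatisfied": 0.0,
    }

    def satisfaction_index(responses):
        """Average the category scores over all scorable responses."""
        scored = [SCORES[r] for r in responses if r in SCORES]
        if not scored:
            return None  # no scorable answers at all
        return sum(scored) / len(scored)

    # Invented responses to a single question from eight customers.
    responses = ["very satisfied", "satisfied", "satisfied", "neither",
                 "satisfied", "very satisfied", "dissatisfied", "don't know"]
    print(satisfaction_index(responses))  # about 7.1 on the 0-10 scale

Note that reading the average back against the category anchors (an SI of 7.1 as falling just short of "Satisfied") treats the ordinal response scale as if it were an interval scale.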

 

            Reports for the Center Services Survey and the FM Survey highlight the overall satisfaction question, presenting that basic information three different ways: as the SI, as a distribution of satisfaction scores, and as the percent of customers satisfied. Results for this and other questions are compared with results from earlier time periods as well as with those from other Centers in the current period. Write-in comments are reported and classified by commenting customer, and problems are flagged. For the FM Survey, a "Vulnerable Report" is immediately issued for any customer reporting a currently unresolved problem. Reports of the Competitive Benchmarking Survey concentrate on comparisons between the division we studied and its competitors.
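
            Continuing the illustration above, the same tally can be turned into the three presentations just named. We assume here, since the reports do not define it, that "percent of customers satisfied" counts the "very satisfied" and "satisfied" categories together:

    from collections import Counter

    SCORES = {"very satisfied": 10.0, "satisfied": 7.5, "neither": 5.0,
              "dissatisfied": 2.5, "very dissatisfied": 0.0}

    def report(responses):
        """Return the SI, the distribution of responses, and the percent satisfied."""
        scorable = [r for r in responses if r in SCORES]
        dist = Counter(scorable)
        si = sum(SCORES[r] for r in scorable) / len(scorable)
        pct = 100.0 * (dist["very satisfied"] + dist["satisfied"]) / len(scorable)
        return si, dict(dist), pct

    responses = ["very satisfied", "satisfied", "satisfied", "neither",
                 "satisfied", "very satisfied", "dissatisfied", "don't know"]
    si, dist, pct = report(responses)
    print(f"SI = {si:.1f}; {pct:.0f}% satisfied; distribution: {dist}")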

 

III. THE ISSUES

 

            The sorts of issues we found in our study that we believe bear further investigation include the following:

 

1. Who is the customer, and hence who ought to answer the surveys?

 

            Why does (or should) the company care about customer satisfaction? Presumably for two reasons: 1. to improve the design of its products and the nature of its services, and 2. to ensure repeat orders and renewal of contracts. The former suggests looking at the user as the customer, while the latter suggests looking at the one who makes purchase decisions. For household purchases or in small or decentralized companies, user and decision-maker are often the same person or, at least, are in close enough contact that the decision-maker knows what the user thinks about the product or service. However, the larger the customer account, the more likely it is that user and decision-maker are separate individuals or groups of individuals.

 

            In the division we studied, orders often are placed by people who are acting on behalf of others -- a secretary for a boss, an administrative assistant for a department, a purchasing department for one or more other departments. These are the people who would make up the most easily assembled sampling frame for a survey of customer satisfaction (and indeed constitute the frame currently used), but are they the people whose satisfaction ought to be measured? They can perhaps attest to the courtesy (or lack thereof) of the corporation's sales staff and to the promptness and accuracy (or lack thereof) of delivery. But these are at most only some dimensions of what constitutes customer satisfaction. Should the frame instead be made up of those on whose behalf the orders are placed? Do we then mean the originators of the orders or the people who actually receive the output of the service? It is the latter who can really attest to the efficacy of the service, but the former who probably have the authority to make decisions about reorders and contract renewals. In either case, the listing that would constitute a sampling frame is much more difficult to construct than is the one made up of those who actually place the orders, as is the current practice. A good deal of cooperation from the customer organization would be required in the effort.

 

            An additional and related problem is that it is often difficult to tell whether the survey form (which is administered via mail) has been filled out by the person to whom it was addressed, and whether that person is in a position to make decisions about future purchases, to make recommendations to other potential customers, or to provide feedback about the redesign of the product.

 

2. What does the corporation mean by customer satisfaction and how does that relate to what customers themselves may understand by satisfaction?

 

            As we noted above, it is difficult to define who the customer really is, and hence doubly difficult to understand what a prototypical customer's definition of satisfaction might be. On the other hand, the definition a corporation holds for satisfaction of its customers ought to be operationalized in its surveys and hence deducible from an examination of the questions it uses. (We address below a possible mismatch between concept and operationalization, but for the moment we take the content of the survey as a valid measure of the division's concept of customer satisfaction.) We listed the dimensions addressed by those questions above, in our description of the surveys in the division we studied. Thus we can infer a meaning of customer satisfaction for the corporation but not for the customer; the definitions of satisfaction held by corporation and customer may well diverge, but we currently have no information about any such divergence.

 

            This and other issues that require customer input are particularly difficult to study, as corporations quite naturally are reluctant to bother their customers. In a previous study (Tanur and Jordan, 1995), we investigated the workings of both customer satisfaction surveys and employee satisfaction surveys. We were able to learn a great deal more about the latter than about the former, because we were able to run focus groups of employees and collect think-aloud protocols from employees at many levels of the division, in addition to the continual contact with employees entailed in the ethnographic fieldwork. These techniques, the usual workhorses of the survey researcher, are less easily applicable to customers. The personal relationships established during ethnographic participant observation, if carried out by the right people with the right sensitivities, might ameliorate the problem. In particular, "shadowing" of users and decision-makers might be productive, in order to understand how the products and services the division provides enter into their worklife. Rather than data extraction for somebody else's interest (which is what surveys often feel like to respondents), the serious interest in and attention to respondents' daily work evidenced by the ethnographer's efforts to track and understand not only tend to build a positive relationship with respondents but also might generate a collaborative approach to the question: "In your company, who do we have to talk to in order to understand what customer satisfaction means for you?"

 

3. Attitudes vs. behavior

 

            We noted above that a major interest for a corporation in measuring customer satisfaction must be to predict customers' future behavior -- in particular reorders, contract renewals, and recommendations to potential new customers. But informal reports from the division we studied noted that there is only a slight correlation between customer satisfaction as measured by the surveys and these behavioral responses. If indeed the correlation is as low as we have been led to believe, two issues arise, one methodological and one substantive.

 

            The first issue is whether the correlation is only artifactually low. One line of reasoning suggests that its weakness could be, at least in part, a result of the typically low response rates. (Response rates for the Center Services Survey were about 30%; for the FM Survey response rates are hard to come by, but the little data we have suggest they are about 30%; in the Competitive Benchmarking Survey the response rates are about 30% for the division's own installations and 27% for competitors' installations.) Those customers who are extremely dissatisfied might well be differentially unlikely to respond, so that respondents represent a self-selected group of customers who are experiencing middling to high satisfaction (see Lebow, 1982). Such a restriction of range could indeed attenuate a correlation even if the instrument were measuring "real" customer satisfaction. Peterson and Wilson (1992) concur with this judgment that measurement artifacts, in particular skewness and range restrictions in the distribution of satisfaction scores, are likely to attenuate correlations between measured satisfaction and other variables. However, to test the premise that more satisfied customers are more likely to respond, these authors examined 34 published studies on satisfaction with mental health treatment and 15 on customer satisfaction in the marketing literature. They found correlations between response rate and average level of satisfaction to be less than .10 in both cases. Peterson and Wilson point out that their study has many limitations (including the small number of studies available that present data on both response rate and satisfaction percentage and the lumping together of surveys carried out by phone, by mail, and in person), but it does suggest that low response rates are not sufficient to account for the informally observed low correlation between satisfaction expressed in the survey and subsequent behavior. (Low response rates themselves constitute an issue and will be discussed below.)
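
            A toy simulation makes the mechanism vivid. All numbers below are invented for illustration; this is not the division's data. When the probability of returning the questionnaire grows with satisfaction, the range of observed scores shrinks, and the correlation with a behavior that genuinely depends on satisfaction is attenuated among respondents:

    import random

    random.seed(1)

    def corr(xs, ys):
        """Pearson correlation, written out to keep the sketch dependency-free."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Invented population: satisfaction on the 0-10 scale, and a later
    # behavior (say, next-year order volume) that truly depends on it.
    sat = [random.uniform(0, 10) for _ in range(20_000)]
    behavior = [s + random.gauss(0, 3) for s in sat]
    print("population r:", round(corr(sat, behavior), 2))

    # Let the chance of returning the questionnaire rise with satisfaction,
    # so dissatisfied customers are differentially missing.
    pairs = [(s, b) for s, b in zip(sat, behavior)
             if random.random() < (s / 10) ** 2]
    resp_sat, resp_beh = zip(*pairs)
    print("respondents-only r:", round(corr(resp_sat, resp_beh), 2))

The respondents-only correlation comes out noticeably lower than the population correlation even though the instrument here measures "real" satisfaction perfectly; that is exactly the artifactual attenuation at issue.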

 

            The second issue is why the corporation is not measuring behavior and carrying out formal correlational analyses. Could not records of reorders or renewals be related to earlier survey results? And could not the surveys ask not only about the likelihood of recommending the service but also about whether the respondent has actually made such a recommendation? More generally, the all too usual disjunction between expressed attitudes and behavior is one that survey researchers have struggled with for decades; a classic treatment is by Deutscher (1973). Hence part of a research agenda on customer satisfaction surveys ought to be a careful dissection exploring what questions, if any, predict behavior, and a search for indicators that are efficacious for such prediction.
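
            The analysis itself would be straightforward. The following hypothetical sketch (account identifiers and figures are invented) joins each account's Satisfaction Index from an earlier survey wave to its later renewal record and correlates the two:

    from statistics import correlation  # available in Python 3.10+

    # Invented records: SI by account from an earlier survey wave, and
    # whether the account later renewed its contract (1) or not (0).
    survey_si = {"acct01": 8.8, "acct02": 5.0, "acct03": 9.4,
                 "acct04": 2.5, "acct05": 7.5, "acct06": 6.3}
    renewed = {"acct01": 1, "acct02": 0, "acct03": 1,
               "acct04": 0, "acct05": 1, "acct06": 1}

    # Keep only accounts that appear in both record systems.
    accounts = sorted(set(survey_si) & set(renewed))
    si_scores = [survey_si[a] for a in accounts]
    outcomes = [float(renewed[a]) for a in accounts]

    # Point-biserial correlation between earlier satisfaction and later renewal.
    print("r =", round(correlation(si_scores, outcomes), 2))

In practice the hard parts would be the record linkage and the choice of time lag between survey wave and renewal decision, not the arithmetic.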

 

4. What does the corporation's concern with customer satisfaction look like to front-line workers such as salespeople, service people, account managers, and help-call receivers?

 

            There are two separate questions here. The first is about the impact of the corporation's emphasis itself on employees' perceptions and behaviors vis-à-vis customers. The accepted notion is that a corporation's professed interest in customer satisfaction, in its mission statement and in its attempts to measure that satisfaction, will "trickle down" to employees, making them "better" with customers in some sense. Relatedly, it is often argued that one of the important functions of surveying customer satisfaction is its effect on customers themselves: the survey sends a positive signal that their satisfaction is important to the querying company. Whether either of these accepted notions is true ought to be investigated.

 

            The other question is how the particulars of measurement of customer satisfaction affect the behavior of employees (and perhaps of customers). We know in general that people's behavior tends to be shaped by what is being measured (teachers are likely to teach to the test; employees work towards targets set to evaluate their performance even if that behavior does not maximize the company's interests, etc.). How does that tendency manifest itself in a corporation's employees who know that customer satisfaction is being measured and know the questions that are used in measurement? Do they make special efforts to serve customers well or do they perhaps attempt to ingratiate themselves with customers in less routine ways or even in ways that do not serve the corporation's interests?

 

            Although we have no idea how widespread the practice is, in our ethnographic field research we occasionally heard tales of customers using threats and promises about their responses on the satisfaction surveys to influence the behavior of division employees. What are employees' responses to such threats and promises? At the same time, we have also heard division employees protest that certain customers will never express themselves as satisfied. Are such customers treated differently by employees, creating a sort of self-fulfilling prophecy? More generally, we might well ask, how does feedback about customer problems or dissatisfaction affect employees? Does it cause them to become disgruntled or does it inspire them to remedy the situation? Or do they first do one and then the other?

 

5. What, if any, is the relationship between employee satisfaction and customer satisfaction?

 

            Implicit in a corporation's attempts to measure and improve employee satisfaction is a model of attitudes and behavior that sees a satisfied worker as more productive than a dissatisfied one and that in turn relates that productivity to the bottom line of the corporation. And as we have noted, the implicit theory behind attempts to measure customer satisfaction is the understanding that there is a relationship between the attitudinal dimension of satisfaction and the behaviors of reordering and contract renewal. Thus we wonder whether the third leg of the presumed causal triangle, a relationship between employee satisfaction and customer satisfaction, holds either on the individual or on the aggregate level.

 

6. Mail vs. other types of surveys of customer satisfaction and issues of response rates

 

            We referred above to the low response rates in the customer satisfaction surveys. In the division we studied, customer satisfaction surveys were routinely done by mail, and like other mail surveys they often got very low response rates (sometimes less than 30%; see above). Yet we found that upper management uses the results of the surveys as if those responding were representative of the target population. This conviction that the results of the surveys can be taken at face value underlies the notion that it makes sense to set targets for increasing customer satisfaction. For example, the Chairman of the division's parent company, in his Directions Management Communique, listed as the first of five objectives for 1996 "a 20 percent improvement in the number of geographies and business areas in which we are Number 1 in customer satisfaction." This seems to us a risky situation, one that calls not only for a better understanding of who the customer is and what satisfaction means to him or her, but also for an investigation of ways to increase response rates. Although telephone surveys might yield higher response rates and hence more valid findings, there are corporate concerns that bothering customers via telephone might irritate and alienate them. We are aware that other service corporations, notably telephone companies, routinely survey their customers by phone. It might be worth the division's while to experiment with telephone surveys in a systematic manner to see whether response rates can be improved without undue customer irritation.

 

IV. CONCLUSION

 

            We have laid out a series of issues that we feel are ripe for careful study. Our next steps include a much more comprehensive review of the literature to see which have already been addressed. We are eager to undertake the study of these issues -- all we need are a field site and resources. And of course, others need no invitation from us to explore in these directions; we would be eager to learn about what they find.

 

V. REFERENCES

 

Aronson, Meredith, Libby Bishop, Melissa Cefkin, Brigitte Jordan, Nancy Lawrence, Lindy Sullivan, Connie Preston, and Julia Oesterle. 1995. Reflections on a Journey of Transformation: Learning, Growth, and Change at Xerox Business Services. Systemic Assessment Project, Final Report. Palo Alto, CA: Institute for Research on Learning (June).

 

Deutscher, Irwin. 1973. What We Say/What We Do. Glenview, IL: Scott, Foresman.

 

Lebow, Jay L. 1982. Consumer Satisfaction with Mental Health Treatment. Psychological Bulletin 91:244-259.

 

McNeal, James U., and Charles W. Lamb. 1979. Consumer Satisfaction as a Measure of Marketing Effectiveness. Akron Business and Economic Review 10:41-45.

 

Peterson, Robert A., and William R. Wilson. 1992. Measuring Customer Satisfaction: Fact and Artifact. Journal of the Academy of Marketing Science 20:61-71.

 

Tanur, Judith, and Brigitte Jordan. 1995. Measuring Employee Satisfaction: Corporate Surveys as Practice. Pp. 426-431 in Proceedings of the Survey Research Methods Section. Alexandria, VA: American Statistical Association. Also available as IRL Tech Report #11.0028 (November). Palo Alto, CA: Institute for Research on Learning.