Welcome to the Sampson Wiki!

From Dr. Scott Sampson's Understanding Services Businesses Book (click for table of contents)
—[in Unit 14: Measuring Service Quality and Productivity]—

SBP 14a: Measuring Customers

With services, we often measure quality by measuring customers. Unfortunately, customer measurement is often far from precise.

Why it occurs

This principle occurs because of variability in (a) customer-provided specifications, (b) customer-supplied inputs, and (c) heterogeneous output.


How do we measure customers? We measure their perceptions of the service relative to company-defined or customer-provided specifications. We measure the impact of the service process on their persons, their belongings, and/or their information. We also measure affective outcomes: the impact of the service on customers' attitudes. The methods for taking such measurements are customer surveys or other types of customer feedback.

Why is customer measurement far from precise? The following are some Service Business Principles that explain the imprecision of customer measurement:

  • Subjective Rulers - With services, customer measures of quality are generally subjective. Making a subjective measure numerical does not make it objective.

For example, asking customers “On a 5-point scale, how satisfied were you with the timeliness of service?” may get a numerical response, but even the meaning of each scale point is a subjective judgment.

  • Intrusive Measurement - With services, the act of customer-measurement of quality can influence perceptions.

The idea that measurement influences outcomes was observed many years ago as the “Hawthorne Effect.” For example, be careful about asking customers for complaints or problems with the service, since research has shown that such questions inspire negative thinking and can promote dissatisfaction that otherwise would not be recognized. It is justified to ask for complaints if the service provider is willing and able to appropriately act on them.

  • Resistance to Measurement - With services, most customers do not consider quality measurement to be value adding and therefore resist providing measurements. This resistance increases as the customers' cost of providing measurement increases.

For example, many customers consider it a hassle to fill out customer satisfaction surveys. Giving thoughtful feedback to the company requires mental effort, which many customers are not willing to expend. As a result, response rates for customer satisfaction surveys may be no higher than 5 to 15 percent.1)

  • The Halo Effect - With services, customers automatically combine individual components of quality into an overall quality perception. Attempted measurements of individual components may actually have more to do with the overall perception than the individual components.

Often, service providers desire to know which components of the service delivery process are in need of improvement and which are okay. Given the halo effect, the problem is that customers form overall opinions and bias their report of each component based on that overall opinion.
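The contaminating influence of the halo can be illustrated with a small simulation (a hypothetical sketch, not drawn from the book; the `halo_weight` parameter and all values are assumptions). Each customer reports a component rating blended with an overall impression, so even components the customer truly judges independently come out highly correlated in the reported data:

```python
import random

random.seed(0)

def reported_components(n_customers=1000, halo_weight=0.7):
    """Simulate component ratings contaminated by an overall 'halo'.

    Each customer holds independent true opinions of two service
    components, but reports each one blended with an overall
    impression. halo_weight: 0 = no halo, 1 = pure halo.
    """
    reports = []
    for _ in range(n_customers):
        comp_a = random.gauss(0, 1)   # true opinion of component A
        comp_b = random.gauss(0, 1)   # true opinion of component B
        overall = (comp_a + comp_b) / 2
        reports.append((
            halo_weight * overall + (1 - halo_weight) * comp_a,
            halo_weight * overall + (1 - halo_weight) * comp_b,
        ))
    return reports

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs) / n
    vx = sum((a - mx) ** 2 for a, _ in pairs) / n
    vy = sum((b - my) ** 2 for _, b in pairs) / n
    return cov / (vx * vy) ** 0.5

print(f"no halo:     r = {correlation(reported_components(halo_weight=0.0)):+.2f}")
print(f"strong halo: r = {correlation(reported_components(halo_weight=0.7)):+.2f}")
```

Under these assumptions the reported components correlate strongly even though the true opinions are independent, which is exactly why per-component survey items can mostly echo the overall impression.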

  • Self-Selected Sampling - With services, customer measurement makes it possible to influence sampling, but very difficult to control sampling. When we control sampling, we know how survey-responding (sampled) customers compare with customers in general. Strategies to increase response rates, such as awards or drawings for prizes, influence some types of customers more than others. Therefore, it is important to consider sample bias.

Sampling bias describes how the customers for whom we have measures compare with customers in general. How do we know if the customers who give opinions represent customers in general? The answer is “very often, we don't.” As a result, it is reasonable to assume that customer survey responses are biased, differing in some way from the attitudes of customers in general. For example, we may believe that customers in a hurry are less likely to give a quality evaluation than customers with time on their hands. We therefore wind up surveying a disproportionate number of those with time on their hands, under-representing the attitudes of customers in a hurry. The responses to questions like “Was our service fast enough?” would not capture what hurried customers think.
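The hurried-customer scenario can be sketched numerically (a hypothetical simulation; the satisfaction levels and response probabilities are invented for illustration). Hurried customers are both less satisfied with speed and less likely to respond, so the survey mean overstates true satisfaction:

```python
import random

random.seed(42)

# Hypothetical population: hurried customers are less satisfied with
# the speed of service and also much less likely to answer a survey.
customers = (
    [{"satisfaction": 2, "respond_prob": 0.05}] * 500   # in a hurry
    + [{"satisfaction": 4, "respond_prob": 0.30}] * 500  # time on their hands
)

# Each customer independently decides whether to respond.
responses = [c["satisfaction"] for c in customers
             if random.random() < c["respond_prob"]]

true_mean = sum(c["satisfaction"] for c in customers) / len(customers)
survey_mean = sum(responses) / len(responses)

print(f"true mean satisfaction: {true_mean:.2f}")   # 3.00
print(f"survey-reported mean:   {survey_mean:.2f}")  # higher than the true mean
```

Under these assumed numbers the self-selected survey mean lands well above the population mean, even though every respondent answered honestly.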

  • Interpreting the Interpretations - With services, customer measurement requires the customer to interpret both their perceptions and the measurement scale. Two customers with identical perceptions might interpret the measurement scale differently, resulting in different measurements.

It is a common fallacy to think that two customers who mark “good” on an “excellent-good-fair-poor” scale have the exact same opinion. Some customers may consider “good” sufficiently adequate, whereas others may consider “good” to be substandard. As with Subjective Rulers, it is presumptuous to think that a customer's mark on a defined scale is a precise, comparable measurement.
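The two-customers problem can be made concrete with a small sketch (the latent perception score and the personal cutoff values are hypothetical assumptions). Each customer maps the same underlying experience through personal thresholds, producing different marks:

```python
def rate(perception, thresholds):
    """Map a latent perception (0-100) onto an excellent/good/fair/poor
    scale using a customer's personal cutoffs (hypothetical values).

    thresholds gives the upper bound for poor, fair, and good;
    anything at or above the last cutoff is rated excellent.
    """
    labels = ["poor", "fair", "good", "excellent"]
    for label, cutoff in zip(labels, thresholds):
        if perception < cutoff:
            return label
    return labels[-1]

# Same underlying experience...
perception = 80

# ...but different personal interpretations of the scale.
lenient = (30, 50, 70)    # "excellent" starts at 70
demanding = (50, 70, 90)  # reserves "excellent" for near-perfection

print(rate(perception, lenient))    # excellent
print(rate(perception, demanding))  # good
```

Identical perceptions, different marks: the measurement reflects the customer's interpretation of the scale as much as the service itself.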

Another problem with customer measurement is that even if we are able to gather data that is interpretable, that data will not likely tell us how to fix the problem. (Compare this with manufacturing: if a part is measured as being too big, the solution is to make it smaller, etc.) A service survey that reveals customers are dissatisfied may reveal little about how to increase satisfaction. Often the data we collect is used to identify whether a quality problem appears to be occurring, and perhaps to generate some suggestions for how it might be addressed.

How it affects decisions

Service providers must decide how they will measure quality, and how they will analyze and use the measurements.

What to do about it

Some measurements of service quality might be objective, which makes things much easier. For example, a measure of quality at an accounting firm is the ability to balance the books: either they balance or they do not. Or, a measure of quality at an investment bank is the ability to generate a high return on investment, which has an easily calculated value.

However, for most services, quality measurement is more complex than that. It is usually good to gather multiple measures. Some measures are internal, describing the service performance based on company-defined standards. Other measures are external, involving gathering perceptions of quality from customers.

It is pointless to measure service quality and then do nothing with the data. It can also be ineffective to use the data in the wrong part of the organization. For example, some multi-location companies concentrate their customer feedback gathering at the corporate office, with the hope that useful information will “trickle down” to the various locations. Typically, it is a good idea for service quality measurement data to be employed at the location in the company where quality improvement can occur, which is often the front lines. It is therefore good if the quality measurements can be fed to the lowest decision points in the organization.

Appendix D contains the paper “An Empirically Derived Framework for Customer Feedback System Design.” That paper discusses in detail the design of customer measurement systems, including data gathering, analysis, and use.

For example

Were the Department of Motor Vehicles (DMV) concerned about service quality, it could collect multiple measures. Examples of internal measures would be the number of forms submitted with errors, the number of times problems are solved on the first visit, and the number of customers waiting in line at any given time. Examples of external customer measures are clarity about DMV procedures, perceived courtesy and helpfulness of DMV employees, and overall satisfaction with the process. These latter measures could be ascertained through a survey form handed to customers as they depart, a comment card box by the DMV office door, or telephone surveys of recent customers.
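The internal measures above could be computed from routine visit records. A minimal sketch, assuming a hypothetical log format (the field names and values are invented for illustration):

```python
# Hypothetical DMV visit records; fields and values are illustrative only.
visits = [
    {"forms_with_errors": 1, "resolved_first_visit": True},
    {"forms_with_errors": 0, "resolved_first_visit": True},
    {"forms_with_errors": 2, "resolved_first_visit": False},
    {"forms_with_errors": 0, "resolved_first_visit": True},
]

# Internal measures: average form errors per visit and the share of
# problems solved on the first visit.
error_rate = sum(v["forms_with_errors"] for v in visits) / len(visits)
first_visit_rate = sum(v["resolved_first_visit"] for v in visits) / len(visits)

print(f"avg form errors per visit: {error_rate:.2f}")       # 0.75
print(f"first-visit resolution:    {first_visit_rate:.0%}")  # 75%
```

Tracking such ratios over time, rather than as one-off numbers, is what would let the DMV see whether quality is improving.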

My airline example

Airlines might be particularly concerned about the quality of the interactions between employees and customers. They may want to track the interactions over time, to identify when more training or attention to these interactions is needed. The airline might periodically survey passengers to gather quality data. Some airlines include a comment card in the seat pocket in front of the passengers, allowing self-selected customers to offer their opinions.

How manufacturing differs

With manufacturing, measuring quality often involves measuring a standard product, and such measurements are mostly objective.

Analysis questions

  1. Given an appropriate definition of quality, how might it be measured?
  2. What objective measurements are available? Are they valid and relevant to customer-defined quality?
  3. What subjective measurements are available? Are they comparable, one measurement to another?

Application exercise

Redraw the flowchart of your service process. What internal quality measures could be taken and at what locations in the process? What external quality measures would be useful to know? Design a customer comment card that gathers customer perceptions of quality. Design a simple procedure for collecting, analyzing, and using the data as part of a quality improvement effort.

1) Sampson, S. E. (1996). “Ramifications of Monitoring Service Quality Through Passively Solicited Customer Feedback.” Decision Sciences, vol. 27, no. 4. -and- Sampson, S. E., and Weiss, E. N. (1993). “Merchant's Tire and Auto.” University of Virginia, Charlottesville, Virginia.
