Customer Satisfaction Surveys

Expert Opinion

Customers are most organisations' main source of revenue, but amid the variety of activities that occur in organisations it is easy for their 'voice' to get lost or for assumptions to be made. Craig Bailey (2002), founder of Customer Centricity (a customer relationship consulting company), notes: "Make no mistake, business is about numbers. In my opinion, there are two sets of numbers that every company must track and manage: the financials and customer satisfaction levels. If executives of a corporation only care about the financial indicators, the company will lose sight of their source of revenue - the customer". It is therefore wise to track and manage customer satisfaction metrics (perceived performance against expectations) alongside other measures, and companies are encouraged to include customer satisfaction results among the key performance indicators reviewed by executive leadership on a regularly planned basis.

An understanding of customer satisfaction can be gained through the use and analysis of customer satisfaction surveys (CSSs). Surveys are a valuable tool and can provide a wealth of information on how an organisation's customers perceive what it does. Measuring customer satisfaction levels via surveys enables organisations to obtain feedback on how they are performing, and on what they could change or improve as seen through their customers' eyes.

CSSs can be used as market research tools and are frequently part of product/service improvement programmes, which can then be used to improve the quality of processes within an organisation that ultimately affect the products, services, and relationships offered. Customer surveys may include the use of questionnaires, personal interviews, telephone surveys, online form completion, and seminars to monitor customer satisfaction. Systematising the use of these tools provides a baseline for comparing results over extended time periods and enables fact-based decision-making.

Typically, during a survey, a group of customers (mostly past or current, but sometimes prospective) are asked specific questions in order to gauge their judgements about the organisation, its products and/or services, its processes, and its staff.

Conducted effectively, a CSS can provide a wealth of information that can assist organisations in meeting, and measuring progress against, their mission, vision, goals, and strategic plans. Surveys also enable organisations to communicate with, and respond to, their customers in ways that can increase customer satisfaction, loyalty, and future revenue. However, Bruce Katcher (2003), a specialist in conducting employee opinion surveys and CSSs, notes that "Improperly conducted, such surveys do little more than provide interesting but not very useful information". Jeffrey Cole, Assistant Vice President Global Quality of NCR Corporation, and Mark Walker, Vice President of Walker Information (2003), take these thoughts further: "The shelves of corporate America are littered with binders full of survey results. They're fine looking documents (although they're probably longer than necessary), full of wonderful charts, graphs, and tables. Many, however, are collecting dust! Sure, they generated some excitement for 24-48 hours. But, six months later, any improvement projects have either augered into the ground, crashing and burning, or have simply vanished with a whimper - fallen victim to shifting priorities, the latest re-org, or basic lack of interest". In such instances Bailey suggests that there are two options:

  • "Stop the survey process (save your money and your customers' time)
  • Leverage the customer satisfaction survey results as a catalyst for continuous improvement".

CSS information must be meaningful, usable, and lead to improvements if it is to be leveraged fully and classed as best practice. The information that a survey can provide includes customer perceptions on:

  • What features they like best about current products and/or services and what features are unnecessary;
  • Which products and/or services hold the most interest to specific customers, or groups of customers;
  • What new products/services they want/need or expect;
  • Their satisfaction with day-to-day services provided by the organisation;
  • The quality and strength of their relationship with your organisation;
  • Billing, warranty, complaint management, account management and service issues;
  • Knowledge, general performance and helpfulness of the organisation;
  • The organisation and staff's responsiveness;
  • Their future purchasing plans;
  • Their loyalty and commitment to your organisation, product and/or service.

Quantitative values, generally percentage figures or overall ratings, result when respondents assign rankings (e.g. least to most important) or numerical scores (e.g. 1 through 5) to specific attitudinal statements or product/service attributes. These values can indicate trends in satisfaction and allow correlations to be made easily, helping to identify, in numerical form, the drivers of customer satisfaction for the organisation. Over the last decade it has generally been believed that the higher the index number for a particular attitude or attribute, the more loyal and satisfied the customer is, the less likely they are to move to the competition, and the greater the chances of increasing the company's revenue.

CSSs can feed into formulae that produce a numerical 'index value' indicating an organisation's overall level of customer satisfaction. The information used in the formula may or may not include other key measures that indicate a satisfaction value. Such an index can be benchmarked against similarly designed indexes from other organisations, allowing vital comparisons to be made that lead to an understanding of performance in the market; this intelligence can then be used strategically.
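By way of illustration only - no specific organisation's formula is implied, and all attribute names, weights, and scores below are hypothetical - such an index might be computed as a weighted average of mean 1-5 attribute ratings, rescaled to a 0-100 score:

```python
# Hypothetical satisfaction index: a weighted average of mean 1-5
# attribute ratings, rescaled to a 0-100 score so results can be
# compared across survey periods. All names and figures are illustrative.

def satisfaction_index(ratings, weights):
    """ratings: attribute -> mean 1-5 score; weights: attribute -> importance."""
    total_weight = sum(weights.values())
    # Importance-weighted mean on the original 1-5 scale
    weighted_mean = sum(ratings[a] * weights[a] for a in ratings) / total_weight
    # Map the 1-5 range onto 0-100
    return round((weighted_mean - 1) / 4 * 100, 1)

ratings = {"quality": 4.2, "value": 3.8, "responsiveness": 4.5}
weights = {"quality": 0.5, "value": 0.3, "responsiveness": 0.2}
print(satisfaction_index(ratings, weights))  # → 78.5
```

A benchmark comparison against another organisation's index is only meaningful when both indexes are built from the same rating scale and a similar weighting scheme, which is why the text stresses "similarly designed" indexes.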

The interest in benchmarking levels of customer satisfaction between organisations is illustrated by the existence of two independently operated indexes, the American Customer Satisfaction Index and the European Customer Satisfaction Index. These indexes administer independent surveys to the public regarding companies across the two regions.

Considerable value can be gained from knowing such information and being able to determine the key drivers of satisfaction. However, Lawrence Crosby and Sheree Johnson (2002) of Symmetrics Marketing Corporation in the USA note, "If a customer loyalty index is to serve any purpose in guiding management decisions, it must be a reliable indicator of customer attitudes and intentions" and "Effective customer measures must be adapted to the peculiarities of the firm and the firm's strategies". They stress that standardised, do-it-yourself (DIY) surveys and survey templates found in sources such as the internet may not be effective, as "A concept like customer loyalty can be quite different depending on the industry and a given firm's strategy, value proposition, and target segments".

The index number is but one indication of customer satisfaction. Crosby and Johnson also comment, "Unfortunately, some companies have spent millions of dollars on customer surveys, assuming improvements in a customer satisfaction metric would translate into better customer retention, increased market share, and greater profitability" and "The factors driving satisfaction aren't necessarily the same ones driving customer loyalty and business results". Companies are now also tracking value, loyalty, relationship strength, and what they want the customer to do, e.g. consult the organisation on their needs and wants, treat the organisation as their first port of call, or purchase a certain percentage of their products/services from the organisation.

Every survey, and every measurement the survey aims to collect, should have a clear and rational goal, and should pay attention to areas where satisfaction is high as well as where there are problems. For example, the survey should assess customers' perceptions of the company's performance in terms of the various drivers of satisfaction and loyalty. Where possible, the drivers of customer satisfaction - those which customers consider most important - should be converted into operational measures within the organisation. Typical drivers include product or service cost, product or service quality, after-sales service, value for money, your price versus your competitors' prices, responsiveness, and on-time delivery performance. Measures such as recommendations to others and repeat purchasing can also indicate satisfaction. After establishing such a baseline (or benchmark), a company must set measurable and achievable goals in terms of where it wants to be.

Katcher has identified seven guidelines to help establish a useful survey programme - one that will provide meaningful input into an organisation's strategic planning process:

  • Establish clear, quantifiable objectives related to your strategic plans i.e. the survey should be developed with specific objectives in mind, it should test these objectives and provide data to assist with future planning
  • Any survey and improvement programme will only be successful if senior management are involved. This includes setting the survey's objectives, monitoring the data gathering process, interpreting the results, and actively implementing the solution
  • The survey must provide an opportunity for customers to voice their opinions, both positive and negative. Customer input in the survey design and development process will ensure that data is collected on issues of importance - conducting pre-survey interviews with customers and asking front-line employees for their input are two ways of achieving this.
  • Actively encourage customers to respond and make it easy for them to do so:
    • Use multiple methods e.g. mail, internet, telephone surveys
    • Provide an incentive to reply e.g. a gift or gift certificate, a donation to a charity
    • Over-communicate e.g. send personalised letters regarding the importance of the survey, follow up with a phone call of thanks or a reminder if a response has not been received by the due date
    • Personalise the survey to different customer groups so that you avoid asking irrelevant questions
    • Make it easy for customers to respond e.g. use postage-paid pre-addressed envelopes, toll free numbers etc
  • Develop an action plan for the survey's implementation. The plan should outline not only who is involved in the survey's design and development, but who will review the results and be responsible and accountable for acting on them - this will include senior managers assessing the results against the organisation's vision, mission, goals and strategic plans, and altering plans, strategies and processes as necessary
  • Communicate the results widely. The results of the survey, as well as the actions that will be taken, should be communicated to:
    • All customers: including acknowledgement of their participation in the survey and what measures will be taken to address any areas of concern
    • Specific customers: customers with specific concerns should be contacted personally and informed how their exact issues will be addressed
    • Employees: including the outcomes of the survey and what their role is in implementing any opportunities for improvement that have been identified
  • Establish an ongoing survey process and aim to make surveying customers part of how your organisation does business - not just a random event from time to time. By surveying on a regular basis, trends can be tracked, and assessments can be made of the effects of changes made in response to prior surveys and of the viability of the organisation's strategic plans.
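As a hypothetical sketch of the trend tracking that a regular survey cycle makes possible (the driver names and wave scores below are invented for illustration):

```python
# Hypothetical wave-over-wave comparison: flag satisfaction drivers
# whose mean score fell since the previous survey, so they can be
# prioritised for follow-up. All data are illustrative.

waves = {
    "previous": {"quality": 4.0, "value": 3.9, "responsiveness": 4.1},
    "current": {"quality": 4.2, "value": 3.6, "responsiveness": 4.3},
}

def declines(previous, current):
    """Return, sorted, the drivers whose mean score fell between two waves."""
    return sorted(d for d in current if current[d] < previous[d])

print(declines(waves["previous"], waves["current"]))  # → ['value']
```

Comparing waves like this only works if the questions, scale, and sampling approach are held constant between surveys - one reason the text recommends a systematised, ongoing process rather than ad hoc surveying.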

As a result of their 12 years' experience in conducting customer surveys at NCR (an IT-based business solutions company in the USA), Cole and Walker have identified eight practices that have helped them get the most out of their survey process:

  • Build it into the objectives: at NCR, delighting the customer is paramount and the voice of the customer is represented in management objectives. To reinforce this, 'Delighting the Customer' is one of the CEO's 10 objectives, and this cascades to the Heads of each Business Unit and their Leadership Teams. The CEO is held accountable for results against this objective, as are Business Unit Heads, Leadership Teams, and individual sales force members.
  • Leadership Team and Business Unit reviews: reviews of progress in Delighting the Customer occur monthly and quarterly.
  • Leverage technology: NCR uses technology to track progress towards their objective, conduct their web-based surveys, report the results of their surveys, and to manage the survey processes.
  • Refresh the process: the survey instrument, report, and process are 'refreshed' at three-yearly intervals to ensure they remain benchmarked against best practice, are customer friendly, etc.
  • Treat every customer like they were your only customer: to assist in meeting their objective of Delighting the Customer, to build relationships, and to take greatest advantage of the survey process, a follow-up call or visit is made to every survey respondent. The purpose is to thank the customer for their feedback, review what the company is doing overall about the results, discuss their specific concerns or issues, and see what actions can be put in place to make the relationship even better. These follow-up calls are tracked using technology and reported on at Leadership Team and Business Unit reviews.
  • Build a network of internal consultants: at NCR they have teams whose role is not only to manage the logistics of the survey and the survey processes, but who are involved in making the results come to life e.g. by giving presentations on the survey, assisting in determining areas for improvement, and helping work teams make the improvements.
  • Build the links: to help cement the survey and its results into actions taken by the organisation, NCR suggest building the CSS into key business strategies/processes, e.g. the CSS becomes an integral measure in the company's Balanced Scorecard.
  • Custom views for different audiences: NCR suggest assessing the CSS reports that are issued, and the audiences they go to, in order to determine whether improvements can be made, e.g. different audiences need different levels of information, and summary reports, presentations, websites, etc. may be more effective and useful for some recipients than full hard-copy CSS reports.

There are writers who consider that there are better ways of knowing what your customers want, and how they perceive you, than through a survey process. Pam Mitchell (2003), a strategic planning consultant, believes "Surveys may be effective for understanding how customers rate us in specific areas, but they do little to tell us what product features or additional services they may value or how they make buying decisions". Along with Mark Frigo (2003), editor of Strategic Finance Magazine, and Mark Graham Brown (2000), author of two best-selling books on the Baldrige Award Criteria, she suggests alternative strategies to CSSs, such as 'customer symposiums', phone interviews, focus groups, 'think tanks', panels, face-to-face customer interviews, or a 'day in the life of a customer' (where time is spent with key clients or key representative consumers to see just how products and/or services are being used). These strategies may be a better means of determining what customers want, and how the organisation can truly solve their needs and problems. Mitchell notes "The cost of a symposium with a small group is generally about the same as a marketing survey to the entire customer base. However, surveys are flat. You never get to ask 'why', and people hate to complete surveys. People love to attend symposiums. They generally leave energised by what they have contributed and by what they have learned …With a symposium, your sample size is much smaller, but the information is much deeper". To make this strategy successful she suggests:

  • Creating an initial agenda and then soliciting additional agenda items from the participants;
  • Having a group size of 8-20;
  • Inviting prospective and current customers to obtain a broader view;
  • Inviting decision makers and workers so that a range of needs, challenges and problems can be identified;
  • Making it fun and using social activities - the information sharing and relationship building that occurs during informal activities is also valuable;
  • Using an external professional to facilitate, as this can help ensure increased productivity and outputs from the day and also ensures your organisation's 'neutrality';
  • Documenting the salient points and distributing these notes to attendees.

It is also suggested by David Swaddling and Charles Miller (2002) of Insight - a consulting firm specialising in measuring and managing customer perceived value - that customer satisfaction measures really only reflect data collected about retrospective events from 'actual' customers. Prospective (future-oriented) information is often not sought, and prospective customers are generally never included. As such, they state: "Customer satisfaction questions usually focus on a past experience, because that's the orientation of customer satisfaction. A customer can only describe how satisfied he or she is with a prior experience…That's a valuable starting place for understanding how customers think but it's not nearly enough". They suggest that organisations should be collecting data on 'customer perceived value' (CPV) and on the criteria and preliminary evaluations that may be used when making a purchasing decision. They define CPV as "The prospective customer's evaluation of all the benefits and all the costs of an offering as compared to that customer's perceived alternatives". The methods outlined above, e.g. symposiums and focus groups, can be used to obtain this type of information, since questions can be posed more readily regarding impending purchase decisions. Such questions give a clearer indication of loyalty and repurchasing than the usual data collected via CSSs, and the insights gained will be more meaningful and helpful in strategic planning and in developing strategic hypotheses.

Of course, while the above implies there may be two schools of thought, the thrust and design of any CSS is entirely individual to the designer, and there are no doubt many well-designed CSSs that cater to both of these approaches and may still be called a customer satisfaction survey.

Whatever method is used to determine customer satisfaction, needs, and wants, it is important that:

  • The process is driven from the highest levels in the organisation;
  • Reasons for collecting data are known and linked to strategic plans and goals;
  • The information is used to drive improvements;
  • Baseline data is used to provide a benchmark for measuring improvements;
  • Results are widely distributed;
  • Data collection is a regular event.
