Exploring Survey Design: Case Study 1, Part II

This is part of a series of case studies I’ll be doing in this blog. This first case study goes through my interaction with a company (actual name withheld for their privacy) and their need to do some pricing research. If you haven’t had a chance to read Part I of the case study, you’ll want to catch up first.

Survey Design Case Study Continued

A leading price optimization and management solutions company, TriBlend Solutions, needed information to guide its marketing efforts. The company was planning a survey of customers and prospects to better understand its market and its target audience.

Designing a Pricing Survey – Part II

Recall that TriBlend Solutions (TS) was partnering with the Professional Pricing Society (PPS) to do some pricing research. The survey was shaping up, but it was a long way from being finalized: it still needed detailed refinement, online deployment, further reviews of the online version, and testing before it would be ready to launch.

Version 4 was sent to Lori and Eric for review. They took a few days with it, and another conference call was held on August 23 to go over their comments and produce the next draft.

This round of changes was fairly extensive. Of the approximately 40 questions in the survey, only three remained untouched. I won’t go through every change, but here are some of the highlights.

We refined the introduction wording to make it more focused. For example, “. . . the state of the current technology, challenges and trends in our industry” was changed to “. . . the state of the current approaches to and tools for pricing across industries.” This minor change was more in tune with the questions that followed, providing a better setup for the respondent.

The definitions were fine-tuned to bring them more in line with the specific questions being asked.

The questions were organized into three sections: “Pricing at Your Company”, “Pricing Tools and Adoption of Price Management Software” and “Pricing Effectiveness Challenges”. A demographics section and a closing section were added as well.

The question, “How would you rate the overall effectiveness of the pricing tools at your company?” seemed out of place. Would it not be better to ask it after we asked which pricing tools the company uses? Moving this question would make the data we received much more meaningful.

The question, “What are your company’s TOP TWO objectives regarding pricing?” was changed to “What business goals are the BIGGEST drivers of your company’s pricing strategy?” The change focuses the respondent’s attention on the intent of the question. The original wording seemed all right but was just a little too broad to provide meaningful information.

“Where does the pricing responsibility reside in your company for each of the following pricing processes/activities [the list that followed had 10 items]?” This question would have been extremely involved for respondents and would not have provided meaningful information, so we replaced it with the following question: “How is pricing generally managed in your company?

  • Centrally managed by corporate pricing group
  • Decentralized activities within each region or business unit
  • Other (please explain)”

Branching was refined so that different respondents were targeted with different questions based on where they were in their adoption of pricing tools. For example, those that had already deployed price management software were asked different questions than those that had not.
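To make the branching idea concrete, here is a minimal sketch in Python. It is purely illustrative: the real routing was configured through SurveyGizmo’s built-in branching rather than written as code, and the question text and names below are invented.

```python
# Hypothetical illustration of survey branching logic.
# The actual survey used SurveyGizmo's branching settings; these
# question lists and the function name are made up for this sketch.

DEPLOYED_QUESTIONS = [
    "How long has your price management software been in place?",
    "How would you rate its overall effectiveness?",
]

NOT_DEPLOYED_QUESTIONS = [
    "What is preventing your company from adopting price management software?",
    "When, if ever, do you expect to evaluate such a tool?",
]

def next_questions(has_deployed_software: bool) -> list[str]:
    """Return the follow-up questions a respondent should see, based on
    whether their company has deployed price management software."""
    return DEPLOYED_QUESTIONS if has_deployed_software else NOT_DEPLOYED_QUESTIONS

# Example: a respondent who answered "No" to the deployment question
for question in next_questions(has_deployed_software=False):
    print(question)
```

The point is simply that one screening answer determines which block of follow-up questions a respondent ever sees, which is also why question numbering was later removed (see the pilot changes below).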

Several questions were deleted at this point because they no longer made sense. A perfect example of a question that had lost its meaning is “When you have implemented new pricing tools or processes, was it typically part of other changes made in your company?” This question was simply too broad to provide useful information.

The only other substantial change in this round, aside from moving questions to their most appropriate sections, was revising the answer options for several questions. The more thought you give to the questions, the better equipped you are to work out appropriate options for them.

The next draft was scheduled to be ready for review by August 27, with hopes of having a final copy by September 5. Each of us reviewed version 5 by August 28, and it felt like we had a near-final draft; however, a few important changes surfaced. For example, Lori realized we needed to add a question asking for the respondent’s work location so that the analysis could include breakouts by location (like US versus other parts of the world). Next, Eric was to review the survey once more and pass it on to PPS for their buy-in.

[Side Note: This is the point in the design where Lori realized she wanted SDA to handle the online deployment of the survey, which led to her agreeing that SurveyGizmo was the best tool for us to use. Although it is outside the scope of this article, there is plenty of value to be added in going from a finished questionnaire to the final online deployment.]

Although it seemed like we were close to a final version, discussion ensued on a couple of items, including how much detail we needed when asking for location: would it make sense to have US, Canada, UK and Other? Where would Russia go? Should we add “Middle East”? September 5 came and went, as PPS was taking longer than expected to look at the survey. The planned launch was pushed back to September 18, but we still needed to put the survey online, test it with a pilot group, and have PPS add their comments. There would be further delays.

When PPS finally reviewed the survey, they were very impressed, but they had a few concerns about the privacy of the members on their list to whom we would send it. Next, Lori had an opportunity to give the survey (in paper form) to a small group of customers and prospects attending an event hosted by TS. This gave us a chance to see some “live” data, which led to more adjustments and changes.

The survey was put online and reviewed on September 12. After changes to the fonts and adjustments to the look and feel of the online version, another conference call was scheduled for September 18 to go through the survey in more detail. It was another good working session: we refined many of the options for the multiple-choice questions and added some new questions. As is often the case, seeing the survey as the respondent will see it spawned new ideas.

At this point we went back and forth between editing a text version of the survey (exported from SurveyGizmo) and making changes directly to the online version. Over the next month there were at least three more rounds of revisions, most of which involved the rewording of questions and question options.

We were now on version 12 and the survey was ready for a pilot run. We sent the survey to a small group of team members at TS. (At the same time, PPS set a launch date of October 31, so there was plenty of time for the pilot.)

The pilot and further reviews of the online survey led to a host of improvements (about 22). Most were minor, but they all helped make the survey easier to understand and more targeted toward the desired information. I won’t list every change made at this stage, but a few examples will illustrate the kinds of changes made; I’ve left out the changes to formatting. As you look through them, remember that the motivation is always to serve the survey objectives: changes are made to improve the quality of the data and to make it easier to use in your decision-making.

  • Numbering was removed to avoid confusion when questions are skipped due to branching.
  • A couple of ratings questions were changed from drop-downs to radio buttons. We felt radio buttons would be better because respondents could see all the choices at once.
  • Based on pilot responses we changed year ranges for the question, “How long have you been involved in pricing?”
  • For five questions “you” was replaced by “your company”.
  • “Manager” was changed to “Manager/Analyst”.
  • “Where do you spend MOST of your ‘pricing activity’ time?” was changed to “What pricing activities take up MOST of your time?”

Now we were very close to having the survey ready to launch. There were just a few more cosmetic changes, like fixing some graphics, changing question formatting, adding a few missing words and adjusting which questions were shown to whom based on responses. On October 26, Lori, Eric and I reviewed it once more.

I found a few things to clean up and thought we were done, until Lori mentioned that Eric had about 20 more “minor” edits! Eric called it his “fine-tooth-comb” list. Although I was taken aback by the thought of even more changes, I never complain about time spent up front on the design. I may get testy when someone wants to “overdo” the analysis side, but never the design. The difference is that every change to the survey design has the potential to improve the data quality, the usability of the data and/or the response rate.

So, I happily made the changes from Eric’s fine-tooth-comb list (most of them were really good changes) and the survey was finally ready to launch on October 30.

Not all surveys go through 14 or more revisions, as we will see in my next case study, but that level of iteration can be necessary to ensure the survey produces the information you need. And for this survey we even got some (unsolicited) comments that helped justify all our work. Here are some of those comments:

  • “Thanks for the survey”
  • “Excellent survey!”
  • “Great Study… this will help a lot to assess how we are situated compared to other companies”
  • “Nice survey. Worded pretty well in general.”

We received over 400 responses to the survey. If you would like to see an example of the final product (names changed for privacy purposes), you can see it here.
