Likert Scale – What is it? When to Use it? How to Analyze it?

In all likelihood, you have used a Likert scale (or something you’ve called a Likert scale) in a survey before.

It might surprise you to learn that Likert scales are a very specific format and what you have been calling Likert may not be.

Not to worry: researchers who have been doing surveys for years still get their definitions confused. In fact, many researchers do not even agree on the best way to report the numeric values in a Likert scale.

This article will explain the traditional and, in our opinion, most valuable way to use Likert scales and report on them.

What is a Likert Scale vs. a Likert Item

A “Likert scale” is actually the sum of responses to several Likert items. These items are usually displayed with a visual aid, such as a series of radio buttons or a horizontal bar representing a simple scale.

In a “good” Likert scale, the scale is balanced on both sides of a neutral option, creating a less biased measurement. The actual scale labels, as well as the numeric scale, may vary.

A “Likert item” is a single statement that the respondent is asked to evaluate. In the example below, the statement “The checkout process was easy” is a Likert item, and the table as a whole is the Likert scale.

Here’s how to remember it: The “scale” in “Likert scale” refers to the total sum of all Likert items in the question — not the 1-5 range you see for each item. In the example below, the scale would be 4 to 20.
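To make the scoring concrete, here is a minimal sketch of summing one respondent's answers across the four example items (the dictionary keys and answer values are invented for illustration):

```python
# One respondent's answers to the four example items,
# coded 1 = Strongly Agree ... 5 = Strongly Disagree.
responses = {
    "easy_to_find": 2,          # Agree
    "checkout_easy": 1,         # Strongly Agree
    "solved_needs": 3,          # Undecided
    "happy_with_purchase": 2,   # Agree
}

# The Likert *scale* score is the sum across items, so with
# four 1-5 items it can range from 4 to 20.
score = sum(responses.values())
print(score)  # 8
```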

Below is an example of a nearly perfect Likert scale. It has one potential flaw, which we’ll discuss later.

Please select the number below that best represents how you
feel about your recent online software purchase for each statement.

                       Strongly                               Strongly
                        Agree    Agree   Undecided  Disagree  Disagree

1. The software I
   wanted was easy        1        2         3          4         5
   to find.

2. The checkout
   process was easy.      1        2         3          4         5

3. The software
   solved my needs.       1        2         3          4         5

4. I am happy with
   my purchase.           1        2         3          4         5

Historic Trivia: The Likert scale question itself was invented by the educator and psychologist Rensis Likert in his thesis at Columbia University. You never know when this might come up in Market Research Trivia night at your local bar.

So given this new information, when should you use a Likert scale?

To answer that, it’s important to look at how you’d report and analyze the data for this question type. So let’s take a look.

Reporting on Likert Scales

The traditional way to report on a Likert scale is to sum the values of each selected option and create a score for each respondent. This score is then used to represent a specific trait (particularly in sociological or psychological research).

This is also quite useful for evaluating a respondent’s opinion of important purchasing, product, or satisfaction features. The scores can be used to create a chart of the distribution of opinion across the population. For further analysis, you can cross tabulate the score mean with contributing factors.
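As a rough sketch of that kind of reporting, using only the standard library; the respondent answers and the “new vs. returning customer” grouping factor below are invented assumptions for illustration:

```python
from collections import Counter
from statistics import mean

# Hypothetical data: each row is one respondent's answers to the four
# 1-5 items, plus a contributing factor we might cross-tabulate against.
respondents = [
    {"items": [1, 2, 1, 2], "group": "returning"},
    {"items": [3, 4, 3, 4], "group": "new"},
    {"items": [2, 2, 3, 2], "group": "returning"},
    {"items": [4, 5, 4, 4], "group": "new"},
]

# Per-respondent scale scores and their distribution across the sample.
scores = [sum(r["items"]) for r in respondents]
print(Counter(scores))

# Score mean per group: a simple cross tabulation of the score
# against one contributing factor.
for group in sorted({r["group"] for r in respondents}):
    group_scores = [sum(r["items"]) for r in respondents if r["group"] == group]
    print(group, mean(group_scores))
```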

Important Tip: For the score to have meaning, each item in the scale should be closely related to the same topic of measurement.

In the example Likert scale above, the third option is actually slightly out of place, as it doesn’t relate to the purchasing or checkout process — which is the intended topic.

Ideally, in a Likert scale question, all of the items would be categorically similar so the summed score becomes a reliable measurement of the particular behavior or psychological trait you are measuring.

If you have an item on the scale that doesn’t fit, the total score for the respondent becomes potentially polluted and you’ll end up spending a great deal of time deciphering the results!

When to Use Likert Scales

This is a very useful question type when you want to get an overall measurement of a particular topic, opinion, or experience and also collect specific data on contributing factors. Measuring the satisfaction (the trait) of a recent shopping experience is a common use.

You should not use this form of question (or at least you should not call it a Likert scale) when the items in the question are unrelated to each other, or when the options are not in the form of a scale.

As with all other rating and scale questions, we encourage you not to mix scales within your surveys. Choose a particular scale (3-point, 5-point, 7-point, etc.) and use it as your standard to cut down on potential confusion and fatigue. This will also allow for comparisons within and between your data sets!


Use the Comments & Discussions area below the article to discuss Likert scales! Here are some ideas:

  • Have additional information you want to share?
  • Do you have successful examples of Likert scales you’d like to share?
  • Follow up questions?
Join the Conversation
  • Sklonis

    The direction used in the example is different from what I usually use.  I assign the 1 to ‘strongly disagree’ and the 5 to ‘strongly agree’ so that more points means a more positive attitude.

    • sgizmo

      Good point. 1 for “strongly disagree” is a much more standard practice.

  • sgizmo

    That’s a good point Sklonis.  The direction of the numeric scale depends on how you will be reporting and presenting the information.  Your way is much more common and makes more intuitive sense.


    • ashly

what if i collapsed the five point scale to three point? what would be the mean then?

  • Laurie Gelb

    There is no reason to show numbers to the respondent at all. They should be unlabeled radio buttons. Use the reporting value for your numbers. Moreover, your scale is flipped.
Sometimes you want to show 0 as “strongly disagree” and “10” as “completely agree” with unlabeled points between, so your data have a natural but unseen midpoint and a base-10 foundation. Usually there is no reason to label a midpoint on an intensity scale.
“Undecided” is not an option. Phrase the question so you can have your “out” (if not a legitimate forced choice) be the same across the grid: “Not sure yet,” “Does not apply,” “Did not use,” or whatever. In some situations, you will differentiate between don’t know/don’t care/wasn’t aware and so on.
    Sorry, but contrary to your rule, many surveys require a “mix” of categorical and ordinal
    scales. There’s nothing wrong w/ this as long as categorical scales are
    fully and naturalistically labeled (e.g. “about half the time” as
    opposed to “40-60%”) and the question flow is smooth.
    I could also add that these answer items are overlapping and generally imprecise. Not a good example overall. Definitely not “nearly perfect”!

  • Patrick

    Hmm – in your example your mid point is scored a 3 (out of a possible 5) – in other words, 60% .. I feel skewed reports coming on.

    Obviously 50% is a more accurate score for a mid point / neutral / undecided score.

    • William

      You are looking at the scale wrong: If you are looking at each number as a percentage, you need to realize that each number actually is a range:

      This shows the 3 as a midpoint, not 50%, as if it were 50%, it is less than the middle, and 51% is more than the middle. Essentially, I am saying that you are arguing semantics.

      • Paul Klawinski

Each “number” is actually a categorical label and you cannot assume a range unless you have designated the range, in which case you are getting closer to interval scale data but not quite there yet.

  • Joe-hobbs

The Likert scale is fantastic and very accurate, I think. With respect to what Patrick said, I do not think that is 50% … I am not sure about that as well.
    Joe Hobbs – Recetas Faciles y Rapidas

  • dong

Instead of summing up the scores, you can use the mean

    • Sheila Hafer

      I think that is an excellent idea.

  • Norm

There are two large and potentially fatal flaws here. The first is that a Likert-type scale with 5 choices is an ordinal scale only, i.e. it has rank but no magnitude. So we know that “Strongly Agree” is lower than “Agree” but we don’t know by HOW MUCH. This is like knowing the gold medal winner is better than the silver medal winner, while the scale doesn’t indicate the difference or magnitude between them.
This then creates a second problem: creating a ratio scale, which has rank and magnitude, from an ordinal scale with no magnitude. That is to say, respondents believe “Strongly Agree” is different from “Agree”, but probably not by the same amount of difference as between, say, “Agree” and “Undecided”. Yet the numbers treat the difference between 3 & 4 as the same as the difference between 2 & 3. Therefore you CANNOT perform mathematical calculations on this for reporting; you can ONLY report the number of responses for each answer, e.g. 3 – Strongly Disagree; 1 – Disagree. Given this, you cannot add up these false numbers to give a true sum. If you want to manipulate numbers, give them a scale of at least 1-7 or, better, 1-10. One to ten is a true interval scale (or a ratio scale if there is a true zero) and the numbers can be descriptively and inferentially analyzed.

    • Barbara

      what about responses: 1=strongly (totally) agree, 2 = somewhat agree, 3=neither agree nor disagree, 4 = somewhat disagree, 5 = strongly (totally) disagree

    • Paul Klawinski

      Simply increasing the number of categories (to 7 or 10) does not change the fundamental issue that these are ORDINAL categories.

  • kasema

The Likert scale is fantastic, but I still encounter difficulties interpreting option 3, Undecided, in a five-point item, e.g. 1 SD = very low, 2 D = low, 3 Undecided = ? (what about this one?), 4 A = high, 5 SA = very high.

  • Eric

I entirely agree with you, Sklonis; it makes more sense to award more points to a more positive attitude

  • disqus_axxSSa22H6

After summing up the responses of respondents, how about the statistical limits? Which is correct: 1.00-1.49, 1.50-2.49, 2.50-3.49, 3.50-4.49, 4.50-5.00, or 1.00-1.80, 1.81-2.60, 2.61-3.40, 3.41-4.20, 4.21-5.00?

    • sgizmo

When analyzing data collected from your Likert scale, we generally recommend using your first set of limits. There’s a much larger emotional difference between choosing a “1” over a “2”, or a “5” over a “4”, so it’s best to be sure before accepting such a polar decision. We also recommend taking a look at the frequency of each response value to get a better idea of whether your respondents tend to be more polar or in unison in their sentiment.

      • Sandy McKee

        We agree!

  • murianki

I agree with Sklonis, 5 should be assigned to strongly agree

  • melanie

does it matter? using this scale can potentially provoke bias, for if one were using this scale in a voluntary response sample, the individual (depending on how strongly they feel about the subject) will choose to respond to this survey. additionally, because of that we can’t just “use” the averages or data given on this sample; it can’t possibly represent an entire population, only the specific people that feel strongly enough about the subject -_-

    • Christian Vanek


That’s true. If it was voluntary and you believe that only people who felt polarized about the issue were going to reply, then you do have a bias. But that’s more a sample bias than a question bias.

A very common form of sample bias like this can be found in service satisfaction surveys. People who had an alarmingly bad experience or a fantastically good experience are more likely to reply. That’s why companies have to be careful about drawing conclusions about their customer base from customer service feedback.

  • Indika Priyamali

    I used likert scale method to my research

  • DocRox

I don’t like these kinds of scales. There is NEVER an answer that fits how I truly feel, and there is rarely a space for free response. I know it’s a quick and dirty way to get what you hope is a valid statistic about your product/service/etc., but they are almost always poorly written and don’t reflect the reality of the situation.

    • Christian

      Hi Doc,

      That can be true sometimes. No respondent ever likes feeling railroaded into an answer — and it does make an assumption that your opinion will fit nicely on a linear scale.

      Adding a comment section under the scale (optional of course) is a good thing for researchers to consider. The researcher can use that information to help qualify your opinion more and decide if it matches with the rest of the data set.

      Thanks for your comment!


      • Hector.

        Why doesn’t one just use a simple Yes or No scale? i.e. Are you happy with the product Yes or No. Do you really know if you are somewhat happy or somewhat unhappy? I think that is all just a waste and doesn’t really get to the point.

        • Tom

Because one can always collapse a multi-point scale into a binary categorization, but one cannot expand a binary categorization into a multi-point scale if that is desirable for psychometric purposes.

  • Jaya kumara

I used the Likert scale method for my research; really it is very good to get good responses from the users, and for analysis of the data I used mean, standard deviation and Chi-square test for evaluation of hypotheses. The output of the research work is good.

    • sgizmo

      Agreed! Standard Deviation and Chi-square are great analysis methods for Likert scales.


  • leonard

A lot of the comments on this topic show a misunderstanding of the properties of a Likert scale, and what it is meant to represent. For example, what score you assign to the bottom and top end of the scale is not important, i.e. strongly agree could be 5 or 1, as long as you understand what an increasing or decreasing magnitude on the scale represents. Similarly, while some people might prefer to flip the scale round and have the responses ordered from strongly disagree to strongly agree, this doesn’t change the nature of the scale (although it can change the way people respond to the scale, but that is another story…). Certain items might also be framed differently within a survey, called reverse-coded items, i.e. “the software was easy to find” vs. “the software was not easy to find”. These types of items can be used to check that people are actually reading, interpreting and responding to the questions correctly; however, the scores need to be re-coded before data analysis.
    In response to some of the other comments:
    - in response to Laurie, while I agree with many of your points, including the potential for using an unlabelled intensity scale for certain constructs, your criticism that the items in the scale are overlapping and imprecise is unfounded. Often, a survey using a likert scale is designed to measure an underlying construct of interest, in this case, consumer satisfaction with their purchasing experience. because a single item is often unreliable, we use multiple items to try to give us a better ability to hone in on the underlying construct. that is to say – often we want overlapping items in a survey. imprecision is a different point entirely, often we don’t need super duper levels of precision for social and behavioural survey research, it is enough to know that a certain proportion of people are very satisfied, and a certain proportion of people aren’t. being able to differentiate between someone who is 81.0378 % satisfied and someone who is 82% satisfied is probably not going to be of use.
    - in response to Patrick – it will only result in skewed reports if people misinterpret what a score of 3 on the scale represents, as it appears you have… it is not a score of 3 out of 5, it is a score of 3, on a 5 point scale.
    - in response to William – a likert scale cannot be directly converted to a percentage,
    - in response to Dong – using the mean score across items will result in exactly the same end result as using the summary score… as the mean is the summary score divided by the number of items in the scale, and dividing by a constant doesn’t change the relative magnitude of different scores.
- in response to Norm – while your first point is valid, that a likert scale is an ordinal scale, not an interval or ratio scale, using a ten or eleven (0-10) point scale does not automatically convert your scale to an interval scale, it is still an ordinal scale, such that we can’t say that a shift from 3 to 4 is of the same magnitude as a shift from 4 to 5 or 5 to 6. Additionally, while there are issues using ordinal scales in numerical analysis, such as using numbers assigned to likert responses, using data in this way often approximates the sort of data collected using interval type scales (where this is possible). There are also other statistical techniques for examining this data, such as using dummy variables coded as 0 or 1 to indicate a person’s response, and using these dummy variables as predictors in regression equations.
    Sorry for the rant, but I read through some of these comments and I strongly disagreed with many of them, and others I was undecided about…

  • maya91

    Please help urgently! I have two 7-point scales and one 5-point scale. I need to combine all of them in factor analysis- how can I convert 5 point scale to 7 point scale? One way to do this is to convert systematically, 0.7 ->1, 1.4-> 2, 2.1- >3 etc. is that correct?

  • Paul Klawinski

“Important Tip: For the score to have meaning, each item in the scale
should be closely related to the same topic of measurement.”

No. In order for the Likert “scale” to have meaning, the underlying data must have the characteristics of interval data. Simply taking a categorical statement and assigning it a number does not change the fundamental property of the data. They are still categories. If you labeled them A, B, C, D, E, you could not calculate averages on them. THEY ARE CATEGORIES. The numbers you assign to them are merely shorthand placeholders for the more complicated verbal statements that they represent.


  • Regidor T. Carale, MA

Thank you for the very comprehensive explanation. As a research teacher in the high school department of St. Paul University Dumaguete, I find it very useful for me and for my students, especially in preparing them for a higher degree of learning in formulating a tool or a questionnaire. May the good Lord continue to shower His blessings on all of you.
