Regression Analysis: How Do I Interpret R-squared and Assess the Goodness-of-Fit?

After you have fit a linear model using regression analysis, ANOVA, or design of experiments (DOE), you need to determine how well the model fits the data. To help you out, Minitab statistical software presents a variety of goodness-of-fit statistics. In this post, we’ll explore the R-squared (R²) statistic, some of its limitations, and uncover some surprises along the way. For instance, low R-squared values are not always bad and high R-squared values are not always good!

What Is Goodness-of-Fit for a Linear Model?

[Illustration of regression residuals] Definition: Residual = Observed value - Fitted value

Linear regression calculates an equation that minimizes the distance between the fitted line and all of the data points. Technically, ordinary least squares (OLS) regression minimizes the sum of the squared residuals.

In general, a model fits the data well if the differences between the observed values and the model's predicted values are small and unbiased.

Before you look at the statistical measures for goodness-of-fit, you should check the residual plots. Residual plots can reveal unwanted residual patterns that indicate biased results more effectively than numbers. When your residual plots pass muster, you can trust your numerical results and check the goodness-of-fit statistics.
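As a minimal sketch (with made-up numbers, not Minitab output), the residual definition and the OLS criterion can be written out directly. For simple regression, the slope and intercept that minimize the sum of squared residuals have a closed form:

```python
# Minimal OLS sketch with illustrative data: fit y = b0 + b1*x by
# minimizing the sum of squared residuals, then compute the residuals.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Closed-form OLS estimates for simple regression
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar

fitted = [b0 + b1 * x for x in xs]
residuals = [y - f for y, f in zip(ys, fitted)]  # observed - fitted
sse = sum(r ** 2 for r in residuals)             # what OLS minimizes
```

Plotting `residuals` against `fitted` is exactly the residuals-versus-fits check described above: you want to see random scatter, not a pattern.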

What Is R-squared?

R-squared is a statistical measure of how close the data are to the fitted regression line. It is also known as the coefficient of determination, or the coefficient of multiple determination for multiple regression.

The definition of R-squared is fairly straightforward; it is the percentage of the response variable variation that is explained by a linear model. Or:

R-squared = Explained variation / Total variation

R-squared is always between 0 and 100%:

  • 0% indicates that the model explains none of the variability of the response data around its mean.
  • 100% indicates that the model explains all the variability of the response data around its mean.
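Using hypothetical observed and fitted values, the ratio above can be sketched in a few lines. For OLS models with an intercept, "explained / total" is equivalent to 1 - SSE/SST:

```python
# R-squared = explained variation / total variation, computed here as
# 1 - SSE/SST (equivalent for OLS models that include an intercept).
# The observed values and the fitted model are illustrative only.
observed = [2.1, 3.9, 6.2, 8.1, 9.8]
fitted   = [0.14 + 1.96 * x for x in [1, 2, 3, 4, 5]]  # some fitted model

mean_y = sum(observed) / len(observed)
sst = sum((y - mean_y) ** 2 for y in observed)             # total variation
sse = sum((y - f) ** 2 for y, f in zip(observed, fitted))  # unexplained
r_squared = 1 - sse / sst   # falls between 0 and 1 (0% and 100%)
```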

In general, the higher the R-squared, the better the model fits your data. However, there are important conditions for this guideline that I’ll talk about both in this post and my next post.

Graphical Representation of R-squared

Plotting fitted values by observed values graphically illustrates different R-squared values for regression models.

Regression plots of fitted by observed responses to illustrate R-squared

The regression model on the left accounts for 38.0% of the variance while the one on the right accounts for 87.4%. The more variance that is accounted for by the regression model, the closer the data points will fall to the fitted regression line. Theoretically, if a model could explain 100% of the variance, the fitted values would always equal the observed values and, therefore, all the data points would fall on the fitted regression line.

Key Limitations of R-squared

R-squared cannot determine whether the coefficient estimates and predictions are biased, which is why you must assess the residual plots.

R-squared does not indicate whether a regression model is adequate. You can have a low R-squared value for a good model, or a high R-squared value for a model that does not fit the data!

The R-squared in your output is a biased estimate of the population R-squared.

Are Low R-squared Values Inherently Bad?

No! There are two major reasons why it can be just fine to have low R-squared values.

In some fields, it is entirely expected that your R-squared values will be low. For example, any field that attempts to predict human behavior, such as psychology, typically has R-squared values lower than 50%. Humans are simply harder to predict than, say, physical processes.

Furthermore, if your R-squared value is low but you have statistically significant predictors, you can still draw important conclusions about how changes in the predictor values are associated with changes in the response value. Regardless of the R-squared, the significant coefficients still represent the mean change in the response for one unit of change in the predictor while holding other predictors in the model constant. Obviously, this type of information can be extremely valuable.
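A quick simulation (illustrative only, with an assumed seed and noise level) shows how a decidedly low R-squared can coexist with a clearly significant slope: the real effect is there, it's just buried in noise.

```python
import math
import random

# Hypothetical data: a real but weak linear effect buried in large noise,
# to show a significant slope can coexist with a low R-squared.
random.seed(42)
xs = list(range(100))
ys = [3.0 + 0.5 * x + random.gauss(0, 20) for x in xs]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
sxx = sum((x - x_bar) ** 2 for x in xs)
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sxx
b0 = y_bar - b1 * x_bar

residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
sse = sum(r ** 2 for r in residuals)
sst = sum((y - y_bar) ** 2 for y in ys)
r_squared = 1 - sse / sst

# t-statistic for the slope: estimate divided by its standard error
s = math.sqrt(sse / (n - 2))
t_slope = b1 / (s / math.sqrt(sxx))
# r_squared comes out modest, yet t_slope is far beyond the usual
# ~2 cutoff for significance -- the slope estimate is still meaningful.
```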

See a graphical illustration of why a low R-squared doesn't affect the interpretation of significant variables.

A low R-squared is most problematic when you want to produce predictions that are reasonably precise (have a small enough prediction interval). How high should the R-squared be for prediction? Well, that depends on your requirements for the width of a prediction interval and how much variability is present in your data. While a high R-squared is required for precise predictions, it’s not sufficient by itself, as we shall see.

Are High R-squared Values Inherently Good?

No! A high R-squared does not necessarily indicate that the model has a good fit. That might be a surprise, but look at the fitted line plot and residual plot below. The fitted line plot displays the relationship between semiconductor electron mobility and the natural log of the density for real experimental data.

Regression model that does not fit even though it has a high R-squared value

Residual plot for a regression model with a bad fit

The fitted line plot shows that these data follow a nice tight function and the R-squared is 98.5%, which sounds great. However, look closer to see how the regression line systematically over- and under-predicts the data (bias) at different points along the curve. You can also see patterns in the Residuals versus Fits plot rather than the randomness that you want to see. This indicates a bad fit, and serves as a reminder of why you should always check the residual plots.

This example comes from my post about choosing between linear and nonlinear regression. In this case, the answer is to use nonlinear regression because linear models are unable to fit the specific curve that these data follow.

However, similar biases can occur when your linear model is missing important predictors, polynomial terms, or interaction terms. Statisticians call this specification bias, and it is caused by an underspecified model. For this type of bias, you can often fix the residual patterns by adding the appropriate terms to the model.
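A small deterministic sketch (hypothetical data) shows the kind of systematic residual pattern an underspecified model produces: fit a straight line to data that actually follow a curve, and the residuals trace out the missing term.

```python
# Specification bias in miniature: fit a straight line to data that
# follow y = x^2, then inspect the residual pattern.
xs = list(range(-3, 4))
ys = [x ** 2 for x in xs]

n = len(xs)
x_bar = sum(xs) / n            # 0 by symmetry
y_bar = sum(ys) / n            # 4
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
     sum((x - x_bar) ** 2 for x in xs)   # 0 by symmetry
b0 = y_bar - b1 * x_bar

residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
# residuals are positive at the ends and negative in the middle --
# the systematic U pattern that signals a missing x^2 term.
```

Adding the x² term to the model removes the pattern, which is exactly the "add the proper terms" fix described above.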

For more information about how a high R-squared is not always a good thing, read my post Five Reasons Why Your R-squared Can Be Too High.

Closing Thoughts on R-squared

R-squared is a handy, seemingly intuitive measure of how well your linear model fits a set of observations. However, as we saw, R-squared doesn’t tell us the entire story. You should evaluate R-squared values in conjunction with residual plots, other model statistics, and subject area knowledge in order to round out the picture (pardon the pun).

While R-squared provides an estimate of the strength of the relationship between your model and the response variable, it does not provide a formal hypothesis test for this relationship. The F-test of overall significance determines whether this relationship is statistically significant.

In my next blog, we’ll continue with the theme that R-squared by itself is incomplete and look at two other types of R-squared: adjusted R-squared and predicted R-squared. These two measures overcome specific problems in order to provide additional information by which you can evaluate your regression model’s explanatory power.

For more about R-squared, learn the answer to this eternal question: How high should R-squared be?

If you're learning about regression, read my regression tutorial!


Name: Fawaz • Thursday, July 25, 2013

Could you guide me to a statistics textbook or reference where I can find more explanation of how R-squared can have different acceptable values in different fields of study (e.g., social sciences vs. engineering)?


Name: Edgar de Paz • Tuesday, October 1, 2013

THANK YOU!!!! I have had this question (Are Low R-squared Values Inherently Bad?) in my mind for a while... I'm working on a manufacturing project where human behavior has a significant contribution; I see these typical low R-squared values, but I also have significant contributions from some of my predictors (decent residuals). I'm aiming at creating guidelines for standard work based on this insight. Great article. Any bibliography that you can mention on this topic (low R-sq)?

Name: Jim Frost • Wednesday, October 2, 2013

Hi Edgar, thanks for reading and I'm glad you found it helpful.

Unfortunately, I don't have a bibliography handy. However, the importance of R-squared really depends on your field and what you want to do with your model.

If you just want to know which predictors are significant and how they relate to the response, then the coefficients and p-values are more important. A one-unit increase in X is related to the same average change in the response regardless of the R-squared value.

However, if you plan to use the model to make predictions for decision-making purposes, a higher R-squared is important (but not sufficient by itself). The biggest practical drawback of a lower R-squared value is less precise predictions (wider prediction intervals).

Keep in mind that a prediction is the mean response value given the inputs. You need to keep the variability around that mean in mind when using the model to make decisions.

This topic happens to be the subject of my next blog! That'll be out on October 3, 2013.


Name: Rafael • Monday, December 16, 2013

Great post, thank you for it. I'm trying to model a credit flow from a government bank that has political influence! Right now I'm trying to find texts like yours to show that R-squared is not always above 80% in good models!

By the way, if you can suggest other texts that talk about that, I'd appreciate it.

Thank you again for the info!

Name: Ruth • Thursday, December 19, 2013

Thank you so much! I'm busy interpreting the results of my MA Psychology thesis and panicked when my R-squared value was only 9.1%, despite all my predictors making significant contributions. I need an academic reference though (my university isn't keen on website references), so if you have any, that would be great!
Thanks again!

Name: tingting • Monday, January 13, 2014

nice tutorial, really good for starters like me:P

Thank you so much, please carry on your great job.

Name: Joe • Saturday, March 1, 2014

Hi Friend.
If the R² is as low as 0.099 but two independent variables (out of three IVs) are significant predictors,
will our conclusion about the significant predictors be meaningful in the presence of this extremely low R²?

Name: Jim Frost • Tuesday, March 4, 2014

Hi Joe,

Yes, if you're mainly interested in understanding the relationships between the variables, your conclusions about the predictors and what the coefficients mean remain unchanged despite the low R-squared.

However, if you need precise predictions, the low R-squared is problematic.

I write about this in more detail here:

Thanks for reading and writing!

Name: Malathi Cariapa • Thursday, March 6, 2014

Very well explained. Maybe this could be explained in conjunction with beta. Beta (β) works only when the R² is between 0.8 and 1. That signifies that the correlation between the stocks and the index is strong; only then could β be taken for further consideration.

Name: Hal • Tuesday, March 11, 2014

A little off topic: when writing a long report on correlation and regression analysis, the word "explaining" is used way too often for my taste when talking about R². What word can I use to make the paper easier to read?

Name: Bill • Thursday, March 13, 2014

Hal...use interpret.

Name: gaurav • Thursday, March 13, 2014

I stumbled across your blog today, and I am happy to have done that. Very helpful in understanding the concepts. Keep blogging and I am now a definite follower of your blog.

Name: Hellen • Thursday, March 20, 2014

Hello Jim,

I must say i did enjoy reading your blog and how you clarified and simplified R-squared.

Now, I wonder if you could venture into standard error of the estimate and how it compares to R-squared as a measure of how the regression model fits the data.

Many thanks.

Name: Jim Frost • Friday, March 21, 2014

Hi Hellen,

That's a great question and, fortunately, I've already written a post that looks at just this!


Thanks for the kind words and taking the time to write!

Name: Newton • Friday, March 21, 2014

I like the discussion on R-squared. In my study I analyzed my data using Pearson correlation and produced some scatterplots that gave me values of R-squared. Is that right for me to report? Please help.

Name: Jim Frost • Friday, March 21, 2014

Hi Newton,

Great question! Many people don't stop to think about the best way to present the results to others.

There are several things that I would do if I were you. A Pearson's correlation is valid only for linear relationships. To check this, fit a regression model to your data and verify that the residual plots look good.

See how here: http://blog.minitab.com/blog/adventures-in-statistics/why-you-need-to-check-your-residual-plots-for-regression-analysis

Assuming that the model fits well, I totally agree that a scatterplot with the R-squared is an excellent way to present the results. You could also include the regression equation. However, research shows that graphs are crucial, so your instincts are right on. Read here for more details about the importance of graphing your results.


Thanks for reading!

Name: Stella • Saturday, March 22, 2014

Hello, I’m glad I came across this site! I’m facing a challenge with my research work. I sampled 6 different land use types, replicated 4 of the land use types 5 times and the other two 4 and 2 times (due to their limited size for sampling). Now I want to test for significant differences in a parameter between the different replications and their means using ANOVA. This gives an unbalanced sampling, and I’ve tried to use the Gabriel test, but I have unequal variances and my data are not normally distributed. Please, how do I go about this analysis? Thanks!

Name: andrei • Thursday, April 10, 2014

There is some mysterious function called hat(). If you type it in a console you get:
0.5238095 0.2952381 0.1809524 0.1809524 0.2952381 0.5238095
How these numbers are worked out I have absolutely no idea.

Name: Jim Frost • Thursday, April 10, 2014

Hi Andrei,

Fortunately, Minitab can help you out with this! All you need to do is create a column with all of the X values: 1 - 6. Create a column with all of the Y values: 0.5238095, etc.

In Minitab, go to Stat > Regression > Fitted Line Plot. Enter the Y column for the Response and X column for the predictor. Just by looking at the numbers, I can tell it's a U shape, so choose Quadratic for Type of regression model.

Voila! You get the equation and the graph. Spoiler alert, the graph looks like a smile. And, I hope you're smiling with these results. The equation fits the points perfectly!

Y = 0.8667 - 0.4000 X + 0.05714 X^2
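Incidentally, those numbers can also be reproduced by hand: they are the leverage ("hat") values for x = 1 through 6 in a simple regression, assuming that's what the hat() call computed. A short sketch:

```python
# Leverage (hat) values for a simple regression on x = 1..6:
# h_i = 1/n + (x_i - x_bar)^2 / sum((x_j - x_bar)^2)
xs = [1, 2, 3, 4, 5, 6]
n = len(xs)
x_bar = sum(xs) / n
sxx = sum((x - x_bar) ** 2 for x in xs)
hat_values = [1 / n + (x - x_bar) ** 2 / sxx for x in xs]
# The values are largest at the ends and smallest in the middle,
# which is exactly the symmetric U (smile) shape of the plot above.
```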


Name: Qing • Friday, May 23, 2014

Would you please further explain why a significant estimator is meaningful regardless of a low R-squared? What is the logic behind this?

Name: Jim Frost • Tuesday, May 27, 2014

Hi Qing,

It is an interesting situation when you have a significant predictor but a low R-squared value. I talked about this situation in more detail in this blog post:

Also, In the upcoming weeks I'll write a new post that addresses this situation specifically. Stay tuned!


Name: Rosy • Wednesday, May 28, 2014

Dear Sir,may I ask a question,please.
In my thesis, the coefficient of determination is 0.998. My thesis is about a transportation network plan. I used the data which I observed. However, my teachers said 0.998 can't be possible. But I can't do anything to reduce the value of R-squared. What should I do? Could you give me your suggestion, please?

Name: Jim Frost • Thursday, May 29, 2014

Hi Rosy,

Without the specifics of your model, I can't figure out what is going on. However, I agree with your teachers that the R-squared value for your model is too high. You'd only expect a legitimate R-squared value that high for a low-noise physical process (e.g., a law of physics) where you have high-accuracy/precision measurements.

Here are some common reasons for overly high R-squared values.

1) You could be including too many terms for the number of observations or using an overly complicated model. Keep in mind that while a super high R-squared looks good, your model won't predict new observations nearly as well as it describes the data set. Read here for more details and pay particular attention to the Predicted R-squared:

2) If you have time series data and your response variable and a predictor variable both have significant trends, this can produce very high R-squared values. You might try a time series analysis, or including time-related variables in your regression model (e.g. lagged and/or differenced variables).

3) It's possible that you're including different forms of the same variable for both the response variable and a predictor variable. For example, if the response variable is temperature in Celsius and you include a predictor variable of temperature in some other scale, you'd get an R-squared of nearly 100%! That's an obvious example case, but you can have the same thing happening more subtly.

I'm sure this isn't a complete list of possible reasons but it covers the more common cases. I hope it helps!

Name: Kausar • Monday, June 2, 2014

Dear All,
I have done my academic research and used statistical tools like reliability test, regression analysis and factor analysis.
My reliability result is 79.8% (is it good?)
The value of R-squared is 47.6% (I know it is low, but is it acceptable for primary data or not?)
One more question: my research was based on structural equation modelling, but instead of SEM I tested through regression. Is that allowable or not? I don't know how to use SEM.

Needed your experienced answers.


Name: Rosy • Wednesday, June 4, 2014

Hi Jim,
Thanks for your reply. Now I would like to know about the range of the coefficient of determination. I already know that the range of R² is 0 to 1. Then I was told that an acceptable range of R² is 0.3 to 0.6. Is that true? If I send my model to you, could you check it, please? Whatever happens, I thank you for your help. Thank you so much Jim. :)

Name: Jim Frost • Thursday, June 5, 2014

Hi Kausar,

What qualifies as an acceptable R-squared value depends on your field of study. Your R-squared value would be great for many psychology studies but not good for some studies of physical processes. The acceptability of the value also depends on what you want to do with your model. To learn more about this topic, follow the link near the end of this post about "How high should R-squared be?"

I don't have enough context to understand the reliability value. And, sorry, but I don't know enough about structural equation modeling to answer your question.


Name: Winnie • Sunday, June 8, 2014

Could you please provide some references for your comment re: low R-squareds in fields that study human behavior? Thanks.

Name: Ben Sigal • Wednesday, June 18, 2014

How do you interpret R squared of -0.1?

Name: Jim Frost • Monday, June 23, 2014

Hi Ben,

If you have a negative R-squared, it must either be the adjusted or predicted R-squared, because it's impossible for a regular R-squared to be negative.

For either type of R-squared, a negative value is a bad thing. If zero is bad, negative is even worse!

Often you'll get negative values when you have both a very poor model and a very small sample size.

But, there's not really much to be gained by trying to understand what a negative value means. You can interpret it as a value of zero for all intents and purposes.


Name: Ogbu, I.M • Wednesday, July 2, 2014

I am glad I have this opportunity.
Please sir, can I use a regression line and a curve at the same time to interpret my data? I am plotting more than one set of data on one graph, and only scatter makes the work untidy.

Name: annie zahid • Saturday, July 12, 2014

Dear sir, I have an R-squared of 0.05 in my research. What does it mean? The topic is the impact of emotional labor on job satisfaction.

Name: Jim Frost • Monday, July 14, 2014

Hi Annie,

I wrote a blog post that covers how to interpret models that have a low R-squared value. I think it will answer your questions. If after reading it you have further questions, please don't hesitate to write.


Thanks for writing!

Name: Reza • Sunday, August 17, 2014

Hello. I used curve fitting and nonlinear regression analysis in my study. When and how can I report R-squared in my paper?

Name: Jim Frost • Tuesday, August 19, 2014

Hi Reza,

I've written an entire blog post about why you shouldn't use R-squared with nonlinear regression because it usually leads you to an incorrect model. You can read that post here:

You do get legitimate R-squared values when you use polynomials to fit a curve using linear regression. Even though you're fitting a curve, it's still linear regression. Be sure you know exactly which form you are using to fit a curve: nonlinear regression or linear regression with polynomials.

To help you determine which form of regression you are using, read my post on this subject:

Thanks for the great question!
