Previously, I showed why there is no R-squared for nonlinear regression. Anyone who uses nonlinear regression will also notice that there are no P values for the predictor variables. What’s going on?
Just like there are good reasons not to calculate R-squared for nonlinear regression, there are also good reasons not to calculate P values for the coefficients.
Why not—and what to use instead—are the subjects of this post!
It may seem like an odd place to start, but the best way to understand why there are no P values in nonlinear regression is to understand why you can calculate them in linear regression.
A linear regression model is a very restricted form of regression equation. The equation is constrained to just one basic form: each term must be either a constant or the product of a parameter and a predictor variable, and the equation is constructed by adding these terms together.
Response = constant + parameter * predictor + ... + parameter * predictor
Y = b0 + b1X1 + b2X2 + ... + bkXk
Thanks to this consistent form, it's possible to develop a hypothesis test for every parameter estimate (coefficient) in any linear regression equation. If a coefficient equals zero, its term always equals zero no matter what value the predictor takes, which means that predictor variable has no effect on the response.
Given this consistency, it's possible to set up the following hypothesis test for all parameters in all linear models:

Null hypothesis: The coefficient equals zero (the predictor has no effect on the response).
Alternative hypothesis: The coefficient does not equal zero.
The P value for each term in linear regression tests this null hypothesis. A low P value (< 0.05) indicates that you have sufficient evidence to conclude that the coefficient does not equal zero, and that changes in the predictor are associated with changes in the response variable.
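To make this concrete, here is a minimal sketch in Python using the statsmodels package (an illustration with simulated data, not part of the original post). The true coefficient on the second predictor is zero, and the coefficient P values reflect that:

import numpy as np
import statsmodels.api as sm

# Simulated data for illustration: the response depends on x1 but not x2
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 3.0 + 2.0 * x1 + rng.normal(size=n)  # true coefficient on x2 is 0

# Fit Y = b0 + b1*X1 + b2*X2 by ordinary least squares
X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

# Every coefficient is tested against the same null hypothesis: coefficient = 0
print(fit.params)   # estimates of b0, b1, b2
print(fit.pvalues)  # small P value for x1, large P value for x2

Because every term has the same form, this one test covers every coefficient in every linear model, which is precisely what nonlinear regression gives up.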
For more detail, see How to interpret P values and coefficients in linear regression analysis.
While a linear model has one basic form, nonlinear equations can take many different forms. There are very few restrictions on how parameters can be used in a nonlinear model.
The upside is that this flexibility lets nonlinear regression fit a nearly unlimited variety of curve shapes.
The downside is that the correct null hypothesis value for each parameter depends on the expectation function, the parameter's place in it, and the field of study. Because the expectation functions can be so wildly different, it’s impossible to create a single hypothesis test that works for all nonlinear models.
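To see why, consider one common expectation function (an illustrative choice, not from the original post): the Michaelis-Menten model Y = theta1*X / (theta2 + X), where theta1 is the maximum response and theta2 is the X value at which the response reaches half that maximum. A test of theta1 = 0 asks whether there is any response at all, while theta2 = 0 is a degenerate boundary value; the meaningful null for each parameter comes from the model and the subject area, not from a universal zero. Here is a minimal fitting sketch using SciPy's curve_fit with simulated data:

import numpy as np
from scipy.optimize import curve_fit

# Michaelis-Menten expectation function:
# theta1 = maximum response, theta2 = X value at half the maximum
def michaelis_menten(x, theta1, theta2):
    return theta1 * x / (theta2 + x)

# Simulated data for illustration
rng = np.random.default_rng(1)
x = np.linspace(0.5, 20.0, 40)
y = michaelis_menten(x, 10.0, 3.0) + rng.normal(scale=0.3, size=x.size)

# Nonlinear least squares; p0 supplies starting values for the iterative fit
popt, pcov = curve_fit(michaelis_menten, x, y, p0=[8.0, 2.0])
print(popt)  # estimates of theta1 and theta2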
Instead of P values, Minitab can display a confidence interval for each parameter estimate. Use your knowledge of the subject area and expectation function to determine if this range is reasonable and if it indicates a significant effect.
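As a rough illustration of where such intervals come from (a sketch of the standard large-sample approach, not necessarily the method Minitab uses), you can convert the parameter covariance matrix from a fit like the one above into approximate 95% confidence intervals:

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(x, theta1, theta2):
    return theta1 * x / (theta2 + x)

# Same simulated fit as in the previous sketch
rng = np.random.default_rng(1)
x = np.linspace(0.5, 20.0, 40)
y = michaelis_menten(x, 10.0, 3.0) + rng.normal(scale=0.3, size=x.size)
popt, pcov = curve_fit(michaelis_menten, x, y, p0=[8.0, 2.0])

# Approximate 95% intervals: estimate +/- 1.96 standard errors
# (large-sample normal approximation; a sketch, not Minitab's exact method)
se = np.sqrt(np.diag(pcov))
for name, est, err in zip(["theta1", "theta2"], popt, se):
    print(f"{name}: {est:.2f}, 95% CI [{est - 1.96*err:.2f}, {est + 1.96*err:.2f}]")

If the interval for a parameter includes values that are implausible in your subject area, or spans a value that would make the term meaningless, that tells you more than a generic test against zero would.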
For examples of nonlinear functions, see What is the difference between linear and nonlinear regression equations?