Linear modeling and regression analysis: A mini-review of regression pros and cons

Document Type: Review

Authors

1 Crop and Horticulture Research Department, Kermanshah Agricultural and Natural Resources Research and Education Center (AREEO), Kermanshah, Iran.

2 Department of Plant Protection, College of Agriculture, Razi University, Kermanshah, Iran.

3 Department of Natural Resources Engineering, College of Agriculture, Shiraz University, Shiraz, Iran.

4 Department of Plant Protection, College of Agriculture, University of Kurdistan, Sanandaj, Iran.

Abstract

Introduction: Powerful and practical statistical packages have simplified data analysis and thereby expanded the application of data science across research fields. Accordingly, regression has been applied to almost all aspects of the life sciences. However, misuse of this method has been reported over the past decades. This article examines modeling with this important statistical method and introduces readers to its correct use.
Materials and methods: This review uses real data; the supplementary materials describe how to perform the regression analysis in the SAS and R statistical packages and provide the related code.
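A minimal sketch of the kind of analysis described, written in R (the article's actual code is provided in the supplementary materials; the data set and variable names below are hypothetical):

    # Hypothetical data: response 'yield' and two explanatory variables.
    set.seed(1)
    dat <- data.frame(height = rnorm(30, 90, 10), biomass = rnorm(30, 50, 5))
    dat$yield <- 2 + 0.05 * dat$height + 0.10 * dat$biomass + rnorm(30, 0, 0.5)
    fit <- lm(yield ~ height + biomass, data = dat)  # ordinary least squares
    summary(fit)  # coefficients, R-squared, overall F-test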
Results: Among the assumptions of the regression model, the residuals must be normally distributed; testing the normality of the raw values of the response variable, or of any explanatory variable, is not required. Researchers therefore should not be overly concerned with the normal distribution of the raw data. On the other hand, many normality tests, such as the Kolmogorov-Smirnov test, are designed for large samples, typically more than a thousand observations, so using them to test the normality of residuals estimated from small data sets, often fewer than a hundred cases, can be inaccurate. Another issue in applying the regression model is collinearity among the explanatory variables. Even in a data set in which all variables are generated independently and at random in a statistical package, some correlation remains: it is very hard to find a correlation coefficient of exactly zero (r = 0) even between a pair of independent random variables. Some degree of correlation between explanatory variables is therefore present in every regression model; the important point is that only high correlation causes severe problems. For diagnosing collinearity, it is better to use specialized methods such as the variance inflation factor (VIF) or principal component analysis (PCA). Linearity is another assumption of the regression model, and data transformation may help when the relationship is non-linear; however, transformation changes the unit of a variable and thereby alters the direction of its vector in geometric space. Researchers should also be aware that modeling a large number of data points affects the probability values in the analysis of variance by inflating the degrees of freedom of the model.
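The diagnostics discussed above can be sketched in R, reusing the hypothetical fit object from the previous example; note that the normality test is applied to the residuals, not the raw data, and the car package is an assumption here (any VIF implementation would serve):

    res <- residuals(fit)
    shapiro.test(res)          # Shapiro-Wilk suits the small samples typical here
    qqnorm(res); qqline(res)   # visual check of residual normality
    library(car)               # assumed installed; provides vif()
    vif(fit)                   # values well above ~10 signal harmful collinearity
    prcomp(dat[, c("height", "biomass")], scale. = TRUE)  # PCA of the predictors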
Conclusion: As the number of data points increases, the degrees of freedom of the error term increase rapidly, so the error mean square shrinks substantially even when the scatter of the data points around the regression line remains wide. For this reason, the coefficient of determination (R-squared) is a suitable criterion for assessing the model's fit; high values indicate a model that suits the data set used. Note that in a multiple regression model, this coefficient increases as more explanatory variables are added. For such conditions, when the number of explanatory variables is large, an alternative form, the adjusted coefficient of determination (adjusted R²), has been introduced. Using this coefficient effectively limits the number of variables that can enter the regression model: the number of explanatory variables should not exceed the number of samples (some recommend no more than about one variable per ten observations), and researchers should avoid fitting more variables than observations.
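The penalty the conclusion describes can be made explicit. With n samples and p explanatory variables, adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1), so each added variable must improve the fit enough to offset the lost degree of freedom. A short R sketch, again using the hypothetical fit:

    s <- summary(fit)
    n <- nrow(dat)
    p <- length(coef(fit)) - 1                     # number of explanatory variables
    1 - (1 - s$r.squared) * (n - 1) / (n - p - 1)  # equals s$adj.r.squared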
