REGRESSION MODEL, F AND t-STATISTICS

assumptions. 3. Remedy and solve any violation through the recommended approaches. 4. Re-run the modified regression model and test the assumptions again. 5. If there is any further violation, proceed again with Steps 3 and 4. 6. Repeat until you obtain a regression model free from violations of all the assumptions.

If all series are I(0), no cointegration tests are required and OLS can be used.

Traditionally, econometrics is based on classical Ordinary Least Squares (OLS), where we assume that time-series data have a constant mean and variance, that is, that no series in the regression model has a unit root. Thus, when we are faced with purely I(0) series, OLS is the most appropriate estimator. Techniques such as ECM, ARDL, the bounds test, FMOLS and DOLS arise when the use of OLS breaks down.
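As a rough illustration of the plain-OLS baseline described above, here is a minimal pure-Python simple-regression fit via the normal equations (the function name and data are mine, for illustration only; in practice one would use a package such as EViews or statsmodels):

```python
# Minimal simple-regression OLS fit (hypothetical helper, not from the thread).
# When every series is I(0), plain OLS like this is the appropriate starting point.

def ols_fit(x, y):
    """Return (intercept, slope) for y = a + b*x via the normal equations."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope: sample covariance of (x, y) divided by sample variance of x
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x
    return a, b

x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]   # exactly y = 1 + 2x
intercept, slope = ols_fit(x, y)
```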

Carlos Valdes commented> The F value is for the hypothesis "all betas are equal to 0"; right below it is the p-value...

Sivarajasingham Selliah commented> The F-test statistic is for the overall significance test.

Kara Brahim commented> The F statistic of this model, equal to 748.88, is greater than the F critical value with (3, 19) degrees of freedom, meaning that the null hypothesis (all coefficients are zero) is rejected; the p-value is less than 0.05, so all coefficients are jointly significant (the model is good).
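The joint-significance F statistic discussed in these comments can be recovered from R-squared with the standard formula F = (R²/k) / ((1 − R²)/(n − k − 1)). A small sketch; the numbers below are illustrative, not the thread's actual output:

```python
# F statistic for the joint null "all slope coefficients are zero",
# computed from R-squared (standard textbook formula).

def f_statistic(r2, n, k):
    """F = (R^2 / k) / ((1 - R^2) / (n - k - 1)) for k regressors and n observations."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

# Illustrative numbers: R^2 = 0.75, n = 24 observations, k = 3 regressors,
# giving degrees of freedom (3, 20).  Compare the result to the tabled
# F critical value to decide the joint-significance test.
f = f_statistic(0.75, 24, 3)
```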

Hifsa Syed commented> The F test shows the overall significance of the model. The calculated value is greater than the critical value, signifying that the model is statistically significant.

Professor Taiwo Timi commented> The model does not look well fitted overall, but can we assume a situation where both X1 and X3 cannot be statistically significant in explaining the dependent variable? A good example of this is taxation and economic growth in developing African countries.

Professor Akhtar Khan commented> X1 and X3 are insignificant; only X2 is significant. R-square is very low, just 16%. The Durbin-Watson test value is near 2, which means no autocorrelation or serial correlation. The whole model is not significant because the F-test value is insignificant.

Professor Abubakar Kumo commented> (1) The model does not fit the data well. (2) Only one variable, X2, is significant at the 5% level. (3) The D.W. > 2 indicates that there is no autocorrelation or serial correlation. (4) Given the F-stat and an adjusted R² of just 8%, only 8% of the variation in the dependent variable is explained by the model. This is not a good fit; the model did not fit the data well.

Emine Kılavuz commented> The independent variables which were selected are not correct; they cannot explain changes in the dependent variable.

Gerardo Andrés Milano Gallardo commented> The Durbin-Watson stat is nearly two, but there is just one significant variable. It could be a multicollinearity case, if I am not wrong.

Arbab Tahir Khan commented> Here, of the three independent variables only one is individually significant; the model is not a good fit, R-square is also low, and the F stat shows the reliability of the overall model, but its p-value here is 12.13%, which is more than 5%, so the overall model is not a good fit.

Ernest Tubolayefa commented> Using the DW value, the null of no positive/negative autocorrelation cannot be rejected, given the upper-bound (du) critical DW value of 1.65 for 35 observations and 3 explanatory variables. Thus, (du, 4-du) becomes (1.65, 2.35). The calculated DW value of 2.184 falls within (1.65, 2.35), which is the region where we cannot reject the null. Thus, there is no positive (or negative) autocorrelation in the model. For further explanation: go to the Durbin-Watson table and read off the value for the upper limit (du) with 35 observations and 3 explanatory variables (k=3) at the 5% significance level, which is 1.65. For first-order positive serial correlation, the calculated DW must be less than 1.65, but in the above model the calculated DW is 2.184, so there is no positive serial correlation in the model. On the other hand, for negative serial correlation, the calculated DW must be greater than (4-du); given that du=1.65, (4-du) becomes 4-1.65 = 2.35, but in the model above 2.184 < 2.35, so there is no negative serial correlation in the model.
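The bounds rule Ernest describes can be sketched in a few lines of Python. This is a simplified decision rule that, unlike the full table, ignores the inconclusive zone between dL and dU; the function names are mine:

```python
def durbin_watson(residuals):
    """DW statistic: sum of squared successive differences over sum of squared residuals."""
    num = sum((residuals[t] - residuals[t - 1]) ** 2 for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

def dw_verdict(dw, du):
    """Bounds rule quoted above: no serial correlation if du < DW < 4 - du.

    Simplification: the inconclusive region between dL and dU is not modelled.
    """
    if dw < du:
        return "possible positive serial correlation"
    if dw > 4 - du:
        return "possible negative serial correlation"
    return "no serial correlation"

# Ernest's numbers: DW = 2.184, du = 1.65 for n = 35, k = 3 at the 5% level.
verdict = dw_verdict(2.184, 1.65)
```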

Mosikari Teboho commented> Variables X1 and X3 are not statistically significant, which might be a weakness, and our goodness of fit is low, which is another weakness. The strength could be that the D.W. is 2.18, which suggests no serial correlation.

Dada James Temitope commented> Only X2 is significant; R² is very low, which means the explanatory variables do not fit the model well. Also, the F statistic that measures the overall significance of the model is not significant.

Sibawaihee Dayyabu commented> Only one variable (i.e. X2) is significant at 5%. The model is free from the effect of serial correlation because the value of DW is greater than 2. Despite these pros, the model has low predictive power due to the low value of R² (i.e. 0.087).

Tella Oluwatoba Ibrahim commented> But if an explanatory variable doesn't follow economic theory, I see nothing wrong with it as long as the researcher can justify why the variable failed to follow the a priori sign. For instance, cost-push inflation theory claims that a higher interest rate will raise the cost of borrowing, which in turn leads to a higher cost of production, and in the theory price is believed to be a positive function of the cost of production (as well as the interest rate). But if your model displays a negative sign, you needn't use physical alteration; rather, provide justification for such a sign. You may say: contrary to the a priori expectation, the interest rate exerts a negative impact on inflation. This may be attributed to high non-performing loans in the economy, in such a way that a reduction in the interest rate encourages economic agents to borrow, but most components of the loans go into transactional demand, which tends to push demand above what is needed to clear the market.

Ayaz Khan commented> the last feature mostly covers the other features....

Prabhat Majumdar commented> The model should not have irrelevant variables. The Xs must not be strongly correlated with each other. Causality must be unidirectional, and a single relation should be sufficient. Exogeneity tests are necessary.

Rohin Otieno commented> The regression model should be linear and correctly specified, with an additive error term.

Katji Makatjane commented> All the regression model estimates should be BLUE (best linear unbiased estimators).

...significant as you can see, and stationary at first difference, but it suffers from autocorrelation: the D.W. is lower than du from the D.W. table. R-square is high, and it suffers from serial correlation as well. I'm a beginner in econometrics; please, I need your help. Tell me how to eliminate it.

Barka Ahmed posted the following figure. (Dec 30, 2015)

Saeed Aas Khan Meo commented> Convert your variables into log form and run the regression again, and maybe increase the number of observations.

Gurpreet Singh commented> Here's what I can suggest from my elementary knowledge:

1. Try different functional forms, like log etc. Your model may be misspecified.

2. Try GLS instead of OLS estimation.
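To illustrate the log-form suggestion: if the true relationship is multiplicative, y = A·x^b, regressing log(y) on log(x) recovers the elasticity b as the slope. A minimal sketch with made-up exact data (the helper name is mine):

```python
import math

def slope(x, y):
    """OLS slope of y on x via the normal equations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)

# Exact power relationship y = 3 * x^2, so log y = log 3 + 2 * log x:
x = [1.0, 2.0, 4.0, 8.0]
y = [3.0 * xi ** 2 for xi in x]
b = slope([math.log(v) for v in x], [math.log(v) for v in y])  # elasticity estimate
```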

Aadil Shah commented> Please send me the base paper and data; I will give you an accurate result. Aadilshah777@gmail.com

Lim Kim Juan commented> Spurious regression: R-squared is greater than the Durbin-Watson statistic.

Tarek Djeddi commented> Spurious regression. Please test for multicollinearity.

Valdemir Galvao de Carvalho commented> The omission of one or more explanatory variables is reflected in the residuals, whose values tend to be autocorrelated. Poor specification of the mathematical form: depending on the structure of the data, you must perform an exploratory analysis of the data so that the most appropriate model is chosen. Imperfect adjustment of statistical series: much published data contains interpolations or smoothing, which may make the random disturbances correlated over time. Once autocorrelation is diagnosed, it is possible to eliminate its effects through transformations of the variables. Three methods to correct autocorrelation are: the Cochrane-Orcutt estimate, Durbin's two-stage method, and the method of first differences.
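Two of the corrections Valdemir lists, Cochrane-Orcutt and first differences, rest on quasi-differencing the data with an estimate of the residual AR(1) coefficient ρ; setting ρ = 1 reduces to first differences. A sketch of that transform under these assumptions (the helper names are mine):

```python
def estimate_rho(residuals):
    """AR(1) coefficient of the residuals: rho = sum(e_t * e_{t-1}) / sum(e_{t-1}^2)."""
    num = sum(residuals[t] * residuals[t - 1] for t in range(1, len(residuals)))
    den = sum(residuals[t - 1] ** 2 for t in range(1, len(residuals)))
    return num / den

def quasi_difference(series, rho):
    """Cochrane-Orcutt transform z*_t = z_t - rho * z_{t-1} (drops the first observation).

    Applying this to y and every x, then re-running OLS, is one iteration of
    Cochrane-Orcutt; rho = 1 gives the method of first differences.
    """
    return [series[t] - rho * series[t - 1] for t in range(1, len(series))]

rho = estimate_rho([1.0, 0.5, 0.25, 0.125])   # geometrically decaying residuals
y_star = quasi_difference([1.0, 2.0, 3.0], rho)
```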

Khaled Elbeydi commented> Try the log form, and if you still have autocorrelation, add an AR(1) term.

Jabbar Ali commented> It's spurious regression; try to increase your sample size and take the log of the variables.

Momal Faizan commented> First check the correlation among the variables through EViews and eliminate those variables having high correlation values; then proceed further.

Hassan Danish commented> If your variables are stationary at 1st difference, then why are you applying OLS? Move to cointegration analysis.

Tarek Djeddi commented> EViews uses LS, not OLS, which is a method that can't be used in practice because it can be used only when all the stochastic hypotheses are satisfied, and this is impossible.

Sayed Hossain commented> Just create a 1-period lag of the dependent variable (INF) and run the model again with this lagged INF as an independent variable. I hope your autocorrelation problem will be solved. We also call it an AR(1) process. Even if you write AR(1) as an independent variable and run the model again, autocorrelation is likely to be removed.
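The lagged-dependent-variable remedy Sayed describes amounts to rebuilding the dataset so that each observation includes y_{t-1} as an extra regressor. A minimal sketch of that data step (the helper name is mine; the fitting itself would then be done in EViews or any OLS routine):

```python
def add_lagged_dependent(y, x_rows):
    """Align y_t with (y_{t-1}, x_t, ...): drop the first observation and
    prepend y_{t-1} to each remaining row of regressors."""
    new_y = y[1:]
    new_x = [[y[t - 1]] + list(x_rows[t]) for t in range(1, len(y))]
    return new_y, new_x

y = [5.0, 6.0, 7.0, 8.0]
x = [[1.0], [2.0], [3.0], [4.0]]   # one original regressor per period
y2, x2 = add_lagged_dependent(y, x)
```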

Budi Setiawan commented> I'm focusing on the Durbin-Watson value of 1.88, and that value is still in the range -2 < DW < +2, so the conclusion will be no autocorrelation.

Younes Azzouz commented> -2 < DW < 2? Where did you get that formula? DW lies between 0 and 4.

Muhammad Asghar commented> As per the DW, there is no serial autocorrelation because it is approaching 2; further, it may be verified that the particular value does not fall in the inconclusive zone at this degree of freedom.

Meher Afroze commented> A large sample size, only two explanatory variables, and a DW close to 2 imply no serial correlation.

Saeed Aas Khan Meo commented> No serial correlation; you can also check serial correlation through this table: http://saeedmeo.blogspot.com/2015/10/autocorrelation.html

Syed Masroor Hussain Zaidi commented> No autocorrelation or serial correlation, because the DW is around 2 and the model has no lags.

Ansarabbas Abbas commented> You should put the data in an Excel file and make a scatter plot. If the relationship is linear, then serial correlation does not exist; if the scatter plot is nonlinear, then there is serial correlation. Serial correlation commonly exists in time-series data but not in cross-sectional data.

Ansarabbas Abbas commented> Numerical figures provide us just clues, not complete information; you must go out into the real world and spend shoe leather.

Naila Erum commented> No serial correlation

Hassan Danish commented> Numerical values cannot give us the right picture of autocorrelation, but I think autocorrelation is present in this model, because when errors are serially correlated the value of R² becomes very small.

Ansarabbas Abbas commented> In time-series data, time plays a significant role as a third, implicit variable between the dependent and independent variables. The classical assumption that the error term follows a normal distribution is not fulfilled; hence the problem of serial correlation is significant.

Sayed Hossain commented> Only the intercept is significant. R-square is extremely low, not a good sign. The F statistic is also not significant, a bad sign too. As the DW is close to 2, probably there is no serial correlation.

Ade Kutu

Afolabi Luqman

Abdullah Sonnet

Asad Zaman

Atiq Rehman

Burcu Özcan

Ghumro Niaz Hussain

Muhammad Anees

Mohammad Zhafran

Muzammil Bhatti

Monis Syed

Mine PD

Moulana N. Cholovik

Muili Adebayo Hamid

Nicat Gasim

Najid Iqbal

Nasiru Inuwa

Noman Arshed

Rapelanoro Nady

Seye Olasehinde-Williams

Suborno Aditya

Sayed Hossain

Shishir Sakya

Sheikh Muzammil Naseer

Tella Oluwatoba Ibrahim

Younes Azzouz


Hossain Academy Note
