Abstract
Computer technology is now nearly universal in many forms of U.S. education, yet online classes remain an initiative that many institutions have yet to adopt and implement. In this study, a student's use of a computer within the classroom is either allowed, restricted, or not permitted at all. Sections of an introductory economics course were randomly assigned to these conditions, and students were randomly assigned to course sections. A dummy variable Zi represents the conditions, taking the value 1 if a student is taught in a class with computers and 0 otherwise. Classes that allowed computer use scored 0.18 standard deviations lower than those that did not.
ASSUMPTIONS
The error term has a population mean of zero
The error term accounts for the variation in the dependent variable that the independent variables do not explain. Random chance should determine the values of the error term.
All independent variables are uncorrelated with the error term
The regression model is linear in the coefficients and the error term
This assumption addresses the functional form of the model. In statistics, a regression model is linear when all terms in the model are either the constant or a parameter multiplied by an independent variable.
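For illustration (an example not taken from the text): a model can contain squared or otherwise transformed variables and still be linear in this sense, because every term is either the constant or a parameter multiplied by a regressor, whereas a parameter appearing inside an exponent makes the model nonlinear in the parameters:

```latex
y = \beta_0 + \beta_1 x + \beta_2 x^2 + \epsilon \quad \text{(linear in the parameters)}
```

```latex
y = \beta_0 + x^{\beta_1} + \epsilon \quad \text{(nonlinear in the parameters)}
```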
The error term has a constant variance (no heteroscedasticity)
The variance of the errors should be consistent for all observations. In other words, the variance does not change for each observation or for a range of observations. This preferred condition is known as homoscedasticity.
If an independent variable is correlated with the error term, we can use the independent variable to predict the error term, which violates the notion that the error term represents unpredictable random error.
In all cases the formula for the OLS estimator remains the same: $\hat{\beta} = (X^{T}X)^{-1}X^{T}y$; the only difference is in how we interpret this result.
METHOD OF CALCULATION
i) Form the difference between the dependent variable and its estimate: $e_i = y_i - \hat{y}_i$.
ii) Square the difference: $e_i^2$.
iii) Take the summation over all data: $\sum_i e_i^2$.
iv) To get the parameters that make the sum of squared differences a minimum, take the partial derivative with respect to each parameter and set it equal to zero.
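The steps above can be sketched in code. This is a minimal illustration with made-up data (y = 2 + 3x, no noise, so the fit is exact): setting the partial derivatives of the sum of squared differences to zero yields the familiar closed-form slope and intercept used below.

```python
# Hypothetical data generated from y = 2 + 3x so the result can be checked.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2 + 3 * xi for xi in x]

x_bar = sum(x) / len(x)
y_bar = sum(y) / len(y)

# Slope: sum of cross-deviations divided by sum of squared x-deviations,
# the solution of the normal equations from step (iv).
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
    sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar  # intercept from the first normal equation
print(a, b)  # 2.0 3.0
```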
In order to prove that OLS in matrix form is unbiased, we want to show that the expected value of β̂ is equal to the population coefficient β. First, we must find what β̂ is: to derive OLS we find the value of beta that minimizes the sum of squared residuals (e).
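As a sketch of that proof, substitute the population model $y = X\beta + \epsilon$ into the estimator and take expectations, using the assumption that the error term has mean zero:

```latex
\hat{\beta} = (X^{T}X)^{-1}X^{T}y
            = (X^{T}X)^{-1}X^{T}(X\beta + \epsilon)
            = \beta + (X^{T}X)^{-1}X^{T}\epsilon
```

```latex
E[\hat{\beta}] = \beta + (X^{T}X)^{-1}X^{T}\,E[\epsilon] = \beta
```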
Heteroscedasticity has consequences for the OLS estimator. Although the OLS estimator remains unbiased, the estimated standard errors are wrong. Because of this, confidence intervals and hypothesis tests cannot be relied on.
Method 1
Y = a + bX + ϵ
Where:
Y – Dependent variable
X – Independent (explanatory) variable
a – Intercept
b – Slope
ϵ – Residual (error)
Method 2
Y = a + bX1 + cX2 + dX3 + ϵ
Where:
Y – Dependent variable
X1, X2, X3 – Independent (explanatory) variables
a – Intercept
b, c, d – Slopes
ϵ – Residual (error)
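A minimal sketch of Method 2, assuming hypothetical data generated from known coefficients (a = 1, b = 2, c = -1, d = 0.5, with no noise) so the fit can be checked against the truth:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))             # columns play the role of X1, X2, X3
Y = 1 + X @ np.array([2.0, -1.0, 0.5])   # known coefficients, no noise

# Prepend a column of ones so the first fitted coefficient is the intercept a.
design = np.column_stack([np.ones(len(Y)), X])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
print(coef)  # approximately [1.  2. -1.  0.5]
```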
To carry out a Z-test, find a Z-score for your test or study and convert it to a P-value. If your P-value is lower than the significance level, you can conclude that your observation is statistically significant.
If the confidence interval does not contain the null hypothesis value, the results are statistically significant. If the P value is less than alpha, the confidence interval will not contain the null hypothesis value.
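The Z-test procedure can be sketched as follows. The data are hypothetical (a sample mean of 52 tested against a hypothesized population mean of 50, known population standard deviation 10, n = 100); the two-sided P-value comes from the normal CDF via the complementary error function.

```python
from math import erfc, sqrt

def z_test(sample_mean, pop_mean, pop_sd, n):
    """Two-sided one-sample Z-test with known population standard deviation."""
    z = (sample_mean - pop_mean) / (pop_sd / sqrt(n))
    p = erfc(abs(z) / sqrt(2))  # two-sided p-value: 2 * (1 - Phi(|z|))
    return z, p

z, p = z_test(52, 50, 10, 100)
print(z, p < 0.05)  # z = 2.0, significant at alpha = 0.05
```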
From cases (1) and (2) above, we can conclude from the z-test and its p-value that the parameters are significantly different; hence the effect is economically meaningful.
General steps for an F test: state the null hypothesis and the alternate hypothesis, calculate the F value, find the F statistic (the critical value for this test), and support or reject the null hypothesis.
Because ^β0 and ^β1 are computed from a sample, the estimators themselves are random variables with a probability distribution — the so-called sampling distribution of the estimators — which describes the values they could take on over different samples. The F statistic formula is: F Statistic = variance of the group means / mean of the within group variances. You can find the F Statistic in the F-Table.
The F value is a value on the F distribution. Various statistical tests generate an F value. The value can be used to determine whether the test is statistically significant. The F value is used in analysis of variance (ANOVA). It is calculated by dividing two mean squares.
When you have found the F value, you can compare it with the F critical value in the table. If your observed value of F is larger than the value in the F table, then you can reject the null hypothesis with 95 percent confidence that the variance between your two populations is not due to random chance.
The F-value is the Mean Square Regression (2385.93019) divided by the Mean Square Residual (51.0963039), yielding F=46.69. The p-value associated with this F value is very small (0.0000). These values are used to answer the question “Do the independent variables reliably predict the dependent variable?”.
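The division quoted above can be reproduced directly (the two mean squares are the numbers from the regression output in the text):

```python
# Mean squares taken from the regression output quoted in the text.
ms_regression = 2385.93019
ms_residual = 51.0963039

# F is the ratio of the two mean squares.
F = ms_regression / ms_residual
print(round(F, 2))  # 46.69
```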
Are the results due to random chance?
Tests for statistical significance tell us what the probability is that the relationship we think we have found is due only to random chance. They tell us what the probability is that we would be making an error if we assume that we have found that a relationship exists.
We can never be completely 100% certain that a relationship exists between two variables. There are too many sources of error to be controlled, for example, sampling error, researcher bias, problems with reliability and validity, simple mistakes, etc.
But using probability theory and the normal curve, we can estimate the probability of being wrong if we conclude that the relationship we found is real.