On the Difference in Means using the t-Test
Tom grew up in the City of Mohawk, the land of natural springs, known for its pristine water. Tom’s house is alongside the west branch of the Mohawk, one such pristine river. Every once in a while, Tom and his family go to the nature park on the banks of the Mohawk. It is customary for Tom and his children to take a swim.
Lately, he has been reading in the local newspapers that the river’s arsenic levels have increased. Tom starts attributing the cause of the alleged arsenic increase to this new factory in his neighborhood just upstream of Mohawk. They could be illegally dumping their untreated waste into the west branch.
He decided to test the waters of the west branch and the east branch of the Mohawk River.
His buddy Ron, an environmental engineer, would help him with the laboratory testing and the like.
Over the next ten days, Tom and Ron collected water samples from the west and east branches, and Ron got them tested in his lab for arsenic concentration.
In parts per billion, the sample data looked like this.
West Branch: 3, 7, 25, 10, 15, 6, 12, 25, 15, 7
East Branch: 4, 6, 24, 11, 14, 7, 11, 25, 13, 5
If Tom’s theory is correct, they should find the average arsenic concentration in the west branch to be significantly greater than the average arsenic concentration in the east branch.
How can Tom test his theory?
.
.
.
You are right!
He can use the hypothesis testing framework and verify if there is evidence beyond a statistical doubt.
Tom establishes the null and alternate hypotheses. He assumes that the factory does not illegally dump their untreated waste into the west branch, so the average arsenic concentration in the west branch should be equal to the average arsenic concentration in the east branch of the Mohawk River. Or, the difference in their means is zero.
Against this null hypothesis, he pits his theory that they are indeed illegally dumping their untreated waste. So, the average arsenic concentration in the west branch should be greater than the average arsenic concentration in the east branch of the Mohawk River — the difference in their means is greater than zero.
The alternate hypothesis is one-sided. A significant positive difference needs to be seen to reject the null hypothesis.
Tom is taking a 10% risk of rejecting the null hypothesis; $\alpha = 0.1$. His Type I error is 10%.
Suppose the factory does not affect the water quality, but the ten samples he collected show a difference in the sample means much greater than zero; then he would reject the null hypothesis even though it is true. So he is committing an error (Type I error) in his decision making.
You may already know that there is a certain level of subjectivity in the choice of $\alpha$.
Tom may want to prove that this factory is the leading cause of the increased arsenic levels in the west branch. So he could choose to accept a greater risk of rejecting the null hypothesis, i.e., he would be inclined to select a larger value of $\alpha$.
Someone who represents the factory management would be inclined to select a smaller value for $\alpha$, as that makes it less likely that the null hypothesis is rejected.
So, assuming that the null hypothesis is true, the decision to reject or not to reject it is based on the value one chooses for $\alpha$.
Anyhow, now that the basic testing framework is set up, let’s look at what Tom needs.
He needs a test-statistic to represent the difference in the means of two samples.
He needs the null distribution that this test-statistic converges to. In other words, he needs a probability distribution of the test-statistic to verify his null hypothesis -- how likely it is to see a value as large as (or larger than) the test-statistic in the null distribution.
Let’s take Tom on a mathematical excursion
There are two samples represented by random variables $X_{1}$ and $X_{2}$.
The mean and variance of $X_{1}$ are $\mu_{1}$ and $\sigma_{1}^{2}$. We have one sample of size $n_{1}$ from this population. Suppose the sample mean and the sample variance are $\bar{x}_{1}$ and $s_{1}^{2}$.
The mean and variance of $X_{2}$ are $\mu_{2}$ and $\sigma_{2}^{2}$. We have one sample of size $n_{2}$ from this population. Suppose the sample mean and the sample variance are $\bar{x}_{2}$ and $s_{2}^{2}$.
The hypothesis test is on the difference in means;
$$H_{0}: \mu_{1} - \mu_{2} = 0$$
$$H_{A}: \mu_{1} - \mu_{2} > 0$$
Naturally, a good estimator of the difference in population means ($\mu_{1} - \mu_{2}$) is the difference in sample means ($\bar{x}_{1} - \bar{x}_{2}$).
If we know the probability distributions of $\bar{x}_{1}$ and $\bar{x}_{2}$, we could perhaps infer the probability distribution of $y = \bar{x}_{1} - \bar{x}_{2}$.
The sample mean is an unbiased estimate of the true mean, so the expected value of the sample mean is equal to the truth: $E[\bar{x}] = \mu$. We learned this in Lesson 67.
The variance of the sample mean ($V[\bar{x}]$) is $\frac{\sigma^{2}}{n}$. It indicates the spread around the center of the distribution. We learned this in Lesson 68.
Putting these two together, and with the central limit theorem, we can say
$$\bar{x} \sim N\left(\mu, \frac{\sigma^{2}}{n}\right)$$
So, for the two samples,
$$\bar{x}_{1} \sim N\left(\mu_{1}, \frac{\sigma_{1}^{2}}{n_{1}}\right)$$
$$\bar{x}_{2} \sim N\left(\mu_{2}, \frac{\sigma_{2}^{2}}{n_{2}}\right)$$
If $\bar{x}_{1}$ and $\bar{x}_{2}$ are normal distributions, it is reasonable to assume that $y = \bar{x}_{1} - \bar{x}_{2}$ will be a normal distribution.
We should see what $E[y]$ and $V[y]$ are.
Expected Value of y
Since the expected value of the sample mean is the true population mean,
$$E[y] = E[\bar{x}_{1} - \bar{x}_{2}] = E[\bar{x}_{1}] - E[\bar{x}_{2}] = \mu_{1} - \mu_{2}$$
Variance of y
$$V[y] = V[\bar{x}_{1} - \bar{x}_{2}] = V[\bar{x}_{1}] + V[\bar{x}_{2}]$$
(can you tell why?)
Since the variance of the sample mean ($V[\bar{x}]$) is $\frac{\sigma^{2}}{n}$,
$$V[y] = \frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}}$$
Using the expected value and the variance of $y$, we can now say that
$$y \sim N\left(\mu_{1} - \mu_{2}, \frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}}\right)$$
Or, if we standardize it,
$$z = \frac{y - (\mu_{1} - \mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}}}} \sim N(0, 1)$$
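If you would like to see this for yourself, here is a minimal simulation sketch in Python. The population parameters are made up purely for illustration; only the structure of the computation matters.

```python
import numpy as np

# Hypothetical population parameters, chosen only for illustration
mu1, sigma1, n1 = 12.0, 7.5, 10
mu2, sigma2, n2 = 12.0, 7.5, 10

rng = np.random.default_rng(seed=1)
trials = 50_000

# Sample means from repeated sampling of both populations
xbar1 = rng.normal(mu1, sigma1, size=(trials, n1)).mean(axis=1)
xbar2 = rng.normal(mu2, sigma2, size=(trials, n2)).mean(axis=1)

# Standardized difference in sample means
y = xbar1 - xbar2
z = (y - (mu1 - mu2)) / np.sqrt(sigma1**2 / n1 + sigma2**2 / n2)

# If z ~ N(0, 1), the simulated mean and variance should be ~0 and ~1
print(round(z.mean(), 3), round(z.var(), 3))
```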
At this point, it might be tempting to say that z is the test-statistic and the null distribution is the standard normal distribution.
Hold on!
We can certainly say that, under the null hypothesis, $\mu_{1} - \mu_{2} = 0$. So, $z$ further reduces to
$$z = \frac{y}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}} + \frac{\sigma_{2}^{2}}{n_{2}}}}$$
However, there are two unknowns here: the population variances $\sigma_{1}^{2}$ and $\sigma_{2}^{2}$.
Think about making some assumptions about these unknowns.
What could be a reasonable estimate for these unknowns?
.
.
.
You got it. The sample variances $s_{1}^{2}$ and $s_{2}^{2}$.
Now, let’s take the samples that Tom and Ron collected and compute the sample mean and sample variance for each. The equations for these are anyhow at your fingertips!
West Branch: 3, 7, 25, 10, 15, 6, 12, 25, 15, 7
$$\bar{x}_{1} = \frac{1}{10}(3 + 7 + \dots + 7) = 12.5$$
$$s_{1}^{2} = \frac{1}{10-1}\sum_{i=1}^{10}(x_{i} - \bar{x}_{1})^{2} = 58.28$$
East Branch: 4, 6, 24, 11, 14, 7, 11, 25, 13, 5
$$\bar{x}_{2} = \frac{1}{10}(4 + 6 + \dots + 5) = 12.0$$
$$s_{2}^{2} = \frac{1}{10-1}\sum_{i=1}^{10}(x_{i} - \bar{x}_{2})^{2} = 54.89$$
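If you would rather let the computer do this arithmetic, here is a short numpy sketch; `ddof=1` gives the sample variance with the $n-1$ denominator.

```python
import numpy as np

west = np.array([3, 7, 25, 10, 15, 6, 12, 25, 15, 7])
east = np.array([4, 6, 24, 11, 14, 7, 11, 25, 13, 5])

# ddof=1 divides by (n - 1), the unbiased sample variance
print(west.mean(), west.var(ddof=1))  # 12.5, ~58.28
print(east.mean(), east.var(ddof=1))  # 12.0, ~54.89
```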
Look at the value of the sample variance. 58.28 and 54.89. They seem close enough. Maybe, just maybe, could the variance of the samples be equal?
I am asking you to entertain the proposition that the population variances of the two random variables $X_{1}$ and $X_{2}$ are equal and that we are comparing the difference in the means of two populations whose variance is equal.
Say,
$$\sigma_{1}^{2} = \sigma_{2}^{2} = \sigma^{2}$$
Our test-statistic will now reduce to
$$z = \frac{y}{\sqrt{\sigma^{2}\left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right)}}$$
or
$$z = \frac{y}{\sigma\sqrt{\frac{1}{n_{1}} + \frac{1}{n_{2}}}}$$
There is a common variance $\sigma^{2}$, and it should suffice to come up with a reasonable estimate for this combined variance.
Let’s say that $s^{2}$ is a good estimate for $\sigma^{2}$.
We call this the estimate for the pooled variance.
If we have a formula for $s^{2}$, and if it is an unbiased estimate for $\sigma^{2}$, then we can substitute $s^{2}$ for $\sigma^{2}$.
What is the right equation for $s^{2}$?
Since $s^{2}$ is the estimate for the pooled variance, it will be reasonable to assume that it is some weighted average of the individual sample variances.
$$s^{2} = w_{1}s_{1}^{2} + w_{2}s_{2}^{2}$$
Let’s compute the expected value of $s^{2}$.
In Lesson 70, we learned that the sample variance is an unbiased estimate for the population variance $\sigma^{2}$. The factor $n-1$ in its denominator corrects for the bias that comes from using the sample mean $\bar{x}$ in place of the true mean $\mu$.
An unbiased estimate means $E[s^{2}] = \sigma^{2}$. If we apply this understanding to the pooled variance equation, we can see that
$$E[s^{2}] = w_{1}E[s_{1}^{2}] + w_{2}E[s_{2}^{2}]$$
Since $E[s_{1}^{2}] = \sigma^{2}$ and $E[s_{2}^{2}] = \sigma^{2}$, we get
$$E[s^{2}] = (w_{1} + w_{2})\sigma^{2}$$
To get an unbiased estimate for $\sigma^{2}$, we need the weights to add up to 1. What could those weights be? Could they relate to the sample sizes?
With that idea in mind, let’s take a little detour.
Let me ask you a question.
What is the probability distribution that relates to sample variance?
.
.
.
You might have to go down memory lane to Lesson 73.
$\frac{(n-1)s^{2}}{\sigma^{2}}$ follows a Chi-square distribution with $(n-1)$ degrees of freedom. We learned that this term is a sum of $(n-1)$ squared standard normal distributions.
So,
$$\frac{(n_{1}-1)s_{1}^{2}}{\sigma^{2}} \sim \chi^{2}_{n_{1}-1}$$
$$\frac{(n_{2}-1)s_{2}^{2}}{\sigma^{2}} \sim \chi^{2}_{n_{2}-1}$$
Since the two samples are independent, the sum of these two terms, $\frac{(n_{1}-1)s_{1}^{2}}{\sigma^{2}}$ and $\frac{(n_{2}-1)s_{2}^{2}}{\sigma^{2}}$, will follow a Chi-square distribution with $(n_{1}+n_{2}-2)$ degrees of freedom.
Add them and see. $\frac{(n_{1}-1)s_{1}^{2}}{\sigma^{2}}$ is a sum of $(n_{1}-1)$ squared standard normal distributions, and $\frac{(n_{2}-1)s_{2}^{2}}{\sigma^{2}}$ is a sum of $(n_{2}-1)$ squared standard normal distributions. Together, they are a sum of $(n_{1}+n_{2}-2)$ squared standard normal distributions.
Since
$$\frac{(n_{1}-1)s_{1}^{2}}{\sigma^{2}} + \frac{(n_{2}-1)s_{2}^{2}}{\sigma^{2}} \sim \chi^{2}_{n_{1}+n_{2}-2}$$
we can say,
$$\frac{(n_{1}-1)s_{1}^{2} + (n_{2}-1)s_{2}^{2}}{\sigma^{2}} \sim \chi^{2}_{n_{1}+n_{2}-2}$$
So we can think of developing weights in terms of the degrees of freedom of the Chi-square distribution. The first sample contributes $(n_{1}-1)$ degrees of freedom, and the second sample contributes $(n_{2}-1)$ degrees of freedom towards a total of $(n_{1}+n_{2}-2)$.
So the weight of the first sample can be $w_{1} = \frac{n_{1}-1}{n_{1}+n_{2}-2}$, and the weight of the second sample can be $w_{2} = \frac{n_{2}-1}{n_{1}+n_{2}-2}$, and they add up to 1.
This then means that the equation for the estimate of the pooled variance is
$$s^{2} = \frac{(n_{1}-1)s_{1}^{2} + (n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}$$
Since it is an unbiased estimate of $\sigma^{2}$, we can use $s^{2}$ in place of $\sigma^{2}$ in our test-statistic, which then looks like this.
$$t_{0} = \frac{y}{\sqrt{s^{2}\left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right)}}$$
where $s^{2} = \frac{(n_{1}-1)s_{1}^{2} + (n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}$ is the estimate for the pooled variance.
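In code, the whole recipe fits in a few lines. Here is a small Python sketch of the pooled-variance t-statistic; the function name and structure are my own, not from any particular library.

```python
import numpy as np

def pooled_t_statistic(x1, x2):
    """Two-sample t-statistic under the equal-variance proposition."""
    n1, n2 = len(x1), len(x2)
    s1_sq = np.var(x1, ddof=1)  # sample variances with (n - 1) denominators
    s2_sq = np.var(x2, ddof=1)
    # Pooled variance: sample variances weighted by degrees of freedom
    s_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
    t0 = (np.mean(x1) - np.mean(x2)) / np.sqrt(s_sq * (1 / n1 + 1 / n2))
    return t0, n1 + n2 - 2  # the statistic and its degrees of freedom

west = [3, 7, 25, 10, 15, 6, 12, 25, 15, 7]
east = [4, 6, 24, 11, 14, 7, 11, 25, 13, 5]
print(pooled_t_statistic(west, east))  # (~0.1486, 18)
```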
Did you notice that I use $t_{0}$ to represent the test-statistic?
Yes, I am getting to the T-distribution with it.
It is a logical extension of the idea that, for one sample, $\frac{\bar{x} - \mu}{s/\sqrt{n}}$ follows a T-distribution with $(n-1)$ degrees of freedom.
There, the idea was derived from the fact that when you replace the population variance $\sigma^{2}$ with the sample variance $s^{2}$, and the sample variance is related to a Chi-square distribution with $(n-1)$ degrees of freedom, the test-statistic follows a T-distribution with $(n-1)$ degrees of freedom. Check out Lesson 73 (Learning from “Student”) to refresh your memory.
Here, in the case of the difference in means between two samples ($y = \bar{x}_{1} - \bar{x}_{2}$), the pooled population variance $\sigma^{2}$ is replaced by its unbiased estimator $s^{2}$, which, in turn, is related to a Chi-square distribution with $(n_{1}+n_{2}-2)$ degrees of freedom.
Hence, under the proposition that the population variances of the two random variables $X_{1}$ and $X_{2}$ are equal, the test-statistic is
$$t_{0} = \frac{\bar{x}_{1} - \bar{x}_{2}}{\sqrt{s^{2}\left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right)}}$$
and it follows a T-distribution with $(n_{1}+n_{2}-2)$ degrees of freedom.
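You can check this claim by simulation as well. The sketch below repeatedly samples two populations with a common variance under a true null hypothesis and compares the tail of the simulated test-statistics with the T-distribution; the parameter values are again made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
n1, n2, mu, sigma = 10, 10, 10.0, 7.5  # null is true, variances equal
trials = 50_000

t_values = np.empty(trials)
for i in range(trials):
    x1 = rng.normal(mu, sigma, n1)
    x2 = rng.normal(mu, sigma, n2)
    s_sq = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    t_values[i] = (x1.mean() - x2.mean()) / np.sqrt(s_sq * (1 / n1 + 1 / n2))

# Both tail probabilities should agree (~0.10)
print((t_values > 1.33).mean())
print(1 - stats.t.cdf(1.33, df=n1 + n2 - 2))
```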
Now let’s evaluate the hypothesis that Tom set up — Finally!!
We can compute the test-statistic and check how likely it is to see such a value in a T-distribution (null distribution) with so many degrees of freedom.
$$t_{0} = \frac{\bar{x}_{1} - \bar{x}_{2}}{\sqrt{s^{2}\left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right)}}$$
where $s^{2} = w_{1}s_{1}^{2} + w_{2}s_{2}^{2}$, with $w_{1} = \frac{n_{1}-1}{n_{1}+n_{2}-2}$ and $w_{2} = \frac{n_{2}-1}{n_{1}+n_{2}-2}$.
In Tom’s case, $n_{1} = n_{2} = 10$. So the weights will each be equal to 0.5.
$$s^{2} = 0.5 \times 58.28 + 0.5 \times 54.89 = 56.58$$
$$t_{0} = \frac{12.5 - 12.0}{\sqrt{56.58\left(\frac{1}{10} + \frac{1}{10}\right)}} = 0.1486$$
The test-statistic is 0.1486. Since the alternate hypothesis is that the difference is greater than zero ($\mu_{1} - \mu_{2} > 0$), Tom has to verify how likely it is to see a value greater than 0.1486 in the null distribution. Tom has to reject the null hypothesis if this probability (the p-value) is smaller than $\alpha$, the selected rate of rejection. A p-value smaller than $\alpha$ indicates that the difference is sufficiently large that, in a T-distribution with these degrees of freedom, the likelihood of seeing a value greater than the test-statistic is small. In other words, the difference in the means is already sufficiently greater than zero and in the region of rejection.
Look at this visual.
1.33 is the quantile on the right tail corresponding to a 10% probability (rate of rejection) for a T-distribution with eighteen degrees of freedom.
If the test statistic ($t_{0}$) is greater than $t_{critical}$, which is 1.33, he will reject the null hypothesis. At that point (i.e., at values greater than 1.33), there would be sufficient confidence to say that the difference is significantly greater than zero.
This decision is equivalent to rejecting the null hypothesis if $P(T > t_{0})$ (the p-value) is less than $\alpha$.
We can read $P(T > t_{0})$ off the standard T-table, or it can be computed from the distribution.
At df = 18, $t_{0} = 0.1486$, and $P(T > 0.1486) = 0.44$. Since the p-value of 0.44 is much greater than $\alpha = 0.1$, Tom cannot reject the null hypothesis.
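If you prefer to compute rather than read the table, here is a short scipy sketch of the same decision. Recent versions of scipy can also run the whole test in one call with `stats.ttest_ind(west, east, equal_var=True, alternative='greater')`; the manual version below keeps the logic visible.

```python
from scipy import stats

t0, df, alpha = 0.1486, 18, 0.10

t_critical = stats.t.ppf(1 - alpha, df)  # right-tail quantile of the T-distribution
p_value = 1 - stats.t.cdf(t0, df)        # P(T > t0)

print(t_critical)       # ~1.33
print(p_value)          # ~0.44
print(p_value < alpha)  # False -> cannot reject the null hypothesis
```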
He cannot count on the theory that the factory is illegally dumping their untreated waste into the west branch Mohawk River until he finds more evidence.
Tom is not convinced. The endless newspaper stories on arsenic levels are bothering him. He begins to wonder whether the factory is illegally dumping its untreated waste into both the west and east branches of the Mohawk River. That could be one reason why he saw no significant difference in the concentrations.
Over the next week, he takes Ron with him to the Utica River, a tributary of the Mohawk that branches off right before the Mohawk meets the factory. If his new theory is correct, he should find that the mean arsenic concentration in either the west or the east branch is significantly greater than the mean arsenic concentration in the Utica River.
Ron again helped him with the laboratory testing. In parts per billion, the third sample data looks like this.
Utica River: 4, 4, 6, 4, 5, 7, 8
There are 7 data points.
The sample mean
$$\bar{x}_{3} = \frac{1}{7}(4 + 4 + 6 + 4 + 5 + 7 + 8) = 5.43$$
The sample variance
$$s_{3}^{2} = \frac{1}{7-1}\sum_{i=1}^{7}(x_{i} - \bar{x}_{3})^{2} = 2.62$$
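The same numpy recipe verifies these numbers and sets the stage for Tom’s next test; the variable names are mine.

```python
import numpy as np

utica = np.array([4, 4, 6, 4, 5, 7, 8])

print(utica.mean())       # ~5.43
print(utica.var(ddof=1))  # ~2.62
# Compare this with the west branch's sample variance of 58.28
# before assuming the pooled (equal-variance) proposition again.
```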
Can you help Tom with his new hypothesis tests? Does the proposition still hold?
To be continued…
If you find this useful, please like, share and subscribe.
You can also follow me on Twitter @realDevineni for updates on new lessons.