Joe and Devine Meet Again — for the ‘r’th time
J: It’s been 13 lessons since we met last time. Thought I’d say hello. You did not show up last week. I kept waiting as you asked me to in lesson 44.
D: Hey Joe! Sorry for the wait. I had a tough choice between travel/work and the weekly lesson. Could only do one. It was not intentional, although where we left off kind of hinted toward the wait. End of the year is a busy time for all.
J: I noticed you covered the exponential distribution and its memoryless property in the previous two lessons. Isn’t the time between our meetings also exponentially distributed?
D: That is correct. The first time we met was in lesson 6. The wait time was 6. We met again in lesson 9. The wait time (or wait lessons) was 3. Between the last time we met and now, as you pointed out, the wait time is 13. In lesson 43, where we first discussed the exponential distribution, I showed how its probability density function is derived. Did you follow the logic there?
J: Yes, I did. We begin with the fact that the arrival time (to the first or next event) exceeds some value t only if there are no events in the interval [0, t].
The probability that T > t is equal to the probability that there are zero events in that interval. P(N = 0) is computed from the Poisson distribution: \(P(N = 0) = \frac{e^{-\lambda t}(\lambda t)^{0}}{0!} = e^{-\lambda t}\).
Since \(P(T > t) = e^{-\lambda t}\), \(P(T \le t) = 1 - e^{-\lambda t}\).
\(F(t) = 1 - e^{-\lambda t}\) is the cumulative distribution function for the exponential distribution.
We can get the probability density function f(t) by taking the derivative of F(t): \(f(t) = \frac{d}{dt}\left(1 - e^{-\lambda t}\right) = \lambda e^{-\lambda t}\).
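Joe’s derivation can be checked numerically. The sketch below (plain Python, with a made-up rate of λ = 0.25 arrivals per lesson) compares the CDF F(t) = 1 − e^(−λt) against an empirical CDF built from simulated inter-arrival times; the names `poisson_zero` and `exp_cdf`, the rate, and the seed are illustrative choices, not anything from the lesson.

```python
import math
import random

# Hypothetical rate: suppose the friends meet, on average, once every
# 4 lessons, so lambda = 0.25 arrivals per lesson.
lam = 0.25

def poisson_zero(lam, t):
    """P(N = 0) for a Poisson process with rate lam over [0, t]."""
    return math.exp(-lam * t)  # e^(-λt) (λt)^0 / 0! = e^(-λt)

def exp_cdf(lam, t):
    """F(t) = P(T <= t) = 1 - e^(-λt), the exponential CDF."""
    return 1.0 - poisson_zero(lam, t)

# Check by simulation: sample inter-arrival times with the inverse-transform
# method T = -ln(U)/λ and compare the empirical CDF at t = 6.
random.seed(1)
t = 6.0
samples = [-math.log(random.random()) / lam for _ in range(200_000)]
empirical = sum(s <= t for s in samples) / len(samples)
print(round(exp_cdf(lam, t), 4), round(empirical, 4))
```

The two printed values should agree to roughly two decimal places, which is what the Poisson-to-exponential argument predicts.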
D: Well done. The inter-arrival time follows an exponential probability distribution.
J: Isn’t the exponential distribution like the Geometric distribution? I learned in lesson 33 that the random variable which measures the number of trials it takes to see the first success is Geometrically distributed.
D: That is a shrewd observation. Yes, the exponential distribution is the continuous analog of the discrete geometric distribution.
In the geometric distribution, the shape is controlled by the parameter p: the greater the value of p, the steeper the fall.
In the exponential distribution, the shape is controlled by the rate parameter \(\lambda\).
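The analogy can be made concrete with a small check: if we match the exponential rate to the geometric parameter via λ = −ln(1 − p), the two survival functions agree at integer trial counts, since (1 − p)^n = e^(−λn). The value p = 0.1 below is a hypothetical choice.

```python
import math

# Hypothetical success probability for the geometric trials.
p = 0.1
# Matching exponential rate: with λ = -ln(1 - p), the survival functions
# coincide at integer times because (1 - p)^n = e^(-λn).
lam = -math.log(1.0 - p)

geom_surv = [(1 - p) ** n for n in range(1, 6)]        # P(first success after n trials)
expo_surv = [math.exp(-lam * n) for n in range(1, 6)]  # P(T > n)
for g, e in zip(geom_surv, expo_surv):
    print(round(g, 6), round(e, 6))
```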
J: In that case, does the exponential distribution also have a related distribution that measures the wait time till the ‘r’th arrival?
D: Can you be more specific?
J: The geometric distribution has the Negative binomial distribution that measures the number of trials it takes to see the ‘r’th success. Remember lesson 35?
Just like the exponential distribution is the continuous analog of the discrete geometric distribution, is there a continuous analog for the discrete negative binomial distribution?
D: Yes, there is a related distribution that can be used to estimate the time to the ‘r’th arrival. It is called the Gamma distribution.
Look at our timeline chart for instance. The time to the first arrival is \(t_1 = 6\). The time to the second arrival since the first arrival is \(t_2 = 3\). But, our second meeting happened at lesson 9, so the time to the second arrival from the origin is \(t_1 + t_2 = 6 + 3 = 9\).
Similarly, the second time we meet again after lesson 9 is in lesson 16. So, the time to the second arrival since lesson 9 is 16 – 9 = 7. Put together, these times to second meeting follow a Gamma distribution. More generally,
the wait time for the ‘r’th arrival follows a Gamma distribution.
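One way to convince yourself of this claim before any derivation is simulation: add up r independent exponential waits and compare the sample mean and variance with the Gamma values r/λ and r/λ². The rate λ = 0.2 and r = 3 below are hypothetical choices for the sketch.

```python
import math
import random

random.seed(42)
lam, r = 0.2, 3  # hypothetical rate and arrival count

def wait_to_rth(lam, r):
    """Simulate the wait to the r'th arrival as a sum of r exponential waits."""
    return sum(-math.log(random.random()) / lam for _ in range(r))

samples = [wait_to_rth(lam, r) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Gamma theory predicts mean = r/λ = 15 and variance = r/λ² = 75.
print(round(mean, 2), round(var, 2))
```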
J: That seems to be a logical extension. I believe we can derive the probability density function for the Gamma distribution using the exponential distribution. They seem to be related. Can you help me with that?
D: Sure. If you noticed, I said that our second meeting happened at lesson 9, and the time to the second arrival from the origin is \(t_1 + t_2 = 9\).
J: Yes. That is because it is the total time: the time to the first arrival plus the time to the second arrival since then.
D: So the random variable \(T\) is the sum of two random variables, \(T_1\) and \(T_2\): \(T = T_1 + T_2\).
The time to the ‘r’th arrival is \(T = T_1 + T_2 + \dots + T_r\).
We can derive the probability density function of \(T\) using the convolution of the individual random variables \(T_1, T_2, \dots, T_r\).
J: 😕 What is convolution?
D: It might require a full lesson to explain it from first principles and show some examples, but for now remember that convolution is the blending of two or more functions. If you have two independent continuous random variables X and Y with probability density functions \(f_X(x)\) and \(f_Y(y)\), then the probability density function of the new random variable Z = X + Y is \(f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx\).
Employing this definition on r variables (\(T_1, T_2, \dots, T_r\)) using induction, we can get the probability density function of the Gamma distribution as \(f(t) = \frac{\lambda^{r} t^{r-1} e^{-\lambda t}}{(r-1)!}\) for \(t > 0\).
J: 😕 😕 😕 😕 😕 😕
D: Not to worry. We will learn some of the essential steps of convolution soon.
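In the meantime, here is a rough numerical sketch of that convolution integral: discretize an exponential density on a grid and convolve it with itself via a Riemann sum; the result should land close to the Gamma density with r = 2. The rate λ = 0.5 and the grid step are arbitrary choices for the illustration.

```python
import math

lam, dt = 0.5, 0.001  # hypothetical rate and integration step
n = 20_000
f = [lam * math.exp(-lam * i * dt) for i in range(n)]  # exponential pdf on a grid

def conv_at(z_idx):
    """Riemann-sum approximation of ∫ f(x) f(z - x) dx at z = z_idx * dt."""
    return sum(f[i] * f[z_idx - i] for i in range(z_idx + 1)) * dt

z = 4.0
numeric = conv_at(int(z / dt))
exact = lam ** 2 * z * math.exp(-lam * z)  # Gamma pdf with r = 2 at t = z
print(round(numeric, 5), round(exact, 5))
```

The two printed numbers should match to about four decimal places, which is the convolution result Devine is promising to unpack later.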
J: I have to say, the density function looks a little convoluted though. 😉
D: Ah, that’s a good one. Perhaps it is. Why don’t you check what happens to the equation when you choose r = 1, i.e., the arrival time for the first event.
J: Let me try. For r = 1, \(f(t) = \frac{\lambda^{1} t^{0} e^{-\lambda t}}{0!} = \lambda e^{-\lambda t}\). This is the density function for the exponential distribution. It has to be, because we measure the arrival time to the first event.
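Joe’s reduction can be verified in code. The sketch below implements the Gamma density in its integer-r (Erlang) form and checks that r = 1 reproduces λe^(−λt) at a few sample points; λ = 0.3 is an arbitrary test value.

```python
import math

def gamma_pdf(t, lam, r):
    """Gamma (Erlang) density for integer shape r: λ^r t^(r-1) e^(-λt) / (r-1)!"""
    return lam ** r * t ** (r - 1) * math.exp(-lam * t) / math.factorial(r - 1)

def exp_pdf(t, lam):
    """Exponential density λ e^(-λt)."""
    return lam * math.exp(-lam * t)

# With r = 1 the Gamma density collapses to the exponential density.
lam = 0.3  # hypothetical rate
checks = [(gamma_pdf(t, lam, 1), exp_pdf(t, lam)) for t in (0.5, 2.0, 10.0)]
print(checks)
```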
D: You are correct. The Gamma distribution has two control parameters. \(\lambda\) is called the scale parameter because it controls the width of the distribution (in this rate form, the width scales with \(1/\lambda\)), and r is called the shape parameter because it controls the shape of the distribution.
J: Can we make some quick graphics to see how the distribution looks?
D: Yes, here it is. This one is for a \(\lambda\) of 0.2, with r changing from 1 to 4, i.e., for us to meet the first time, second time, third time, and the fourth time.
J: This is cool. I see that the tails are getting bigger as the value of r increases.
D: Good observation again. That is why the Gamma distribution is also used to fit data with significant skewness. It is widely used for fitting rainfall data, and insurance agents also use it to model claims.
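For readers without the chart, the curves can be summarized numerically: the peak (mode) of the Gamma density sits at (r − 1)/λ, so with λ = 0.2 the four curves peak at lessons 0, 5, 10, and 15, with heavier right tails as r grows. A small sketch:

```python
import math

def gamma_pdf(t, lam, r):
    """Gamma (Erlang) density λ^r t^(r-1) e^(-λt) / (r-1)! for integer r."""
    return lam ** r * t ** (r - 1) * math.exp(-lam * t) / math.factorial(r - 1)

lam = 0.2  # the rate used in the chart above
# The mode of each curve sits at (r - 1)/λ: later arrivals peak later.
modes = {r: (r - 1) / lam for r in range(1, 5)}
for r, m in modes.items():
    print(r, m, round(gamma_pdf(m, lam, r), 4))
```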
J: Understood. When do we meet again? We have to figure out the convolution stuff.
D: You now have all the tools to estimate this. Figure out the probability that the wait time is more than one week while we celebrate the emergence of the light from darkness.
Merry Christmas.
If you find this useful, please like, share and subscribe.
You can also follow me on Twitter @realDevineni for updates on new lessons.