So while skewness is associated with one tail being elongated, that elongation doesn’t result in a heavier tail, but rather in a lighter one. Figure 3 also contains a couple of additional surprises about this family of distributions. The first of these is the middle curve (k = 2), which shows the areas within two standard deviations of the mean. The flatness of this curve shows that, regardless of the skewness, a gamma distribution will always have about 95% to 96% of its area within two standard deviations of the mean.
As a result, even though we may compute exact critical values based on our estimated parameters, the results will never have the expected precision. While we may want to filter out exactly 95% of the noise, we’ll only filter out approximately 95%. The uncertainty is hidden by the complexity of the computations, but it remains there all the same.
What You Need to Know About Gamma Probability Models
The more you know, the easier it becomes to use your data
The mean and variance for a gamma distribution are:

MEAN(X) = alpha · beta     and     VAR(X) = alpha · beta²
Gamma distributions are widely used in all areas of statistics and are found in most statistical software. Since software facilitates our use of the gamma models, the following formulas are given here only in the interest of clarity of notation. Gamma distributions depend upon two parameters, denoted here by alpha and beta. The probability density function for the gamma family has the form:

f(x) = [1 / (Γ(alpha) · beta^alpha)] · x^(alpha − 1) · e^(−x/beta)     for x > 0
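These quantities are easy to check with software. The following is a minimal sketch in Python using scipy.stats.gamma (a convenience here, not something prescribed by this article); note that scipy calls the shape parameter a and the scale parameter scale, corresponding to alpha and beta above.

```python
# Minimal sketch: the gamma model's pdf, mean, variance, and skewness via scipy.
# scipy's shape parameter "a" is alpha, and "scale" is beta.
from scipy.stats import gamma

alpha, beta = 2.0, 1.5
model = gamma(a=alpha, scale=beta)

print(model.mean())              # alpha * beta            = 3.0
print(model.var())               # alpha * beta**2         = 4.5
print(model.pdf(2.0))            # f(2.0) from the density above
print(model.stats(moments="s"))  # skewness = 2 / sqrt(alpha), about 1.414 here
```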
So how do you filter out the noise when you think your data are modeled by a gamma distribution?
Figure 3 plots the areas for the three fixed-width central intervals. The bottom curve of Figure 3 (k = 1) shows that the areas found within one standard deviation of the mean of a gamma distribution will increase with increasing skewness. Since the tails of a probability model are traditionally defined as those regions that are more than one standard deviation away from the mean, the bottom curve of Figure 3 shows us that the areas in the tails must decrease with increasing skewness. This contradicts the common notion that skewness is associated with a heavy tail.
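The computation behind curves like those in Figure 3 is straightforward: for each alpha, find the area between the mean minus k standard deviations (truncated at zero) and the mean plus k standard deviations. The sketch below does this for a handful of illustrative alpha values; it assumes Python with scipy, and beta is set to 1 because it cancels out when working in standard-deviation units.

```python
# Sketch: area within k standard deviations of the mean for gamma models.
# With beta = 1, mean = alpha and standard deviation = sqrt(alpha).
import numpy as np
from scipy.stats import gamma

for alpha in [0.5, 1, 2, 4, 16, 64]:           # illustrative shape values
    mu, sigma = alpha, np.sqrt(alpha)
    skew = 2 / np.sqrt(alpha)
    for k in (1, 2, 3):
        lo = max(mu - k * sigma, 0.0)          # the model has no area below zero
        area = gamma.cdf(mu + k * sigma, a=alpha) - gamma.cdf(lo, a=alpha)
        print(f"alpha={alpha:>4}, skewness={skew:4.2f}, k={k}: area={area:.3f}")
```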
For an example of this uncertainty, assume that the upper specification limit is six standard deviations above the mean. The original model with alpha = 1.0 would then have 912 ppm nonconforming. But with alpha = 2.1, the fitted model would predict only 8 ppm nonconforming. And with alpha = 0.5, the fitted model would predict 8,151 ppm nonconforming. These two estimates differ by a factor of more than a thousand.
With industrial data, a simpler approach is feasible. Here, we’re trying to do the same thing over and over, and the signals of interest are changes that are large enough to have an economic effect. In this case, we bundle up nearly all of the routine variation as probable noise and react only to potential signals that are clearly not part of the routine variation.
Summary
The alpha parameter determines the shape of the gamma model, and the beta parameter determines the scale. When the value for alpha is 1.00 or less, the gamma distributions will be J-shaped. Alpha values greater than 1.00 result in mound-shaped gamma models. As the value for alpha gets large, the gammas approach the normal distribution. Since we consider these distributions in standardized form, the value for the beta parameter won’t affect any of the following results. Five standardized gamma distributions are shown in Figure 1.
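For readers who want to draw curves like those in Figure 1 themselves, the sketch below standardizes each gamma model by subtracting the mean (alpha · beta) and dividing by the standard deviation (√alpha · beta). The five alpha values used (0.5, 1, 2, 4, 16) are the ones implied by the chi-square correspondence noted later in this article, and matplotlib is assumed purely for plotting convenience.

```python
# Sketch: densities of standardized gamma models (mean 0, standard deviation 1).
# If X is gamma(alpha, beta), then Z = (X - alpha*beta) / (sqrt(alpha)*beta),
# and the density of Z at z is sigma * f(mu + sigma*z).
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gamma

z = np.linspace(-3, 5, 801)
for alpha in [0.5, 1, 2, 4, 16]:       # chi-squares with 1, 2, 4, 8, 32 d.f.
    mu, sigma = alpha, np.sqrt(alpha)  # beta = 1; it cancels after standardizing
    x = mu + sigma * z                 # back-transform to the original scale
    density = np.zeros_like(z)
    density[x > 0] = sigma * gamma.pdf(x[x > 0], a=alpha)
    plt.plot(z, density, label=f"alpha = {alpha}")

plt.xlabel("standard deviations from the mean")
plt.legend()
plt.show()
```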
When working with experimental data, where researchers are spending time and money trying to detect potential signals, it’s reasonable to very carefully model the routine variation. By modeling the routine variation, we can package some specified percentage of the probable noise to be ignored (usually 95%) and then look for values outside that package that look like potential signals. This approach is like using the table in Figure 6. Fit a model, choose a specific area to filter out, and then find the exact width of interval to use in packaging the noise.
So what gets stretched?
You could find bespoke values for the parameters of a gamma distribution based on your data, and then find an exact interval that you hope will wrap up a specific amount of the probable noise. (This approach becomes unreliable in the extreme tail.)
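The fitting step isn’t spelled out at this point, so the sketch below uses one common choice, a method-of-moments fit (alpha-hat = mean²/variance and beta-hat = variance/mean, which follows from the mean and variance formulas above), and then reads off an exact central 95% interval from the fitted model. Treat it as an illustration of the approach rather than a prescribed procedure.

```python
# Sketch: a method-of-moments fit (one common choice -- not necessarily the
# procedure used in this article) followed by an "exact" central 95% interval.
import numpy as np
from scipy.stats import gamma

def fit_gamma_moments(x):
    """Match the sample mean and variance to alpha*beta and alpha*beta**2."""
    xbar, s2 = np.mean(x), np.var(x, ddof=1)
    return xbar**2 / s2, s2 / xbar                   # alpha-hat, beta-hat

rng = np.random.default_rng(1)                       # illustrative data, arbitrary seed
data = rng.gamma(shape=1.0, scale=1.0, size=100)

a_hat, b_hat = fit_gamma_moments(data)
lower = gamma.ppf(0.025, a=a_hat, scale=b_hat)       # central 95% "noise package"
upper = gamma.ppf(0.975, a=a_hat, scale=b_hat)
print(a_hat, b_hat, lower, upper)
```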
The purpose of analysis is insight. To gain insight, we have to detect any potential signals within the data. To detect potential signals, we must filter out the probable noise. And filtering out the probable noise is the objective of statistical analysis.
For example, a gamma model with an alpha parameter of 64 will have 92% of its area within 1.74 standard deviations of the mean, and it will have 95% of its area within 1.95 standard deviations of the mean. Additionally, a gamma model with an alpha parameter of 1.25 will have 92% of its area within 1.53 standard deviations of the mean, and it will have 98% of its area within 2.84 standard deviations of the mean.
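Values like these can be recovered numerically by solving for the number of standard deviations at which a central interval reaches the stated coverage. The sketch below does this with scipy's brentq root-finder; the alpha values and coverages are the ones quoted above.

```python
# Sketch: the number of standard deviations k needed for a central interval
# of a gamma model to reach a fixed coverage (beta = 1, so mean = alpha and
# standard deviation = sqrt(alpha)).
import numpy as np
from scipy.stats import gamma
from scipy.optimize import brentq

def half_width(alpha, coverage):
    mu, sigma = alpha, np.sqrt(alpha)
    def excess(k):
        lo = max(mu - k * sigma, 0.0)
        return gamma.cdf(mu + k * sigma, a=alpha) - gamma.cdf(lo, a=alpha) - coverage
    return brentq(excess, 0.01, 20.0)     # coverage rises with k, so one root

print(round(half_width(64, 0.92), 2))     # about 1.74
print(round(half_width(64, 0.95), 2))     # about 1.95
print(round(half_width(1.25, 0.92), 2))   # about 1.53
print(round(half_width(1.25, 0.98), 2))   # about 2.84
```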
From Figure 2 we see that the mean plus-or-minus three standard deviations will filter out 97.6% or more of any and every gamma distribution. Thus, this one-size-fits-all approach will filter out virtually all of the probable noise for any set of data that might be modeled by a gamma distribution.
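That floor can be checked directly by computing the coverage of the mean plus-or-minus three standard deviation interval over a wide (and here arbitrary) grid of alpha values and taking the minimum:

```python
# Sketch: coverage of the mean +/- 3 standard deviation interval across a
# wide grid of gamma shape parameters, with beta = 1.
import numpy as np
from scipy.stats import gamma

alphas = np.linspace(0.05, 100.0, 2000)
coverages = []
for a in alphas:
    mu, sigma = a, np.sqrt(a)
    lo = max(mu - 3 * sigma, 0.0)
    coverages.append(gamma.cdf(mu + 3 * sigma, a=a) - gamma.cdf(lo, a=a))

print(min(coverages))   # roughly 0.976 or more everywhere on this grid
```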
Chi-square distributions are a subset of the family of gamma distributions. A chi-square distribution with k degrees of freedom is a gamma distribution with beta = 2 and alpha = k/2 (for integer values of k). Thus, the distributions above are standardized chi-square distributions with 1, 2, 4, 8, and 32 degrees of freedom.
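This correspondence is easy to confirm with software; the sketch below simply checks that the chi-square and gamma cumulative probabilities agree for the degrees of freedom listed above.

```python
# Sketch: a chi-square with k degrees of freedom is a gamma model with
# alpha = k/2 and beta = 2, so the two cdfs should agree everywhere.
import numpy as np
from scipy.stats import chi2, gamma

x = np.linspace(0.1, 30.0, 50)
for k in (1, 2, 4, 8, 32):
    assert np.allclose(chi2.cdf(x, df=k), gamma.cdf(x, a=k / 2, scale=2))
print("chi-square and gamma cdfs match for k = 1, 2, 4, 8, 32")
```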
You can investigate this uncertainty with a simple simulation study. I used an exponential distribution (where alpha and beta were both equal to 1.00) to generate 5,000 data sets of size n = 100. For each data set, I estimated the values of alpha and beta as described above. The estimates for the shape parameter alpha ranged from 0.495 to 2.103.
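A sketch of this kind of simulation is shown below. It uses a method-of-moments estimate of alpha (one plausible reading of the fitting step, not necessarily the exact procedure behind the results above); the seed is arbitrary, and the exact range obtained will vary from run to run.

```python
# Sketch: 5,000 exponential data sets of size n = 100, each fitted with a
# gamma model by the method of moments.
import numpy as np

rng = np.random.default_rng(2024)                   # arbitrary seed
alpha_hats = []
for _ in range(5000):
    x = rng.gamma(shape=1.0, scale=1.0, size=100)   # alpha = beta = 1
    xbar, s2 = x.mean(), x.var(ddof=1)
    alpha_hats.append(xbar * xbar / s2)             # alpha-hat = mean^2 / variance

print(min(alpha_hats), max(alpha_hats))             # a spread comparable to 0.495 to 2.103
```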
So, while the central 920 parts per thousand are shifting toward the mean, and while another 60 parts per thousand get slightly shifted outward and then stabilize, it’s primarily the outer 20 parts per thousand that bear the brunt of the stretching and elongation that goes with increasing skewness.
Fitting a gamma distribution to your data
Figure 7 shows the values in each column of Figure 6 plotted against skewness. The bottom curve shows that the middle 92% of a gamma will shift toward the mean with increasing skewness. The 95% fixed-coverage intervals are remarkably stable until the increasing mass near the mean eventually begins to pull this curve down. The 98% fixed-coverage intervals initially grow until they plateau near three standard deviations. The spread of the top three curves shows that for the gamma models it’s primarily the outermost 2% that gets stretched into the extreme upper tail.