Barry T. Neyer
EG&G Mound Applied Technologies
Miamisburg, OH 45343-3000

Contact Address

Barry T. Neyer

PerkinElmer Optoelectronics

1100 Vanguard Blvd

Miamisburg, OH 45342

(937) 865-5586

(937) 865-5170 (Fax)

Barry.Neyer@PerkinElmer.com

A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, m, and the standard deviation, s) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.


Sensitivity tests are often used to estimate the parameters associated with latent continuous variables which cannot be measured. For example, in testing the sensitivity of explosives to shock, each specimen is assumed to have a critical stress level or threshold. Shocks larger than this level will always detonate the specimen, while smaller shocks will not lead to detonation. Repeated testing of any one sample is not possible since the stress that is not sufficient to cause detonation nevertheless will generally damage the specimen. To measure the parameters of the underlying distribution (e.g., mean threshold, m, and standard deviation, s, of a normal distribution), samples are tested at various stress levels, and the response or lack of response is noted. The experimenter then analyzes the data to provide estimates of the parameters of the population.

A previous paper (Neyer 1989) discussed a number of the sensitivity test designs. This paper will discuss methods of analysis. The next section describes the method for estimating the parameters of the distribution. The third section describes some of the commonly used methods for estimating confidence intervals. Section four proposes a new method of analysis, based on the Likelihood Ratio Test. Unlike previously used methods, this new method can analyze the results of all tests, and does not produce confidence intervals that are biased small. Section five presents simulation results.

The analysis of sensitivity tests is more complicated than the analysis of many standard statistical tests since the experimenter does not have any threshold information about the individual elements in the sample (i.e., the experimenter cannot compute a simple average to estimate the mean threshold, m, because there is nothing to average). The only information is the stress applied to each specimen and the response or lack of response.

A very general method of analyzing these tests is to use the method of Maximum Likelihood. Let x_i be the (transformed) stimulus level for the i-th test, n_i be the number tested at this level, and p_i be the proportion of samples that responded. (The responses are often called successes and the non-responses failures.) Let P[(x_i - m)/s] be a known distribution function with m and s the unknown parameters. (Usually a normal distribution is assumed, but any probability function can be used. Additional parameters might be needed for other distributions.) Define z_i = (x_i - m)/s, Q(z) = 1 - P(z), and q_i = 1 - p_i. The likelihood function, L(m, s), is the probability of obtaining the given test results with the specified m and s. It is given by

L(m, s) = \prod_i \binom{n_i}{n_i p_i} P(z_i)^{n_i p_i} Q(z_i)^{n_i q_i} . (1)
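For a normal distribution, Equation 1 is easy to evaluate numerically. The following sketch (with hypothetical data; the constant binomial coefficients are dropped, since they do not affect the maximization) computes the log likelihood:

```python
import math

def norm_cdf(z):
    # Standard normal distribution function P(z)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(m, s, data):
    """ln L(m, s) for sensitivity-test data given as (x_i, n_i, p_i)
    triples: stimulus level, number tested, fraction responding.
    Constant binomial coefficients are omitted."""
    ll = 0.0
    for x, n, p in data:
        P = norm_cdf((x - m) / s)
        P = min(max(P, 1e-12), 1.0 - 1e-12)  # guard against underflow at extreme z
        if n * p > 0:              # successes contribute P(z_i)^(n_i p_i)
            ll += n * p * math.log(P)
        if n * (1 - p) > 0:        # non-responses contribute Q(z_i)^(n_i q_i)
            ll += n * (1 - p) * math.log(1.0 - P)
    return ll

# Three hypothetical test levels, five shots each
data = [(-1.0, 5, 0.2), (0.0, 5, 0.4), (1.0, 5, 0.8)]
print(log_likelihood(0.0, 1.0, data))
```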

The values, m_e and s_e, which maximize the likelihood function, are the Maximum Likelihood Estimates (MLEs). It is easier to find the parameters which maximize ln L, the logarithm of the likelihood function. Unique MLEs are obtained if the successes and failures overlap; i.e., the smallest success level is smaller than the largest failure level (Silvapulle 1981).
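The overlap condition and the maximization can be sketched as follows; the grid-refinement search is only an illustrative stand-in for a proper numerical optimizer, not the method used by the programs cited later:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_lik(m, s, data):
    # ln L with constant binomial coefficients omitted
    ll = 0.0
    for x, n, p in data:
        P = norm_cdf((x - m) / s)
        P = min(max(P, 1e-12), 1.0 - 1e-12)  # underflow guard
        if n * p > 0:
            ll += n * p * math.log(P)
        if n * (1 - p) > 0:
            ll += n * (1 - p) * math.log(1.0 - P)
    return ll

def overlaps(data):
    """True when the lowest success lies below the highest failure,
    the condition for unique MLEs (Silvapulle 1981)."""
    successes = [x for x, n, p in data if p > 0]
    failures = [x for x, n, p in data if p < 1]
    return bool(successes) and bool(failures) and min(successes) < max(failures)

def mle(data, m0, s0):
    """Crude hill-climbing search for (m_e, s_e), halving the step
    whenever no neighboring point improves the likelihood."""
    m, s = m0, max(s0, 1e-6)
    step = max(s0, 1e-6)
    best = log_lik(m, s, data)
    for _ in range(200):
        moved = False
        for dm, ds in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
            if s + ds <= 0:
                continue
            ll = log_lik(m + dm, s + ds, data)
            if ll > best:
                best, m, s, moved = ll, m + dm, s + ds, True
        if not moved:
            step *= 0.5
            if step < 1e-9:
                break
    return m, s

data = [(-1.0, 5, 0.2), (0.0, 5, 0.4), (1.0, 5, 0.8)]
m_e, s_e = mle(data, 0.0, 1.0)
```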

There is no guarantee that the MLEs are unbiased. The simulation reported in this work and that of others (Edelman and Prairie 1966, Thompson and Stuart 1984) shows that m_e is essentially unbiased and s_e is biased low in most cases of practical interest.

It is also possible to estimate other points of the distribution, such as various percentiles. Let L_p be the 100p-th percentile of the population. Define z_p such that P(z_p) = p, so that L_p = m + z_p s. Then

L_{pe} = m_e + z_p s_e . (2)

Thus, Maximum Likelihood Estimates of m and s directly yield estimates of L_p.
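Equation 2 is a one-liner for a normal distribution (the numerical estimates below are hypothetical):

```python
from statistics import NormalDist

def percentile_estimate(m_e, s_e, p):
    """MLE of the 100p-th percentile for a normal threshold
    distribution: L_p = m + z_p * s with P(z_p) = p (Equation 2)."""
    z_p = NormalDist().inv_cdf(p)
    return m_e + z_p * s_e

# Hypothetical estimates m_e = 5.0, s_e = 0.5: the level at which
# 90% of the population is expected to respond
level_90 = percentile_estimate(5.0, 0.5, 0.9)
```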

Most modern analysis methods use the Maximum Likelihood Estimates as estimates of the parameters. However, different analysis techniques were often employed by many researchers before electronic computers became accessible, due to the difficulty of the calculations.

The Probit test (Finney 1947) was designed to allow easy analysis by plotting the probability of success as a function of stress on normal probability paper. If the data lie on a straight line, the mean and standard deviation can be simply read from the mid point and slope of the line, respectively.

The Bruceton test (Dixon and Mood 1948) was designed to allow the experimenter to estimate m and s from the data by computing sums of the number of tests conducted at each test level multiplied by the stress levels and their squares. It also produces estimates of the variances of these parameters, based on the asymptotic method discussed in a later section.

Both of these specialized methods give estimates close to the Maximum Likelihood Estimates under favorable conditions.

Unlike the estimation of the parameters, there are a number of very different methods of estimating the confidence intervals for the parameters. The simulation discussed in a later section shows that the variance of both m_e and s_e is approximately proportional to s^2. Since s^2 is not independently known, the variance function method, the asymptotic method, and the simulation method discussed in later sections base their estimates on the Maximum Likelihood Estimate of s, namely s_e. If the successes and failures do not overlap, s_e = 0, and these methods fail to produce confidence region estimates for both m and s. However, the Likelihood Ratio Test, discussed in a later section, is able to produce reliable confidence interval estimates in all cases, including this degenerate case.

The variance function method of estimating confidence intervals makes the assumption that the variances of both m_e and s_e are relatively simple functions of just the sample size and the standard deviation of the population. For example, Langlie [1965] states that under certain conditions for the Langlie ("one-shot") method,

, (3)

, (4)

where N is the sample size. A more recent approximation (Langlie 1988m), more consistent with the results of the simulation reported in a later section, gives larger variances when the sample size is larger than 20:

. (5)

(The notation in this paper is different from that of Langlie's paper. He defines the bias as b = s_e/s. His "unbiased" s is s_e/b.) Langlie [1988m] also established the bias of the estimate of s under the same circumstances. Let b = (s_e - s)/s be the relative bias of the estimate s_e. Then

. (6)

For large N the bias goes to zero as expected. Since b^2 → 0 faster than Var s_e → 0, s_e is "essentially unbiased" for large N.

To calculate confidence intervals for m_e, Langlie [1965] advocates that "Until such time when a 'small-sample' statistic equivalent to 'Student's t' statistic is developed for the 'one-shot' method, the large sample approach will be taken by substituting the unbiased standard deviation, (1+b)s_e, for s in calculating the variance in m_e for a given sample size of N." Based on this analysis, the confidence interval for m is given by

. (7)

Similarly, confidence intervals for the percentile p are given by

. (8)

Several software programs (Thompson 1987, Langlie 1988o) use Equations 3 - 8 to estimate confidence intervals for m or for various percentiles L_p.

Confidence intervals for s could similarly be computed, but they often include negative numbers. Langlie [1965] instead recommends a chi-squared method of calculation for all sample sizes less than 50. The ratio ν s_e(1+b)/s was assumed to follow a chi-square distribution with ν degrees of freedom. Using Equation 4 to estimate the variance of s_e determines the "effective" degrees of freedom for a Langlie test:

. (9)

The confidence limits for s can be calculated by determining the coefficients of the chi-squared distribution at the appropriate confidence. The program ONE_SHOT (Langlie 1988o) uses this method to calculate confidence intervals for s. It calculates confidence intervals for s based on both Equation 4 and 5. However, this software does not calculate confidence intervals for various percentiles.

The variance function method depends on the validity of the equations used to estimate the variance. Equations 3 - 5 were obtained from simulating the Langlie test under optimal conditions. The variance is a complicated function of the population, the sample size, and the test design. Thus, new equations must be determined for different tests and for the same tests with different initial assumptions.

The program BRUCETON (Thompson 1989) uses the results of simulation to estimate confidence intervals by functions similar to, but more complicated than, those given by Equations 3 - 4.

In addition, the detailed simulation described later and in other work (Edelman and Prairie 1966, Thompson and Stuart 1984) shows that the variance is often worse for a Langlie test than that given by Equations 3 and 5 *when the test is not optimized for the population*. The Langlie test is most efficient (Langlie 1965) for sample sizes less than 50 when the test limits are chosen as m ± 4s. In this case, Equations 3 and 5 yield estimates of the variance that closely match the results from simulation. However, if the limits chosen are either much wider or narrower, the variance of both m and s can be a factor of two or more times that given by Equations 3 and 5. (For sample sizes on the order of 100 or larger, wider limits are more efficient than those suggested by Langlie.)

One further restriction of these methods is that they do not work if there is no cross-over or overlap of the data. In this case, the estimate of the standard deviation, s_e, is zero. Equations 3 and 4 would then say that the variance of both m_e and s_e is zero. Thus, no meaningful confidence interval can be constructed in these cases.

The asymptotic method is used by programs such as ASENT (Mills 1980) and in the calculations of the variance in the Bruceton method (Dixon and Mood 1948). The Cramér-Rao theorem (Kendall and Stuart 1967) gives a *lower bound* of the variance for all *unbiased* estimates of the distribution. This lower bound approaches the true variance *asymptotically*. Let t be an unbiased estimator of some function τ(θ), where the θ_j represent the parameters m and s. Then the lower bound of the variance of t is given by

\mathrm{Var}\, t \ge \sum_{j,k} \frac{\partial \tau}{\partial \theta_j} (I^{-1})_{jk} \frac{\partial \tau}{\partial \theta_k} , (10)

where the information matrix is

I_{jk} = E\left[\frac{\partial \ln L}{\partial \theta_j} \frac{\partial \ln L}{\partial \theta_k}\right] (11)

= -E\left[\frac{\partial^2 \ln L}{\partial \theta_j\, \partial \theta_k}\right] . (12)

The second form of the information matrix is valid as long as the limits of integration are independent of the parameters (Kendall and Stuart 1967).

For sensitivity tests, E(p_i) = P(z_i). Let θ_0 = m and θ_1 = s and define

J_j(z) = z^j \, \frac{[P'(z)]^2}{P(z)\, Q(z)}, \quad j = 0, 1, 2 . (13)

The information matrix for sensitivity tests has the following elements:

I_{jk} = \frac{1}{s^2} \sum_i n_i J_{j+k}(z_i), \quad j, k = 0, 1 . (14)

They are found by adding the J_j(z_i) functions evaluated for each test level.

The asymptotic variances of the MLEs are given by terms in the inverse of the information matrix:

\mathrm{Var}\, m_e = (I^{-1})_{00}, \quad \mathrm{Var}\, s_e = (I^{-1})_{11} . (16)
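Under the normal assumption, the information matrix and the asymptotic variances translate directly into code; the test levels and sample sizes below are hypothetical:

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def information_matrix(m, s, levels):
    """2x2 information matrix for (m, s); levels is a list of
    (x_i, n_i) pairs.  Uses J_j(z) = z**j * P'(z)**2 / (P(z) Q(z))."""
    I = [[0.0, 0.0], [0.0, 0.0]]
    for x, n in levels:
        z = (x - m) / s
        P = norm_cdf(z)
        J0 = norm_pdf(z) ** 2 / (P * (1.0 - P))
        I[0][0] += n * J0          # J_0 terms
        I[0][1] += n * z * J0      # J_1 terms
        I[1][1] += n * z * z * J0  # J_2 terms
    I[1][0] = I[0][1]
    return [[e / (s * s) for e in row] for row in I]

def asymptotic_variances(I):
    """Var m_e and Var s_e: diagonal of the inverse information matrix."""
    det = I[0][0] * I[1][1] - I[0][1] * I[1][0]
    return I[1][1] / det, I[0][0] / det

# Symmetric hypothetical design: 10 shots each at one s above and below m
I = information_matrix(0.0, 1.0, [(-1.0, 10), (1.0, 10)])
var_m, var_s = asymptotic_variances(I)
```

For this symmetric design the off-diagonal element cancels, so the two variances decouple.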

The asymptotic method of estimating the variance of the parameters uses Equations 13 - 16 with m_e and s_e substituted for m and s, respectively. The confidence intervals are constructed from the variances according to Equations 7 and 8 and a similar equation for s.

Equations 7 and 8 can be generalized to compute joint confidence regions for both m and s. With the same assumptions used to create the confidence intervals for individual parameters, joint confidence regions are bounded by the ellipse given by the equation

I_{00}(m - m_e)^2 + 2 I_{01}(m - m_e)(s - s_e) + I_{11}(s - s_e)^2 = c^2 , (17)

where

c^2 = \chi^2(\alpha\,|\,\nu = 2) = -2 \ln(1 - \alpha) . (18)

In spite of the difficulties mentioned in a later section, the asymptotic methods of analysis have gained wide acceptance. The main advantage of this method is that the calculations are relatively simple to perform. Another advantage is that the estimates of the variance change with the test design as the simulation variances change. Thus, one procedure can be used to calculate estimates for all test designs. The asymptotic method is the basis of various versions of the ASENT program (Mills 1980, Ashcroft 1981, Ashcroft 1987, Spahn 1989, Neyer 1990a), used by the explosive test community.

The Bruceton analysis for the confidence intervals was developed in the 1940's to enable calculating parameters and confidence intervals by computing simple sums. It is based on asymptotic analysis, with one further approximation. It calculates limits based on the assumption that the distribution of successes and failures at the various test levels follows the asymptotic distribution. According to Dixon and Mood [1948] this analysis is valid only if the sample size is larger than approximately 50 and the step size is between 0.5 and 2.0 times s. As long as these conditions are met, then the Bruceton analysis generally yields estimates similar to the ASENT programs.

As in the variance function method described previously, the asymptotic methods cannot provide estimates of the variances or confidence intervals when there is no overlap. Taking the limit of Equation 13 as s → 0 (so that |z_i| → ∞) gives

\lim_{|z| \to \infty} J_j(z) = 0 . (19)

Use of Equations 15 and 16 shows that the asymptotic limits are infinite when the data do not overlap.

The simulation method is used by some experimenters (Thompson and Stuart 1984). A Monte Carlo simulation is used to measure the variance of the parameters of interest. A value is assumed for m and s, and a test design is selected. The program conducts a sensitivity test using a random number to determine if the specimen would respond at the specified stress level. After the required number of specimens are tested, Maximum Likelihood Estimates for the parameters are computed. The simulation is repeated many times, and the mean and variance of the parameters are computed.

The simulation methods are simple in principle, but require some care in their use. The simulation results are for a given value of m and s. For many sensitivity test designs, Bruceton and Langlie tests in particular, the variance of the parameters is a complicated function of the parameters, the sample size, and the spacing of the stress levels about m. Thus, the experimenter must perform a simulation, involving thousands of repetitions, closely matched to the exact experimental conditions. Moreover, since the experimenter does not know the true values of the parameters of the distribution, the MLEs are generally used for the simulation instead of the true parameters. Thus, many simulations with variations of the parameters must be performed to determine reliable estimates of the variation of the parameters. If all the necessary precautions are taken, simulation provides a reliable method of estimating the confidence intervals. However, as in the previous methods, confidence intervals are usually not predicted if there is no overlap of response and non-response levels. It is possible to determine the range of parameters that would be likely to yield non-overlapping results; however, this approach is rarely taken.
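The procedure can be sketched for an up-and-down (Bruceton) design; the population parameters, starting level, step size, and seed below are arbitrary illustrative choices:

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bruceton_test(m, s, start, step, n, rng):
    """Simulate one Bruceton test on a normal population: a random
    draw decides each response; step down after a response, up after
    a non-response.  Returns a list of (level, responded) pairs."""
    level, shots = start, []
    for _ in range(n):
        responded = rng.random() < norm_cdf((level - m) / s)
        shots.append((level, responded))
        level += -step if responded else step
    return shots

rng = random.Random(12345)
# Repeat the test many times; MLEs would then be computed from each
# run and their mean and variance tabulated
runs = [bruceton_test(10.0, 1.0, 10.0, 1.0, 20, rng) for _ in range(100)]
```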

The Likelihood Ratio Test (Kendall and Stuart 1967) can be used to estimate confidence intervals for sensitivity tests. It can be used to analyze all tests, even those where there is no overlap.

Let L(x|θ) be a likelihood function, where x represents the experimental values (x_i, n_i, p_i) and θ represents the parameters m and s. The vector θ = (θ_r, θ_s) has k = r + s parameters (r ≥ 1, s ≥ 0). Confidence regions can be found by testing the hypothesis

H_0: \theta_r = \theta_{r0} , (20)

against the alternative

H_1: \theta_r \ne \theta_{r0} . (21)

Let θ_e = (θ_{re}, θ_{se}) be the unconditional MLEs and θ'_e = (θ_{r0}, θ'_{se}) be the MLEs given H_0. The ratio

\lambda = L(x\,|\,\theta'_e) / L(x\,|\,\theta_e) (22)

is an indication of the likelihood that H_0 is true. The closer λ is to 1.0, the more likely it is that H_0 is true. If asymptotic normality and efficiency of the MLEs are satisfied, then

-2 \ln \lambda \sim \chi^2(r) (23)

for large sample sizes (Kendall and Stuart 1967).

Equation 23 can only be established asymptotically in general, but it is true for all sample sizes for some distributions. For example, for a normal distribution, the Likelihood Ratio Test leads to the "Student's t" test when testing the mean. The simulation reported later shows that λ follows the asymptotic distribution well for sample sizes greater than approximately 20 for sensitivity tests.

The calculation of Likelihood Ratio confidence regions is conceptually simple, although the computations themselves are quite complex. For a given λ_α, compute the region

R(\lambda_\alpha) = \{\theta : L(x\,|\,\theta) / L(x\,|\,\theta_e) \ge \lambda_\alpha\} . (24)

Since the MLE parameters are unique when the data overlap (Silvapulle 1981), this region is composed of all points internal to the boundary defined by Equation 22. (When there is no overlap, the likelihood function is unity for all values of m inside the interval between the highest failure and the lowest response when s = 0. Efficient test designs ensure overlap as the sample size increases. This test can be used, with a slight modification discussed later, to analyze the results of these tests.)
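A membership test for the region of Equation 24 needs only the log likelihood at the candidate point and at the MLEs. In this sketch a coarse grid stands in for a proper maximizer, and the data are hypothetical:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_lik(m, s, data):
    # ln L with constant binomial coefficients omitted
    ll = 0.0
    for x, n, p in data:
        P = norm_cdf((x - m) / s)
        P = min(max(P, 1e-12), 1.0 - 1e-12)  # underflow guard
        if n * p > 0:
            ll += n * p * math.log(P)
        if n * (1 - p) > 0:
            ll += n * (1 - p) * math.log(1.0 - P)
    return ll

data = [(-1.0, 5, 0.2), (0.0, 5, 0.4), (1.0, 5, 0.8)]

# Coarse grid stand-in for a real maximizer
grid = [(m / 20.0, s / 20.0) for m in range(-40, 41) for s in range(1, 81)]
m_e, s_e = max(grid, key=lambda t: log_lik(t[0], t[1], data))
ll_max = log_lik(m_e, s_e, data)

def in_joint_region(m, s, confidence):
    """(m, s) lies in the joint confidence region when the likelihood
    ratio exceeds lambda_alpha = 1 - alpha (2 degrees of freedom)."""
    return log_lik(m, s, data) - ll_max >= math.log(1.0 - confidence)
```

Tracing the boundary contour, as the programs cited below do, amounts to finding where this membership test switches from true to false.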

It is straightforward to compute the confidence ratios, λ_α, for any given confidence α, assuming Equation 23 is true. For example, to compute a confidence region for both m and s simultaneously, there are two degrees of freedom, and P[-2 ln λ < a] = χ²(a | ν = 2) = 1 - e^{-a/2}. Setting the confidence α = 1 - e^{-a/2} gives P[λ > 1 - α] = α. Thus, the joint confidence region of size α for both m and s is specified by the confidence ratio

\lambda_\alpha = 1 - \alpha \quad \text{(2 degrees of freedom).} (25)

Similarly, a confidence region for a single parameter (also called a confidence interval) can be estimated from the confidence ratio

\lambda_\alpha = e^{-\chi^2(\alpha\,|\,\nu = 1)/2} \quad \text{(1 degree of freedom).} (26)

This technique can estimate confidence intervals for m, s, or various 100p-th percentiles.
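The confidence ratios of Equations 25 and 26 are simple to compute; the one-degree-of-freedom case below uses the fact that the chi-squared quantile with one degree of freedom is the square of a normal quantile:

```python
import math
from statistics import NormalDist

def confidence_ratio(confidence, dof):
    """lambda_alpha such that P[-2 ln lambda < chi-squared quantile]
    equals the requested confidence."""
    if dof == 2:
        return 1.0 - confidence            # chi2(2) quantile is -2 ln(1 - alpha)
    if dof == 1:
        q = NormalDist().inv_cdf(0.5 * (1.0 + confidence))
        return math.exp(-0.5 * q * q)      # chi2(1) quantile is q**2
    raise ValueError("dof must be 1 or 2")

# 95% joint region and 95% single-parameter interval
print(confidence_ratio(0.95, 2), confidence_ratio(0.95, 1))
```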

The main disadvantage of this technique is that much calculation is needed to compute the confidence intervals. The calculation must use iterative techniques to follow various contours of the likelihood function to determine which combination of parameters gives the smallest and largest values. However, the confidence intervals for all parameters are determined by the same contour. (It can take from one second to several minutes to compute confidence regions on a personal computer.) Several programs (Neyer 1989m1, Neyer 1989m, Neyer 1989p) employ this method to calculate confidence intervals.

The Likelihood Ratio Test can also be applied to test whether two samples were drawn from the same or different populations. In this case, the ratio of the likelihood functions,

\lambda = L(x_1, x_2\,|\,m, s) \,/\, [L(x_1\,|\,m_1, s_1)\, L(x_2\,|\,m_2, s_2)] , (27)

evaluated at the respective MLEs, is an indication of whether the two sets of parameters are the same or different. Since there are two degrees of freedom, Equation 25 holds asymptotically. A ratio of λ means that one is 1 - λ confident that the two samples were drawn from different populations. Several programs (Neyer 1989c1, Neyer 1989c) employ this method to test for differences.
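Equation 27 can be sketched with the same machinery; the grid maximization is an illustrative stand-in for a real optimizer, and both data sets are hypothetical:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_lik(m, s, data):
    ll = 0.0
    for x, n, p in data:
        P = norm_cdf((x - m) / s)
        P = min(max(P, 1e-12), 1.0 - 1e-12)  # underflow guard
        if n * p > 0:
            ll += n * p * math.log(P)
        if n * (1 - p) > 0:
            ll += n * (1 - p) * math.log(1.0 - P)
    return ll

GRID = [(m / 10.0, s / 10.0) for m in range(-20, 61) for s in range(1, 41)]

def max_ll(*datasets):
    # Maximize the summed log likelihood over one common (m, s)
    return max(sum(log_lik(m, s, d) for d in datasets) for m, s in GRID)

def same_population_ratio(data1, data2):
    """Equation 27: joint fit with a common (m, s) divided by the
    product of the two individual fits."""
    return math.exp(max_ll(data1, data2) - max_ll(data1) - max_ll(data2))

data1 = [(-1.0, 5, 0.2), (0.0, 5, 0.4), (1.0, 5, 0.8)]
data2 = [(2.0, 5, 0.2), (3.0, 5, 0.4), (4.0, 5, 0.8)]
```

Identical samples give a ratio of one; well-separated samples drive the ratio toward zero.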

Simulation was performed to test the ability of the Likelihood Ratio Test to analyze the results of a wide variety of sensitivity tests. The simulation has been described in a previous paper (Neyer 1989) on a more efficient sensitivity test. Several additions were made to the simulation to accommodate this study.

Simulations were performed for the Bruceton (Dixon and Mood 1948), Langlie [1965], and Neyer [1989] tests when the initial guess of sigma, s_guess, was a multiple of the true s. After each test, MLEs of the parameters were computed. The variances of these estimates were computed to estimate the efficiency of each test. In addition, two histograms of likelihood ratios were computed. One histogram recorded the ratios of the likelihood function evaluated at the true parameters divided by the likelihood function at the MLEs. The second recorded the ratio of the joint likelihood function of two consecutive tests, assuming that they had common m and s, divided by the product of the likelihood functions with their individual m and s. Both histograms are cumulative histograms; they graph the fraction of ratios lying below λ as a function of λ.

Sensitivity tests sometimes result in tests with no overlap because of limited sample size or an inefficient test design due to faulty initial estimates of the parameters. Care must be taken when analyzing these experiments. Many researchers have suggested that either these tests be ignored, or additional samples be tested until overlap occurs. However, both of these suggestions cause problems in actual experiments. There is no reason to ignore the results if the data do not overlap, and it is often impossible to increase the sample size if the specimens require much preparation. Thus, realistic simulation must include tests with no overlapping data.

Let L_hf denote the level of the highest failure and L_ls denote the level of the lowest success. Tests with no overlap are signaled by L_hf < L_ls. In this case the likelihood function is exactly one for any value of m chosen between L_hf and L_ls when s is equal to zero. However, if there is overlap (i.e., L_hf > L_ls), or if L_hf = L_ls, then the maximum possible value of the likelihood function is 0.25. There should be little difference between the confidence regions of three tests, one with L_hf = L_ls - ε, the second with L_hf = L_ls, and the third with L_hf = L_ls + ε, where ε is some small number. Thus the ratios were constructed in two different ways, one using the actual likelihood values, and a second substituting 0.25 for the likelihood function for all tests that did not have overlapping results.

**Figure 1: Corrected and Uncorrected Likelihood Ratio Tests.**

Figure 1 shows the comparison between the corrected and uncorrected ratios for the Langlie and Neyer tests with a variety of sample sizes from 15 to 30. The curves with squares are uncorrected while the curves with circles are corrected. Since all tests have (the far from optimal) s_guess = 0.1 s, a significant fraction of the tests yielded results that did not overlap. Inspection of the graphs shows that the corrected ratios were somewhat closer *on the average* to the ideal case (shown by the solid line) than the uncorrected ratios. Since corrected ratios result in more conservative confidence regions, and are stable with respect to small changes in the test level, they were used for the rest of the work. The probability of overlap for efficient test designs increases to unity as the sample size increases. Thus, both the corrected and uncorrected ratios have the same asymptotic form.

Figures 2 - 4 show the results of the simulations. These figures show the histograms of the ratio of the likelihood function evaluated at the true parameters divided by the maximum likelihood function. Each figure shows the ratios for the seven cases of m = m_guess, s = (0.1, 0.2, 0.5, 1.0, 2.0, 5.0, and 10.0) s_guess for various sample sizes and sensitivity tests. The legend shows the ratio of the population s to the guess used in the design of the test, s_guess. (A previous study (Neyer 1989) has shown that wrong values of m generally have little effect on sensitivity tests.) For these ratios, there are two free parameters, m and s. Thus Equation 25 should hold asymptotically.

**Figure 2: Cumulative Likelihood Ratio Distribution for the Bruceton test.**

**Figure 3: Cumulative Likelihood Ratio Distribution for the Langlie test.**

**Figure 4: Cumulative Likelihood Ratio Distribution for the Neyer test.**

As is seen from the figures, even for sample sizes as small as 20, the ratios are close to the asymptotic value shown as the solid line in the figures, as long as the test is efficient (i.e., s ≈ s_guess). The ratios approach the asymptotic value as the sample size increases. The only exception is when the test design is very inefficient for large samples (i.e., Bruceton tests when s ≪ s_guess). Moderately inefficient tests (i.e., Bruceton and Langlie tests when s ≫ s_guess) approach the asymptotic value, but more slowly than test designs matched to the population. Since the Neyer test is asymptotically efficient regardless of the initial estimates of the parameters (Neyer 1989), it rapidly approaches the asymptotic values when the sample size is reasonable. Since researchers often repeat a sensitivity test if the initial assumptions about the population are far from the truth, the Likelihood Ratio Test can be used to compute confidence regions in most cases.

In evaluating these graphs, it is important to note that the graphs illustrate how the computed confidence regions compare to the actual confidence regions for a variety of tests under different conditions. These graphs give no indication of the size of the confidence regions for the parameters. Thus, they should not be viewed as a measure of how efficiently a given test can measure the parameters of the distribution. The efficiency of these test designs has been reported in previous works (Neyer 1989, Edelman and Prairie 1966, Langlie 1965).

The same simulation was used to compute histograms of the fraction of time the true parameters were inside the confidence region computed by the asymptotic method. (See the section on the asymptotic method.) The simulation has shown that the true variance is often larger than the asymptotic variance by a factor of two or more. Thus, the true confidence is often much smaller than that suggested by the asymptotic method. For example, a 99% confidence region computed by the asymptotic method would fail to contain the true parameters 26% of the time for a 20-sample Langlie test performed under ideal conditions! Figure 5 shows that even when sample sizes are as large as 100, with Bruceton, Langlie, and Neyer tests performed under ideal conditions, the asymptotic method significantly underestimates the confidence regions when the confidence is larger than 90%. Thus, reliance upon the asymptotic method could lead to much false confidence in the data.

**Figure 5: Comparison of Asymptotic and Likelihood Ratio Test Confidence Regions.**

Similarly, the histograms in Figures 6 - 8 are a measure of the relationship between the confidence α and the likelihood ratio, λ. These histograms used the same three tests and the same ratios of s to s_guess as used in Figures 2 - 4. The histograms show the ratio of the product of two likelihood functions evaluated at their joint MLEs to the product of the likelihood functions evaluated at their individual MLEs. Thus, they can be used to test whether the two samples were drawn from the same or different populations.

**Figure 6: Cumulative dual Likelihood Ratio Distribution for the Bruceton test.**

**Figure 7: Cumulative dual Likelihood Ratio Distribution for the Langlie test.**

**Figure 8: Cumulative dual Likelihood Ratio Distribution for the Neyer test.**

Since there are two extra parameters in the denominator (m_t and s_t in the numerator and m_1, s_1, m_2, and s_2 in the denominator), Equation 25 should hold asymptotically. As is seen from the figures, the asymptotic relationship is reasonable, even for sample sizes as small as 20, for efficient tests. As in Figures 2 - 4, the tests that are inefficient produce ratios further from the asymptotic values, although the deviation is greater for the joint ratios. Thus, the Likelihood Ratio Test can also be used to test whether two samples are from the same population as long as the test is "reasonably efficient". This comparison also has the advantage that it can be used to compare the results of any two tests, even two with different sample sizes and test methods.

A new method, based on the Likelihood Ratio Test, is proposed for analyzing sensitivity tests. Simulation shows that it is able to analyze the results of all sensitivity tests, including degenerate results, that it produces relatively unbiased analyses, and that the results are valid regardless of the test design. All three of these characteristics are advantages over the currently used asymptotic methods. It is able to analyze single tests to produce confidence regions of various sizes. It is also able to determine whether two samples were drawn from the same or different populations. The only apparent disadvantage of this test is the significant amount of computation necessary for computing confidence regions. However, these computations can be performed on even small computers in a few seconds or less.

Robert W. Ashcroft (1981), "A desktop computer version of ASENT," Technical Report MHSMP-81-46, Mason and Hanger, Silas Mason Company, Amarillo, Texas, November 1981.

Robert W. Ashcroft (1987), "An IBM PC version of ASENT," Technical Report MHSMP-87-51, Mason and Hanger, Silas Mason Company, Amarillo, Texas, December 1987.

W. J. Dixon and A. M. Mood (1948), "A Method for Obtaining and Analyzing Sensitivity Data," *Journal of the American Statistical Association*, **43**, pp. 109-126.

D. A. Edelman and R. R.
Prairie (1966), "A Monte Carlo Evaluation of the
Bruceton, Probit, and One-Shot Methods of Sensitivity
Testing," Technical Report **SC-RR-66-59**, Sandia
Corporation, Albuquerque, NM.

D. J. Finney (1947), *Probit
Analysis, A Statistical Treatment of the Sigmoid Response Curve*,
Cambridge at the University Press, Cambridge, England.

Maurice G. Kendall and
Alan Stuart (1967), *The Advanced Theory of Statistics*,
Volume 2, Second Edition, New York: Hafner Publishing Company.

H. J. Langlie (1965), "A Reliability Test Method For 'One-Shot' Items," Technical Report **U-1792**, Third Edition, Aeronutronic Division of Ford Motor Company, Newport Beach, CA.

H. J. Langlie (1988m), *ONE_SHOT PAC Users Manual*, CMOS Records, Balboa, California.

H. J. Langlie (1988o), *ONE_SHOT PAC*, Version 1.2, CMOS Records, Balboa, California.

B. E. Mills (1980), "Sensitivity Experiments: A One-Shot Experimental Design and the ASENT Computer Program," SAND80-8216, Sandia Laboratories, Albuquerque, New Mexico.

Barry T. Neyer (1989), "More Efficient Sensitivity Testing," Technical Report MLM-3609, EG&G Mound Applied Technologies, Miamisburg, OH.

Barry T. Neyer (1989c1), *COMSEN*, Version 1.0, National Energy Software Center, Argonne, Illinois.

Barry T. Neyer (1989m1), *MUSIG*, Version 1.0, National Energy Software Center, Argonne, Illinois.

Neyer Software (1990a), *ASENT* Program, Version 2.1, Neyer Software, Cincinnati, Ohio.

Neyer Software (1989c), *ComSen*, Version 2.1, Neyer Software, Cincinnati, Ohio.

Neyer Software (1989m), *MuSig*, Version 2.1, Neyer Software, Cincinnati, Ohio.

Neyer Software (1989p), *ProbPlot*, Version 2.1, Neyer Software, Cincinnati, Ohio.

Mervyn J. Silvapulle (1981), "On the Existence of Maximum Likelihood Estimators for the Binomial Response Models," *Journal of the Royal Statistical Society B*, **43**, pp. 310-313.

Patrick Spahn (1989), *LOGIT*,
Naval Surface Warfare Center, White Oak, Maryland.

Ramie H. Thompson (1989), *BRUCETON*,
Franklin Research Institute, Philadelphia, Pennsylvania.

Ramie H. Thompson (1987), L1SHOT, Franklin Research Institute, Philadelphia, Pennsylvania.

Ramie H. Thompson and James G. Stuart (1984), *An Evaluation of Statistical Techniques used in Electroexplosive Device Testing*, F-C5867-003, Franklin Research Center, Philadelphia, Pennsylvania.