
Reliability Growth Metrics Featured in November Issue

1 November 2010
David M. Steinberg, Technometrics Editor


The U.S. Department of Defense (DoD) requires high levels of reliability in the products and systems it purchases. In a 2008 report, a task force set up by the DoD raised serious doubts about whether these goals are being realized. The task force found that an increasing number of DoD weapon systems were not operationally suitable, primarily due to poor reliability, availability, and maintainability. According to the report, nearly half of U.S. Army systems failed to meet their reliability goals when tested in the field.

In the feature article, J. Brian Hall, Paul M. Ellner, and Ali Mosleh argue that a major reason for these system failures is the lack of appropriate metrics for measuring reliability growth during system development. Their work is devoted to reliability growth management metrics and statistical methods for discrete-use systems (i.e., systems whose test duration is measured in terms of discrete trials, shots, or demands).

Most systems undergo a reliability improvement process, in which failure modes are discovered and appropriate corrective actions are taken. Thus, the system configuration does not remain constant. The new methods presented in “Reliability Growth Management Metrics and Statistical Methods for Discrete-Use Systems” can be used for analyzing and assessing reliability growth during such a process. The authors show how to estimate system reliability; the expected number of failure modes observed during testing; the probability of failure due to a new failure mode; and the portion of system unreliability associated with repeat, or known, failure modes.

From these quantities, one can then (1) estimate the initial and projected reliability and the reliability growth potential, (2) address model goodness-of-fit concerns, (3) quantify programmatic risk, and (4) assess the reliability maturity of discrete-use systems undergoing development. Statistical procedures for point estimation, confidence interval construction, and goodness-of-fit testing also are given. An application to a missile program illustrates the utility of the approach.
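As a rough illustration of the quantities involved (the article's estimators are model-based projections, not the simple empirical tallies used here), hypothetical discrete-trial data might be summarized along these lines:

```python
# Illustrative sketch only: empirical tallies of the reliability growth
# quantities described above, computed from hypothetical discrete-trial data.
# These are NOT the authors' projection-based estimators.

# Each trial outcome is either None (success) or a failure-mode label.
trial_outcomes = [None, "A", None, None, "B", "A", None, "C", None, "B"]

n_trials = len(trial_outcomes)
failures = [m for m in trial_outcomes if m is not None]

# Empirical system reliability over the test period.
reliability = 1 - len(failures) / n_trials

# Number of distinct failure modes observed during testing.
observed_modes = set(failures)

# Classify each failure as "new" (first occurrence of its mode) or "repeat".
seen = set()
new_failures, repeat_failures = 0, 0
for mode in failures:
    if mode in seen:
        repeat_failures += 1
    else:
        new_failures += 1
        seen.add(mode)

# Crude empirical analogues of two quantities discussed in the article:
# the chance that a trial fails via a not-yet-seen mode, and the share of
# unreliability attributable to repeat (known) modes.
p_new_mode_failure = new_failures / n_trials
repeat_share_of_unreliability = repeat_failures / len(failures)

print(f"reliability estimate:        {reliability:.2f}")
print(f"failure modes observed:      {len(observed_modes)}")
print(f"P(trial fails via new mode): {p_new_mode_failure:.2f}")
print(f"repeat-mode share:           {repeat_share_of_unreliability:.2f}")
```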

Five invited discussions accompany the article, along with a rejoinder by the authors.

The next two articles are on experimental design. The first, by Xianggui Qu, is titled “Optimal Row-Column Designs in High-Throughput Screening Experiments.” High-throughput screening (HTS) is a large-scale screening process whose goal is to identify, from among hundreds of thousands to millions of potential compounds, those that are pharmacologically active. A key piece of HTS equipment is the microplate, a container that features a grid of small, open wells in which to place compounds under test. Testing throughput is maximized by placing a distinct compound in each well. However, microplates typically have row and column effects, so designs are needed that permit pairwise comparison of compounds while simultaneously eliminating the row and column effects.

Finding good row-column designs for HTS is a challenging task. In this article, the (M,S)-optimality criterion is used to select optimal designs and eliminate inefficient ones. It turns out that all (M,S)-optimal designs are binary (i.e., no treatment appears more than once in any row or column). The author shows how to construct (M,S)-optimal designs that permit estimation of all paired treatment comparisons.
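To give a flavor of the criterion, the sketch below evaluates a candidate row-column design under the (M,S)-criterion, using the treatment information matrix of the standard additive row-column model (an assumption on my part, not a formula quoted from the article): an (M,S)-optimal design first maximizes tr(C) and, among such designs, minimizes tr(C²).

```python
import numpy as np

# Sketch: evaluate the (M,S)-criterion for a candidate row-column design.
# The information matrix below is the usual one for the additive
# row + column + treatment model (an assumption, not taken from the article).

# design[i, j] = treatment placed in row i, column j (a 3x3 Latin square here).
design = np.array([[0, 1, 2],
                   [1, 2, 0],
                   [2, 0, 1]])
p, q = design.shape                      # rows, columns
v = design.max() + 1                     # number of treatments
n = p * q

# Treatment-by-row and treatment-by-column incidence matrices.
N_r = np.zeros((v, p))
N_c = np.zeros((v, q))
for i in range(p):
    for j in range(q):
        t = design[i, j]
        N_r[t, i] += 1
        N_c[t, j] += 1

r = N_r.sum(axis=1)                      # replication of each treatment

# Treatment information matrix after adjusting for rows and columns.
C = np.diag(r) - N_r @ N_r.T / q - N_c @ N_c.T / p + np.outer(r, r) / n

# (M,S)-criterion: maximize trace(C); among those designs, minimize trace(C^2).
print("trace(C)   =", np.trace(C))
print("trace(C^2) =", np.trace(C @ C))

# Binary check: no treatment appears more than once in any row or column.
print("binary:", bool((N_r <= 1).all() and (N_c <= 1).all()))
```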

Holger Dette and Andrey Pepelyshev propose a new class of designs in their article, “Generalized Latin Hypercube Design for Computer Experiments.” Most popular designs for computer experiments, such as the Latin hypercube, place points uniformly on each factor axis. This article investigates the performance of nonuniform placement, with more points toward the boundary of the design space.

These designs are obtained from existing designs by a quantile transformation on each factor. The transformation is motivated by logarithmic potential theory, which yields the arc-sine measure as an equilibrium distribution. The methodology is illustrated for maximin Latin hypercube designs by several examples. The new designs yield a smaller integrated mean squared error for prediction than the original designs.
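As a rough sketch of the idea (the exact construction and the maximin optimization are in the article), the arc-sine quantile transform u ↦ sin²(πu/2) can be applied coordinate-wise to an ordinary Latin hypercube, pushing points toward the boundary of the design space:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """Random Latin hypercube design with n points in [0, 1]^d."""
    u = np.empty((n, d))
    for j in range(d):
        # One stratum per point in each coordinate, randomly permuted per column.
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return u

def arcsine_transform(u):
    """Quantile transform to the arc-sine distribution on [0, 1].

    F^{-1}(u) = sin^2(pi * u / 2); this pushes points toward 0 and 1.
    """
    return np.sin(np.pi * u / 2) ** 2

lhd = latin_hypercube(20, 2, rng)
generalized_lhd = arcsine_transform(lhd)

# The transformed design places a larger share of points near the boundary.
near_edge = lambda x: np.mean((x < 0.1) | (x > 0.9))
print("near-boundary fraction:", near_edge(lhd), "->", near_edge(generalized_lhd))
```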

Dorin Drignei considers the analysis of data from computer experiments in his article, “Functional ANOVA in Computer Models with Time.” The paper develops analyses for computer experiments that generate time series or functional output. Drignei shows how to generalize sensitivity analysis—used to assess the influence of each input on the output—to models with time series output. The methods in the paper originate in the concept of conditional expectation and variance with respect to time. The article also establishes a relationship between the proposed sensitivity indices and the global sensitivity indices that have been proposed for models with scalar output. An application from the automotive industry illustrates the analysis.
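The sketch below is not Drignei's method, but a generic Monte Carlo (pick-freeze) calculation that conveys the flavor of sensitivity analysis for time series output: a first-order index is estimated at each time point for a toy model and then averaged over the time grid.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy computer model with two inputs and functional (time series) output.
t = np.linspace(0.0, 1.0, 50)
def model(x):
    x1, x2 = x[..., 0:1], x[..., 1:2]
    return x1 * np.sin(2 * np.pi * t) + x2 ** 2 * np.cos(2 * np.pi * t)

# Pick-freeze estimate of the first-order Sobol index of each input,
# computed pointwise in time and then averaged over the time grid.
# A generic illustration, not the estimator developed in the article.
N, d = 5000, 2
A = rng.random((N, d))
B = rng.random((N, d))
yA = model(A)
varY = yA.var(axis=0)

time_averaged_index = {}
for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                  # "freeze" input i at its A values
    yC = model(Ci)
    cov = (yA * yC).mean(axis=0) - yA.mean(axis=0) * yC.mean(axis=0)
    S_i_t = cov / varY                  # first-order index at each time point
    time_averaged_index[f"x{i + 1}"] = S_i_t.mean()

print(time_averaged_index)
```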

Most methods for statistical process monitoring require initial in-control data to set control limits. As a result, they are of limited value for start-up data or short runs. Panagiotis Tsiamyrtzis and Douglas M. Hawkins overcome this problem by adopting a Bayesian framework that exploits prior information in “Bayesian Start-Up Phase Mean Monitoring of an Autocorrelated Process That Is Subject to Random-Sized Jumps.” The authors provide a monitoring scheme for the mean of an autocorrelated process that can experience bidirectional jumps of random size and occurrence and has a steady state. The method tracks the mean in an online fashion using a Bayesian sequential updating scheme. The performance of the proposed model is compared to other methods that can be applied in similar settings. The model is illustrated with a real application from the dairy business. Supplemental files, available online, include R code for applying the proposed methods.
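The sketch below is only a schematic analogue of this idea, not the authors' model (their R code is in the online supplement): a Gaussian process mean is tracked online, allowing at each step for a possible jump of random size, and the resulting two-branch posterior is collapsed back to a single Gaussian.

```python
import numpy as np

# Minimal schematic sketch (not the article's model): online Bayesian tracking
# of a process mean that occasionally experiences random-sized jumps.
# Assumed model: y_t ~ N(mu_t, sigma^2); between observations, with probability
# p_jump the mean jumps by a N(0, tau^2) amount. The two-branch posterior
# (jump / no jump) is collapsed to one Gaussian by moment matching each step.

sigma2 = 1.0      # observation variance (assumed known)
tau2 = 4.0        # jump-size variance
p_jump = 0.05     # jump probability per step
m, v = 0.0, 10.0  # prior mean and variance for the process mean

rng = np.random.default_rng(2)

# Simulate a short start-up run with one jump at t = 15.
true_mu = np.where(np.arange(30) < 15, 0.0, 3.0)
y = true_mu + rng.normal(0.0, np.sqrt(sigma2), size=30)

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

posterior_means = []
for yt in y:
    # Two branches for the mean before seeing y_t: no jump / jump.
    branches = [(1 - p_jump, v), (p_jump, v + tau2)]
    weights, means, variances = [], [], []
    for prob, v_b in branches:
        # Conjugate normal update of the mean under this branch.
        post_var = 1.0 / (1.0 / v_b + 1.0 / sigma2)
        post_mean = post_var * (m / v_b + yt / sigma2)
        # Branch weight = prior probability * marginal likelihood of y_t.
        weights.append(prob * normal_pdf(yt, m, v_b + sigma2))
        means.append(post_mean)
        variances.append(post_var)
    weights = np.array(weights) / np.sum(weights)
    # Collapse the two-component mixture to a single Gaussian (moment matching).
    m = float(np.dot(weights, means))
    v = float(np.dot(weights, [vi + (mi - m) ** 2 for vi, mi in zip(variances, means)]))
    posterior_means.append(m)

print(np.round(posterior_means, 2))
```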

The issue concludes with an article by Bing Xing Wang, Keming Yu, and M. C. Jones titled “Inference Under Progressively Type II Right-Censored Sampling for Certain Lifetime Distributions.” The article considers estimation of the parameters of a certain family of two-parameter lifetime distributions based on progressively type II right-censored samples (including ordinary type II right censoring).
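To make the sampling scheme concrete, the sketch below simulates a progressively type II right-censored Weibull sample: n units go on test and, at the i-th observed failure, R_i of the surviving units are withdrawn at random; ordinary type II censoring is the special case in which all withdrawals occur at the final observed failure. (This illustrates the data structure only, not the authors' inference procedure.)

```python
import numpy as np

rng = np.random.default_rng(3)

def progressive_type2_sample(lifetimes, removals, rng):
    """Simulate progressive Type II right censoring.

    At the i-th observed failure, removals[i] surviving units are withdrawn
    at random; the rest stay on test. Returns the m observed (ordered)
    failure times, where m = len(removals).
    """
    alive = list(lifetimes)
    observed = []
    for r in removals:
        alive.sort()
        t = alive.pop(0)              # next failure among units still on test
        observed.append(t)
        # Randomly withdraw r of the surviving units.
        drop = set(rng.choice(len(alive), size=r, replace=False))
        alive = [x for k, x in enumerate(alive) if k not in drop]
    return observed

# Example: n = 20 Weibull(shape=1.5, scale=10) lifetimes, m = 8 observed
# failures, with censoring scheme R = (2, 0, 1, 0, 3, 0, 0, 6), so that
# sum(R) + m = n.
n, shape, scale = 20, 1.5, 10.0
lifetimes = scale * rng.weibull(shape, size=n)
removals = [2, 0, 1, 0, 3, 0, 0, 6]
sample = progressive_type2_sample(lifetimes, removals, rng)
print(np.round(sample, 2))
```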

This family of proportional hazard distributions includes the Weibull, Gompertz, and Lomax distributions. The authors derive exact confidence intervals for one of the parameters and generalized confidence intervals for the other; inference for the first parameter is accomplished independently of the unknown value of the other parameter in this family of distributions. A simulation study concentrating mainly on the Weibull distribution illustrates the accuracy of these confidence intervals, as well as the shorter length of the exact confidence interval compared with a known alternative. It also shows that the estimators compare favorably with maximum likelihood estimators. The method is applied to data from an industrial experiment on the breakdown times of an insulating fluid.
