Design and Model Selection Featured in November Issue
Hugh Chipman, Technometrics Editor
This issue of Technometrics contains an assortment of articles about design and analysis.
Increasingly, industrial experiments use multi-stratum designs, such as split-plot and strip-plot designs, and these experiments often span more than one processing stage. The challenge is to identify an appropriate multi-stratum design, along with an appropriate statistical model. In “A General Strategy for Analyzing Data from Split-Plot and Multi-Stratum Experimental Designs,” Peter Goos and Steven G. Gilmour introduce Hasse diagrams as a visual tool for identifying the structure of a design and a corresponding statistical model. They demonstrate the approach with a large study of the adhesion properties of coatings to polypropylene, with designs ranging from a simple split-plot design to a strip-plot-type design involving repeated measurements of the response.
In many industrial experiments, some factors are not set independently for each run because their levels are costly or hard to change. Attention is usually restricted to split-plot designs, in which all the hard-to-change factors are independently reset at the same points in time. Split-split-plot designs relax this constraint somewhat by requiring the less hard-to-change factors to be reset more often than the most hard-to-change factors. A key feature of split-split-plot designs, however, is that the less hard-to-change factors are reset whenever the most hard-to-change factors are. In “Staggered-Level Designs for Experiments with More Than One Hard-to-Change Factor,” Heidi Arnouts and Peter Goos drop this requirement and present a new type of design in which the levels of different hard-to-change factors can be reset at entirely different points in time.
Two-level designs are powerful tools for the identification of models with main effects and two-factor interactions. In “Model-Robust Two-Level Designs Using Coordinate Exchange Algorithms and a Maximin Criterion,” Byran J. Smucker, Enrique del Castillo, and James L. Rosenberger present a new set of coordinate exchange algorithms that construct designs maximizing the number of estimable models, using a secondary maximin criterion to encourage high efficiency with respect to those models.
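The basic coordinate exchange idea underlying such constructions can be sketched briefly. The sketch below is an illustration only, not the authors' algorithm: it optimizes a plain D-criterion, det(X'X), for a main-effects model in place of their model-robustness and maximin criteria, and all function names are invented for this example.

```python
import random

def det(M):
    # Determinant via Gaussian elimination with partial pivoting (stdlib only).
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return d

def d_criterion(design):
    # Model matrix: intercept plus main effects; score is det(X'X).
    X = [[1] + row for row in design]
    k = len(X[0])
    M = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
    return det(M)

def coordinate_exchange(n_runs, n_factors, n_passes=10, seed=0):
    # Start from a random two-level (+/-1) design, then sweep over the
    # coordinates, keeping any single-coordinate sign flip that improves
    # the criterion, until a full pass yields no improvement.
    rng = random.Random(seed)
    design = [[rng.choice([-1, 1]) for _ in range(n_factors)]
              for _ in range(n_runs)]
    best = d_criterion(design)
    for _ in range(n_passes):
        improved = False
        for r in range(n_runs):
            for c in range(n_factors):
                design[r][c] *= -1          # try flipping one coordinate
                score = d_criterion(design)
                if score > best:
                    best, improved = score, True
                else:
                    design[r][c] *= -1      # revert the flip
        if not improved:
            break
    return design, best
```

Like the real algorithm, this sketch converges only to a local optimum, so in practice it would be run from multiple random starting designs.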
In “Two-Stage Sensitivity-Based Group Screening in Computer Experiments,” Hyejung Moon, Angela M. Dean, and Thomas J. Santner develop a new two-stage group screening methodology for identifying active inputs. In Stage 1, groups of inputs showing low activity are screened out; in Stage 2, individual inputs from the active groups are identified. Examples show that, compared with other procedures, the proposed method provides more consistent and accurate results for high-dimensional screening.
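The two-stage logic can be illustrated with a deliberately crude sketch. This is not the authors' sensitivity-based method: it uses a simple output-variance measure of activity on a toy simulator, and every name in it is invented for the example.

```python
import random

def two_stage_group_screen(f, n_inputs, groups, n_samples=200, threshold=0.1):
    """Toy two-stage group screening sketch.

    Stage 1: estimate each group's activity as the output variance induced
    by jointly varying that group's inputs (others held at a baseline).
    Stage 2: apply the same variance estimate input-by-input, but only
    within the groups flagged active in Stage 1.
    """
    rng = random.Random(0)
    base = [0.5] * n_inputs

    def activity(indices):
        # Sample the output while varying only the given inputs.
        ys = []
        for _ in range(n_samples):
            x = base[:]
            for i in indices:
                x[i] = rng.random()
            ys.append(f(x))
        mean = sum(ys) / len(ys)
        return sum((y - mean) ** 2 for y in ys) / len(ys)

    active_groups = [g for g in groups if activity(g) > threshold]       # Stage 1
    return sorted(i for g in active_groups                               # Stage 2
                  for i in g if activity([i]) > threshold)

# Toy simulator: only inputs 0 and 1 affect the output; 2-5 are inert.
f = lambda x: 4.0 * x[0] + 3.0 * x[1]
print(two_stage_group_screen(f, 6, [[0, 1, 2], [3, 4, 5]]))  # → [0, 1]
```

The inert group [3, 4, 5] is eliminated in Stage 1 without ever testing its members individually, which is the source of the savings in high-dimensional screening.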
In robust design studies, the important noise factors are varied systematically in offline experiments and their interactions with control factors are investigated. The choice of noise variable settings is extremely important; however, the noise distributions are rarely known, and the choices are often based on convenience. In “Noise Variable Settings in Robust Design Experiments,” Derek Bingham and Vijayan N. Nair demonstrate unintended and undesirable consequences of such choices, including identifying small dispersion effects as important, missing large ones, and difficulties with parameter optimization.
Accelerated life tests provide timely information on product reliability. As product complexity increases, these studies often generate multiple dependent failure modes. In “Planning of Accelerated Life Tests with Dependent Failure Modes Based on a Gamma Frailty Model,” Xiao Liu incorporates the dependence between failure modes into design and modeling via a gamma frailty model. The frailty model is easily understandable and maintains mathematical tractability in the planning problem when there are more than two failure modes.
In Bayesian system reliability studies, second-stage data are collected to obtain a more precise estimate of the system’s reliability. The current strategy for comparing potential experimental designs is computationally intensive and time-consuming. In “Computationally Efficient Comparison of Experimental Designs for System Reliability Studies with Binomial Data,” Jessica Chapman, Max Morris, and Christine Anderson-Cook present a new, more computationally efficient methodology.
In “Variable Selection in Additive Models Using P-Splines” by A. Antoniadis, I. Gijbels, and A. Verhasselt, the nonnegative garrote is extended to a component selection method for nonparametric additive models in which each univariate function is estimated with P-splines. The resulting procedure is computationally efficient and performs well when implemented with an appropriate parameter-selection method.
Visit the Technometrics website for a full table of contents.