
Forecasting, Reliability, and Design of Experiments Part of Newest Issue

1 October 2009
David M. Steinberg, Technometrics Editor



The August issue of Technometrics includes 10 articles that cover a broad range of topics, including forecasting, reliability, design of experiments, computer experiments, automatic control, regression, and measurement.

The lead article, by Haipeng Shen, is “On Modeling and Forecasting Time Series of Smooth Curves.” This research was motivated by problems arising in the operations management of telephone customer service centers, where forecasts of daily call arrival rate profiles are needed for service agent staffing and scheduling purposes. The article shows how these data can be effectively modeled as a time series of smooth curves and develops methods for forecasting such curves and dynamically updating the forecasts. The methodology has three components: dimension reduction through a smooth factor model, time series modeling and forecasting of the factor scores, and dynamic updating using penalized least squares. The proposed methods are illustrated via call center data and two simulation studies.
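The first two of those components can be conveyed in a toy sketch (illustrative only: the paper's smooth factor model is replaced here by a plain SVD, the penalized-least-squares updating step is omitted, and all data are simulated rather than actual call center data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for daily call-arrival-rate profiles: 60 days x 24 intraday
# periods, built from two smooth basis curves with AR(1)-varying scores.
t = np.linspace(0, 1, 24)
basis = np.vstack([np.sin(np.pi * t), np.cos(2 * np.pi * t)])  # 2 x 24
scores = np.zeros((60, 2))
for d in range(1, 60):
    scores[d] = 0.8 * scores[d - 1] + rng.normal(0, 0.3, size=2)
X = scores @ basis + rng.normal(0, 0.05, size=(60, 24))

# Component 1: dimension reduction -- extract factor curves via SVD
# (a simplification of the paper's smooth factor model).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
factor_curves = Vt[:k]            # k x 24 estimated factor curves
fac_scores = U[:, :k] * s[:k]     # 60 x k daily factor scores

# Component 2: fit an AR(1) to each score series, forecast one day ahead.
next_scores = np.empty(k)
for j in range(k):
    z = fac_scores[:, j]
    phi = np.dot(z[1:], z[:-1]) / np.dot(z[:-1], z[:-1])  # AR(1) coefficient
    next_scores[j] = phi * z[-1]

# The day-ahead curve forecast is the score forecast times the factor curves
# (component 3, dynamic updating within the day, is not shown).
forecast_curve = next_scores @ factor_curves
print(forecast_curve.shape)  # (24,)
```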

“Two-Stage Leveraged Measurement System Assessment,” by Ryan Browne, Jock MacKay, and Stefan Steiner, presents a method for studying the variation in a measurement system. The new plan is conducted in two stages. In the first stage, called the baseline, a number of parts are measured once. In the second stage, a few extreme parts are selected (based on their baseline measurement) and each is re-measured several times. The authors compare this to the standard approach of making repeat measurements on a random sample of parts and demonstrate the advantage of the leveraged plan in terms of the bias and standard deviation of estimators of the intraclass correlation coefficient. They also present a method to determine sample size when planning a leveraged measurement system assessment.
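A toy simulation conveys the flavor of the two-stage plan (the sample sizes, selection rule, and moment-style estimator below are illustrative assumptions, not the authors' estimators or planning results):

```python
import numpy as np

rng = np.random.default_rng(1)

# True variance components for the simulation.
sigma_p, sigma_m = 2.0, 0.5          # part and measurement std devs
true_icc = sigma_p**2 / (sigma_p**2 + sigma_m**2)

# Stage 1 (baseline): measure 100 parts once each.
parts = rng.normal(0, sigma_p, size=100)
baseline = parts + rng.normal(0, sigma_m, size=100)

# Stage 2 (leveraging): select the 5 most extreme parts by baseline
# value and remeasure each 6 times.
extreme = np.argsort(np.abs(baseline - baseline.mean()))[-5:]
repeats = parts[extreme, None] + rng.normal(0, sigma_m, size=(5, 6))

# Moment-style estimates: within-part spread of the repeats estimates the
# measurement variance; the baseline variance minus that estimates the
# part-to-part variance.
var_m = repeats.var(axis=1, ddof=1).mean()
var_p = max(baseline.var(ddof=1) - var_m, 0.0)
icc_hat = var_p / (var_p + var_m)
print(round(true_icc, 3), round(icc_hat, 3))
```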

Lulu Kang and V. Roshan Joseph contribute “Bayesian Optimal Single Arrays for Robust Parameter Design.” A critical goal in robust parameter design experiments is to estimate control factor-by-noise factor interactions. To achieve this goal in small experiments, some researchers have proposed use of a modified effect hierarchy principle. In this article, Kang and Joseph propose a Bayesian criterion for single arrays that incorporates the importance of control-by-noise interactions without altering the effect hierarchy. A modified exchange algorithm is proposed for finding the designs, and matching software is available online. The authors also explain how to design experiments with internal noise factors, a topic that has received scant attention in the literature. The advantages of the proposed approach are illustrated using several examples.
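The flavor of an exchange algorithm can be sketched with a generic coordinate-exchange search; for illustration, the criterion below is plain D-optimality rather than the authors' Bayesian criterion, and the run and factor counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def d_criterion(X):
    # Log-determinant of the information matrix: a standard D-optimality
    # stand-in for the paper's Bayesian criterion.
    sign, logdet = np.linalg.slogdet(X.T @ X)
    return logdet if sign > 0 else -np.inf

def coordinate_exchange(n_runs=12, n_factors=4, n_restarts=5):
    # Basic exchange search over +/-1 designs: flip one cell at a time,
    # keep the flip if the criterion improves, repeat until no gain.
    best_X, best_val = None, -np.inf
    for _ in range(n_restarts):
        X = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
        val = d_criterion(X)
        improved = True
        while improved:
            improved = False
            for i in range(n_runs):
                for j in range(n_factors):
                    X[i, j] *= -1
                    new_val = d_criterion(X)
                    if new_val > val + 1e-9:
                        val, improved = new_val, True
                    else:
                        X[i, j] *= -1  # revert the flip
        if val > best_val:
            best_X, best_val = X.copy(), val
    return best_X, best_val

design, value = coordinate_exchange()
print(design.shape, round(value, 2))
```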

In “Algorithmic Construction of Efficient Fractional Factorial Designs with Large Run Sizes,” Hongquan Xu develops a sequential algorithm for constructing minimum aberration, two-level, fractional factorial designs. The algorithm exploits results that relate minimum aberration designs to minimum aberration projections onto a subset of factors in a sequential build-up process. Moment projection patterns are used to efficiently identify nonisomorphic designs. A fast isomorphism check procedure is developed by matching the factors using their delete-one-factor projections. This algorithm is used to completely enumerate all 128-run designs of resolution 4, all 256-run designs of resolution 4 up to 17 factors, all 512-run designs of resolution 5, all 1024-run designs of resolution 6, and all 2048- and 4096-run designs of resolution 7. A method is proposed for constructing minimum aberration designs using only a partial catalog of good designs. Minimum aberration or good designs are tabulated up to 40, 80, 160, 45, 47, and 65 factors for 128, 256, 512, 1024, 2048, and 4096 runs, respectively.
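The aberration bookkeeping behind such enumerations can be illustrated by computing the word length pattern of a small two-level design from its defining words; this is a textbook computation, not the paper's algorithm:

```python
from itertools import combinations

def word_length_pattern(generators, n_factors):
    # Enumerate the defining contrast subgroup of a two-level fractional
    # factorial: words are sets of factor indices, and multiplying words
    # is symmetric difference. Count the words of each length.
    gens = [frozenset(g) for g in generators]
    words = set()
    for r in range(1, len(gens) + 1):
        for combo in combinations(gens, r):
            w = frozenset()
            for g in combo:
                w = w ^ g
            if w:
                words.add(w)
    wlp = [0] * (n_factors + 1)
    for w in words:
        wlp[len(w)] += 1
    return wlp

# 2^(6-2) design with generators E = ABC and F = BCD, i.e. defining
# words ABCE and BCDF (factors labeled 0..5 = A..F).
gens = [{0, 1, 2, 4}, {1, 2, 3, 5}]
wlp = word_length_pattern(gens, 6)
print(wlp)  # -> [0, 0, 0, 0, 3, 0, 0]: three words of length 4

# The resolution is the length of the shortest defining word.
resolution = next(g for g, c in enumerate(wlp) if c > 0)
print(resolution)  # -> 4 (a resolution IV design)
```

Minimum aberration compares such patterns lexicographically: among designs of maximal resolution, prefer the one with fewer shortest words.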

Our next article, by Gang Han, Thomas J. Santner, and William I. Notz, studies a problem that arises in computer experiments: “Prediction for Computer Experiments Having Quantitative and Qualitative Input Variables.” This paper introduces a Bayesian methodology for prediction in computer experiments having both quantitative and qualitative inputs. The proposed model is a hierarchical Bayesian model with conditional Gaussian stochastic process components. For each of the qualitative inputs, the model assumes the outputs corresponding to different levels of the qualitative input have ‘similar’ functional behavior in the quantitative inputs. The predictive accuracy of this method is compared with the predictive accuracies of alternative proposals in examples. The method is illustrated in a biomechanical engineering application.
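A minimal sketch of the underlying idea, assuming a common product-form covariance (a Gaussian kernel in the quantitative input scaled by an assumed cross-correlation between qualitative levels); the paper's hierarchical Bayesian model is considerably richer:

```python
import numpy as np

# Assumed cross-correlation between the two levels of the qualitative
# input; in the paper such quantities carry hierarchical Bayesian priors.
T = np.array([[1.0, 0.8],
              [0.8, 1.0]])

def kernel(x1, c1, x2, c2, theta=4.0):
    # Gaussian covariance in the quantitative input x, scaled by the
    # correlation between the qualitative levels c1 and c2.
    return T[c1, c2] * np.exp(-theta * (x1 - x2) ** 2)

# Training data: the two qualitative levels share similar functional
# behavior in x, differing only by a shift -- the structure the model
# is designed to exploit.
xs = np.tile(np.linspace(0.0, 1.0, 8), 2)
cs = np.repeat([0, 1], 8)
ys = np.sin(2 * np.pi * xs) + 0.3 * cs

# Standard Gaussian-process posterior mean at a new mixed input (x*, c*).
K = np.array([[kernel(xa, ca, xb, cb) for xb, cb in zip(xs, cs)]
              for xa, ca in zip(xs, cs)]) + 1e-6 * np.eye(16)
x_star, c_star = 0.37, 1
k_star = np.array([kernel(x_star, c_star, xb, cb) for xb, cb in zip(xs, cs)])
mean = k_star @ np.linalg.solve(K, ys)
print(round(float(mean), 2))
```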

