
Dimensional Analysis Featured in August Issue

1 August 2013
Hugh A. Chipman, Technometrics Editor


    Dimensional Analysis (DA) is a fundamental method in the engineering and physical sciences for analytically reducing the number of experimental variables affecting a given phenomenon prior to experimentation. Two powerful advantages of the method, relative to standard design-of-experiments approaches, are (1) a priori dimension reduction and (2) scalability of results. The latter permits the experimenter to extrapolate results to similar experimental systems of differing scale.
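The a priori dimension reduction comes from the Buckingham Pi theorem: dimensionless groups of the physical variables correspond to null-space vectors of the variables' dimension matrix. A minimal sketch in Python, using the classic simple-pendulum example (an illustration, not an example from the article):

```python
import numpy as np

# Dimension matrix for a simple pendulum: columns are the variables
# (period t, length l, gravity g, mass m); rows are the base
# dimensions (M, L, T).  Entry (i, j) is the exponent of dimension i
# in variable j, e.g. g = L T^-2 gives the column [0, 1, -2].
A = np.array([
    [0, 0,  0, 1],   # M
    [0, 1,  1, 0],   # L
    [1, 0, -2, 0],   # T
], dtype=float)

# Dimensionless groups span the null space of A (Buckingham Pi);
# the number of groups is (#variables - rank), here 4 - 3 = 1.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]          # each row is one exponent vector

pi = null_basis[0] / null_basis[0][2]   # scale so the exponent of g is 1
print(pi)                               # ~ [2, -1, 1, 0]: pi = t^2 g / l
```

Four physical variables collapse to a single dimensionless group, t²g/l, so the experiment can be designed, analyzed, and scaled in terms of that group alone.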

    Unfortunately, DA experiments are underused because few statisticians are familiar with them. In “Experimental Design for Engineering Dimensional Analysis,” Mark C. Albrecht, Christopher J. Nachtsheim, Thomas A. Albrecht, and R. Dennis Cook provide an overview of DA and give basic recommendations for designing DA experiments. They also consider various risks associated with the DA approach, the foremost among them being the possibility that the analyst might omit a key explanatory variable, leading to an incorrect DA model. When this happens, the DA model will fail and experimentation will be largely wasted. To protect against this possibility, they develop a robust-DA design approach that integrates the best of standard empirical DOE with the suggested design strategy. Results are illustrated with straightforward applications of DA. This article features discussion by Tim Davis, Daniel D. Frey, Bradley Jones, V. Roshan Joseph, Dennis K.J. Lin, Greg F. Piepel, Matthew Plumlee, Weijie Shen, and C. F. Jeff Wu and a rejoinder by the authors.

    Radiation detection systems are deployed at U.S. borders to guard against entry of illicit radioactive material. Each vehicle slowly passes by a set of fixed radiation sensors, resulting in a ‘vehicle profile’ consisting of a time series of counts. In “Moving Neutron Source Detection in Radiation Portal Monitoring,” Tom Burr and Michael S. Hamada evaluate the efficacy of six detection methods under different vehicle and illicit material scenarios. One of these methods, a novel estimated matched filter that estimates the shape of a neutron count vehicle profile, is shown to be especially effective at signaling illicit profiles.
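A matched filter scores a profile by correlating it with an expected pulse shape; the "estimated" variant fits that shape from the data. A toy numpy sketch of a fixed-template statistic (the Gaussian pulse shape and flat Poisson-rate background here are illustrative assumptions, not the authors' estimator):

```python
import numpy as np

def matched_filter_stat(counts, template, bg_rate):
    """Signal-to-noise-like statistic for a count profile against a
    known pulse template, assuming a flat Poisson background rate."""
    excess = counts - bg_rate
    return template @ excess / np.sqrt(bg_rate * (template @ template))

# Illustrative pulse shape: a vehicle passing fixed sensors produces
# a roughly bell-shaped bump in the neutron count time series.
t = np.arange(20)
template = np.exp(-0.5 * ((t - 10) / 3.0) ** 2)

bg = 1.0                                   # background counts per bin
clean = np.full(20, bg)                    # no source present
with_source = bg + 4.0 * template          # source riding on background

print(matched_filter_stat(clean, template, bg))        # 0.0
print(matched_filter_stat(with_source, template, bg))  # large positive
```

Declaring a profile "illicit" when the statistic exceeds a threshold trades off false alarms against missed sources; the paper's comparison of six methods is essentially a comparison of such trade-offs.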

    The goal of multivariate receptor modeling is to estimate the profiles of major pollution sources and quantify their effects based on ambient measurements of pollutants. Despite the growing availability of multi-pollutant data collected from multiple monitoring sites, there has not yet been any attempt to incorporate the spatial dependence that may exist in such data. In “Multivariate Receptor Models for Spatially Correlated Multi-Pollutant Data,” Mikyoung Jun and Eun Sug Park propose a spatial statistics extension of multivariate receptor models. The proposed method yields more precise estimates of source profiles. More importantly, it enables prediction of source contributions at unmonitored sites as well as at monitoring sites with missing values.

    In many areas of science, one aims to estimate latent subpopulation mean curves based only on observations of aggregated population curves. For example, in near-infrared spectroscopy, a single spectral profile is the aggregate for a complex mixture of several constituents. Disentangling such signals is the subject of “A Hierarchical Model for Aggregated Functional Data,” by Ronaldo Dias, Nancy L. Garcia, and Alexandra M. Schmidt. A Gaussian process approach using B-spline basis functions to model the covariance function, combined with a full Bayes hierarchical model, provides full inference and assessment of uncertainty. Two real examples illustrate the method: NIR spectroscopy and an analysis of distribution of energy among different types of consumers.
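The covariance model can be sketched as K = B Σ Bᵀ, where B is a B-spline design matrix and Σ is the covariance of the basis coefficients. A minimal illustration with scipy (cubic splines on [0, 1] and an identity coefficient covariance are assumptions made for the sketch, not the authors' choices):

```python
import numpy as np
from scipy.interpolate import BSpline

# Clamped cubic B-spline basis on [0, 1] with 3 interior knots,
# giving 11 - 3 - 1 = 7 basis functions.
k = 3
knots = np.r_[[0.0] * (k + 1), [0.25, 0.5, 0.75], [1.0] * (k + 1)]
n_basis = len(knots) - k - 1

x = np.linspace(0.0, 1.0, 50)
# Evaluate each basis function by giving it a unit coefficient vector.
B = np.column_stack([
    BSpline(knots, np.eye(n_basis)[i], k)(x) for i in range(n_basis)
])

# Covariance of the latent curve: K(s, t) = B(s) Sigma B(t)^T,
# with Sigma the coefficient covariance (identity for this sketch).
Sigma = np.eye(n_basis)
K = B @ Sigma @ B.T
print(K.shape)   # (50, 50)
```

Modeling the covariance through a low-dimensional basis like this keeps the Gaussian process tractable while the Bayes hierarchy propagates uncertainty from the aggregate curves down to the subpopulation means.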

    In manufacturing, a binary measurement system may need to provide 100% inspection to protect customers from receiving nonconforming product. In “Assessing a Binary Measurement System with Varying Misclassification Rates When a Gold Standard Is Available,” Oana Danila, Stefan H. Steiner, and R. Jock MacKay consider assessment plans and their analysis when an available gold standard system is too expensive for everyday use. New random effects models allow for variation in the misclassification rates within the populations of conforming and nonconforming parts. For high-capability processes with low misclassification rates, the standard plan of randomly sampling parts requires a large number of measurements. An alternate design, where random sampling is from the sets of previously passed and failed parts, can precisely estimate the parameters of interest with many fewer measurements.

    Choosing a design that is representative of a finite candidate set is an important problem in computer experiments. The minimax criterion measures representativeness via the maximum distance from any candidate point to the design. A significant stumbling block to the use of such designs has been the lack of good construction algorithms. Matthias H.Y. Tan, in “Minimax Designs for Finite Design Regions,” makes a useful connection between minimax designs and the classical set covering location problem in operations research, which is a binary linear program. He uses this connection to build an efficient procedure for finding minimax designs for small candidate sets, and proposes a heuristic procedure that generates near-minimax designs for large candidate sets.
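The minimax criterion itself is easy to state in code: the cost of a design is the largest distance from any candidate point to its nearest design point. A toy greedy sketch (a plain heuristic for illustration, not Tan's set-covering formulation):

```python
import numpy as np

def minimax_cost(design, candidates):
    """Max over candidates of the distance to the nearest design point."""
    d = np.linalg.norm(candidates[:, None, :] - design[None, :, :], axis=2)
    return d.min(axis=1).max()

def greedy_minimax(candidates, n_points):
    """Greedily add the candidate point that most reduces the cost."""
    design = []
    for _ in range(n_points):
        best = min(
            (c for c in candidates),
            key=lambda c: minimax_cost(np.array(design + [list(c)]), candidates),
        )
        design.append(list(best))
    return np.array(design)

# Candidates: 10 equally spaced points on a line.
cands = np.arange(10.0).reshape(-1, 1)
design = greedy_minimax(cands, 2)
print(minimax_cost(design, cands))   # 4.0 for this grid
```

Greedy heuristics like this give feasible designs quickly but no optimality guarantee; the paper's set-covering formulation is what makes exact minimax designs computable for small candidate sets.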

    The issue concludes with “A Bayesian Approach for Model Selection in Fractionated Split Plot Experiments with Applications in Robust Parameter Design.” In the paper, Matthias H.Y. Tan and C.F. Jeff Wu extend Bayesian variable selection techniques to split plot experiments. The approach accounts for split plot error structure, resulting in an appropriate analysis. A novel algorithm efficiently explores model space, identifying models with high posterior probabilities and providing estimates of these probabilities. Robust parameter design represents a natural application for split-plot experiments, with hard-to-change (e.g., control) factors corresponding to whole plots. Two real robust parameter design examples demonstrate the advantages of the method.
