
Service Accessibility Analyzed with Novel Methods in May Issue

1 May 2012

Service accessibility is a community's access to nearby sites in a service network consisting of multiple geographically distributed service locations. Existing studies have had limited scope, both geographically (e.g., towns) and temporally (e.g., a one-year period). In “Clustering Random Curves Under Spatial Interdependence with Application to Service Accessibility,” Huijing Jiang and Nicoleta Serban develop new statistical methodology to estimate and classify service accessibility patterns varying over a large geographic area and a period of 16 years. The focus of this study is on financial services, but the methodology applies generally to other service operations.

The paper introduces a model-based method for clustering random time-varying functions that are spatially interdependent. The underlying clustering model is nonparametric with spatially correlated errors, and the authors assume the cluster membership is a realization from a Markov random field. These assumptions enable the model to borrow information across functions corresponding to nearby spatial locations, resulting in enhanced estimation accuracy of the cluster effects and cluster membership.
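To illustrate the idea of spatially smoothed cluster labels, here is a minimal sketch, not the authors' implementation: curves observed at the sites of a regular lattice are clustered by iterated conditional modes, with a Potts-style Markov random field penalty that discourages neighboring sites from taking different labels. The function name, the penalty weight `beta`, and the lattice neighborhood are all illustrative assumptions.

```python
import numpy as np

def mrf_cluster(curves, grid_shape, K=3, beta=1.0, n_iter=20, seed=0):
    """curves: (n_sites, n_times), sites laid out row-major on a grid_shape lattice."""
    rng = np.random.default_rng(seed)
    n, T = curves.shape
    rows, cols = np.unravel_index(np.arange(n), grid_shape)
    # 4-nearest-neighbor structure on the lattice
    nbrs = [[j for j in range(n)
             if abs(rows[i] - rows[j]) + abs(cols[i] - cols[j]) == 1]
            for i in range(n)]
    labels = rng.integers(0, K, size=n)
    for _ in range(n_iter):
        # cluster mean curves (the "cluster effects")
        centers = np.array([curves[labels == k].mean(axis=0)
                            if (labels == k).any() else curves[rng.integers(n)]
                            for k in range(K)])
        # ICM label update: residual fit plus a Potts penalty counting
        # neighbors whose current label would disagree
        for i in range(n):
            fit = ((curves[i] - centers) ** 2).mean(axis=1)
            penalty = np.array([sum(labels[j] != k for j in nbrs[i])
                                for k in range(K)])
            labels[i] = int(np.argmin(fit + beta * penalty))
    return labels, centers
```

In the authors' application, each site would be a spatial location and each curve its 16-year accessibility trajectory; their model is richer, with nonparametric cluster effects and spatially correlated errors rather than the squared-error fit used here.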

Several discussions and a rejoinder by the authors accompany the article, examining the relationship with other approaches for functional and spatiotemporal data and identifying possible directions for future research. The discussants are Ciprian M. Crainiceanu, Ana-Maria Staicu, C.B. Dean, Cindy X. Feng, Gareth M. James, Wenguang Sun, Xinghao Qiao, Bo Li, Xiao Wang, Jiaping Wang, Haipeng Shen, and Hongtu Zhu.

The remainder of the issue includes papers about regression, reliability, and experimental design. John H.J. Einmahl and Maria Gantner develop the “half-half (HH) plot,” a new graphical method to investigate qualitatively the shape of a regression curve. The empirical HH plot counts observations in the lower and upper quarter of a strip that moves horizontally over the scatterplot. The plot displays jumps clearly and reveals further features of the regression curve.
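A hedged reading of that construction, with an illustrative window width `w` (the paper's exact definition may differ in its details): slide a strip of consecutive x-ordered points across the scatterplot and count how many land in the lowest and highest quarter of the strip's vertical range.

```python
import numpy as np

def hh_counts(x, y, w=40):
    order = np.argsort(x)
    xs, ys = np.asarray(x)[order], np.asarray(y)[order]
    centers, lower, upper = [], [], []
    for i in range(len(xs) - w + 1):
        strip = ys[i:i + w]
        lo, hi = strip.min(), strip.max()
        q = (hi - lo) / 4.0
        lower.append(int((strip <= lo + q).sum()))  # bottom quarter of the strip
        upper.append(int((strip >= hi - q).sum()))  # top quarter of the strip
        centers.append(xs[i:i + w].mean())
    return np.array(centers), np.array(lower), np.array(upper)
```

In this sketch, a jump in the regression curve stretches the strip vertically and piles the points into its top and bottom quarters, so the two counts spike together; over a smooth stretch of curve they stay small and balanced.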

Two papers in reliability follow. In “Testing for Monotone Trend in Recurrent Event Processes,” J.F. Lawless, C. Cigsar, and R.J. Cook examine a general concept of “trend,” showing that the behavior of tests can depend on the assumed definitions of “no trend” and “trend” and on the observation periods for the processes. The paper also presents robust tests for trend across multiple processes, extends them to deal with interval-censored event times, and compares them with other well-known trend tests.
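For context, the classical Laplace test, one of the well-known single-process trend tests in this literature (not the authors' robust tests), is easy to state: conditional on the number of events, a homogeneous Poisson process observed on [0, τ] has iid Uniform(0, τ) event times, so the standardized mean event time is approximately standard normal under “no trend.”

```python
import numpy as np
from scipy import stats

def laplace_trend_test(event_times, tau):
    """Two-sided Laplace test for trend in one process observed on [0, tau]."""
    t = np.asarray(event_times, dtype=float)
    n = len(t)
    u = (t.mean() - tau / 2.0) / (tau / np.sqrt(12.0 * n))  # ~ N(0,1) under HPP
    return u, 2.0 * stats.norm.sf(abs(u))

# Event times concentrated late in the observation window suggest an
# increasing trend (here u ~ 2.7, p ~ 0.007).
u, p = laplace_trend_test([6.1, 7.4, 8.0, 8.8, 9.2, 9.7], tau=10.0)
```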

Once estimated, a statistical reliability model can help plan burn-in tests, in which every part is tested as a way of eliminating infant mortality. In “Degradation-Based Burn-In Planning Under Competing Risks,” Zhi-Sheng Ye, Min Xie, Loon-Ching Tang, and Yan Shen develop and illustrate such a planning framework for competing risks, accommodating both infant-mortality and normal failure modes. The failures themselves can be either degradation threshold failures or catastrophic failures. The authors build three degradation-based burn-in models and derive the optimal cut-off degradation levels.
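The cost trade-off driving such a cutoff can be sketched with a deliberately simplified two-subpopulation model (this is not one of the authors' three models; every distribution and cost below is an assumption): scrapping a normal unit wastes its value, passing a weak unit risks a much costlier field failure, and the optimal cutoff balances the two.

```python
import numpy as np
from scipy import stats

p_weak = 0.05                  # assumed fraction of weak (infant-mortality) units
mu_n, sd_n = 2.0, 0.5          # degradation reading of normal units at burn-in end
mu_w, sd_w = 4.0, 0.8          # weak units degrade faster on average
c_scrap, c_field = 1.0, 20.0   # cost of scrapping a normal unit vs. a field failure

def expected_cost(cutoff):
    # false alarm: a normal unit whose reading exceeds the cutoff is scrapped
    scrap_normal = (1 - p_weak) * stats.norm.sf(cutoff, mu_n, sd_n) * c_scrap
    # missed detection: a weak unit below the cutoff ships and fails in the field
    pass_weak = p_weak * stats.norm.cdf(cutoff, mu_w, sd_w) * c_field
    return scrap_normal + pass_weak

grid = np.linspace(1.0, 5.0, 401)
best_cutoff = grid[np.argmin([expected_cost(c) for c in grid])]
```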

The issue finishes with three papers from different areas of experimental design. The first concerns computer experiments. Computer simulators can be effective but computationally intensive models of real systems. In such situations, experimental designs enable efficient exploration of the input-output relationship. In “Non-Collapsing Space-Filling Designs for Bounded Non-Rectangular Regions,” Danel Draguljic, Thomas J. Santner, and Angela M. Dean develop a design algorithm that can fill unusually shaped regions and remain non-collapsing over input dimensions that may turn out to be irrelevant. The paper demonstrates the technique in applications with constrained design regions, including a total elbow replacement study and a tool-coating study.
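A greedy maximin sketch conveys the two requirements, space-filling within a constrained region and non-collapsing one-dimensional projections; the half-disk region, candidate count, and projection threshold `delta` are illustrative assumptions, and this is not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def in_region(p):  # example constraint: the upper unit half-disk
    return p[0] ** 2 + p[1] ** 2 <= 1.0 and p[1] >= 0.0

# rejection-sample a candidate set inside the non-rectangular region
cands = rng.uniform(-1.0, 1.0, size=(20000, 2))
cands = cands[[in_region(p) for p in cands]]

def greedy_design(cands, n_pts, delta=0.02):
    design = [cands[0]]
    for _ in range(n_pts - 1):
        D = np.array(design)
        # non-collapsing: a candidate must stay at least delta from every
        # design point in EACH coordinate's one-dimensional projection
        proj_ok = (np.abs(cands[:, None, :] - D[None, :, :])
                   .min(axis=1) > delta).all(axis=1)
        # maximin: among admissible candidates, take the one whose nearest
        # design point is farthest away
        dists = np.linalg.norm(cands[:, None, :] - D[None, :, :], axis=2).min(axis=1)
        dists[~proj_ok] = -np.inf
        design.append(cands[int(np.argmax(dists))])
    return np.array(design)

design = greedy_design(cands, n_pts=15)
```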

Generally, experimental designs seek to maximize information or minimize expected uncertainty. In “A Bifocal Measure of Expected Ambiguity in Bayesian Nonlinear Parameter Estimation,” Emanuel Winterfors and Andrew Curtis take a new view of this problem, developing a “bifocal” measure of ambiguity based on analyzing pairs of parameter estimates. Applicable to both linear and nonlinear models, the new measure is equivalent to expected posterior variance in the linear case and to a related measure in the nonlinear case. An earth sciences application involving the location of wave sources in a medium of inhomogeneous velocity demonstrates the method.
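The linear-Gaussian equivalence is easy to make concrete, because there the posterior covariance has a closed form that does not depend on the data, so its trace is the expected posterior variance. A minimal sketch comparing two hypothetical designs (the forward operators G1 and G2 are invented for illustration):

```python
import numpy as np

def expected_posterior_variance(G, prior_cov, noise_var):
    # posterior covariance of a linear-Gaussian model y = G m + noise:
    # (G'G / sigma^2 + prior_cov^{-1})^{-1}; it is data-independent, so its
    # trace equals the expected posterior variance
    A = G.T @ G / noise_var + np.linalg.inv(prior_cov)
    return np.trace(np.linalg.inv(A))

prior = np.eye(2)
G1 = np.array([[1.0, 0.0], [1.0, 0.1]])  # nearly collinear rows: ambiguous design
G2 = np.array([[1.0, 0.0], [0.0, 1.0]])  # orthogonal rows: informative design
print(expected_posterior_variance(G1, prior, 0.1),
      expected_posterior_variance(G2, prior, 0.1))
```

The bifocal measure matters precisely where this shortcut fails: in nonlinear models the posterior variance depends on the observed data, and the analysis of pairs of parameter estimates supplies a tractable stand-in.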

Experiments with factors at two levels are popular for investigating main effects and two-factor interaction effects. However, cost considerations may make it infeasible to run resolution V designs, which would enable estimation of all main effects and two-factor interactions. If some two-factor interactions are negligible, then resolution IV designs may allow estimation of all effects of interest. In “Fractional Factorial Designs with Admissible Sets of Clear Two-Factor Interactions,” Huaiqing Wu, Robert Mee, and Boxin Tang improve on earlier approaches to the search for such designs, using a graph representation of the design to reduce the set of designs under consideration to a more manageable size.
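In a regular two-level design, aliasing reduces to arithmetic over GF(2): writing each effect as a bitmask over the k factors, two effects are aliased exactly when their XOR lies in the defining contrast subgroup, so “clear” two-factor interactions can be found by direct search. A minimal sketch (the 2^(7-2) generators I = ABCDF = ABDEG are an illustrative choice, not taken from the paper):

```python
from itertools import combinations

def defining_contrasts(generators):
    """All nonzero words generated by XOR-combining the design generators."""
    words = {0}
    for g in generators:
        words |= {w ^ g for w in words}
    return words - {0}

def clear_2fis(generators, k):
    """Two-factor interactions aliased with no main effect and no other 2fi."""
    dcs = defining_contrasts(generators)
    mains = [1 << i for i in range(k)]
    tfis = [a | b for a, b in combinations(mains, 2)]
    return [fi for fi in tfis
            if all(fi ^ e not in dcs for e in mains + tfis if e != fi)]

# 2^(7-2): factors A..G are bits 0..6, so ABCDF = 0b0101111, ABDEG = 0b1011011.
# The only length-4 defining word is CEFG, so only 2fi's within {C, E, F, G}
# are aliased in pairs, leaving 15 of the 21 two-factor interactions clear.
print(len(clear_2fis([0b0101111, 0b1011011], k=7)))  # -> 15
```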

