
Dimension Reduction, Functional Data Analysis, Experimental Designs, Gaussian Processes Featured in February Issue

1 February 2015
Peihua Qiu, Technometrics Editor

Sufficient Dimension Reduction (SDR) techniques are important for analyzing high-dimensional data in various applications. A fundamental assumption required by many SDR techniques is that the predictors are elliptically contoured, but in practice this assumption is often violated. In the paper titled “Nonparametric Variable Transformation in Sufficient Dimension Reduction,” Qing Mai and Hui Zou propose a nonparametric variable transformation method that removes the elliptical contour assumption. To demonstrate the main idea, the paper combines this flexible transformation method with two well-established SDR techniques: sliced inverse regression and the inverse regression estimator. The resulting SDR techniques are shown to have competitive performance.
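
For readers unfamiliar with sliced inverse regression, the sketch below shows the basic SIR estimator together with a simple marginal normal-score transformation. The transformation is only a crude stand-in for the paper's nonparametric method, and all function names and defaults here are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_score(X):
    # Marginal normal-score transformation: a crude stand-in for the
    # paper's nonparametric transformation toward elliptical contours.
    n = X.shape[0]
    return norm.ppf(rankdata(X, axis=0) / (n + 1))

def sir(X, y, n_slices=10, n_directions=2):
    # Basic sliced inverse regression: eigen-decompose the weighted
    # covariance of the within-slice means of the whitened predictors.
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.linalg.inv(np.cov(X, rowvar=False)))
    Z = Xc @ L  # whitened predictors: cov(Z) = I
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    _, vecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return L @ vecs[:, -n_directions:]  # directions on the original scale

# Transform first, then apply SIR: beta = sir(normal_score(X), y)
```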

In the paper titled “Simultaneous Envelopes for Multivariate Linear Regression,” R. Dennis Cook and Xin Zhang develop a likelihood-based envelope method for simultaneously reducing the predictors and responses in multivariate linear regression, so the regression then depends only on estimated linear combinations of the original predictors and responses. They use a likelihood-based objective function for estimating envelopes and propose algorithms, built on basic Grassmann manifold optimization, for estimating a simultaneous envelope. The asymptotic properties of the resulting estimator are studied under normality and extended to general distributions. Numerical studies show substantial gains over classical methods such as partial least squares, canonical correlation analysis, and reduced-rank regression.

In functional data analysis, samples of curves typically exhibit phase variability in addition to amplitude variability, yet existing functional regression methods do not handle phase variability efficiently. In the paper titled “Dynamic Retrospective Regression for Functional Data,” Daniel Gervini proposes a regression model that incorporates phase synchronization as an intrinsic part of the model and attains better predictive power than ordinary linear regression in a simple and parsimonious way.
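
As a toy illustration of what phase variability means (this is not Gervini's model), the sketch below aligns one curve to a reference by a rigid circular shift chosen to maximize cross-correlation; the sine curves and the shift-only warp are assumptions made for the example, whereas real registration methods estimate smooth monotone time warps.

```python
import numpy as np

def align_by_shift(y, y_ref):
    # Circularly shift y to maximize cross-correlation with y_ref.
    # Assumes roughly periodic curves; proper registration estimates
    # a smooth, monotone time warp instead of a rigid shift.
    n = len(y)
    lags = np.arange(-n + 1, n)
    xc = np.correlate(y - y.mean(), y_ref - y_ref.mean(), mode="full")
    return np.roll(y, -lags[np.argmax(xc)])

# Two sine curves differing only in phase align almost exactly
t = np.linspace(0.0, 1.0, 200, endpoint=False)
y_ref, y = np.sin(2 * np.pi * t), np.sin(2 * np.pi * (t - 0.2))
aligned = align_by_shift(y, y_ref)
```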

In the paper titled “Analysis of Computer Experiments with Functional Response,” Ying Hung, V. Roshan Joseph, and Shreyes N. Melkote propose a general and efficient methodology to overcome the computational difficulty of analyzing functional outputs using kriging, especially for data observed on an irregular grid. They develop a Gibbs-sampling-based expectation-maximization (EM) algorithm that converts the irregularly spaced data into a regular grid.
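
The sketch below is only a generic illustration of the underlying idea of mapping irregular observations onto a regular grid with a Gaussian process (ordinary kriging with a squared-exponential kernel); it is not the authors' Gibbs-sampling-based EM algorithm, and the kernel, length scale, and data are all made-up assumptions.

```python
import numpy as np

def gp_to_grid(t_obs, y_obs, t_grid, length_scale=0.1, nugget=1e-6):
    # Kriging predictor with a squared-exponential kernel: map values
    # observed at irregular times onto a regular grid.
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)
    K = k(t_obs, t_obs) + nugget * np.eye(len(t_obs))
    return k(t_grid, t_obs) @ np.linalg.solve(K, y_obs)

# Irregularly sampled curve interpolated onto a 100-point regular grid
t_obs = np.sort(np.random.rand(30))
y_obs = np.sin(2 * np.pi * t_obs)
y_grid = gp_to_grid(t_obs, y_obs, np.linspace(0, 1, 100))
```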

The next four papers are about experimental designs. Computer models of physical systems are often written based on known theory, or “first principles,” of a system. In some cases, however, the available theory is insufficient to encode all necessary aspects of the system. In the article titled “Physical Experimental Design in Support of Computer Model Development,” Max D. Morris considers the question of how a physical experiment might be designed to approximate one model or subroutine of a computer model that can otherwise be written from first principles. The concept of preposterior analysis is used to suggest an approach to generating a kind of I-optimal design for this purpose when the remainder of the computer model is a composition of nonlinear functions that can be directly evaluated as part of the design process.

Optimal designs depend upon a pre-specified model form. A popular and effective model-robust alternative is to design with respect to a set of models instead of just one. However, model spaces associated with experiments of interest are often prohibitively large. In the article titled “Approximate Model Spaces for Model-Robust Experiment Design,” Byran J. Smucker and Nathan M. Drew propose a simple method that largely eliminates this problem by choosing a small set of models that approximates the full set and finding designs that are explicitly robust for this small set.

In the paper titled “Sequential Exploration of Complex Surfaces Using Minimum Energy Designs,” V. Roshan Joseph, Tirthankar Dasgupta, Rui Tuo, and C. F. Jeff Wu propose the minimum energy design (MED) as a new space-filling method for exploring unknown regions of the design space of particular interest to an experimenter. The key ideas in constructing an MED are viewing each design point as a charged particle inside a box and minimizing the total potential energy of these particles. It is shown through theoretical arguments and simulations that, with a proper choice of the charge function, the MED can asymptotically generate points from any arbitrary probability density function.
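
A greedy, simplified reading of the idea (not the authors' exact algorithm): assign each candidate point a charge and grow the design one point at a time so that the largest pairwise potential energy stays as small as possible. The charge q(x) proportional to f(x)^(-1/(2p)) in the example reflects the paper's asymptotic result about generating points from a target density f, but the candidate set, target density, and greedy search are illustrative assumptions.

```python
import numpy as np

def minimum_energy_design(cands, q, n_points):
    # Greedy MED sketch: each point is a charged particle; add the
    # candidate whose worst pairwise energy q_i * q_j / dist(i, j)
    # against the current design is smallest.
    charges = np.array([q(x) for x in cands])
    design = [int(np.argmin(charges))]  # start at the smallest charge
    for _ in range(n_points - 1):
        best, best_e = None, np.inf
        for j in range(len(cands)):
            if j in design:
                continue
            e = max(charges[i] * charges[j] /
                    np.linalg.norm(cands[i] - cands[j]) for i in design)
            if e < best_e:
                best, best_e = j, e
        design.append(best)
    return cands[design]

# Example: charge q = f**(-1/(2p)) for a peaked 2-D density f (p = 2)
f = lambda x: np.exp(-np.sum((x - 0.5) ** 2) / 0.02)
cands = np.random.rand(500, 2)
points = minimum_energy_design(cands, lambda x: (f(x) + 1e-12) ** -0.25, 20)
```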

In the paper titled “Model-Based Sampling Design for Multivariate Geostatistics,” Jie Li and Dale L. Zimmerman consider multivariate spatial sampling design based on criteria targeted at classical co-kriging (prediction with known covariance parameters), estimation of covariance (including cross-covariance) parameters, and empirical co-kriging (prediction with estimated covariance parameters). Through a combination of analytical results and examples, they investigate the characteristics of optimal designs with respect to each criterion, addressing in particular the design’s degree of collocation.

The next two articles are about Gaussian processes. In the paper titled “Geodesic Gaussian Processes for the Parametric Reconstruction of a Free-Form Surface,” Enrique del Castillo, Bianca M. Colosimo, and Sam Tajbakhsh propose a Geodesic Gaussian Process (GGP) approach for the statistical reconstruction of a free-form surface patch based on three-dimensional point cloud data. The proposed GGP approach uses a parametric representation of a surface patch in which each of the three coordinates is modeled via a Gaussian process on the parametric space defined by surface coordinates. The method is applied to simulated surface data and a real data set obtained with a noncontact laser scanner.

Degradation models are widely used to assess the lifetime information of highly reliable products. The paper titled “Inverse Gaussian Processes with Random Effects and Explanatory Variables for Degradation Data” by Chien-Yu Peng proposes a degradation model based on an inverse normal-gamma mixture of an inverse Gaussian process. The paper presents the properties of the lifetime distribution and parameter estimation using an EM-type algorithm, in addition to providing a simple model-checking procedure to assess the validity of different stochastic processes. Several case applications demonstrate the advantages of the proposed model with random effects and explanatory variables.
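
To make the basic building block concrete, here is a minimal simulation of an inverse Gaussian process (monotone degradation with independent IG increments), using the standard parameterization with mean function Λ(t) and shape parameter η. It deliberately omits the random effects and explanatory variables that are the paper's contribution, and the parameter values are made up.

```python
import numpy as np

def simulate_ig_path(times, Lambda, eta, rng=np.random.default_rng()):
    # Inverse Gaussian process: independent increments
    # Y(t) - Y(s) ~ IG(mean = Λ(t) - Λ(s), shape = eta * (Λ(t) - Λ(s))**2).
    # Basic process only -- no random effects or explanatory variables.
    dL = np.diff(np.concatenate(([0.0], [Lambda(t) for t in times])))
    increments = rng.wald(dL, eta * dL ** 2)  # numpy's 'wald' is the IG law
    return np.cumsum(increments)

# One monotone degradation path with linear mean function Λ(t) = 2t
path = simulate_ig_path(np.arange(1.0, 11.0), lambda t: 2.0 * t, eta=4.0)
```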

In the paper titled “Statistical Inference for Power Law Process with Competing Risks,” Anupap Somboonsavatdee and Ananda Sen address the analysis of the failure history of a repairable system, a problem that has received considerable attention in statistics, engineering, and software debugging for decades. The authors focus on analyzing failures from a repairable system subject to multiple failure modes and consider the power law process, a nonhomogeneous Poisson process model with a Weibull intensity function.
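
The power law process is easy to simulate by inverting its mean function Λ(t) = (t/θ)^β, as the sketch below does for a single failure mode; competing risks can then be mimicked by superposing independent processes, one per mode. The parameter values are illustrative, and none of this reproduces the authors' inference procedures.

```python
import numpy as np

def simulate_plp(theta, beta, t_end, rng=np.random.default_rng()):
    # Power law process: NHPP with Weibull intensity
    # lambda(t) = (beta / theta) * (t / theta)**(beta - 1),
    # simulated by inverting the mean function Lambda(t) = (t / theta)**beta.
    times, gamma = [], 0.0
    while True:
        gamma += rng.exponential(1.0)      # unit-rate Poisson arrival
        t = theta * gamma ** (1.0 / beta)  # map through Lambda^{-1}
        if t > t_end:
            return np.array(times)
        times.append(t)

# Competing risks: superpose independent processes, one per failure mode
modes = [simulate_plp(th, b, 100.0) for th, b in [(20.0, 1.5), (35.0, 0.8)]]
```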

The last paper, titled “Solving the MEG Inverse Problem: A Robust Two-Way Regularization Method,” by Tian Siva Tian, Jianhua Z. Huang, and Haipeng Shen is about magnetoencephalography (MEG) imaging. MEG is a common noninvasive imaging modality for instantaneously measuring whole-brain activity. One challenge in MEG data analysis is to minimize the impact of outliers, which commonly exist in the images. In the paper, the authors propose a two-way regularization method for reconstructing neuronal activities from the measured MEG signals. The proposed method is based on the distributed source model and produces a spatiotemporal solution for all the dipoles simultaneously. Unlike traditional methods that use the squared error loss function, their method uses a robust loss function, which makes the results more resistant to outliers.
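
The paper's specific loss function is not reproduced here, but the contrast with squared error is easy to see with a Huber-type loss, one standard robust choice (assumed for illustration): it grows only linearly in large residuals, so a few outlying sensor readings cannot dominate the fit.

```python
import numpy as np

def huber_loss(residuals, delta=1.0):
    # Quadratic for small residuals, linear beyond delta, so outliers
    # contribute far less than under squared error loss.
    a = np.abs(residuals)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))
```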
