Technometrics Highlights

Special Issue Focuses on Computer Modeling

1 January 2010

Jonathan Rougier, Serge Guillas, Astrid Maute, and Arthur D. Richmond consider statistical aspects of climate study in their article, “Expert Knowledge and Multivariate Emulation: The Thermosphere-Ionosphere Electrodynamics General Circulation Model (TIE-GCM).” This simulator of the upper atmosphere has a number of features that are a challenge to standard approaches to emulation, such as a long run time, multivariate output, periodicity, and strong constraints on the inter-relationship between inputs and outputs. These kinds of features are not unusual in models of complex systems. The authors show how they can be handled in an emulator and demonstrate the use of the outer product emulator for efficient calculation, with an emphasis on predictive diagnostics for model choice and validation. The emulator is used to ‘verify’ the underlying computer code and improve physical understanding of the simulator.
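To make the idea of a separable, "outer product" covariance concrete, here is a minimal Python sketch (an assumed toy setup, not the authors' code): the covariance over all run-by-output combinations is written as a Kronecker product of a covariance across inputs and a periodic covariance across the output grid. The efficiency gains reported in the paper come from exploiting this structure rather than forming the full matrix, which the sketch does only for clarity.

```python
# Minimal sketch of a separable ("outer product") covariance structure,
# with a periodic kernel on the output index to reflect periodic output.
# Toy setup only; not the TIE-GCM emulator itself.
import numpy as np

def sq_exp(a, b, length=1.0):
    """Squared-exponential covariance between 1-D grids a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def periodic(a, b, period=24.0, length=1.0):
    """Periodic covariance, e.g. over local time in hours (assumed setup)."""
    d = np.pi * np.abs(a[:, None] - b[None, :]) / period
    return np.exp(-2.0 * (np.sin(d) / length) ** 2)

# Hypothetical design: n simulator runs at scalar inputs x, each producing
# output on a grid t of q points (e.g. local times).
rng = np.random.default_rng(0)
n, q = 12, 24
x = np.sort(rng.uniform(0.0, 1.0, n))
t = np.arange(q, dtype=float)

K_x = sq_exp(x, x, length=0.3)               # covariance across runs (inputs)
K_t = periodic(t, t, period=q, length=1.0)   # covariance across outputs

# Separable covariance of the stacked n*q vector of outputs.
K = np.kron(K_x, K_t) + 1e-8 * np.eye(n * q)

# Draw one fake multivariate training set just to show the shapes involved;
# in practice Y would be the simulator output matrix (n runs x q outputs).
Y = rng.multivariate_normal(np.zeros(n * q), K).reshape(n, q)
print(Y.shape)  # (12, 24)
```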

It is often helpful to approximate a complex simulator with a statistical model known as an emulator, and Gaussian process models have been especially popular for building emulators. In “Diagnostics for Gaussian Process Emulators,” Leonardo S. Bastos and Anthony O’Hagan present diagnostics to validate and assess the adequacy of a Gaussian process emulator as a surrogate for the simulator. The diagnostics compare simulator outputs with emulator predictions on validation data: a sample of simulator runs not used to build the emulator. They are constructed to account for correlation among the validation outputs. To illustrate the validation procedure, the diagnostics are applied to two data sets.
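The flavor of such diagnostics can be illustrated with a short Python sketch (a hypothetical toy simulator, not the paper's examples): an emulator is built from one set of runs, and held-out runs are compared to its predictions through standardized errors and a Mahalanobis-type distance that uses the full predictive covariance rather than treating the validation points as independent.

```python
# Sketch of validating a Gaussian process emulator against held-out runs.
# Toy simulator and design are assumptions for illustration only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def toy_simulator(x):
    """Cheap stand-in for an expensive simulator (hypothetical)."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(30, 2))   # design used to build the emulator
X_valid = rng.uniform(0, 1, size=(10, 2))   # separate validation runs
y_train = toy_simulator(X_train)
y_valid = toy_simulator(X_valid)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True)
gp.fit(X_train, y_train)

mean, cov = gp.predict(X_valid, return_cov=True)
errors = y_valid - mean

# Individual standardized errors: large values flag local emulator failure.
std_errors = errors / np.sqrt(np.diag(cov))

# Mahalanobis-type distance: judges all validation errors jointly, using the
# full predictive covariance; a rough benchmark is the number of validation points.
D2 = errors @ np.linalg.solve(cov + 1e-10 * np.eye(len(errors)), errors)
print(std_errors.round(2), D2)
```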

Actual experimental or field-observation data may be available, along with output from running the simulator. An important problem is then to validate the simulator (i.e., show that it accurately represents the real-life system that has been observed). How to accomplish this is the subject of “Bayesian Validation of Computer Models,” by Shuchun Wang, Wei Chen, and Kwok-Leung Tsui. The proposed approach overcomes several difficulties of a frequentist approach proposed in Oberkampf and Barone (2004). Kennedy and O’Hagan (2000) proposed a similar Bayesian approach; a major difference is that Kennedy and O’Hagan derive the posterior of the true output directly, whereas Wang and coauthors first derive the posteriors of the computer model and the model bias (the difference between computer and true outputs) separately and then derive the posterior of the true output. As a result, the approach provides a clear decomposition of the expected prediction error of the true output, which explains why and how combining computer outputs and physical experiments can give more accurate predictions than using either source alone. Two examples illustrate the proposed approach.
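A much-simplified, plug-in sketch of this decomposition is shown below (illustrative only; the paper's fully Bayesian treatment is not reproduced): one Gaussian process is fit to the computer-model runs, a second is fit to the field residuals as the bias term, and their predictive means and variances are combined to predict the true output.

```python
# Sketch of "true output = computer model + bias" with two Gaussian processes.
# The toy computer model, true process, and designs are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

def computer_model(x):          # cheap stand-in for the computer code
    return np.sin(2 * np.pi * x)

def true_process(x):            # hypothetical reality = code + smooth bias
    return computer_model(x) + 0.3 * x

x_code = np.linspace(0, 1, 20)[:, None]        # computer-model design
x_field = rng.uniform(0, 1, 8)[:, None]        # physical-experiment sites
y_code = computer_model(x_code.ravel())
y_field = true_process(x_field.ravel()) + rng.normal(0, 0.05, 8)

# Posterior for the computer model (emulator).
gp_code = GaussianProcessRegressor(RBF(0.2), normalize_y=True).fit(x_code, y_code)

# Posterior for the bias, fit to field data minus emulator predictions.
resid = y_field - gp_code.predict(x_field)
gp_bias = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(0.05**2),
                                   normalize_y=True).fit(x_field, resid)

# Predict the true output at new inputs; the predictive variance splits into
# an emulator part and a bias part, mirroring the error decomposition.
x_new = np.linspace(0, 1, 5)[:, None]
m_code, s_code = gp_code.predict(x_new, return_std=True)
m_bias, s_bias = gp_bias.predict(x_new, return_std=True)
print(m_code + m_bias)           # point prediction of the true output
print(s_code**2 + s_bias**2)     # variance decomposition (independence assumed)
```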

Another area that makes extensive use of computer models is sensitivity analysis (i.e., understanding how sensitive models and decisions are to uncertain knowledge of inputs and identifying which inputs contribute most to the uncertainty). The latter problem is especially troublesome when some of the inputs are correlated. Sebastien da Veiga, Francois Wahl, and Fabrice Gamboa consider this issue in their article, “Local Polynomial Estimation for Sensitivity Analysis on Models with Correlated Inputs.” They derive sensitivity indices from local polynomial techniques and propose two original estimators based on local polynomial smoothers, both of which have good theoretical properties. The estimators are compared to the Bayesian approach developed in Oakley and O’Hagan (2004), and the methods are then applied to two real case studies with correlated input parameters.
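The basic quantity involved is the first-order sensitivity index S_i = Var(E[Y | X_i]) / Var(Y). The sketch below (a toy example, not the authors' estimators) smooths Monte Carlo output against a single input with a local-linear (degree-1 local polynomial) fit and takes the variance of the fitted conditional mean; because it works directly from a joint sample, correlated inputs can be handled.

```python
# Sketch of a first-order sensitivity index estimated by local-linear smoothing.
# Toy model and correlated-input example are assumptions for illustration.
import numpy as np

def local_linear(x0, x, y, h):
    """Local-linear estimate of E[Y | X = x0] with a Gaussian kernel, bandwidth h."""
    sw = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))   # square-root kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta[0]                                     # intercept = fitted mean at x0

rng = np.random.default_rng(3)
n = 1000
# Correlated inputs (toy example): X1 and X2 with correlation 0.5.
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
Y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)

h = 1.06 * X[:, 0].std() * n ** (-1 / 5)               # rule-of-thumb bandwidth
m1 = np.array([local_linear(x0, X[:, 0], Y, h) for x0 in X[:, 0]])
S1 = m1.var() / Y.var()                                # first-order index for X1
print(round(S1, 3))
```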

The final article, “Simultaneous Determination of Tuning and Calibration Parameters for Computer Experiments,” is by Gang Han, Thomas J. Santner, and Jeremy J. Rawlinson. Tuning and calibration are processes for improving how faithfully a computer simulation code represents a physical phenomenon. This paper introduces a statistical methodology for simultaneously determining tuning and calibration parameters when data are available from both a computer code and the associated physical experiment. Tuning parameters are set by minimizing a discrepancy measure, while the distribution of the calibration parameters is determined from a hierarchical Bayesian model that views the output as a realization of a Gaussian stochastic process with hyper-priors. Draws from the resulting posterior distribution are obtained by Markov chain Monte Carlo simulation. The methodology is compared to an alternative approach in examples and illustrated in a biomechanical engineering application.
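The two roles can be caricatured in a short Python sketch (a schematic toy problem, not the paper's hierarchical Gaussian-process model): a tuning parameter is chosen by minimizing a sum-of-squares discrepancy, while a calibration parameter is given a posterior distribution explored by a random-walk Metropolis sampler.

```python
# Schematic sketch: tuning by discrepancy minimization, calibration by MCMC.
# The toy "code", field data, likelihood, and prior are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 15)
y_field = np.sin(2 * np.pi * x) * 1.3 + rng.normal(0, 0.1, x.size)  # field data

def code(x, t, theta):
    """Hypothetical computer code with tuning input t and calibration input theta."""
    return np.sin(2 * np.pi * x * t) * theta

# Tuning: pick t by minimizing a sum-of-squares discrepancy at a fixed theta.
theta0 = 1.0
res = minimize_scalar(lambda t: np.sum((y_field - code(x, t, theta0)) ** 2),
                      bounds=(0.5, 1.5), method="bounded")
t_hat = res.x

# Calibration: random-walk Metropolis on theta, Gaussian likelihood with fixed
# sigma for simplicity, flat prior on a wide interval.
def log_post(theta, sigma=0.1):
    if not (0.0 < theta < 5.0):
        return -np.inf
    return -0.5 * np.sum((y_field - code(x, t_hat, theta)) ** 2) / sigma ** 2

theta, chain = 1.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
print(t_hat, np.mean(chain[1000:]))   # tuned t and posterior mean of theta
```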
