This Demonstration is dedicated to the memory of François de Crécy. An earlier series of Demonstrations [1–3] concerned the well-known cross-validation approach to optimally estimating smooth univariate regression functions. Notably, [2] analyzes, on simple examples, how one can construct good (asymptotically justified) confidence statements about the optimal amount of smoothing, and [3] demonstrates that the selector (a variant of cross-validation) can easily be robustified via randomized choices. The present Demonstration discusses an extension of the latter robustification method to a simple bivariate context. Specifically, we model and compute by additive cubic smoothing splines, using the well-known backfitting algorithm. Furthermore, this Demonstration introduces a natural non-isotropy in the smoothing operators.

In detail, this Demonstration considers the estimation of a smooth additive function of two variables. It is given that only triplets (x_i, z_i, y_i) are known, for i = 1, …, n, which satisfy the model y_i = f_1(x_i) + f_2(z_i) + σ ε_i, where the ε_i are independent, standard normal random variables. For the sake of simplicity, we also assume that the variance σ² is known. As a first step, we solve the penalized least-squares problem

minimize over (m_1, m_2):  Σ_i (y_i − m_1(x_i) − m_2(z_i))² + λ [∫ m_1''(x)² dx + ∫ m_2''(z)² dz]  (1)

for all the λ in a grid of equally spaced (in log scale) trial values. Recall that assuming a standard mean-zero restriction on each of the components m_1 and m_2 (see [4]), for any λ > 0, generically gives a unique solution, denoted by (f̂_1, f̂_2) (the dependence on λ being omitted when there is no ambiguity), where the two components f̂_1 and f̂_2 are cubic spline functions. Of course, giving the same weight (here λ) to the two integrals of criterion (1) (the two so-called smoothness energies) is rather arbitrary, and there is room for improvement even without introducing differing weights, which would complicate their robust selection by percentiles of randomized unbiased-risk estimators. As a second step, introduce a dilation of one of the two explanatory variables, say z, and, denoting by α the dilation factor, write z_i' = α z_i.
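To make the first step concrete, here is a minimal Python sketch of backfitting with two cubic smoothing splines. The Demonstration itself is implemented in the Wolfram Language; here SciPy's `make_smoothing_spline` stands in for the univariate smoother, and the function name `backfit_additive` and all numerical choices are assumptions for illustration only.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

def backfit_additive(x, z, y, lam, n_iter=20):
    """Backfit y ~ alpha + f1(x) + f2(z) with cubic smoothing splines,
    using one common smoothing weight `lam` for both components
    (criterion (1) in the text gives the same weight to both energies)."""
    alpha = y.mean()
    f1 = np.zeros_like(y)
    f2 = np.zeros_like(y)
    ox, oz = np.argsort(x), np.argsort(z)  # the smoother needs increasing sites
    for _ in range(n_iter):
        # cycle: smooth each partial residual, then re-center
        # (the standard mean-zero restriction on each component)
        r1 = y - alpha - f2
        f1 = make_smoothing_spline(x[ox], r1[ox], lam=lam)(x)
        f1 -= f1.mean()
        r2 = y - alpha - f1
        f2 = make_smoothing_spline(z[oz], r2[oz], lam=lam)(z)
        f2 -= f2.mean()
    return alpha, f1, f2
```

Backfitting converges quickly here because the two smoothers act on different coordinates; in practice the loop could also stop on a change-in-fit tolerance.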
At the new observation sites (x_i, α z_i), we consider the second problem:

minimize over (m_1, m_2):  Σ_i (y_i − m_1(x_i) − m_2(α z_i))² + λ [∫ m_1''² + ∫ m_2''²]  (2)

We solve this problem again for a grid of λ values, after having tuned α by the simple formula (see the Details section for its justification)

α̂ = (E_2 / E_1)^(1/3),  (3)

where E_1 and E_2 are the smoothness energies of f̂_1 and f̂_2, the two components given by (1), with λ selected by the robust method corresponding to the prescribed percentile. (This is similar to the univariate context in [3], except that here the randomized degrees of freedom are drawn from a list that has been previously computed.) Let f̂ be the minimizer of (2), where λ is selected, among the grid of values mentioned above, by a fast robust method. "Fast" means here that, instead of precisely computing a percentile, we choose only the fifth largest among six randomized choices. This Demonstration displays the two smooth components of f̂, taking into account the tuned dilation that was used, denoted by α̂.
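The energy-harmonizing rule behind formula (3) can be sketched numerically. The following is an illustrative Python reconstruction, not the Demonstration's code: it approximates each smoothness energy on a grid and uses the change-of-variables fact that dilating z by α scales the second component's energy by α^(−3). The helper names are invented here.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

def smoothness_energy(spline, a=0.0, b=1.0, m=2000):
    """Grid approximation of the smoothness energy E = integral of s''(t)^2 over [a, b]."""
    t = np.linspace(a, b, m)
    d2 = spline.derivative(2)(t)
    return (b - a) * np.mean(d2 ** 2)

def tuned_dilation(s1, s2, a=0.0, b=1.0):
    """Second-step dilation factor equalizing the two smoothness energies.

    Writing z' = alpha * z, the energy of f2 expressed in z' is alpha**(-3)
    times its original energy, so alpha = (E2/E1)**(1/3) makes the two
    penalization terms of the criterion equal.
    """
    e1 = smoothness_energy(s1, a, b)
    e2 = smoothness_energy(s2, a, b)
    return (e2 / e1) ** (1.0 / 3.0)
```

Because the spline smoother is linear in the data, scaling one component's data by a constant c scales its energy by c², which gives a convenient exact check of the cube-root rule.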
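A plausible sketch of the "fifth largest among six randomized choices" rule, under stated assumptions: the smoother is a generic linear smoother matrix S(λ) (a discrete second-difference smoother in the usage below, not the Demonstration's spline operator), and each randomized choice minimizes a Mallows-type unbiased risk estimate whose trace term is estimated with a Gaussian probe vector (Hutchinson's estimator), in the spirit of the randomized GCV choices of [5]. All names here are invented for illustration.

```python
import numpy as np

def randomized_ure(S, y, sigma2, w):
    """Mallows-type unbiased risk estimate with a randomized trace term:
    for w with iid standard normal entries, E[w @ S @ w] = tr(S)."""
    n = len(y)
    resid = y - S @ y
    tr_est = w @ (S @ w)
    return (resid @ resid + 2.0 * sigma2 * tr_est) / n

def fast_robust_lambda(lams, S_of_lam, y, sigma2, rng, n_draws=6):
    """Draw six randomized URE minimizers over the lam grid; keep the
    fifth largest (a cheap surrogate for a percentile of the choices)."""
    n = len(y)
    picks = []
    for _ in range(n_draws):
        w = rng.standard_normal(n)
        scores = [randomized_ure(S_of_lam(l), y, sigma2, w) for l in lams]
        picks.append(lams[int(np.argmin(scores))])
    return sorted(picks)[-5]  # fifth largest among the six choices
```

Taking an upper order statistic of the randomized choices biases the selection toward heavier smoothing, which is the robustification idea: it guards against the occasional severely undersmoothing draw.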
Refer to Chapter 9.1.1 of [4] for a recent detailed presentation of the additive modeling of multivariate regression functions and of the backfitting algorithm that implements it for a given value of λ. The first-step estimates use a robustified selector of λ proposed in [5] (and evaluated in [3]). Possibly, the methods discussed in [6] could have been used as well.

In brief, the idea behind our proposed tuning formula (3) for α is that the two smoothness energies occurring in (2) should be roughly equal at the optimum, in the same way that the two terms of the risk (namely the squared bias and the variance) have a similar order when λ is well chosen. Precisely, the suggestion here is simply, in a second step, to choose α so that the smoothness energy of the dilated second component equals that of the first, where f̂_1 and f̂_2 are the first-step estimates, that is, the solution of (1) with λ given, as mentioned above, by the robust method. Since dilating z by the factor α multiplies the smoothness energy of the second component by α^(−3), it is easy to see that the selected α is then simply given by formula (3).

It was François de Crécy, a dear friend and colleague over the past 25 years while he was a researcher at CEA (Commissariat à l'énergie atomique et aux énergies alternatives), who suggested to me that, for a more general multivariate additive model (i.e., the natural extension with p explanatory variables, whereas p = 2 in this Demonstration), the dilation factors of the explanatory variables could be very simply tuned by harmonizing, in a second step, all the smoothness energies in the penalization term (the one that generalizes the penalization term of (1)), each computed on the first-step estimate. Of course, the procedure could be iterated (i.e., a third step would determine new dilation factors by harmonizing the smoothness energies computed on the second-step estimate, etc.), but such a surface estimator has not yet been evaluated.

[4] T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed., New York: Springer, 2009.

[5] D. A. Girard, "Estimating the Accuracy of (Local) Cross-Validation via Randomised GCV Choices in Kernel or Smoothing Spline Regression," Journal of Nonparametric Statistics, 22(1), 2010 pp. 41–64. doi:10.1080/10485250903095820.

[6] M. A. Lukas, F. R. de Hoog and R. S. Anderssen, "Practical Use of Robust GCV and Modified GCV for Spline Smoothing," Computational Statistics, 31(1), 2016 pp. 269–289. doi:10.1007/s00180-015-0577-7.
