Every stationary Gaussian random field $Z = \{Z(s),\ s \in \mathbb{R}^2\}$ is determined by its constant mean $\mu$ (with $\mu = 0$ when the field is assumed centered), its variance $\sigma^2$ and its autocorrelation function $\rho(\cdot)$, which satisfies $\operatorname{corr}(Z(s), Z(s+h)) = \rho(h)$ for all $s, h \in \mathbb{R}^2$. This Demonstration considers a well-studied example of such a field, defined by $\rho(h) = \rho(h_1, h_2) = e^{-(|h_1| + |h_2|)/\theta}$, which can be considered a simple extension of the well-known Ornstein–Uhlenbeck (OU) process to two dimensions. This random field is also called the OU sheet. As in one dimension, $\theta$ is often called the range parameter. See [1, 2] and [3, Chap. 12] for plots of the Brownian sheet, which is a limiting version (as $\theta \to \infty$) of the OU sheet. Note that horizontal and vertical features are clearly present in an OU sheet, especially for small $\theta$. To simplify, assume that one realization of such a field is observed on the regular grid $\{1, \dots, n\} \times \{1, \dots, n\}$, so that the total number of observations is $N = n^2$. If the observed image $Z$ (whose $(i, j)$ entry is $Z(i, j)$) is rearranged into a vector $z$ of size $N$ by stacking its columns, it can be checked that $z$ is a Gaussian vector whose covariance matrix coincides with the Kronecker product $\sigma^2 (R_\theta \otimes R_\theta)$, where $R_\theta$ denotes the $n \times n$ correlation matrix of an AR(1) time series of parameter $\theta$ (precisely, its serial correlation being $r = e^{-1/\theta}$).
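The Kronecker structure makes simulation cheap: if $R = L L^\top$ is a Cholesky factorization, then $\operatorname{vec}(L E L^\top) = (L \otimes L)\operatorname{vec}(E)$ for an image $E$ of i.i.d. standard normals, so the stacked vector has covariance $R \otimes R$. A minimal NumPy sketch of this idea (the function name `ou_sheet` and the dense Cholesky factorization are illustrative choices, not the Demonstration's own code):

```python
import numpy as np

def ou_sheet(n, theta, sigma=1.0, seed=None):
    """Simulate one n x n realization of a centered OU sheet on {1,...,n}^2.

    Stacking the columns of the returned image gives a Gaussian vector with
    covariance sigma^2 (R kron R), where R is the correlation matrix of an
    AR(1) series with serial correlation r = exp(-1/theta).
    """
    rng = np.random.default_rng(seed)
    r = np.exp(-1.0 / theta)
    idx = np.arange(n)
    R = r ** np.abs(idx[:, None] - idx[None, :])  # (R)_{ij} = r^{|i-j|}
    L = np.linalg.cholesky(R)                     # R = L L^T
    E = rng.standard_normal((n, n))               # i.i.d. N(0, 1) noise
    # vec(L E L^T) = (L kron L) vec(E), so vec(Z) ~ N(0, sigma^2 (R kron R))
    return sigma * (L @ E @ L.T)
```

A faster $O(N)$ variant replaces the dense factor by the bidiagonal factor of $R^{-1}$ mentioned in the Details.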
Such an image $Z$ is also said to be an AR(1)$\otimes$AR(1) process. Simulating such a $Z$ is quite fast (see Details). Here this can be done once a value of the underlying range parameter $\theta$ is chosen from a list of values similar to the one in Table 3 of [4], except that some smaller values are added. It is well known that the classic maximum-likelihood (ML) principle can be easily implemented, even for large $N$, by using the known expression of $R_\theta^{-1}$ (a tridiagonal matrix) and of its determinant, and by exploiting the properties of the Kronecker product. It can be verified here that the calculation of the profile log-likelihood is very fast even when computed over a fine grid of $\theta$-values: this criterion (up to a constant term and after a convenient rescaling) is displayed in the top-right panel for each simulated $Z$. This Demonstration also studies the "energy-variance matching" alternative to ML (the GE-EV method, which is the "no-noise" version of CGEM-EV), already implemented in a series of Demonstrations for different one-dimensional contexts (Matérn autocorrelations and the powered-exponential autocorrelation). Recall that GE-EV consists of first taking the naive empirical variance $\hat\sigma^2 = \|z\|^2 / N$ as an estimate of $\sigma^2$, and next defining $\hat\theta$ by matching the quadratic form $q_\theta(z) = z^\top Q_\theta z$, where $Q_\theta = (R_\theta \otimes R_\theta)^{-1}$ (the so-called "candidate Gibbs energy of $z$"), to $N \hat\sigma^2$. Of course (since GE-EV requires even fewer computations than ML), the implementation of GE-EV is also very fast (bottom-right panel).
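Both criteria can be sketched with $n \times n$ linear algebra only, since $\det(R \otimes R) = (\det R)^{2n}$ with $\det R_\theta = (1 - r^2)^{n-1}$, and the quadratic form satisfies $\operatorname{vec}(Z)^\top (R^{-1} \otimes R^{-1}) \operatorname{vec}(Z) = \operatorname{tr}(Z^\top R^{-1} Z R^{-1})$. The NumPy/SciPy sketch below is illustrative, not the Demonstration's own code; in particular, the root-finding bracket for GE-EV is an assumed choice that avoids the trivial solution at $\theta \to 0$ (where $R_\theta \approx I$ makes the matching equation degenerate):

```python
import numpy as np
from scipy.optimize import brentq

def ar1_corr(n, theta):
    """AR(1) correlation matrix with serial correlation r = exp(-1/theta)."""
    r = np.exp(-1.0 / theta)
    i = np.arange(n)
    return r ** np.abs(i[:, None] - i[None, :])

def simulate(n, theta, sigma=1.0, seed=None):
    """One n x n OU-sheet image: vec(Z) ~ N(0, sigma^2 (R kron R))."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(ar1_corr(n, theta))
    return sigma * L @ rng.standard_normal((n, n)) @ L.T

def gibbs_energy(Z, theta):
    """q_theta(z) = vec(Z)^T (R^-1 kron R^-1) vec(Z) = tr(Z^T R^-1 Z R^-1)."""
    R = ar1_corr(Z.shape[0], theta)
    RiZ = np.linalg.solve(R, Z)                      # R^-1 Z
    return np.trace(np.linalg.solve(R, Z.T @ RiZ))   # tr(R^-1 Z^T R^-1 Z)

def neg2_profile_loglik(Z, theta):
    """-2 log profile likelihood (sigma^2 profiled out), up to a constant."""
    n = Z.shape[0]
    N = n * n
    r = np.exp(-1.0 / theta)
    q = gibbs_energy(Z, theta)
    # profiled variance is q/N; log det(R kron R) = 2 n (n-1) log(1 - r^2)
    return N * np.log(q / N) + 2 * n * (n - 1) * np.log(1 - r**2)

def ge_ev_theta(Z, bracket=(0.3, 300.0)):
    """GE-EV: solve q_theta(z) = N * sigma_hat^2 with sigma_hat^2 = ||z||^2/N."""
    target = np.sum(Z**2)                # = N * empirical variance
    return brentq(lambda t: gibbs_energy(Z, t) - target, *bracket)
```

Typical usage: simulate `Z = simulate(60, theta=3.0)`, evaluate `neg2_profile_loglik` over a fine grid of $\theta$-values for the top-right panel, and call `ge_ev_theta(Z)` for the bottom-right panel.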
As observed in [4] for isotropic random fields, the ML and GE-EV methods give quite close results except in settings with a large range. For these extreme settings, the proximity of the two methods is restored provided one focuses only on the estimation of the product $\sigma^2 \theta^{-2}$, which plays here the role of a micro-ergodic coefficient (see [5]), as was the case for the diffusion coefficient specific to each of the above-mentioned one-dimensional cases.
As an example of the computations involved, write $r = e^{-1/\theta}$ and define the following $n \times n$ matrices: $I$, the identity; $C$, the 0-1 tridiagonal matrix with ones on its sub- and superdiagonals and zeros on its diagonal; and $D = \operatorname{diag}(0, 1, \dots, 1, 0)$. Then the well-known expression for the inverse of $R_\theta$ can be concisely written as $R_\theta^{-1} = (1 - r^2)^{-1}(I - rC + r^2 D)$. From the properties of the Kronecker product, $(R_\theta \otimes R_\theta)^{-1} = R_\theta^{-1} \otimes R_\theta^{-1}$, and expanding this product gives $(R_\theta \otimes R_\theta)^{-1} = (1 - r^2)^{-2}\,(I_N - r M_1 + r^2 M_2 - r^3 M_3 + r^4 M_4)$, where $M_1 = I \otimes C + C \otimes I$, $M_2 = C \otimes C + I \otimes D + D \otimes I$, $M_3 = C \otimes D + D \otimes C$ and $M_4 = D \otimes D$. Thus, even for large $n$, the four $N \times N$ matrices $M_1$, $M_2$, $M_3$ and $M_4$ are very sparse matrices independent of $\theta$, and they can easily be precomputed for a simple and fast calculation of $q_\theta(z)$. Also note that the known bidiagonal factor $A_\theta$ of $R_\theta^{-1}$, precisely $R_\theta^{-1} = A_\theta^\top A_\theta$ with $(A_\theta)_{1,1} = 1$, $(A_\theta)_{t,t} = (1 - r^2)^{-1/2}$ and $(A_\theta)_{t,t-1} = -r\,(1 - r^2)^{-1/2}$ for $t = 2, \dots, n$, can be used to implement a fast simulation method for AR(1)$\otimes$AR(1) images.
[1] J. B. Walsh, "An Introduction to Stochastic Partial Differential Equations," in École d'Été de Probabilités de Saint Flour XIV – 1984 (P. L. Hennequin, ed.), Lecture Notes in Mathematics, vol. 1180, Berlin, Heidelberg: Springer, 1984, pp. 265–439. doi:10.1007/BFb0074920.
[2] S. Baran and K. Sikolya, "Parameter Estimation in Linear Regression Driven by a Gaussian Sheet," Acta Scientiarum Mathematicarum, 78(3), 2012, pp. 689–713. doi:10.1007/BF03651393.
[3] D. Khoshnevisan, Multiparameter Processes: An Introduction to Random Fields, New York: Springer, 2002.
[4] D. A. Girard, "Efficiently Estimating Some Common Geostatistical Models by 'Energy–Variance Matching' or Its Randomized 'Conditional–Mean' Versions," Spatial Statistics, 21(Part A), 2017, pp. 1–26. doi:10.1016/j.spasta.2017.01.001.
[5] Z. Ying, "Maximum Likelihood Estimation of Parameters under a Spatial Sampling Scheme," The Annals of Statistics, 21(3), 1993, pp. 1567–1590. doi:10.1214/aos/1176349272.
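The AR(1) identities used in the Details above — the tridiagonal inverse $R^{-1} = (1 - r^2)^{-1}(I - rC + r^2 D)$, its Kronecker expansion into $\theta$-independent sparse matrices, and the bidiagonal factor of $R^{-1}$ — can be checked numerically. A NumPy sketch, written with dense matrices and a small $n$ for readability (a real implementation would store the $M_k$ in sparse format):

```python
import numpy as np

n, theta = 6, 2.0
r = np.exp(-1.0 / theta)
i = np.arange(n)
R = r ** np.abs(i[:, None] - i[None, :])      # AR(1) correlation matrix

I = np.eye(n)
C = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # sub/super-diagonal ones
D = np.diag(np.r_[0.0, np.ones(n - 2), 0.0])                  # interior diagonal ones

# Tridiagonal expression for the inverse of an AR(1) correlation matrix
Rinv = (I - r * C + r**2 * D) / (1 - r**2)
assert np.allclose(Rinv @ R, I)

# (R kron R)^{-1} = R^{-1} kron R^{-1}
#                 = (1-r^2)^{-2} (I_N - r M1 + r^2 M2 - r^3 M3 + r^4 M4),
# with four sparse matrices M1..M4 that do not depend on theta:
M1 = np.kron(I, C) + np.kron(C, I)
M2 = np.kron(C, C) + np.kron(I, D) + np.kron(D, I)
M3 = np.kron(C, D) + np.kron(D, C)
M4 = np.kron(D, D)
Q = (np.eye(n * n) - r * M1 + r**2 * M2 - r**3 * M3 + r**4 * M4) / (1 - r**2) ** 2
assert np.allclose(Q, np.kron(Rinv, Rinv))

# Bidiagonal factor A with A^T A = R^{-1}; solving (A kron A) vec(Z) = vec(E)
# for an i.i.d. standard-normal image E is an O(N) simulation of an
# AR(1) kron AR(1) image (AR(1) filtering along rows, then along columns).
s = 1.0 / np.sqrt(1 - r**2)
A = np.diag(np.r_[1.0, s * np.ones(n - 1)]) + np.diag(-r * s * np.ones(n - 1), -1)
assert np.allclose(A.T @ A, Rinv)
```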