Optimal Recovery of Local Truth
C. C. RODRIGUEZ
Abstract
Probability mass curves the data space with horizons. Let f be a
multivariate probability density function with continuous second-order
partial derivatives. Consider the problem of estimating the true value
of f(z) > 0 at a single point z from n independent observations. It is
shown that the fastest possible estimators (such as the k-nearest-neighbor
and kernel estimators) have minimum asymptotic mean square errors when
the space of observations is thought of as conformally curved. The
optimal metric is shown to be generated by the Hessian of f in the
regions where the Hessian is definite. Thus, the peaks and valleys of f
are surrounded by singular horizons where the Hessian changes signature
from Riemannian to pseudo-Riemannian. Adaptive estimators based on the
optimal variable metric show considerable theoretical and practical
improvements over traditional methods. The formulas simplify
dramatically when the dimension of the data space is 4. The similarities
with General Relativity are striking but possibly illusory at this
point. However, these results suggest that nonparametric density
estimation may have something new to say about current physical theory.
1 Introduction
During the past thirty years the theory of nonparametrics has come to
dominate the scene in mathematical statistics. Parallel to the
accelerating discovery of new technical results, a consensus has been
growing among some researchers in the area that we may be witnessing a
promising solid road towards the elusive Universal Learning Machine
(see e.g. [,]).
The queen of nonparametrics is density estimation. All the fundamental
ideas for solving the new problems of statistical estimation in
functional spaces (smoothing, generalization, optimal minimax rates,
etc.) already appear in the problem of estimating the probability
density (i.e. the model) from the observed data. Moreover, it is now
well known that a solution of the density estimation problem
automatically implies solutions for the problems of pattern
recognition and nonparametric regression, as well as for most problems
that can be expressed as a functional of the density.
In this paper I present a technical result about optimal nonparametric
density estimation that shows, at least at a formal level, a surprising
similarity between nonparametrics and General Relativity. Simply put,
probability mass curves the data space with horizons.
What exactly is meant by this is the subject of this paper, but before
proceeding further a few comments are in order. First of all, let us
assume that we have a set {x_1, …, x_n} of data, each observation x_j
consisting of p measurements that are thought of as the p coordinates
of a vector in ℝ^p. To make the data space into a probability space we
endow ℝ^p with the σ-field of Borel sets, but nothing beyond that. In
particular, no a priori metric structure on the data space is assumed.
The n observations are assumed to be n independent realizations of a
given probability measure P on ℝ^p. By the Lebesgue decomposition
theorem, for every Borel set B we can write,
P(B) = \int_B f(x)\,\lambda(dx) + \nu(B)    (1)
where ν is the singular part of P, which assigns positive probability
mass to Borel sets of zero Lebesgue volume. Due to the existence of
pathologies like the Cantor set in one dimension and its analogues in
higher dimensions, the singular part ν cannot be empirically estimated
(see e.g. []). Practically all of the papers on density estimation
rule out the singular part of P a priori. The elimination of
singularities by fiat has permitted the construction of a rich
mathematical theory for density estimation, but it has also ruled out
a priori the study of models of mixed dimensionality (see []) that may
be necessary for understanding point masses and spacetime
singularities coexisting with absolutely continuous distributions.
We assume further that in the regions where f(x) > 0 the density f is
of class C^2, i.e., it has continuous second-order partial derivatives.
1.1 Nonparametrics with the World in Mind
The road from Classical Newtonian Physics to the physics of today can
be seen as a path paved by an increasing use of fundamental concepts
that are statistical in nature. This is obvious for statistical
mechanics, becoming clearer for quantum theory, and appearing almost
as a shock in General Relativity. Not surprisingly there have been
several attempts to take this trend further (see e.g.
[,,,]) in the direction of
Physics as Inference.
Now suppose for a moment that in fact some kind of restatement of the
foundations of physics in terms of information and statistical
inference will eventually end up providing a way to advance our
current understanding of nature. As of today, that is either already a
solid fact or remains a wild speculation, depending on who you ask.
In any case, for the trend to take over, it will have to be able to
reproduce all the successes of current science and make new correct
predictions. In particular it would have to reproduce General
Relativity. Recall that the main lesson of General Relativity is that
space and time are not just a passive stage on top of which the
universe evolves. General Relativity is the theory that tells (through
the field equation) how to build the stage (left hand side of the
equation) from the data (right hand side of the equation). The
statistical theory that tells how to build the stage of inference (the
probabilistic model) from the observed data is: Nonparametric
Density Estimation. It is therefore reassuring to find typical
signatures of General Relativity in density estimation as this paper
does. Perhaps Physics is not just a special case of statistical
inference, and all these are only coincidences of no more relevance
than, say, the fact that multiplication or the logarithmic function
appears everywhere all the time. That may be so, but even in that case
I believe it is worth noticing the connection, for undoubtedly GR and
density estimation have a common goal: the dynamic building of the
stage.
More formally: let f be a multivariate probability density function
with continuous second-order partial derivatives. Consider the problem
of estimating the true value of f(z) > 0 at a single point z from n
independent observations. It is shown that the fastest possible
estimators (including the k-nearest-neighbor and kernel estimators as
well as the rich class of estimators in []) have minimum asymptotic
mean square errors when the space of observations is thought of as
conformally curved. The optimal metric is shown to be generated by the
Hessian of f in the regions where the Hessian is definite. Thus, the
peaks and valleys of f are surrounded by horizons where the Hessian
changes signature from Riemannian to pseudo-Riemannian.
The result for the case of generalized k-nearest-neighbor estimators
[] has circulated since 1988 in the form of a technical report [].
Recently I found that a special case of this theorem has been known
since 1972 [] and has undergone continuous development in the Pattern
Recognition literature (see e.g. [,,,]).
2 Estimating Densities from Data
The canonical problem of density estimation at a point z ∈ ℝ^p can be
stated as follows: estimate f(z) > 0 from n independent observations
of a random variable with density f.
The most successful estimators of f(z) attempt to approximate the
density of probability at z by using the observations x_1, …, x_n to
build both a small volume around z and a weight for this volume in
terms of probability mass. The density is then computed as the ratio
of the estimated mass over the estimated volume. The two classical
examples are the k-nearest-neighbor (knn) and the kernel estimators.
2.1 The knn
The simplest and historically the first example of a nonparametric
density estimator is the knn []. The knn estimator of f(z) is defined
for k ∈ {1, 2, …, n} as,

h_n(z) = \frac{k/n}{\lambda_k}    (2)

where λ_k is the volume of the sphere centered at the point z ∈ ℝ^p
with radius R(k) given by the distance from z to the k-th
nearest-neighbor observation. If λ denotes the Lebesgue measure on
ℝ^p we have,

\lambda_k = \lambda(S(R(k)))    (3)

where,

S(r) = \{ x \in \mathbb{R}^p : \|x - z\| \le r \}    (4)
The sphere S(r) and the radius R(k) are defined relative to a given
norm ‖·‖ in ℝ^p. The stochastic behavior of the knn depends on the
specific value of the integer k chosen in (2). Clearly, in order to
achieve consistency (i.e. stochastic convergence of h_n(z), as n → ∞,
towards the true value of f(z) > 0) it is necessary to choose
k = k(n) as a function of n. The volumes λ_k must shrink, to control
the bias, and consequently we must have k/n → 0 for h_n(z) to be able
to approach a strictly positive number. On the other hand, we must
have k → ∞ to make the estimator depend on an increasing number k of
observations and in this way control its variance. Thus, for the knn
to be consistent, we need k to increase with n but at a rate slower
than n itself.
The knn estimator depends not only on k but also on a choice of norm.
The main result of this paper follows from the characterization of
the norm ‖·‖ that, under some regularity conditions, produces the
best asymptotic (as n → ∞) performance for density estimators.
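As a concrete illustration, the knn recipe (2)-(4) takes only a few lines of code. The sketch below assumes the euclidean norm; the function name h_knn and the use of NumPy are my choices, not the paper's:

```python
import math
import numpy as np

def h_knn(z, data, k):
    """k-nearest-neighbor density estimate (2) at the point z,
    using the euclidean norm: (k/n) / lambda_k."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    data = np.atleast_2d(np.asarray(data, dtype=float))
    n, p = data.shape
    r_k = np.sort(np.linalg.norm(data - z, axis=1))[k - 1]  # R(k)
    unit_ball = math.pi ** (p / 2) / math.gamma(p / 2 + 1)  # lambda(S(1))
    lam_k = unit_ball * r_k ** p                            # lambda_k
    return (k / n) / lam_k
```

For example, with n = 20000 uniform observations on [0, 1] and k = 2000 (so that k is large while k/n is small), h_knn([0.5], x, 2000) comes out close to the true value f(0.5) = 1.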
2.2 The kernel
If we consider only regular norms ‖·‖, in the sense that for all
sufficiently small values of r > 0,

\lambda(S(r)) = \lambda(S(1))\, r^p \equiv \beta r^p    (5)

then the classical kernel density estimator can be written as,

g_n(z) = \frac{M_\mu}{\beta \mu^p}    (6)

where,

M_\mu = \frac{1}{n} \left( \sum_{x_j \in S(\mu)} K_{\mu^{-1}}(x_j - z) \right)    (7)

The smoothing parameter μ = μ(n) is such that k = [nμ^p] satisfies the
conditions for consistency of the knn, and K_{μ^{-1}}(x) = K(μ^{-1}x),
where the kernel function K is a non-negative bounded function with
support on the unit sphere (i.e. K(x) = 0 for ‖x‖ > 1) satisfying,

\int_{\|x\| \le 1} K(x)\, dx = \beta, \qquad \int_{\|x\| \le 1} x\, K(x)\, dx = 0    (8)
Notice that for the constant kernel (i.e. K(x) = 1 for ‖x‖ ≤ 1) the
estimator (6) approximates f(z) by the proportion of observations
inside S(μ) divided by the volume of S(μ). The general kernel function
K acts as a weight function allocating different weights
K_{μ^{-1}}(x_j - z) to the x_j's inside S(μ). To control bias (see ()
below) the kernel K is usually taken as a decreasing radially
symmetric function in the metric generated by the norm ‖·‖. Thus,
K_{μ^{-1}}(x_j - z) assigns a weight to x_j that decreases with its
distance to z. This has intuitive appeal, for the observations that
lie closer to z are less likely to fall off the sphere S(μ), under
repeated sampling, than the observations that are close to the
boundary of S(μ).
The performance of the kernel as an estimator of f(z) depends first
and foremost on the value of the smoothing parameter μ. The numerator
and the denominator of g_n(z) depend not only on μ but also on the
norm ‖·‖ chosen and on the form of the kernel function K. As is shown
in theorem () these three parameters are interrelated.
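The mass-over-volume recipe (6)-(7) can be sketched directly. The code below assumes the euclidean norm; g_kernel and the default constant kernel are my naming and choices:

```python
import math
import numpy as np

def g_kernel(z, data, mu, K=lambda u: np.ones(len(u))):
    """Kernel density estimate (6): estimated mass M_mu inside the
    sphere S(mu) around z, divided by the sphere's volume beta*mu**p.
    The default K = 1 is the constant kernel."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    data = np.atleast_2d(np.asarray(data, dtype=float))
    n, p = data.shape
    u = (data - z) / mu                        # rescaled displacements
    inside = np.linalg.norm(u, axis=1) <= 1.0  # observations inside S(mu)
    beta = math.pi ** (p / 2) / math.gamma(p / 2 + 1)
    M_mu = K(u[inside]).sum() / n              # estimated mass (7)
    return M_mu / (beta * mu ** p)
```

With uniform data on [0, 1], g_kernel([0.5], x, 0.05) is, up to sampling noise, the proportion of observations within 0.05 of 0.5 divided by 0.1, i.e. close to 1.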
2.3 Double Smoothing Estimators
The knn (2) and the kernel (6) methods are two extremes of a
continuum. Both h_n(z) and g_n(z) estimate f(z) as probability mass
per unit volume. The knn fixes the mass at the deterministic value
k/n and lets the volume λ_k be stochastic, while the kernel method
fixes the volume λ(S(μ)) and lets the mass M_μ be random.
The continuum gap between (2) and (6) is filled by estimators that
stochastically estimate mass and volume by smoothing the contribution
of each sample point with different smoothing functions for the
numerator and denominator (see []).
Let b ≥ 1 and assume, without loss of generality, that bk is an
integer. The double smoothing estimators with deterministic weights
are defined as,
f_n(z) = \frac{\frac{1}{n} \sum_{i=1}^{n} K\!\left( \frac{z - x_i}{R(k)} \right)}{\frac{1}{cb} \sum_{i=1}^{bk} w_i \lambda_i}    (9)

where,

w_i = \int_{(i-1)/bk}^{i/bk} w(u)\, du    (10)

and w(·) is a probability density on [0,1] with mean c.
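The weights (10) are simple to compute from the cdf of w. In the sketch below the density w(u) = 2u, with cdf W(u) = u^2 and mean c = 2/3, is an illustrative choice of mine, not from the paper:

```python
import numpy as np

def ds_weights(bk, W):
    """Deterministic weights (10): w_i = W(i/bk) - W((i-1)/bk),
    where W is the cdf of the smoothing density w on [0, 1]."""
    grid = np.arange(bk + 1) / bk
    return np.diff(W(grid))

# illustrative choice: w(u) = 2u on [0, 1], so W(u) = u**2, mean c = 2/3
w_i = ds_weights(50, lambda u: u ** 2)
```

Since w is a probability density, the w_i sum to one, and their weighted average location recovers the mean c.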
3 The Truth as n → ∞?
In nonparametric statistics, in order to assess the quality of an
estimator f_n(z) as an estimate of f(z), it is necessary to choose a
criterion for judging how far the estimator is from what it tries to
estimate. This is sometimes regarded as revolting and morally wrong
by some Bayesian Fundamentalists, for once you choose a loss function
and a prior, logic alone provides you with the Bayes estimator and
the criterion for judging its quality. That is desirable, but there
is a problem in high dimensional spaces. In infinite dimensional
hypothesis spaces (i.e. in nonparametric problems) almost all priors
will convince you of the wrong thing! (see e.g. [,]; for a nonregular
way out see []). These kinds of Bayesian nonparametric results
provide a mathematical proof that almost all fundamentalist religions
are wrong (more data can only make the believers more sure that the
wrong thing is true!). An immediate corollary is that Subjective
Bayesians can't go to Heaven. Besides, the choice of a goodness of
fit criterion is as ad hoc as (and equivalent to) the choice of a
loss function.
3.1 The Natural Invariant Loss Function and Why the MSE is not
that Bad
The most widely studied goodness of fit criterion is the Mean Square
Error (MSE) defined by,

(MSE) = E\,|f_n(z) - f(z)|^2    (11)

where the expectation is over the joint distribution of the sample
x_1, …, x_n. By adding and subtracting T = E f_n(z) and expanding the
square, we can express the MSE in the computationally convenient form,

(MSE) = E\,|f_n(z) - T|^2 + |T - f(z)|^2    (12)

By integrating (11) over z ∈ ℝ^p and interchanging E and ∫ (justified
by Fubini's theorem since the integrand is ≥ 0) we obtain,

(MISE) = E \int |f_n(z) - f(z)|^2\, dz    (13)
The Mean Integrated Square Error (MISE) is just the expected L^2
distance of f_n from f. Goodness of fit measures based on the (MSE)
have two main advantages: they are often easy to compute and they
enjoy the rich Hilbertian geometry of L^2. On the other hand, the
(MSE) is unnatural and undesirable for two reasons. Firstly, the
(MSE) is only defined for densities in L^2, and this rules out a
priori all the densities in L^1\L^2, which is unacceptable. Secondly,
even when the (MISE) exists, it is difficult to interpret (as a
measure of distance between densities) due to its lack of invariance
under relabels of the data space. Many researchers regard the
expected L^1 distance between densities as the natural loss function
in density estimation. The L^1 distance does in fact exist for all
densities and it is easy to interpret, but it lacks the rich geometry
generated by the availability of the inner product in L^2. A clean
way out is to use the expected L^2 distance between the wave
functions ψ_n = √f_n and ψ = √f.
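Concretely, the squared L^2 distance between wave functions equals 2(1 − ∫√(f_1 f_2)), a multiple of the squared Hellinger distance, and it is easy to approximate by quadrature. The function names and the Gaussian example below are my choices:

```python
import numpy as np

def wave_distance_sq(f1, f2, grid):
    """Squared L2 distance between the wave functions sqrt(f1) and
    sqrt(f2), approximated with the trapezoidal rule on `grid`."""
    d = (np.sqrt(f1(grid)) - np.sqrt(f2(grid))) ** 2
    return float(np.sum((d[1:] + d[:-1]) / 2 * np.diff(grid)))

def gauss(mean, sd):
    """One-dimensional Gaussian density."""
    return lambda x: np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
```

For two unit-variance Gaussians a standard closed form gives ∫√(f_1 f_2) = exp(−(m_1 − m_2)^2/8), which the quadrature reproduces.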
Theorem 1
The L^2 norm of wave functions is invariant under relabels of the
data space, i.e.,

\int |\psi_n(z) - \psi(z)|^2\, dz = \int |\tilde\psi_n(z') - \tilde\psi(z')|^2\, dz'    (14)

where z = h(z') with h any one-to-one smooth function.
Proof: Just change the variables. From the change of variables
theorem the pdf of z' is,

\tilde f(z') = f(h(z')) \left| \frac{\partial(h)}{\partial(z')} \right|    (15)

from where the wave function of z' is given by,

\tilde\psi(z') = \psi(h(z')) \left| \frac{\partial(h)}{\partial(z')} \right|^{1/2}    (16)

Thus, making the substitution z = h(z') we get,

\int |\psi_n(z) - \psi(z)|^2\, dz = \int |\psi_n(h(z')) - \psi(h(z'))|^2 \left| \frac{\partial(h)}{\partial(z')} \right| dz' = \int \left| \psi_n \left| \frac{\partial(h)}{\partial(z')} \right|^{1/2} - \psi \left| \frac{\partial(h)}{\partial(z')} \right|^{1/2} \right|^2 dz'    (17)

Q.E.D.
The following theorem shows that a transformation of the MSE of a
consistent estimator provides an estimate for the expected L^{2}
norm between wave functions.
Theorem 2
Let f_n(z) be a consistent estimator of f(z). Then,

E \int |\psi_n - \psi|^2\, dz = \frac{1}{4} \int \frac{E|f_n(z) - f(z)|^2}{f(z)}\, dz + (\text{smaller order terms})    (18)

Proof: A first-order Taylor expansion of √x about x_0 gives,

\sqrt{x} - \sqrt{x_0} = \frac{1}{2} \frac{x - x_0}{\sqrt{x_0}} + O((x - x_0)^2)    (19)

Substituting x = f_n(z), x_0 = f(z) into (19), squaring both sides
and taking expectations we obtain,

E|\psi_n(z) - \psi(z)|^2 = \frac{1}{4} \frac{E|f_n(z) - f(z)|^2}{f(z)} + o\big(E|f_n(z) - f(z)|^2\big)    (20)

Integrating over z and interchanging E and ∫ we arrive at (18).
Q.E.D.
Proceeding as in the proof of theorem 3.1 we can show that,

\int \frac{|f_n - f|^2}{f}\, dz = \int \frac{|\tilde f_n - \tilde f|^2}{\tilde f}\, dz'    (21)

where, as before, z ↔ z' is any one-to-one smooth transformation of
the data space and f̃ is the density of z'. Thus, it follows from
(21) that the leading term on the right hand side of (18) is also
invariant under relabels of the data space. The nice thing about the
L^2 norm of wave functions, unlike (21), is that it is defined even
when f(z) = 0.
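The invariance of theorem 3.1 is easy to check numerically. In the sketch below an exponential "truth" and a mis-scaled "estimate" are relabeled through z = h(z') = z'^3, and the two sides of (14) agree; the densities, the relabel and the grids are illustrative choices of mine:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

f  = lambda z: np.exp(-z)              # 'true' density: Exp(1)
fn = lambda z: 1.3 * np.exp(-1.3 * z)  # a mis-scaled 'estimate': Exp(1.3)

z = np.linspace(0.0, 40.0, 200001)
lhs = trap((np.sqrt(fn(z)) - np.sqrt(f(z))) ** 2, z)

# relabel the data space: z = h(z') = z'**3, with |dh/dz'| = 3 z'**2
zp = np.linspace(0.0, 40.0 ** (1 / 3), 200001)
jac = 3 * zp ** 2
rhs = trap((np.sqrt(fn(zp ** 3) * jac) - np.sqrt(f(zp ** 3) * jac)) ** 2, zp)
```

Both quadratures also match the closed form 2 − 2√1.3/1.15 obtained from ∫√(f f_n) for these two exponentials.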
4 Some Classic Asymptotic Results
We collect here the well known Central Limit Theorems (CLT) for the
knn and kernel estimators together with some remarks about
nonparametric density estimation in general. The notation and the
formulas introduced here will be needed for computing the main result
about optimal norms in the next section.
Assumption 1
Let f be a pdf on ℝ^p of class C^2, with f(z) > 0 and non-singular
Hessian H(z) at z ∈ ℝ^p; i.e., the matrix of second-order partial
derivatives of f at z exists, is non-singular, and has entries
continuous at z.
Assumption 2
Let K be a bounded non-negative function defined on the unit sphere
S_0 = {x ∈ ℝ^p : ‖x‖ ≤ 1} and satisfying,

\int_{\|x\| \le 1} K(x)\, dx = \lambda(S_0) \equiv \beta    (22)

\int_{\|x\| \le 1} x\, K(x)\, dx = 0 \in \mathbb{R}^p    (23)
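In p = 1 dimension, conditions (22)-(23) are easy to verify by quadrature for a concrete kernel. The Epanechnikov-shaped weight below, rescaled to integrate to β = 2, is my choice of example:

```python
import numpy as np

# p = 1: the unit sphere S_0 is [-1, 1], so beta = lambda(S_0) = 2.
# Epanechnikov-shaped weight rescaled to integrate to beta:
x = np.linspace(-1.0, 1.0, 200001)
K = 1.5 * (1.0 - x ** 2)

dx = np.diff(x)
mass = float(np.sum((K[1:] + K[:-1]) / 2 * dx))                      # (22)
first_moment = float(np.sum(((x * K)[1:] + (x * K)[:-1]) / 2 * dx))  # (23)
```

The total mass comes out as 2 = β and the first moment vanishes by symmetry.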

Theorem 3 [CLT for knn]
Under assumption 4, if k = k(n) is taken in the definition of the knn
(2) in such a way that for some a > 0,

\lim_{n \to \infty} n^{-4/(p+4)}\, k = a    (24)

then, letting Z_n = \sqrt{k}\,(h_n(z) - f(z)), we have,

\lim_{n \to \infty} P(Z_n \le t) = \int_{-\infty}^{t} \frac{1}{\sqrt{2\pi V(z)}} \exp\!\left( -\frac{(y - B(z))^2}{2 V(z)} \right) dy    (25)

where,

B(z) = \left( \frac{a^{(p+4)/(2p)}}{2 f^{2/p}(z)} \right) \left\{ \beta^{-1-2/p} \int_{\|x\| \le 1} x^T H(z)\, x\, dx \right\}    (26)

and,

V(z) = f^2(z)    (27)

Proof: This is a special case of [].
Theorem 4 [CLT for kernel]
Under assumptions 4 and 4, if μ = μ(n) is taken in the definition of
the kernel (6) in such a way that for some a > 0, k = [nμ^p]
satisfies (24), then, letting Z_n = \sqrt{k}\,(g_n(z) - f(z)), we
have (25) where now,

B(z) = \left( \frac{a^{(p+4)/(2p)}}{2} \right) \left\{ \beta^{-1} \int_{\|x\| \le 1} x^T H(z)\, x\, K(x)\, dx \right\}    (28)

and,

V(z) = f(z) \left\{ \beta^{-2} \int_{\|x\| \le 1} K^2(x)\, dx \right\}    (29)

Proof: The sample x_1, …, x_n is assumed to be iid f and therefore
the kernel estimator g_n(z) given by (6) and (7) is a sum of iid
random variables. Thus, the classic CLT applies and we only need to
verify the rate (24) and the asymptotic expressions for the bias
(28) and variance (29). We have,



E\, g_n(z) = \frac{1}{\beta \mu^p} \frac{1}{n} \sum_{j=1}^{n} \int K\!\left( \frac{x_j - z}{\mu} \right) f(x_j)\, dx_j    (30)

= \frac{1}{\beta \mu^p} \int K(y)\, f(z + \mu y)\, \mu^p\, dy    (31)

= \int \frac{K(y)}{\beta} \left\{ f(z) + \mu \nabla f(z) \cdot y + \frac{\mu^2}{2} y^T H(z)\, y + o(\mu^2) \right\} dy    (32)

= f(z) + \frac{\mu^2}{2\beta} \int y^T H(z)\, y\, K(y)\, dy + o(\mu^2)    (33)

where we have changed the variables of integration to get (31), used
assumption 4 and Taylor's theorem to get (32), and used assumption 4
to obtain (33). For the variance we have,



\mathrm{var}(g_n(z)) = \frac{1}{n \beta^2 \mu^{2p}}\, \mathrm{var}\big( K((X - z)/\mu) \big)    (34)

= \frac{1}{n \beta^2 \mu^{2p}} \left\{ \int_{\|y\| \le 1} f(z + \mu y)\, K^2(y)\, \mu^p\, dy - \left( \int_{\|y\| \le 1} f(z + \mu y)\, K(y)\, \mu^p\, dy \right)^2 \right\}    (35)

= \frac{f(z)}{n \beta^2 \mu^p} \int_{\|y\| \le 1} K^2(y)\, dy + o\!\left( \frac{1}{n \mu^p} \right)    (36)

where we have used var(K) = E K^2 - (E K)^2 and changed the variables
of integration to get (35), and used assumption 4 and (0th order)
Taylor's theorem to get (36). Hence, the theorem follows from (33)
and (36) after noticing that (24) and k = nμ^p imply,

\sqrt{k}\, \mu^2 = k^{(4+p)/(2p)}\, n^{-2/p} = \left( n^{-4/(p+4)}\, k \right)^{(p+4)/(2p)} \to a^{(p+4)/(2p)}    (37)

Q.E.D.
Theorem 5 [CLT for double smoothers]
Consider the estimator f_n(z) defined in (9). Under assumptions 4, 4,
and (24), letting Z_n = \sqrt{k}\,(f_n(z) - f(z)), we have (25) where
now,

B(z) = \left( \frac{a^{(p+4)/(2p)}}{2 [\beta f(z)]^{2/p}} \right) \beta^{-1} \left\{ \int_{\|x\| \le 1} x^T H(z)\, x\, [K(x) + \lambda_0]\, dx \right\}    (39)

and,

V(z) = f^2(z) \left\{ \beta^{-1} \int_{\|x\| \le 1} K^2(x)\, dx - \lambda_1 \right\}    (40)

with,

\lambda_0 = \frac{\beta^{2/p}}{c} \int_0^1 u^{1 + 2/p}\, w(u)\, du - 1    (41)

\lambda_1 = 1 - c^{-2} \beta^{-1} \int_0^1 \left\{ \int_y^1 w(x)\, dx \right\}^2 dy    (42)

Proof: See []. Remember to substitute K by β^{-1} K, since in the
reference the kernels are probability densities, whereas here we take
them as weight functions that integrate to β.
4.1 Asymptotic Mean Square Errors
Let f_n be an arbitrary density estimator and let
Z_n = \sqrt{k}\,(f_n(z) - f(z)). Now suppose that f_n(z) is
asymptotically normal, in the sense that when k = k(n) satisfies (24)
for some a > 0, (25) holds. Then all the moments of Z_n converge to
the moments of the asymptotic Gaussian. In particular, the mean and
the variance of Z_n approach B(z) and V(z) respectively. Using (12)
and (24) we can write,

\lim_{n \to \infty} n^{4/(p+4)}\, E|f_n(z) - f(z)|^2 = \frac{V(z)}{a} + \frac{B^2(z)}{a}    (43)

We call the right hand side of (43) the asymptotic mean square error
(AMSE) of the estimator f_n(z). The value of a can be optimized to
obtain a global minimum for the (AMSE), but it is well known in
nonparametrics that the rate n^{-4/(p+4)} is best possible (in a
minimax sense) under the smoothness assumption 4 (see e.g. []). We
can take care of the knn, the kernel, and the double smoothing
estimators simultaneously by noticing that in all cases,

(AMSE) = a_1 a^{-1} + a_2 a^{4/p}    (44)

has a global minimum of,

(AMSE)^* = \left\{ (1 + 4/p) \left( \frac{p}{4} \right)^{4/(p+4)} \right\} a_1^{4/(p+4)}\, a_2^{p/(p+4)}    (45)

achieved at,

a^* = \left( \frac{p\, a_1}{4\, a_2} \right)^{p/(p+4)}    (46)
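The optimization in (44)-(46) is elementary calculus, and the closed forms are easy to check numerically. The function names in the sketch below are mine:

```python
def amse(a, a1, a2, p):
    """Asymptotic mean square error (44) as a function of the rate constant a."""
    return a1 / a + a2 * a ** (4 / p)

def amse_opt(a1, a2, p):
    """Closed-form minimizer (46) and minimum value (45) of (44)."""
    a_star = (p * a1 / (4 * a2)) ** (p / (p + 4))
    amse_star = ((1 + 4 / p) * (p / 4) ** (4 / (p + 4))
                 * a1 ** (4 / (p + 4)) * a2 ** (p / (p + 4)))
    return a_star, amse_star
```

For any positive a_1, a_2 the value amse(a_star, ...) reproduces amse_star, and perturbing a away from a_star only increases (44).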
Replacing the corresponding values of a_1 and a_2 for the knn, for
the kernel, and for the double smoothing estimators, we obtain that
in all cases,

(AMSE)^* = (\text{const. indep. of } f) \left\{ f(z) \left( \frac{D^2}{f(z)} \right)^{p/(p+4)} \right\}    (47)

where,

D = \int_{\|x\| \le 1} x^T H(z)\, x\, G(x)\, dx    (48)

= \sum_{j=1}^{p} \rho_j\, \frac{\partial^2 f}{\partial z_j^2}(z)    (49)

with G(x) = 1 for the knn, G(x) = K(x) for the kernel, and
G(x) = K(x) + λ_0 for the double smoothers (see (39) and (41)), and,
if e_j denotes the j-th canonical basis vector (all zeroes except a 1
at position j),

\rho_j = \int_{\|x\| \le 1} (x \cdot e_j)^2\, G(x)\, dx    (50)

Notice that (49) follows from (48), (23) and the fact that H(z) is
the Hessian of f at z. The generality of this result shows that (47)
is typical for density estimation. Thus, when f_n is either the knn,
the kernel, or one of the estimators in [rodriguez86], we have:

\lim_{n \to \infty} n^{4/(p+4)}\, E|f_n(z) - f(z)|^2 \ge c\, f(z) \left( \frac{D^2}{f(z)} \right)^{p/(p+4)}    (51)
The positive constant c may depend on the particular estimator but it
is independent of f. Dividing both sides of (51) by f(z), integrating
over z, using theorem 3.1 and interchanging E and ∫, we obtain,

\lim_{n \to \infty} n^{4/(p+4)}\, E \int |\psi_n(z) - \psi(z)|^2\, dz \ge \frac{c}{4} \int \left| \frac{D}{\psi(z)} \right|^{2p/(p+4)} dz    (52)
The worst case scenario is obtained by the model f = ψ^2 that
maximizes the action given by the right hand side of (52),

L = \int \left| \frac{1}{\psi(z)} \sum_{j=1}^{p} \rho_j\, \frac{\partial^2 \psi^2}{\partial z_j^2}(z) \right|^{2p/(p+4)} dz    (53)
This is a hard variational problem. However, it is worth noticing
that the simplest case is obtained when the exponent is 1, i.e. when
the dimension of the data space is p = 4. Assuming we were able to
find a solution, this solution would still depend on the p parameters
ρ_1, …, ρ_p. A choice of ρ_j's is equivalent to the choice of a
global metric for the data space. Notice also that the exponent
becomes 2 for p = ∞, and that for p ≥ 3 (but not for p = 1 or 2)
there is the possibility of non-trivial (i.e. different from uniform)
superefficient models for which estimation can be done at rates
faster than n^{-4/(p+4)}. These superefficient models are
characterized as the non-negative solutions of the Laplace equation
in the metric generated by the ρ_j's, i.e., non-negative (f(z) ≥ 0)
solutions of,

\sum_{j=1}^{p} \rho_j\, \frac{\partial^2 f}{\partial z_j^2}(z) = 0    (54)

Recall that there are no non-trivial (different from constant)
non-negative superharmonic functions in dimensions one or two, but
there are plenty of solutions in dimension three and higher. For
example the Newtonian potentials,

f(z) = c\, \|z\|_\rho^{-(p-2)}    (55)

with the norm,

\|z\|_\rho^2 = \sum_{j=1}^{p} \left( \frac{z_j}{\sqrt{\rho_j}} \right)^2    (56)

will do, provided the data space is compact. The existence of
(hand-picked) superefficient models is what makes it necessary to
consider best rates only in the minimax sense. Even though we can
estimate a Newtonian potential model at faster than usual
nonparametric rates, in any neighborhood of the Newtonian model the
worst case scenario is at best estimated at rate n^{-4/(p+4)} under
second-order smoothness conditions.
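For instance, in p = 3 with the euclidean metric the Newtonian potential f(z) = ‖z‖^{-1} is harmonic away from the origin, which a finite-difference check confirms. The sketch below uses names and a step size of my choosing:

```python
def laplacian_fd(f, z, h=1e-3):
    """Central finite-difference Laplacian of f at the point z (a sequence)."""
    total = 0.0
    for j in range(len(z)):
        zp, zm = list(z), list(z)
        zp[j] += h
        zm[j] -= h
        total += (f(zp) - 2.0 * f(z) + f(zm)) / h ** 2
    return total

def newton(z):
    """Newtonian potential |z|**-(p-2) in p = 3 dimensions."""
    return sum(t * t for t in z) ** (-0.5)
```

Evaluated away from the singularity, e.g. at (1, 1, 1), the discrete Laplacian of the potential is numerically zero.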
5 Choosing the Optimal Norm
All finite (p < ∞) dimensional Banach spaces are isomorphic (as
Banach spaces) to ℝ^p with the euclidean norm. This means, among
other things, that in finite dimensional vector spaces all norms
generate the same topology. Hence, the spheres {x ∈ ℝ^p : ‖x‖ ≤ r}
are Borelians, so they are Lebesgue measurable, and thus estimators
like the knn (2) are well defined for arbitrary norms. It is
possible, in principle, to consider norms that do not come from inner
products, but in this paper we look only at Hilbert norms ‖·‖_A of
the form,

\|z\|_A^2 = z^T A^T A\, z    (57)

where A ∈ L, with L defined as the open set of real nonsingular p×p
matrices. For each A ∈ L define the unit sphere,

S_A = \{ x \in \mathbb{R}^p : x^T A^T A\, x \le 1 \}    (58)

its volume,

\beta_A = \lambda(S_A) = \int_{S_A} \lambda(dx)    (59)

and the A-symmetric (i.e. ‖·‖_A radially symmetric) kernel K_A,

K_A(x) = (K \circ A)(x) = K(Ax)    (60)

where K satisfies assumption 4 and is I-symmetric, i.e., radially
symmetric in the euclidean norm. This means that K(y) depends on y
only through the euclidean length of y, i.e. there exists a function
F such that,

K(y) = F(y^T y)    (61)
The following simple theorem shows that all A-symmetric functions are
really of the form (60).
Theorem 6
For any A ∈ L, K̃ is A-symmetric if and only if we can write,

\tilde K(x) = K(Ax) \quad \text{for all } x \in \mathbb{R}^p    (62)

for some I-symmetric K.
Proof: K̃(x) is A-symmetric iff K̃(x) = F(‖x‖_A^2) for some function
F. Choose K(x) = K̃(A^{-1}x). This K is I-symmetric since
K(x) = F((A A^{-1} x)^T (A A^{-1} x)) = F(x^T x). Moreover,
K̃(x) = K̃(A^{-1}(Ax)) = K(Ax). Thus, (62) is necessary for
A-symmetry. It is also obviously sufficient, since the assumed
I-symmetry of K in (62) implies that
K̃(x) = F((Ax)^T(Ax)) = F(‖x‖_A^2), so it is A-symmetric.
Q.E.D.
An important corollary of theorem 5 is,
Theorem 7
Let A, B ∈ L. Then K̃ is AB-symmetric if and only if K̃ ∘ B^{-1} is
A-symmetric.
Proof: By the first part of theorem 5 we have that K̃ = K ∘ A ∘ B
with K some I-symmetric kernel. Thus, K̃ ∘ B^{-1} = K ∘ A is
A-symmetric by the second part of theorem 5.
Q.E.D.
Let us denote by b(A, K) the total volume that a kernel K assigns to
the unit A-sphere S_A, i.e.,

b(A, K) = \int_{S_A} K(x)\, dx    (63)

In order to understand the effect of changing the metric on a density
estimator like the kernel (6), it is convenient to write g_n
explicitly as a function of everything it depends on: the point z,
the metric A, the A-symmetric kernel function K̃, the positive
smoothing parameter μ and the data set {x_1, …, x_n}. Hence, we
define,

g_n(z; A, \tilde K, \mu, \{x_1, \ldots, x_n\}) = \frac{\frac{1}{n} \sum_{j=1}^{n} \tilde K\!\left( \frac{x_j - z}{\mu} \right)}{b(A, \tilde K)\, \mu^p}    (64)

The following result shows that kernel estimation with metric AB is
equivalent to estimation of a transformed problem with metric A. The
explicit form of the transformed problem indicates that a
perturbation of the metric should be regarded as composed of two
parts: shape and volume. The shape is confounded with the form of the
kernel, and the change of volume is equivalent to a change of the
smoothing parameter.
Theorem 8
Let A, B ∈ L, μ > 0, and K̃ an AB-symmetric kernel. Then, for all
z ∈ ℝ^p and all data sets {x_1, …, x_n} we have,

g_n(z; AB, \tilde K, \mu, \{x_1, \ldots, x_n\}) = g_n(\hat B z;\; A,\; \tilde K \circ B^{-1},\; |B|^{-1/p} \mu,\; \{\hat B x_1, \ldots, \hat B x_n\})    (65)

where |B| denotes the determinant of B and \hat B = |B|^{-1/p} B is
the matrix B rescaled to have unit determinant.
Proof: To simplify the notation let us denote,

\mu_B = |B|^{-1/p}\, \mu    (66)

Substituting AB for A in (64) and using theorem 5 we can write the
left hand side of (65) as,

\frac{\frac{1}{n} \sum_{j=1}^{n} K\!\left( AB \left( \frac{x_j - z}{\mu} \right) \right)}{b(AB, \tilde K)\, \mu^p} = \frac{\frac{1}{n} \sum_{j=1}^{n} (K \circ A)\!\left( \frac{\hat B x_j - \hat B z}{\mu_B} \right)}{b(A, K \circ A)\, (\mu_B)^p}

where K is some I-symmetric kernel and we have made the change of
variables x = B^{-1} y in b(AB, K̃) (see (63)). The last expression
is the right hand side of (65). Notice that K ∘ A = K̃ ∘ B^{-1} is
A-symmetric.
Q.E.D.
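The shape/volume split in the proof rests on the algebraic identity AB(x − z)/μ = A B̂ (x − z)/μ_B, which can be checked numerically. The matrices and names below are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
A = rng.normal(size=(p, p)) + 3.0 * np.eye(p)  # (almost surely) nonsingular
B = rng.normal(size=(p, p)) + 3.0 * np.eye(p)
mu = 0.7

detB = abs(np.linalg.det(B))
B_hat = detB ** (-1.0 / p) * B   # shape part: rescaled to unit determinant
mu_B = detB ** (-1.0 / p) * mu   # volume part: rescaled smoothing parameter

x = rng.normal(size=p)
z = rng.normal(size=p)
lhs = A @ B @ (x - z) / mu        # kernel argument with metric AB
rhs = A @ B_hat @ (x - z) / mu_B  # same argument after the shape/volume split
```

The rescaling factor |B|^{-1/p} cancels between B̂ and μ_B, so the two kernel arguments coincide while B̂ has unit determinant.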
5.1 Generalized knn Case with Uniform Kernel
In this section we find the norm of type (57) that minimizes (47) for
the estimators of the knn type with uniform kernel, which include the
double smoothers with K(x) = 1. As shown in theorem 5, a change in
the determinant of the matrix that defines the norm is equivalent to
changing the smoothing parameter. The quantity (47) to be minimized
was obtained by fixing the value of the smoothing parameter to the
best possible, i.e. the one that minimizes the expression for the
(AMSE) (43). Thus, to search for the best norm at a fixed value of
the smoothing parameter we need to keep the determinant of the matrix
that defines the norm constant. We have,
Theorem 9
Consider the problem,

\min_{|A| = 1} \left( \int_{S_A} x^T H(z)\, x\, dx \right)^2    (67)

where the minimum is taken over all p×p matrices with determinant
one, S_A is the unit A-ball, and H(z) is the Hessian at z of the
density f ∈ C^2, assumed to be non-singular.
When H(z) is indefinite, i.e. when H(z) has both positive and
negative eigenvalues, the objective function in (67) achieves its
absolute minimum value of zero when A is taken as,

A = c^{-1}\, \mathrm{diag}\!\left( \sqrt{\frac{\xi_1}{p-m}}, \ldots, \sqrt{\frac{\xi_m}{p-m}},\; \sqrt{\frac{\xi_{m+1}}{m}}, \ldots, \sqrt{\frac{\xi_p}{m}} \right) M    (68)

where the ξ_j are the absolute values of the eigenvalues of H(z), m
is the number of positive eigenvalues, M is the matrix of
eigenvectors, and c is a normalization constant to get det A = 1 (see
the proof for more detailed definitions).
If H(z) is definite, i.e. when H(z) is either positive or negative
definite, then for all p×p real matrices A with det A = 1 we have,

\left| \int_{S_A} x^T H(z)\, x\, dx \right| \ge \rho\, p\, |\det H(z)|^{1/p}    (69)

where ρ is the constant defined in (82) below, with equality if and
only if,

A = \frac{H_+^{1/2}(z)}{|H_+^{1/2}(z)|^{1/p}}    (70)

where H_+^{1/2}(z) denotes the positive definite square-root of H(z)
(see the proof below for explicit definitions).
Proof: Since f ∈ C^2 the Hessian is a real symmetric matrix and we
can therefore find an orthogonal matrix M that diagonalizes H(z),
i.e. such that,

H(z) = M^T D\, M \quad \text{with} \quad M^T M = I    (71)

where,

D = \mathrm{diag}(\xi_1, \xi_2, \ldots, \xi_m, -\xi_{m+1}, \ldots, -\xi_p)    (72)

where all the ξ_j > 0 and we have assumed that the columns of M have
been ordered so that the m positive eigenvalues appear first and the
negative eigenvalues -ξ_{m+1}, …, -ξ_p appear last, so that (71)
agrees with (72).
Split D as,

D = D_+ - D_- = \mathrm{diag}(\xi_1, \ldots, \xi_m, 0, \ldots, 0) - \mathrm{diag}(0, \ldots, 0, \xi_{m+1}, \ldots, \xi_p)    (73)

and use (71) and (73) to write,

H(z) = M^T D_+ M - M^T D_- M = (D_+^{1/2} M)^T (D_+^{1/2} M) - (D_-^{1/2} M)^T (D_-^{1/2} M) = S_+^T S_+ - S_-^T S_-    (74)
Using (74) and the substitution y = Ax we obtain,

\int_{S_A} x^T H(z)\, x\, dx = \int_{y^T y \le 1} y^T (A^{-1})^T (S_+^T S_+ - S_-^T S_-)\, A^{-1} y\, dy = \int_{y^T y \le 1} \langle S A^{-1} y,\; S A^{-1} y \rangle\, dy    (75)

where,

S = S_+ + S_- = (D_+ + D_-)^{1/2} M    (76)

and ⟨·,·⟩ denotes the pseudo-Riemannian inner product,

\langle x, y \rangle = \sum_{i=1}^{m} x^i y_i - \sum_{i=m+1}^{p} x^i y_i    (77)

By letting G = diag(1, …, 1, -1, …, -1) (i.e. m ones followed by p-m
negative ones) be the metric with the signature of H(z), we can also
write (77) as,

\langle x, y \rangle = x^T G\, y    (78)

Let,

B = [b_1 b_2 \cdots b_p] = S A^{-1}    (79)

where b_1, …, b_p denote the columns of B. Substituting (79) into
(75), using the linearity of the inner product and the symmetry of
the unit euclidean sphere, we obtain,



ó õ

y^{T}y £ 1

< By,By > dy 
 


å
j


å
k

< b_{j},b_{k} > 
ó õ

S_{I}

y^{j}y^{k} dy 
 (80)  


å
j


å
k

< b_{j},b_{k} > d^{jk} r 
 

r 
p å
j = 1

< b_{j},b_{j} > 
 (81) 

where r stands for,

r = \int_{S_I} (y^1)^2\, dy = \frac{2\pi^{p/2}}{p(p+2)\,\Gamma(p/2)}    (82)
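The moment r can be sanity-checked by Monte Carlo against the closed form above (a NumPy sketch; `r_const` is a hypothetical helper name):

```python
import math
import numpy as np

def r_const(p):
    # closed form of r = \int_{S_I} (y^1)^2 dy over the Euclidean unit ball
    return 2 * math.pi ** (p / 2) / (p * (p + 2) * math.gamma(p / 2))

rng = np.random.default_rng(0)
for p in (2, 3, 4):
    n = 400_000
    g = rng.normal(size=(n, p))
    g /= np.linalg.norm(g, axis=1, keepdims=True)     # uniform on the unit sphere
    y = g * rng.random((n, 1)) ** (1 / p)             # uniform in the unit ball
    vol = math.pi ** (p / 2) / math.gamma(p / 2 + 1)  # ball volume
    mc = vol * np.mean(y[:, 0] ** 2)                  # Monte Carlo estimate of r
    print(p, round(mc, 4), round(r_const(p), 4))      # MC vs closed form
```

Note the simplification at p = 4, where Γ(p/2) = 1 and r = π²/12.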
From (79) and (81) it follows that problem
(67) is equivalent to,

\min_{\det B = \det S} \left( \sum_{j=1}^{p} \langle b_j, b_j \rangle \right)^2    (83)
When H(z) is indefinite, i.e. when m \notin \{0,p\}, it is
possible to choose the columns of B so that
\sum_j \langle b_j,b_j \rangle = 0, achieving the global
minimum. There are many possible choices but the simplest one is,

B = c \cdot \mathrm{diag}(\underbrace{\sqrt{p-m},\ldots,\sqrt{p-m}}_{m},\ \underbrace{\sqrt{m},\ldots,\sqrt{m}}_{p-m})    (84)

since,

\sum_{j=1}^{p} \langle b_j, b_j \rangle = c^2\, m\, (\sqrt{p-m})^2 - c^2\,(p-m)\,(\sqrt{m})^2 = 0.    (85)
The scalar constant c needs to be adjusted to satisfy the
constraint that \det B = \det S. From
(79), (84) and (76) we obtain
that the matrix for the optimal metric can be written as,

A = B^{-1} S = \frac{c^{-1}}{\sqrt{p-m}}\, S_+ + \frac{c^{-1}}{\sqrt{m}}\, S_-    (86)
From (86) we get,

A^T A = \frac{c^{-2}}{p-m}\, S_+^T S_+ + \frac{c^{-2}}{m}\, S_-^T S_-    (87)
Finally from (74) we can rewrite (87) as,

A^T A = c^{-2}\, M^T \left( \frac{1}{p-m}\, D_+ + \frac{1}{m}\, D_- \right) M    (88)

= c^{-2}\, M^T\, \mathrm{diag}\!\left( \frac{\xi_1}{p-m},\ldots,\frac{\xi_m}{p-m},\ \frac{\xi_{m+1}}{m},\ldots,\frac{\xi_p}{m} \right) M    (89)

Notice that when p-m = m (i.e. when the number of positive eigenvalues of
H(z) equals the number of negative ones) the factor 1/m can be
factored out of the diagonal matrix in (89), and in this
case the optimal A can be expressed as,

A = \frac{H_+^{1/2}(z)}{\left| \det H_+^{1/2}(z) \right|^{1/p}}    (90)
where we have used the positive definite square root of H(z) defined as,

H_+^{1/2}(z) = M^T\, \mathrm{diag}\!\left( \sqrt{\xi_1},\ldots,\sqrt{\xi_p} \right) M    (91)
In all the other cases for which H(z) is indefinite, i.e. when
m \notin \{0, p/2, p\}, we have,

A = c^{-1}\, \mathrm{diag}\!\left( \sqrt{\frac{\xi_1}{p-m}},\ldots,\sqrt{\frac{\xi_m}{p-m}},\ \sqrt{\frac{\xi_{m+1}}{m}},\ldots,\sqrt{\frac{\xi_p}{m}} \right) M    (92)

The normalization constant c is fixed by the constraint that
\det A = 1 as,

c = (p-m)^{-m/(2p)}\; m^{-(p-m)/(2p)}\; |\det H(z)|^{1/(2p)}    (93)
This shows (68).
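A minimal NumPy sketch of the construction (92)-(93) (the function name is hypothetical, and a fixed signature is used so the example is deterministic). It checks that det A = 1 and that the pseudo-inner-product sum, i.e. tr(A^{-T} H A^{-1}), vanishes:

```python
import numpy as np

def optimal_A_indefinite(H):
    # sketch of (92)-(93): A = c^{-1} diag(sqrt(xi_j/(p-m)), ..., sqrt(xi_j/m)) M
    p = H.shape[0]
    w, V = np.linalg.eigh(H)
    order = np.argsort(-w)
    w, M = w[order], V[:, order].T          # H = M^T diag(w) M, positive first
    m = int(np.sum(w > 0))
    assert 0 < m < p, "H must be indefinite"
    xi = np.abs(w)
    scale = np.concatenate([xi[:m] / (p - m), xi[m:] / m])
    c = ((p - m) ** (-m / (2 * p)) * m ** (-(p - m) / (2 * p))
         * abs(np.linalg.det(H)) ** (1 / (2 * p)))
    return np.diag(np.sqrt(scale)) @ M / c

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
H = Q @ np.diag([3.0, 1.0, 0.5, -2.0, -0.7]) @ Q.T    # signature (3, 2)
A = optimal_A_indefinite(H)
Ai = np.linalg.inv(A)
print(round(abs(np.linalg.det(A)), 6))      # 1.0: the constraint det A = 1 holds
print(abs(np.trace(Ai.T @ H @ Ai)) < 1e-8)  # True: the leading bias term vanishes
```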
Let us now consider the only other remaining case, when H(z) is
definite, i.e. either positive definite (m = p) or negative definite
(m = 0). Introducing \lambda_0 as the Lagrange multiplier
associated to the constraint \det B = \det S, we obtain that the
problem to be solved is,

\min_{b_1,\ldots,b_p,\lambda_0} L(b_1,b_2,\ldots,b_p,\lambda_0)    (94)
where the Lagrangian L is written as a function of the
columns of B as,

L(b_1,b_2,\ldots,b_p,\lambda_0) = \left( \sum_{j=1}^{p} \langle b_j, b_j \rangle \right)^2 - 4\lambda_0 \left( \det(b_1,\ldots,b_p) - \det S \right)    (95)
The 4\lambda_0 instead of just \lambda_0 is chosen to simplify
the optimality equations below. The optimality conditions are,

\frac{\partial L}{\partial b_j} = 0 \ \text{ for } j = 1,\ldots,p \quad\text{and}\quad \frac{\partial L}{\partial \lambda_0} = 0    (96)
where the functional partial derivatives are taken in the Fréchet
sense with respect to the column vectors b_j. The Fréchet
derivatives of quadratic and multilinear forms are standard textbook
exercises. Writing the derivatives as linear functions of the vector
parameter h we have,

\frac{\partial}{\partial b_j} \langle b_j, b_j \rangle (h) = 2 \langle b_j, h \rangle    (97)

\frac{\partial}{\partial b_j} \det(b_1,\ldots,b_p)(h) = \det(b_1,\ldots,\underbrace{h}_{j\text{th col.}},\ldots,b_p)    (98)

Thus, using (97) and (98) to compute the
derivative of (95) we obtain that for all h and all
j = 1,\ldots,p we must have,

\frac{\partial L}{\partial b_j}(h) = 2 \left\{ \sum_{k=1}^{p} \langle b_k, b_k \rangle \right\} 2\langle b_j, h \rangle - 4\lambda_0 \det(b_1,\ldots,h,\ldots,b_p) = 0    (99)
When \sum_k \langle b_k, b_k \rangle \ne 0 we can rewrite (99)
as,

\langle b_j, h \rangle = c_0^{-1} \det(b_1,\ldots,h,\ldots,b_p)    (100)

where c_0 = \lambda_0^{-1} \sum_k \langle b_k, b_k \rangle.
But now we can substitute h = b_i with i \ne j into
(100) and use the fact that the determinant of a matrix
with two equal columns is zero, to obtain,

\langle b_j, b_i \rangle = 0 \quad\text{for all } i \ne j.    (101)
In a similar way, replacing h = b_j into (100), we get,

\langle b_j, b_j \rangle = c_0^{-1} \det B = c    (102)

where c is a constant that needs to be fixed in order to satisfy
the constraint that \det B = \det S.
We have shown that the optimal matrix B must have orthogonal columns
of the same length for the G-metric. This can be expressed with a
single matrix equation as,

B^T G B = c\, I    (103)

Substituting (79) into (103) and rearranging terms
we obtain,

A^T A = c^{-1}\, S^T G S    (104)

= c^{-1} (S_+^T + S_-^T)\, G\, (S_+ + S_-) = c^{-1} \left( S_+^T S_+ - S_-^T S_- \right) = c^{-1} H(z)    (105)

From (105), (103), (82) and
(81) we obtain,

\left| \int_{S_A} x^T H(z)\, x\, dx \right| \;\ge\; r\, p\, c    (106)

and replacing the values of r and c we obtain
(69).
Q.E.D.
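Since with det A = 1 the integral reduces exactly to r(p) tr(A^{-T} H A^{-1}), the definite-case bound (69) and the optimality of (70) can be verified without numerical integration. A NumPy sketch (helper names hypothetical):

```python
import math
import numpy as np

def r_const(p):
    # the unit-ball moment r of (82)
    return 2 * math.pi ** (p / 2) / (p * (p + 2) * math.gamma(p / 2))

def quad_integral(H, A):
    # \int_{S_A} x^T H x dx, computed exactly: with x = A^{-1}y (|det A| = 1)
    # it equals \int_{|y|<=1} y^T (A^{-T} H A^{-1}) y dy = r(p) tr(A^{-T} H A^{-1})
    p = H.shape[0]
    Ai = np.linalg.inv(A)
    return r_const(p) * np.trace(Ai.T @ H @ Ai)

p = 3
rng = np.random.default_rng(3)
X = rng.normal(size=(p, p))
H = X @ X.T + np.eye(p)                            # positive definite Hessian

w, V = np.linalg.eigh(H)
H_half = V @ np.diag(np.sqrt(w)) @ V.T             # positive definite square root
A_opt = H_half / np.linalg.det(H_half) ** (1 / p)  # eq. (70)

bound = r_const(p) * p * np.linalg.det(H) ** (1 / p)   # right-hand side of (69)
B = rng.normal(size=(p, p))
B /= abs(np.linalg.det(B)) ** (1 / p)              # arbitrary competitor, |det B| = 1

print(abs(quad_integral(H, A_opt) - bound) < 1e-9)  # True: equality at the optimum
print(quad_integral(H, B) >= bound)                 # True: the bound (69) holds
```

The inequality for the competitor B is just the arithmetic-geometric mean inequality applied to the eigenvalues of A^{-T} H A^{-1}.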
5.2 Yet Another Proof When The Hessian is Definite
Consider the following lemma.
Lemma 1
Let A, B be two p×p nonsingular matrices with the same
determinant. Then,

\int_{S_A} \|x\|_B^2\, dx \;\ge\; \int_{S_B} \|y\|_B^2\, dy    (107)
Proof:
Just split S_A and S_B as,

S_A = (S_A \cap S_B) \cup (S_A \cap S_B^c)    (108)

S_B = (S_B \cap S_A) \cup (S_B \cap S_A^c)    (109)
and write,

\int_{S_A} \|x\|_B^2\, dx = \int_{S_B} \|x\|_B^2\, dx - \int_{S_A^c \cap S_B} \|x\|_B^2\, dx + \int_{S_A \cap S_B^c} \|x\|_B^2\, dx    (110)
Now clearly,

\min_{x \in S_A \cap S_B^c} \|x\|_B^2 \;\ge\; 1 \;\ge\; \max_{y \in S_A^c \cap S_B} \|y\|_B^2    (111)
from where it follows that,

\int_{S_A \cap S_B^c} \|x\|_B^2\, dx \;\ge\; \Big( \min_{x \in S_A \cap S_B^c} \|x\|_B^2 \Big) \int_{S_A \cap S_B^c} dx    (112)

\ge\; \Big( \max_{y \in S_A^c \cap S_B} \|y\|_B^2 \Big) \int_{S_A^c \cap S_B} dy    (113)

\ge\; \int_{S_A^c \cap S_B} \|y\|_B^2\, dy    (114)

where (112) and (114) follow from
(111). To justify the middle inequality
(113) notice that from (108),
(109) and the hypothesis that \det A = \det B we can write,

\int_{S_A \cap S_B^c} dx + \int_{S_A \cap S_B} dx \;=\; \int_{S_A^c \cap S_B} dy + \int_{S_A \cap S_B} dy    (115)
The conclusion (107) follows from inequality
(114), since it makes the sum of the last two terms in
(110) nonnegative.
Q.E.D.
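A small grid-integration sketch in p = 2 illustrates the lemma, assuming the norms are ‖x‖_B = |Bx| and S_A = {x : |Ax| ≤ 1} (NumPy; the helper name is hypothetical):

```python
import math
import numpy as np

def int_norm2(A, B, n=1601, lim=2.0):
    # grid approximation (p = 2) of \int_{S_A} ||x||_B^2 dx,
    # assuming ||x||_B = |Bx| and S_A = {x : |Ax| <= 1}
    t = np.linspace(-lim, lim, n)
    dx = (t[1] - t[0]) ** 2
    X = np.stack(np.meshgrid(t, t, indexing="ij"), axis=-1)
    inside = ((X @ A.T) ** 2).sum(-1) <= 1.0
    return (((X @ B.T) ** 2).sum(-1) * inside).sum() * dx

B = np.array([[1.5, 0.0], [0.0, 1 / 1.5]])   # det B = 1
A = np.array([[0.8, 0.1], [0.0, 1.25]])      # det A = 1, same determinant as B
lhs, rhs = int_norm2(A, B), int_norm2(B, B)
print(lhs >= rhs, round(rhs, 2))             # True; rhs ≈ pi/2
```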
If B is a nonsingular matrix we define,

\hat{B} = \frac{B}{|\det B|^{1/p}}    (116)

An immediate consequence of Lemma 1 is,
Theorem 10
If H(z) is definite, then for all p×p matrices A with \det A = 1
we have,

D = \int_{S_A} \|x\|^2_{H^{1/2}(z)}\, dx \;\ge\; \int_{S_{\hat{H}^{1/2}(z)}} \|x\|^2_{H^{1/2}(z)}\, dx    (117)
Proof:

D = |\det H(z)|^{1/p} \int_{S_A} \|x\|^2_{\hat{H}^{1/2}(z)}\, dx    (118)

\ge\; |\det H(z)|^{1/p} \int_{S_{\hat{H}^{1/2}(z)}} \|x\|^2_{\hat{H}^{1/2}(z)}\, dx    (119)

= \int_{S_{\hat{H}^{1/2}(z)}} \|x\|^2_{H^{1/2}(z)}\, dx    (120)

where we have used Lemma 1 to deduce the middle inequality
(119).
Q.E.D.
5.3 Best Norm With General Kernels
In this section we solve the problem of finding the optimal norm
in the general class of estimators (9).
Before we optimize the norm we need to state explicitly what it means
to do estimation with different kernels and different norms. First of
all, a general kernel function is a nonnegative bounded function
defined on the unit sphere generated by a given norm. Hence, the
kernel only makes sense relative to the given norm. To indicate this
dependence on the norm we write K_A for the kernel associated to
the norm generated by the matrix A. We let,

K_A(x) = K(Ax)    (121)

where K = K_I is a fixed mother kernel defined on the Euclidean unit
sphere. Equation (121) provides meaning to the notion of
changing the norm without changing the kernel. What this means is not
that the kernel is invariant under changes of A, but rather
equivariant in the form specified by (121). Recall also
that a proper kernel must satisfy (22). To control bias we
must also require the kernels to satisfy (23). It is easy
to see (just change the variables) that if the mother kernel K has
these properties then so do all its children K_A, with the only proviso
that \det A = 1 in order to get (22). Notice also that radial
symmetry of K is a sufficient but not a necessary condition for
(23).
The optimization of the norm with general kernels looks more
complicated than when the kernel is uniform, since the best
(AMSE)^* also depends on \int_{S_A} K_A^2(x)\,dx. Consider the
double smoothing estimators, which are the most general case treated
in this paper. From (39), (40) and
(45) we have,

(AMSE)^* = (\text{const.}) \left\{ b^{-1} \int_{S_A} K_A^2(x)\, dx - \lambda_1 \right\}^{4/(p+4)} f(z) \left( \frac{D^2}{f(z)} \right)^{p/(p+4)}    (122)
where the constant depends only on the dimension of the space. Even
though the dependence of (122) on A looks much more
complicated than (47), this is only apparently so. In
fact the two expressions define very similar optimization problems, as
we now show.
First notice that the search for the best A must be done within the
class of matrices with a fixed determinant. For otherwise we would be
changing the value of the smoothness parameter that was fixed to its
best possible value in order to obtain (122).
If we let \det A = 1 we have,

\int_{S_A} K_A(x)\, dx \;=\; b \;=\; \int_{S_A} dx \;=\; \lambda(S_I)    (123)
We also have that,

\int_{S_A} K_A^2(y)\, dy = \int_{S_A} K^2(Ay)\, dy = \int_{S_I} K^2(x)\, dx    (124)
From (123) and (124) we deduce that the term in
(122) within curly brackets is the same for all
matrices A and depends only on the fixed kernel K.
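The invariance asserted by (124) can be illustrated numerically by integrating K_A² directly over S_A on a grid, without appealing to the change of variables (p = 2, NumPy sketch with a hypothetical quadratic mother kernel):

```python
import math
import numpy as np

def K(u):
    # hypothetical mother kernel on the Euclidean unit disk: K(u) = 1 - |u|^2
    return np.maximum(0.0, 1.0 - (u ** 2).sum(axis=-1))

def int_K2(A, n=1201, lim=2.2):
    # grid approximation (p = 2) of \int_{S_A} K_A^2(x) dx with K_A(x) = K(Ax)
    t = np.linspace(-lim, lim, n)
    dx = (t[1] - t[0]) ** 2
    X = np.stack(np.meshgrid(t, t, indexing="ij"), axis=-1)
    U = X @ A.T                                  # u = Ax
    inside = (U ** 2).sum(axis=-1) <= 1.0        # x in S_A
    return (K(U) ** 2 * inside).sum() * dx

A_eye = np.eye(2)
A_shear = np.array([[2.0, 0.0], [0.4, 0.5]])     # det = 1
v1, v2 = int_K2(A_eye), int_K2(A_shear)
print(round(v1, 3), round(v2, 3), round(math.pi / 3, 3))  # all three agree
```

For this particular K the common value is ∫(1-r²)² over the unit disk, i.e. π/3.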
Finally notice that the value of D in (122) is
given by,

D = \int_{S_A} x^T H(z)\, x\; G(Ax)\, dx    (125)
where G(x) = K(x) + \lambda_0 in the general case. By retracing again the
steps that led to (81) we can write,

D = \sum_j \sum_k \langle b_j, b_k \rangle \int_{S_I} y^j y^k\, G(y)\, dy    (126)

= \sum_j \sum_k \langle b_j, b_k \rangle\, \delta^{jk}\, r_k(G) \;=\; \sum_{j=1}^{p} \langle b_j, b_j \rangle\, r_j(G)    (127)

where now,

r_j(G) = \int_{S_I} (x^j)^2\, G(x)\, dx    (128)
There are three cases to be considered.
- All the r_j(G) = r for j = 1,\ldots,p. The optimization
problem reduces to the case when the kernel is uniform and therefore
it has the same solution.
- All the r_j(G) have the same sign, i.e. they are either all
positive or all negative. If e.g. all r_j > 0, just replace
b_j with \sqrt{r_j}\, b_j and use the formulas obtained
for the uniform kernel case.
- Some of the r_j(G) are positive and some are
negative. This case can be handled like the previous one after
taking care of the signs for the different indices j.
The first case is the most important, for it is the one implied when
the kernels are radially symmetric. The other two cases are only
included for completeness. Clearly, if we do estimation with a non
radially symmetric kernel, the optimal norm will have to correct for
this arbitrary built-in asymmetry, effectively achieving in the end the
same performance as when radially symmetric kernels are used. The
following theorem states the main result.
Theorem 11
In the general class of estimators (9) with
radially symmetric (mother) kernels, best possible asymptotic
performance (under second order smoothness conditions) is achieved
when distances are measured with the best metrics obtained when the
kernel is uniform.
6 Asymptotic Relative Efficiencies
The practical advantage of using density estimators that adapt to the
form of the optimal metrics can be measured by computing the
Asymptotic Relative Efficiency (ARE) of the optimal metric relative to the
Euclidean metric. Let us denote by AMSE(I) and AMSE(H(z)) the
expressions obtained from (122) when using the
Euclidean norm and the optimal norm respectively. For the Euclidean
norm we get,
AMSE(I) = (\text{const.}) \left\{ b^{-1} \int_{S_I} K^2(x)\, dx - \lambda_1 \right\}^{4/(p+4)} f(z) \left( \frac{\left( r\, \mathrm{tr}\, H(z) \right)^2}{f(z)} \right)^{p/(p+4)}    (129)
where tr stands for the trace, since,

D = \int_{S_I} x^T H(z)\, x\; G(x)\, dx = \sum_{i,j} h_{ij}(z) \int_{S_I} x^i x^j\, G(x)\, dx = r\, \mathrm{tr}\, H(z)    (130)
Using (123), (124) and (69) we obtain
that when H(z) is definite,

AMSE(H(z)) = (\text{const.}) \left\{ b^{-1} \int_{S_I} K^2(x)\, dx - \lambda_1 \right\}^{4/(p+4)} f(z) \left( \frac{\left( r\, p\, |\det H(z)|^{1/p} \right)^2}{f(z)} \right)^{p/(p+4)}    (131)
Hence, when H(z) is definite the ARE is,

ARE = \frac{AMSE(I)}{AMSE(H(z))} = \left( \frac{|\mathrm{tr}\, H(z)|}{p\, |\det H(z)|^{1/p}} \right)^{2p/(p+4)}    (132)
If \xi_1,\ldots,\xi_p are the absolute values of the
eigenvalues of H(z) then we can write,

ARE = \left( \frac{\frac{1}{p}\sum_{j=1}^{p} \xi_j}{\left( \prod_{j=1}^{p} \xi_j \right)^{1/p}} \right)^{2p/(p+4)} = \left( \frac{\text{arith. mean of } \{\xi_j\}}{\text{geom. mean of } \{\xi_j\}} \right)^{2p/(p+4)}    (133)
It can be easily shown that the arithmetic mean is always greater than or
equal to the geometric mean (take logs, use the strict concavity of
the logarithm and Jensen's inequality), with equality if and only if
all the \xi_j's are equal. Thus, it follows from
(133) that the only case in which the use of the
optimal metric will not increase the efficiency of the estimation of
the density, at a point where the Hessian is definite, is when all the
eigenvalues of H(z) are equal. It is also worth noticing that the
efficiency increases with p, the dimension of the data space. There
is of course infinite relative efficiency in the regions where
H(z) is indefinite.
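Equation (133) is easy to compute; the sketch below (NumPy, hypothetical function name) also shows the simplification at p = 4, where the exponent 2p/(p+4) equals 1:

```python
import numpy as np

def are(eigenvalues):
    # eq. (133): (arithmetic mean / geometric mean of |eigenvalues|)^{2p/(p+4)}
    xi = np.abs(np.asarray(eigenvalues, dtype=float))
    p = xi.size
    ratio = xi.mean() / np.exp(np.log(xi).mean())
    return ratio ** (2 * p / (p + 4))

print(are([1, 1, 1, 1]))      # 1.0: equal eigenvalues, no gain from the metric
print(are([10, 1, 1, 0.1]))   # ≈ 3.025: for p = 4 the exponent 2p/(p+4) is 1
```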
7 An Example: Radially Symmetric Distributions
When the true density f(z) has radial symmetry it is possible to
find the regions where the Hessian H(z) is positive and negative
definite. These models have horizons defined by the boundary between
the regions where H(z) is definite. We show also that when and only
when the density is linear in the radius of symmetry, the Hessian is
singular in the interior of a solid sphere. Thus, at the interior of
these spheres it is impossible to do estimation with the best metric.
Let us denote simply by L the log likelihood, i.e.,

L = L(z) = \log f(z)    (134)

If we also denote simply by L_j the partial derivative of L with
respect to z_j then,

\frac{\partial f}{\partial z_j} = f(z)\, L_j    (135)

and also,

\frac{\partial^2 f}{\partial z_i \partial z_j} = \frac{\partial f}{\partial z_i}\, L_j + f(z)\, L_{ij} = f(z)\left\{ L_i L_j + L_{ij} \right\}    (136)
where we have used (135) and the definition L_{ij} = \partial L_j / \partial z_i.
It is worth noticing, in passing, that
(136) implies a notable connection with the so-called
nonparametric Fisher information matrix I(f),

\int H(z)\, dz = I(f) - I(f) = 0    (137)

since the term f L_i L_j integrates to I(f) and, after integration by
parts, the term f L_{ij} integrates to -I(f).
Our main interest here, however, is the computation of the Hessian when
the density is radially symmetric. Radial symmetry about a fixed point
\mu \in \mathbb{R}^p is obtained when L (and thus f as well) depends
on z only through the norm \|z-\mu\|_{V^{-1}} for some symmetric
positive definite p×p matrix V. Therefore we assume that,

L = L\!\left( -\frac{1}{2}\, (z-\mu)^T V^{-1} (z-\mu) \right)    (138)
from where we obtain,

L_j = -L'\, v^{j\cdot}(z-\mu)    (139)

L_{ij} = L''\, v^{i\cdot}(z-\mu)\, v^{j\cdot}(z-\mu) - L'\, v^{ij}    (140)

where v^{i\cdot} and v^{ij} denote the ith row and the ijth
entry of V^{-1} respectively. Replacing (139) and
(140) into (136), using the fact that V^{-1} is
symmetric and that v^{j\cdot}(z-\mu) is a scalar, and thus equal to
its own transpose (z-\mu)^T v^{\cdot j}, we obtain,

H(z) = f(z)\, L' \left\{ \left( L' + \frac{L''}{L'} \right) V^{-1}(z-\mu)(z-\mu)^T \;-\; I \right\} V^{-1}    (141)
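Formula (141) can be checked against a finite-difference Hessian for a concrete radial profile. The sketch below uses the hypothetical profile L(y) = -a log(1 - y) (a multivariate-T-like shape, not taken from the paper) with an arbitrary V and μ:

```python
import numpy as np

V = np.diag([1.0, 2.0, 0.5])
Vinv = np.linalg.inv(V)
mu = np.array([0.2, -0.1, 0.3])
a = 2.0

def f(z):
    # radial profile L(y) = -a log(1 - y), with y = -(1/2)(z-mu)^T V^{-1} (z-mu)
    y = -0.5 * (z - mu) @ Vinv @ (z - mu)
    return (1.0 - y) ** (-a)        # unnormalized density; enough for the Hessian

def hessian_141(z):
    # eq. (141): H = f L' { (L' + L''/L') V^{-1}(z-mu)(z-mu)^T - I } V^{-1}
    d = z - mu
    y = -0.5 * d @ Vinv @ d
    L1 = a / (1.0 - y)              # L'
    L2 = a / (1.0 - y) ** 2         # L''
    return f(z) * L1 * ((L1 + L2 / L1) * np.outer(Vinv @ d, d) - np.eye(3)) @ Vinv

def hessian_fd(z, h=1e-4):
    # central second differences of f
    H = np.zeros((3, 3))
    E = np.eye(3) * h
    for i in range(3):
        for j in range(3):
            H[i, j] = (f(z + E[i] + E[j]) - f(z + E[i] - E[j])
                       - f(z - E[i] + E[j]) + f(z - E[i] - E[j])) / (4 * h * h)
    return H

z = np.array([0.7, 0.4, -0.2])
err = np.max(np.abs(hessian_141(z) - hessian_fd(z)))
print(err)   # close to zero: (141) matches the finite-difference Hessian
```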
We have also assumed that L' is never zero. With the help of
(141) we can now find the conditions for H(z) to be
definite and singular. Clearly H(z) will be singular when the
determinant of the matrix within curly brackets in (141) is
zero. But that determinant being zero means that \lambda = 1 is an
eigenvalue of,

\left( L' + \frac{L''}{L'} \right) V^{-1}(z-\mu)(z-\mu)^T    (142)

and since this last matrix has rank one, its only nonzero eigenvalue
must be equal to its own trace. Using the cyclical property of the
trace and letting,

y = -\frac{1}{2}\, (z-\mu)^T V^{-1} (z-\mu)

we can write,
Theorem 12
The Hessian of a radially symmetric density is singular when and
only when either L' = 0 or,

L' + \frac{d}{dy}\log L' = -\frac{1}{2y}    (143)
Notice that theorem 12 provides an equation in y after
replacing a particular function L = L(y). Theorem 12 can
also be used to find the functions L(y) that make the Hessian
singular. Integrating (143) (and recalling that y \le 0) we obtain,

L(y) + \log L'(y) = -\frac{1}{2}\log(-y) + c_1    (144)
and solving for L', separating the variables and integrating
we get,

L(y) = \log\left( a\sqrt{-y} + b \right)    (145)

where a and b are constants of integration.
In terms of the density, equation (145) translates to,

f(z) = \frac{a}{\sqrt{2}}\, \|z-\mu\|_{V^{-1}} + b    (146)

Hence, in the regions where the density is a straight line as a
function of r = \|z-\mu\|_{V^{-1}} the Hessian is singular and
estimation with best metrics is not possible.
Moreover, from (141) we can also obtain the regions of
space where the Hessian is positive definite and where it is negative
definite. When L' > 0, H(z) will be negative definite
provided that the matrix,

I - \left( L' + \frac{L''}{L'} \right) V^{-1}(z-\mu)(z-\mu)^T    (147)

is positive definite. But a matrix is positive definite when and only
when all its eigenvalues are positive. It is immediate to verify that
x is an eigenvalue of the matrix (147) if and only if
(1-x) is an eigenvalue of the matrix (142). The matrix
(142) has rank one and therefore its only nonzero
eigenvalue is its trace, so we arrive at,
Theorem 13
When,

L' + \frac{d}{dy}\log L' < -\frac{1}{2y}    (148)

H(z) is negative definite when L' > 0 and positive
definite when L' < 0. When,

L' + \frac{d}{dy}\log L' > -\frac{1}{2y}    (149)

H(z) is indefinite.
For example, when f(z) is multivariate Gaussian, L(y) = y + c so that
L' = 1 and the horizon is the surface of the V^{-1}-sphere
of radius one, i.e., (z-\mu)^T V^{-1}(z-\mu) = 1. Inside this sphere
the Hessian is negative definite and outside the sphere the Hessian is
indefinite. The results in this section can be applied to any other
class of radially symmetric distributions, e.g. the multivariate T, which
includes the Cauchy.
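For the standard Gaussian (V = I, μ = 0) the Hessian is available in closed form from (136), H(z) = f(z)(z zᵀ - I), and the signature change at the unit sphere is immediate to verify (NumPy sketch):

```python
import numpy as np

# Hessian of the standard Gaussian density via (136): L = -|z|^2/2 + const,
# so H(z) = f(z) (z z^T - I); the horizon is the unit sphere |z| = 1
def gaussian_hessian(z):
    z = np.asarray(z, dtype=float)
    p = z.size
    f = np.exp(-(z @ z) / 2) / (2 * np.pi) ** (p / 2)
    return f * (np.outer(z, z) - np.eye(p))

inside  = np.linalg.eigvalsh(gaussian_hessian([0.3, 0.2, 0.1]))  # |z| < 1
outside = np.linalg.eigvalsh(gaussian_hessian([1.5, 0.5, 0.5]))  # |z| > 1

print(inside.max() < 0)                    # True: negative definite inside
print(outside.min() < 0 < outside.max())   # True: indefinite outside
```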
8 Conclusions
We have shown the existence of optimal metrics in nonparametric
density estimation. The metrics are generated by the Hessian of the
underlying density and they are unique in the regions where the
Hessian is definite. The optimal metric can also be expressed as a
continuous function of the Hessian in the regions where it is
indefinite. The Hessian varies continuously from point to point; thus,
associated to the general class of density estimators
(9) there is a Riemannian manifold with the
property that if the estimators are computed with its metric then the
asymptotic mean square error is minimized. The results are
sufficiently general to show that these are absolute bounds on the
quality of statistical inference from data.
The similarities with General Relativity are evident, but so are the
differences. For example, since the Hessian of the underlying density
is negative definite at local maxima, it follows that there will be a
horizon boundary where the Hessian becomes singular. Crossing the
boundary corresponds to a change of signature in the metric. These
horizons are almost always null sets and therefore irrelevant from a
probabilistic point of view. However, when the density is radially
symmetric and changes linearly with the radius, we get solid spots of
singularity. There is a qualitative change in the quality of inference
that can be achieved within these dark spots. But, unlike GR, we find
horizons not only around local maxima but also around local minima of
the density. Besides, it is not necessary for the density to reach a
certain threshold for these horizons to appear. Nevertheless, I
believe that the infusion of new statistical ideas into the
foundations of Physics, especially at this point in history, should be
embraced with optimism. Only new data will (help to) tell.
There are many unexplored promising avenues along the lines of the
subject of this paper, but one is obvious from a GR point of
view: what is missing is the connection between curvature and
probability density, i.e. the field equation. I hope to be able to
work on this in the near future.
The existence of optimal metrics in density estimation is not only of
theoretical importance but of significant practical value as well. By
estimating the Hessian (e.g. with kernels that can take positive and
negative values, see []) we can build estimators that
adapt to the form of the optimal norm, with efficiency gains that
increase with the number of dimensions. The antidote to the curse of
dimensionality!
9 Acknowledgments
I would like to thank my friends in the Maximum Entropy community,
especially Gary Erickson, for providing a stimulating environment
for this meeting. I am also indebted to Ariel Caticha for many
interesting conversations about life, the universe, and these things.