CHAPTER 8
Estimation with Minimum Mean Square Error
INTRODUCTION
A recurring theme in this text and in much of communication, control and signal
processing is that of making systematic estimates, predictions or decisions about
some set of quantities, based on information obtained from measurements of other
quantities. This process is commonly referred to as inference. Typically, inferring
the desired information from the measurements involves incorporating models that
represent our prior knowledge or beliefs about how the measurements relate to the
quantities of interest.
Inference about continuous random variables and ultimately about random pro-
cesses is the topic of this chapter and several that follow. One key step is the
introduction of an error criterion that measures, in a probabilistic sense, the error
between the desired quantity and our estimate of it. Throughout our discussion
in this and the related subsequent chapters, we focus primarily on choosing our
estimate to minimize the expected or mean value of the square of the error, re-
ferred to as a minimum mean-square-error (MMSE) criterion. In Section
8.1 we
consider the MMSE estimate without imposing any constraint on the form that
the estimator takes. In Section 8.3 we restrict the estimate to be a linear combina-
tion of the measurements, a form of estimation that we refer to as linear minimum
mean-square-error (LMMSE) estimation.
Later in the text we turn from inference problems for continuous random variables
to inference problems for discrete random quantities, which may be numerically
specified or may be non-numerical. In the latter case especially, the various possible
outcomes associated with the random quantity are often termed hypotheses, and
the inference task in this setting is then referred to as hypothesis testing, i.e., the
task of deciding which hypothesis applies, given measurements or observations. The
MMSE criterion may not be meaningful in such hypothesis testing problems, but we
can for instance aim to minimize the probability of an incorrect inference regarding
which hypothesis actually applies.
8.1 ESTIMATION OF A CONTINUOUS RANDOM VARIABLE
To begin the discussion, let us assume that we are interested in a random variable
Y and we would like to estimate its value, knowing only its probability density
function. We will then broaden the discussion to estimation when we have a
measurement or observation of another random variable X, together with the joint
probability density function of X and Y.

Based only on knowledge of the PDF of Y, we wish to obtain an estimate of Y,
which we denote as $\hat{y}$, so as to minimize the mean square error between the
actual outcome of the experiment and our estimate $\hat{y}$. Specifically, we choose
$\hat{y}$ to minimize

$$E[(Y - \hat{y})^2] = \int_{-\infty}^{\infty} (y - \hat{y})^2 f_Y(y)\, dy . \qquad (8.1)$$
Differentiating (8.1) with respect to $\hat{y}$ and equating the result to zero, we obtain

$$-2 \int_{-\infty}^{\infty} (y - \hat{y}) f_Y(y)\, dy = 0 \qquad (8.2)$$

or

$$\int_{-\infty}^{\infty} \hat{y}\, f_Y(y)\, dy = \int_{-\infty}^{\infty} y\, f_Y(y)\, dy , \qquad (8.3)$$

from which

$$\hat{y} = E[Y] . \qquad (8.4)$$

The second derivative of $E[(Y - \hat{y})^2]$ with respect to $\hat{y}$ is

$$2 \int_{-\infty}^{\infty} f_Y(y)\, dy = 2 , \qquad (8.5)$$

which is positive, so (8.4) does indeed define the minimizing value of $\hat{y}$. Hence the
MMSE estimate of Y in this case is simply its mean value, E[Y].

The associated error, the actual MMSE, is found by evaluating the expression
in (8.1) with $\hat{y} = E[Y]$. We conclude that the MMSE is just the variance of Y,
namely $\sigma_Y^2$:

$$\min_{\hat{y}} E[(Y - \hat{y})^2] = E[(Y - E[Y])^2] = \sigma_Y^2 . \qquad (8.6)$$
In a similar manner, it is possible to show that the median of Y, which has half
the probability mass of Y below it and the other half above, is the value of $\hat{y}$ that
minimizes the mean absolute deviation, $E[\,|Y - \hat{y}|\,]$. Also, the mode of Y, which
is the value of y at which the PDF $f_Y(y)$ is largest, turns out to minimize the
expected value of an all-or-none cost function, i.e., a cost that is unity when the
error is outside of a vanishingly small tolerance band, and is zero within the band.
We will not be pursuing these alternative error metrics further, but it is important
to be aware that our choice of mean square error, while convenient, is only one of
many possible error metrics.
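The claim that different cost functions are minimized by the mean, the median and the mode is easy to check numerically. The short sketch below is an added illustration, not part of the original notes; the distribution and sample size are arbitrary choices. It scans candidate estimates $\hat{y}$ for an asymmetric density and locates the minimizers of the empirical mean square error and mean absolute deviation.

```python
# Numerical illustration (added): for an asymmetric distribution, the mean square
# error is minimized near the sample mean and the mean absolute deviation near
# the sample median, as claimed above.
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=200_000)   # asymmetric PDF, so mean != median

candidates = np.linspace(0.0, 3.0, 601)
mse = [np.mean((y - c) ** 2) for c in candidates]
mad = [np.mean(np.abs(y - c)) for c in candidates]

print("argmin MSE:", candidates[np.argmin(mse)], "  sample mean:  ", y.mean())
print("argmin MAD:", candidates[np.argmin(mad)], "  sample median:", np.median(y))
```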
The insights from the simple problem leading to (
8.4) and (8.6) carry over directly
to the case in which we have additional information in the form of the measured or
observed value x of a random variable X that is related somehow to Y. The only
change from the previous discussion is that, given the additional measurement,
we work with the conditional or a posteriori density $f_{Y|X}(y|x)$, rather than the
unconditioned density $f_Y(y)$, and now our aim is to minimize

$$E[\{Y - \hat{y}(x)\}^2 \mid X = x] = \int_{-\infty}^{\infty} \{y - \hat{y}(x)\}^2 f_{Y|X}(y|x)\, dy . \qquad (8.7)$$

We have introduced the notation $\hat{y}(x)$ for our estimate to show that in general it
will depend on the specific value x. Exactly the same calculations as in the case of
no measurements then show that

$$\hat{y}(x) = E[Y \mid X = x] , \qquad (8.8)$$

the conditional expectation of Y, given X = x. The associated MMSE is the
variance $\sigma^2_{Y|X}$ of the conditional density $f_{Y|X}(y|x)$, i.e., the MMSE is the conditional
variance. Thus, the only change from the case of no measurements is that we now
condition on the obtained measurement.
Going a further step, if we have multiple measurements, say $X_1 = x_1, X_2 = x_2, \cdots, X_L = x_L$,
then we work with the a posteriori density

$$f_{Y \mid X_1, X_2, \cdots, X_L}(y \mid x_1, x_2, \cdots, x_L) . \qquad (8.9)$$

Apart from this modification, there is no change in the structure of the solutions.
Thus, without further calculation, we can state the following:

The MMSE estimate of Y, given $X_1 = x_1, \cdots, X_L = x_L$,
is the conditional expectation of Y:

$$\hat{y}(x_1, \ldots, x_L) = E[Y \mid X_1 = x_1, \cdots, X_L = x_L] \qquad (8.10)$$

For notational convenience, we can arrange the measured random variables into a
column vector X, and the corresponding measurements into the column vector x.
The dependence of the MMSE estimate on the measurements can now be indicated
by the notation $\hat{y}(x)$, with

$$\hat{y}(x) = \int_{-\infty}^{\infty} y\, f_{Y|X}(y \mid X = x)\, dy = E[\, Y \mid X = x\, ] . \qquad (8.11)$$

The minimum mean square error (or MMSE) for the given value of X is again the
conditional variance, i.e., the variance $\sigma^2_{Y|X}$ of the conditional density $f_{Y|X}(y \mid x)$.
EXAMPLE 8.1 MMSE Estimate for Discrete Random Variables
A discrete-time discrete-amplitude sequence s[n] is stored on a noisy medium. The
retrieved sequence is r[n]. Suppose at some particular time instant $n = n_0$ we have
$s[n_0]$ and $r[n_0]$ modeled as random variables, which we shall simply denote by S
and R respectively. From prior measurements, we have determined that S and R
have the joint probability mass function (PMF) shown in Figure 8.1.

[FIGURE 8.1: Joint PMF of S and R.]
Based on receiving the value R = 1, we would like to make an MMSE estimate $\hat{s}$
of S. From (8.10), $\hat{s} = E(S \mid R = 1)$, which can be determined from the conditional
PMF $P_{S|R}(s \mid R = 1)$, which in turn we can obtain as

$$P_{S|R}(s \mid R = 1) = \frac{P_{R,S}(R = 1, s)}{P_R(R = 1)} . \qquad (8.12)$$

From Figure 8.1,

$$P_R(1) = \frac{2}{7} \qquad (8.13)$$

and

$$P_{R,S}(1, s) = \begin{cases} 0 & s = -1 \\ 1/7 & s = 0 \\ 1/7 & s = +1 \end{cases}$$

Consequently,

$$P_{S|R}(s \mid R = 1) = \begin{cases} 1/2 & s = 0 \\ 1/2 & s = +1 \end{cases}$$

Thus, the MMSE estimate is $\hat{s} = \frac{1}{2}$. Note that although this estimate minimizes
the mean square error, we have not constrained it to take account of the fact that
S can only have the discrete values of +1, 0 or -1. In a later chapter we will
return to this example and consider it from the perspective of hypothesis testing,
i.e., determining which of the three known possible values will result in minimizing
a suitable error criterion.
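The calculation in this example is short enough to transcribe directly into code. The following sketch is an added illustration; the PMF values are those of the R = 1 column used above, and the rest of Figure 8.1 is not needed.

```python
# A minimal sketch (added) of the calculation in Example 8.1: the MMSE estimate
# given R = 1 is the conditional mean of S, computed from the joint PMF column.
joint_pmf_R1 = {-1: 0.0, 0: 1.0 / 7, +1: 1.0 / 7}   # P_{R,S}(R = 1, s)

p_R1 = sum(joint_pmf_R1.values())                   # P_R(1) = 2/7
cond_pmf = {s: p / p_R1 for s, p in joint_pmf_R1.items()}
s_hat = sum(s * p for s, p in cond_pmf.items())     # E[S | R = 1]

print(cond_pmf)   # {-1: 0.0, 0: 0.5, 1: 0.5}
print(s_hat)      # 0.5
```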
EXAMPLE 8.2 MMSE Estimate of Signal in Additive Noise
A discrete-time sequence s[n] is transmitted over a noisy channel and retrieved.
The received sequence r[n] is modeled as r[n] = s[n] + w[n] where w[n] represents
the noise. At a particular time instant $n = n_0$, suppose $r[n_0]$, $s[n_0]$ and $w[n_0]$ are
random variables, which we denote as R, S and W respectively. We assume that
S and W are independent, that W is uniformly distributed between $+\frac{1}{2}$ and $-\frac{1}{2}$,
and S is uniformly distributed between $-1$ and $+1$. The specific received value is
$R = \frac{1}{4}$, and we want the MMSE estimate $\hat{s}$ for S. From (8.10),

$$\hat{s} = E(S \mid R = \tfrac{1}{4}) \qquad (8.14)$$

which can be determined from $f_{S|R}(s \mid R = \frac{1}{4})$:

$$f_{S|R}(s \mid R = \tfrac{1}{4}) = \frac{f_{R|S}(\tfrac{1}{4} \mid s)\, f_S(s)}{f_R(\tfrac{1}{4})} . \qquad (8.15)$$

We evaluate separately the numerator and denominator terms in (8.15). The PDF
$f_{R|S}(r \mid s)$ is identical in shape to the PDF of W, but with the mean shifted to s, as
indicated in Figure 8.2 below. Consequently, $f_{R|S}(\frac{1}{4} \mid s)$ is as shown in Figure 8.3,
and $f_{R|S}(\frac{1}{4} \mid s) f_S(s)$ is shown in Figure 8.4.

[FIGURE 8.2: Conditional PDF of R given S, $f_{R|S}(r \mid s)$.]

[FIGURE 8.3: Plot of $f_{R|S}(\tfrac{1}{4} \mid s)$.]

[FIGURE 8.4: Plot of $f_{R|S}(\tfrac{1}{4} \mid s) f_S(s)$.]

To obtain $f_{S|R}(s \mid R = \frac{1}{4})$ we divide Figure 8.4 by $f_R(\frac{1}{4})$, which can easily be
obtained by evaluating the convolution of the PDF's of S and W at the argument $\frac{1}{4}$.
More simply, since $f_{S|R}(s \mid R = \frac{1}{4})$ must have total area of unity and it is the
same as Figure 8.4 but scaled by $f_R(\frac{1}{4})$, we can easily obtain it by just normalizing
Figure 8.4 to have an area of 1. The resulting value for $\hat{s}$ is the mean associated
with the PDF $f_{S|R}(s \mid R = \frac{1}{4})$, which will be

$$\hat{s} = \frac{1}{4} . \qquad (8.16)$$

The associated MMSE is the variance of this PDF, namely $\frac{1}{12}$.
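A quick Monte Carlo experiment, added here as a check (the conditioning window and sample size are arbitrary), confirms these numbers by conditioning empirically on received values near R = 1/4.

```python
# Monte Carlo check (added) of Example 8.2: draw (S, W) pairs, keep those with
# R = S + W close to 1/4, and compare the empirical conditional mean and
# variance of S with the values 1/4 and 1/12.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
S = rng.uniform(-1.0, 1.0, n)
W = rng.uniform(-0.5, 0.5, n)
R = S + W

sel = np.abs(R - 0.25) < 0.01          # narrow window around the observed R = 1/4
print("E[S | R ~ 1/4]  :", S[sel].mean())   # close to 0.25
print("var(S | R ~ 1/4):", S[sel].var())    # close to 1/12 ~ 0.0833
```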
EXAMPLE 8.3 MMSE Estimate for Bivariate Gaussian Random Variables
Two random variables X and Y are said to have a bivariate Gaussian joint PDF if
the joint density of the centered (i.e. zero-mean) and normalized (i.e. unit-variance)
random variables
$$V = \frac{X - \mu_X}{\sigma_X} , \qquad W = \frac{Y - \mu_Y}{\sigma_Y} \qquad (8.17)$$

is given by

$$f_{V,W}(v, w) = \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left\{ -\frac{v^2 - 2\rho v w + w^2}{2(1 - \rho^2)} \right\} . \qquad (8.18)$$

Here $\mu_X$ and $\mu_Y$ are the means of X and Y respectively, and $\sigma_X$, $\sigma_Y$ are the
respective standard deviations of X and Y. The number $\rho$ is the correlation coefficient
of X and Y, and is defined by

$$\rho = \frac{\sigma_{XY}}{\sigma_X \sigma_Y} , \qquad \text{with } \sigma_{XY} = E[XY] - \mu_X \mu_Y , \qquad (8.19)$$

where $\sigma_{XY}$ is the covariance of X and Y.

Now, consider $\hat{y}(x)$, the MMSE estimate of Y given X = x, when X and Y are
bivariate Gaussian random variables. From (8.10),

$$\hat{y}(x) = E[Y \mid X = x] \qquad (8.20)$$

or, in terms of the zero-mean normalized random variables V and W,

$$\hat{y}(x) = E\left[ (\sigma_Y W + \mu_Y) \;\Big|\; V = \frac{x - \mu_X}{\sigma_X} \right]
= \sigma_Y\, E\left[ W \;\Big|\; V = \frac{x - \mu_X}{\sigma_X} \right] + \mu_Y . \qquad (8.21)$$

It is straightforward to show with some computation that $f_{W|V}(w \mid v)$ is Gaussian
with mean $\rho v$, and variance $1 - \rho^2$, from which it follows that

$$E\left[ W \;\Big|\; V = \frac{x - \mu_X}{\sigma_X} \right] = \rho\, \frac{x - \mu_X}{\sigma_X} . \qquad (8.22)$$

Combining (8.21) and (8.22),

$$\hat{y}(x) = E[\, Y \mid X = x\, ] = \mu_Y + \rho\, \frac{\sigma_Y}{\sigma_X} (x - \mu_X) . \qquad (8.23)$$

The MMSE estimate in the case of bivariate Gaussian variables has a nice linear
(or more correctly, affine, i.e., linear plus a constant) form.

The minimum mean square error is the variance of the conditional PDF $f_{Y|X}(y \mid X = x)$:

$$E[\, (Y - \hat{y}(x))^2 \mid X = x\, ] = \sigma_Y^2 (1 - \rho^2) . \qquad (8.24)$$

Note that $\sigma_Y^2$ is the mean square error in Y in the absence of any additional
information. Equation (8.24) shows what the residual mean square error is after we have
a measurement of X. It is evident and intuitively reasonable that the larger the
magnitude of the correlation coefficient between X and Y, the smaller the residual
mean square error.
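The formulas (8.23) and (8.24) can be checked by simulation. In the sketch below, an added illustration with arbitrary parameter values, correlated Gaussians are generated via $W = \rho V + \sqrt{1-\rho^2}\,V'$ and the empirical conditional mean and variance near a chosen x are compared with the formulas.

```python
# Simulation sketch (added) checking (8.23) and (8.24) for bivariate Gaussians.
# All parameter values below are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(2)
mu_X, mu_Y, sig_X, sig_Y, rho = 1.0, -2.0, 2.0, 3.0, 0.7
n = 2_000_000

V = rng.standard_normal(n)
W = rho * V + np.sqrt(1 - rho**2) * rng.standard_normal(n)   # corr(V, W) = rho
X, Y = mu_X + sig_X * V, mu_Y + sig_Y * W

x0 = 2.5
sel = np.abs(X - x0) < 0.02
print("empirical E[Y | X ~ x0]:", Y[sel].mean())
print("formula (8.23):         ", mu_Y + rho * (sig_Y / sig_X) * (x0 - mu_X))
print("empirical conditional var:", Y[sel].var(), " vs ", sig_Y**2 * (1 - rho**2))
```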
8.2 FROM ESTIMATES TO AN ESTIMATOR
The MMSE estimate in (8.8) is based on knowing the specific value x that the
random variable X takes. While X is a random variable, the specific value x is not,
and consequently $\hat{y}(x)$ is also not a random variable.
As we move forward in the discussion, it is important to draw a distinction between
the estimate of a random variable and the procedure by which we form the estimate.
This is completely analogous to the distinction between the value of a function at
a
point and the function itself. We will refer to the procedure or function that
produces the estimate as the estimator.
For instance, in Example 8.1 we determined the MMSE estimate of S for the specific
value of R = 1. We could more generally determine an estimate of S for each of
the possible values of R, i.e., $-1$, $0$, and $+1$. We could then have a tabulation of
these results available in advance, so that when we retrieve a specific value of R
we can look up the MMSE estimate. Such a table, or more generally a function
of R, would correspond to what we term the MMSE estimator. The input to the
table or estimator would be the specific retrieved value and the output would be
the estimate associated with that retrieved value.
We have already introduced the notation $\hat{y}(x)$ to denote the estimate of Y given
X = x. The function $\hat{y}(\cdot)$ determines the corresponding estimator, which we
will denote by $\hat{y}(X)$, or more simply by just $\hat{Y}$, if it is understood what random
variable the estimator is operating on. Note that the estimator $\hat{Y} = \hat{y}(X)$ is a
random variable. We have already seen that the MMSE estimate $\hat{y}(x)$ is given by
the conditional mean, $E[Y \mid X = x]$, which suggests yet another natural notation for
the MMSE estimator:

$$\hat{Y} = \hat{y}(X) = E[Y \mid X] . \qquad (8.25)$$

Note that $E[Y \mid X]$ denotes a random variable, not a number.

The preceding discussion applies essentially unchanged to the case where we observe
several random variables, assembled in the vector X. The MMSE estimator in this
case is denoted by

$$\hat{Y} = \hat{y}(X) = E[Y \mid X] . \qquad (8.26)$$
Perhaps not surprisingly, the MMSE estimator for Y given X minimizes the mean
square error, averaged over all Y and X. This is because the MMSE estimator
minimizes the mean square error for each particular value x of X. More formally,
$$E_{Y,X}\Big( [Y - \hat{y}(X)]^2 \Big) = E_X\Big( E_{Y|X}\big( [Y - \hat{y}(X)]^2 \mid X \big) \Big)$$
$$= \int_{-\infty}^{\infty} E_{Y|X}\big( [Y - \hat{y}(x)]^2 \mid X = x \big)\, f_X(x)\, dx . \qquad (8.27)$$

(The subscripts on the expectation operators are used to indicate explicitly which
densities are involved in computing the associated expectations; the densities and
integration are multivariate when X is not a scalar.) Because the estimate $\hat{y}(x)$
is chosen to minimize the inner expectation $E_{Y|X}$ for each value x of X, it also
minimizes the outer expectation $E_X$, since $f_X(x)$ is nonnegative.
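Equation (8.27) is easy to see in action numerically. The toy model below is an added example, not from the notes: Y = X² + N with N Gaussian and independent of X, so the MMSE estimator is the function $\hat{y}(x) = x^2$ and the conditional variance is the same for every x; averaging this estimator's squared error over all (X, Y) then reproduces that conditional variance.

```python
# A small numerical illustration (added) of (8.27): the overall mean square error
# of the MMSE estimator equals the conditional MSE averaged over X. The model
# is hypothetical: Y = X^2 + N, N ~ N(0, 0.25), so E[Y | X = x] = x^2.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
X = rng.uniform(-1.0, 1.0, n)
Y = X**2 + 0.5 * rng.standard_normal(n)

def y_hat(x):
    """MMSE estimator for this model: E[Y | X = x] = x^2."""
    return x**2

overall_mse = np.mean((Y - y_hat(X)) ** 2)
print(overall_mse)   # ~0.25 = E_X[ var(Y | X = x) ]
```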
EXAMPLE 8.4 MMSE Estimator for Bivariate Gaussian Random Variables
We have already, in Example 8.3, constructed the MMSE estimate of one member
of a pair of bivariate Gaussian random variables, given a measurement of the other.
Using the same notation as in that example, it is evident that the MMSE estimator
is simply obtained on replacing x by X in (8.23):

$$\hat{Y} = \hat{y}(X) = \mu_Y + \rho\, \frac{\sigma_Y}{\sigma_X} (X - \mu_X) . \qquad (8.28)$$

The conditional MMSE given X = x was found in the earlier example to be $\sigma_Y^2(1 - \rho^2)$,
which did not depend on the value of x, so the MMSE of the estimator, averaged
over all X, ends up still being $\sigma_Y^2(1 - \rho^2)$.
EXAMPLE 8.5 MMSE Estimator for Signal in Additive Noise
Suppose the random variable X is a noisy measurement of the angular position Y of
an antenna, so X = Y + W, where W denotes the additive noise. Assume the noise
is independent of the angular position, i.e., Y and W are independent random
variables, with Y uniformly distributed in the interval $[-1, 1]$ and W uniformly
distributed in the interval $[-2, 2]$. (Note that the setup in this example is essentially
the same as in Example 8.2, though the context, notation and parameters are
different.)

Given that X = x, we would like to determine the MMSE estimate $\hat{y}(x)$, the
resulting mean square error, and the overall mean square error averaged over all
possible values x that the random variable X can take. Since $\hat{y}(x)$ is the conditional
expectation of Y given X = x, we need to determine $f_{Y|X}(y|x)$. For this, we first
determine the joint density of Y and W, and from this the required conditional
density.
From the independence of Y and W:

$$f_{Y,W}(y, w) = f_Y(y) f_W(w) = \begin{cases} \dfrac{1}{8} & -2 \le w \le 2,\; -1 \le y \le 1 \\[1ex] 0 & \text{otherwise} \end{cases}$$

[FIGURE 8.5: Joint PDF of Y and W for Example 8.5.]

Conditioned on Y = y, X is the same as y + W, uniformly distributed over the
interval $[y - 2,\, y + 2]$. Now

$$f_{X,Y}(x, y) = f_{X|Y}(x|y) f_Y(y) = \left(\frac{1}{4}\right)\left(\frac{1}{2}\right) = \frac{1}{8}$$

for $-1 \le y \le 1$, $y - 2 \le x \le y + 2$, and zero otherwise. The joint PDF is therefore
uniform over the parallelogram shown in Figure 8.6.

[FIGURE 8.6: Joint PDF of X and Y and plot of the MMSE estimator of Y from X
for Example 8.5.]

[FIGURE 8.7: Conditional PDF $f_{Y|X}$ for various realizations of X for Example 8.5.]
Given X = x, the conditional PDF $f_{Y|X}$ is uniform on the corresponding vertical
section of the parallelogram:

$$f_{Y|X}(y \mid x) = \begin{cases} \dfrac{1}{3 + x} & -3 \le x \le -1,\; -1 \le y \le x + 2 \\[1ex] \dfrac{1}{2} & -1 \le x \le 1,\; -1 \le y \le 1 \\[1ex] \dfrac{1}{3 - x} & 1 \le x \le 3,\; x - 2 \le y \le 1 \end{cases} \qquad (8.29)$$

The MMSE estimate $\hat{y}(x)$ is the conditional mean of Y given X = x, and the
conditional mean is the midpoint of the corresponding vertical section of the
parallelogram. The conditional mean is displayed as the heavy line on the parallelogram
in the second plot. In analytical form,

$$\hat{y}(x) = E[Y \mid X = x] = \begin{cases} \dfrac{1}{2} + \dfrac{x}{2} & -3 \le x < -1 \\[1ex] 0 & -1 \le x < 1 \\[1ex] -\dfrac{1}{2} + \dfrac{x}{2} & 1 \le x \le 3 \end{cases} \qquad (8.30)$$

The minimum mean square error associated with this estimate is the variance of
the uniform distribution in eq. (8.29), specifically:

$$E[\{Y - \hat{y}(x)\}^2 \mid X = x] = \begin{cases} \dfrac{(3 + x)^2}{12} & -3 \le x < -1 \\[1ex] \dfrac{1}{3} & -1 \le x < 1 \\[1ex] \dfrac{(3 - x)^2}{12} & 1 \le x \le 3 \end{cases} \qquad (8.31)$$
Equation (
8.31) specifies the mean square error that results for any specific value
x
of the measurement of X. Since the measurement is a random variable, it is also
of interest to know what the mean square error is, averaged over all possible values
of the measurement, i.e. over the random variable X. To determine this, we first
determine the marginal PDF of X:
$$f_X(x) = \frac{f_{X,Y}(x, y)}{f_{Y|X}(y \mid x)} = \begin{cases} \dfrac{3 + x}{8} & -3 \le x < -1 \\[1ex] \dfrac{1}{4} & -1 \le x < 1 \\[1ex] \dfrac{3 - x}{8} & 1 \le x \le 3 \\[1ex] 0 & \text{otherwise} \end{cases}$$

This could also be found by convolution, $f_X = f_Y * f_W$, since Y and W are
statistically independent. Then,

$$E_X\big[ E_{Y|X}\{ (Y - \hat{y}(x))^2 \mid X = x \} \big] = \int_{-\infty}^{\infty} E[(Y - \hat{y}(x))^2 \mid X = x]\, f_X(x)\, dx$$
$$= \int_{-3}^{-1} \left(\frac{(3 + x)^2}{12}\right)\left(\frac{3 + x}{8}\right) dx + \int_{-1}^{1} \left(\frac{1}{3}\right)\left(\frac{1}{4}\right) dx + \int_{1}^{3} \left(\frac{(3 - x)^2}{12}\right)\left(\frac{3 - x}{8}\right) dx$$
$$= \frac{1}{4} .$$

Compare this with the mean square error if we just estimated Y by its mean, namely
0. The mean square error would then be the variance $\sigma_Y^2$:

$$\sigma_Y^2 = \frac{[1 - (-1)]^2}{12} = \frac{1}{3} ,$$
so the mean square error is indeed reduced by allowing ourselves to use knowledge
of X and of the probabilistic relation between Y and X.
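The overall mean square errors computed above, 1/4 for the MMSE estimator versus 1/3 for the constant estimate 0, can be confirmed by simulation; the sketch below is an added check using the piecewise estimator of (8.30).

```python
# Monte Carlo check (added) of the numbers in Example 8.5.
import numpy as np

rng = np.random.default_rng(4)
n = 2_000_000
Y = rng.uniform(-1.0, 1.0, n)
W = rng.uniform(-2.0, 2.0, n)
X = Y + W

def y_hat(x):
    """Piecewise-linear MMSE estimator of (8.30)."""
    return np.where(x < -1, 0.5 * (x + 1), np.where(x <= 1, 0.0, 0.5 * (x - 1)))

print("MMSE estimator:      ", np.mean((Y - y_hat(X)) ** 2))   # ~0.25
print("estimate by the mean:", np.mean(Y ** 2))                # ~0.333
```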
8.2.1 Orthogonality
A further important property of the MMSE estimator is that the residual error
$Y - \hat{y}(X)$ is orthogonal to any function $h(X)$ of the measured random variables:

$$E_{Y,X}[\{Y - \hat{y}(X)\} h(X)] = 0 , \qquad (8.32)$$

where the expectation is computed over the joint density of Y and X. Rearranging
this, we have the equivalent condition

$$E_{Y,X}[\hat{y}(X) h(X)] = E_{Y,X}[Y h(X)] , \qquad (8.33)$$

i.e., the MMSE estimator has the same correlation as Y does with any function of
X. In particular, choosing $h(X) = 1$, we find that

$$E_{Y,X}[\hat{y}(X)] = E_Y[Y] . \qquad (8.34)$$

The latter property results in the estimator being referred to as unbiased: its
expected value equals the expected value of the random variable being estimated.
We can invoke the unbiasedness property to interpret (8.32) as stating that the
estimation error of the MMSE estimator is uncorrelated with any function of the
random variables used to construct the estimator.

The proof of the correlation matching property in (8.33) is in the following sequence
of equalities:

$$E_{Y,X}[\hat{y}(X) h(X)] = E_X[E_{Y|X}[Y \mid X] h(X)] \qquad (8.35)$$
$$= E_X[E_{Y|X}[Y h(X) \mid X]] \qquad (8.36)$$
$$= E_{Y,X}[Y h(X)] . \qquad (8.37)$$

Rearranging the final result here, we obtain the orthogonality condition in (8.32).
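The orthogonality property (8.32) can also be observed empirically. In the added sketch below, the model Y = sin(X) + noise is an arbitrary choice for which E[Y|X] is known in closed form, and the residual is checked against several functions h(X).

```python
# Empirical check (added) of (8.32): the residual Y - E[Y|X] is uncorrelated with
# arbitrary functions h(X). Here E[Y | X] = sin(X) by construction.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
X = rng.uniform(0.0, 2 * np.pi, n)
Y = np.sin(X) + 0.3 * rng.standard_normal(n)

residual = Y - np.sin(X)                   # Y - E[Y | X]
for h in (lambda x: x, lambda x: x**2, np.cos):
    print(np.mean(residual * h(X)))        # all approximately 0
```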
8.3 LINEAR MINIMUM MEAN SQUARE ERROR ESTIMATION
In general, the conditional expectation $E(Y \mid X)$ required for the MMSE estimator
developed in the preceding sections is difficult to determine, because the conditional
density $f_{Y|X}(y|x)$ is not easily determined. A useful and widely used compromise
is to restrict the estimator to be a fixed linear (or actually affine, i.e., linear plus
a
constant) function of the measured random variables, and to choose the linear
relationship so as to minimize the mean square error. The resulting estimator is
called the linear minimum mean square error (LMMSE) estimator. We begin with
the simplest case.
Suppose we wish to construct an estimator for the random variable Y in terms of
another random variable X, restricting our estimator to be of the form

$$\hat{Y} = \hat{y}_{\ell}(X) = aX + b , \qquad (8.38)$$

where a and b are to be determined so as to minimize the mean square error

$$E_{Y,X}[(Y - \hat{Y})^2] = E_{Y,X}[\{Y - (aX + b)\}^2] . \qquad (8.39)$$
Note that the expectation is taken over the joint density of Y and X; the linear
estimator is picked to be optimum when averaged over all possible combinations of
Y
and X that may occur. We have accordingly used subscripts on the expectation
operations in (
8.39) to make explicit for now the variables whose joint density the
expectation is being computed over; we shall eventually drop the subscripts.
Once the optimum a and b have been chosen in this manner, the estimate of Y ,
given a particular x, is just $\hat{y}_{\ell}(x) = ax + b$, computed with the already designed
values of a and b. Thus, in the LMMSE case we construct an optimal linear
estimator, and for any particular x this estimator generates an estimate that is
not claimed to have any individual optimality property. This is in contrast to the
MMSE case considered in the previous sections, where we obtained an optimal
MMSE estimate for each x, namely $E[Y \mid X = x]$, that minimized the mean square
error conditioned on X = x. The distinction can be summarized as follows: in
the unrestricted MMSE case, the optimal estimator is obtained by joining together
all the individual optimal estimates, whereas in the LMMSE case the (generally
non-optimal) individual estimates are obtained by simply evaluating the optimal
linear estimator.
We turn now to minimizing the expression in (
8.39), by differentiating it with
respect to the parameters a and b, and setting each of the derivatives to 0. (Con-
sideration of the second derivatives will show that we do indeed find minimizing
values in this fashion, but we omit the demonstration.) First differentiating (
8.39)
with respect to b, taking the derivative inside the integral that corresponds to the
expectation operation, and then setting the result to 0, we conclude that
$$E_{Y,X}[Y - (aX + b)] = 0 , \qquad (8.40)$$
or equivalently

$$E[Y] = E[aX + b] = E[\hat{Y}] , \qquad (8.41)$$

from which we deduce that

$$b = \mu_Y - a\mu_X , \qquad (8.42)$$

where $\mu_Y = E[Y] = E_{Y,X}[Y]$ and $\mu_X = E[X] = E_{Y,X}[X]$. The optimum value of
b specified in (8.42) in effect serves to make the linear estimator unbiased, i.e., the
expected value of the estimator becomes equal to the expected value of the random
variable we are trying to estimate, as (8.41) shows.
Using (
8.42) to substitute for b in (8.38), it follows that
$$\hat{Y} = \mu_Y + a(X - \mu_X) . \qquad (8.43)$$

In other words, to the expected value $\mu_Y$ of the random variable Y that we are
estimating, the optimal linear estimator adds a suitable multiple of the difference
$X - \mu_X$ between the measured random variable and its expected value. We turn
now to finding the optimum value of this multiple, a.

First rewrite the error criterion (8.39) as

$$E[\{(Y - \mu_Y) - (\hat{Y} - \mu_Y)\}^2] = E[(\widetilde{Y} - a\widetilde{X})^2] , \qquad (8.44)$$

where

$$\widetilde{Y} = Y - \mu_Y \quad \text{and} \quad \widetilde{X} = X - \mu_X , \qquad (8.45)$$

and where we have invoked (8.43) to obtain the second equality in (8.44). Now
taking the derivative of the error criterion in (8.44) with respect to a, and setting
the result to 0, we find

$$E[(\widetilde{Y} - a\widetilde{X})\widetilde{X}] = 0 . \qquad (8.46)$$

Rearranging this, and recalling that $E[\widetilde{Y}\widetilde{X}] = \sigma_{YX}$, i.e., the covariance of Y and
X, and that $E[\widetilde{X}^2] = \sigma_X^2$, we obtain

$$a = \frac{\sigma_{YX}}{\sigma_X^2} = \rho_{YX}\, \frac{\sigma_Y}{\sigma_X} , \qquad (8.47)$$

where $\rho_{YX}$, which we shall simply write as $\rho$ when it is clear from context what
variables are involved, denotes the correlation coefficient between Y and X.

It is also enlightening to understand the above expression for a in terms of the
vector-space picture for random variables developed in the previous chapter.

[FIGURE 8.8: Expression for a from Eq. (8.47) illustrated in vector space.]
The expression (8.44) for the error criterion shows that we are looking for a vector
$a\widetilde{X}$, which lies along the vector $\widetilde{X}$, such that the squared length of the error vector
$\widetilde{Y} - a\widetilde{X}$ is minimum. It follows from familiar geometric reasoning that the optimum
choice of $a\widetilde{X}$ must be the orthogonal projection of $\widetilde{Y}$ on $\widetilde{X}$, and that this projection
is

$$a\widetilde{X} = \frac{\langle \widetilde{Y}, \widetilde{X} \rangle}{\langle \widetilde{X}, \widetilde{X} \rangle}\, \widetilde{X} . \qquad (8.48)$$

Here, as in the previous chapter, $\langle U, V \rangle$ denotes the inner product of the vectors
U and V, and in the case where the "vectors" are random variables, denotes
E[UV]. Our expression for a in (8.47) follows immediately. Figure 8.8 shows the
construction associated with the requisite calculations. Recall from the previous
chapter that the correlation coefficient $\rho$ denotes the cosine of the angle between
the vectors $\widetilde{Y}$ and $\widetilde{X}$.
The preceding projection operation implies that the error $\widetilde{Y} - a\widetilde{X}$, which can also
be written as $Y - \hat{Y}$, must be orthogonal to $\widetilde{X} = X - \mu_X$. This is precisely what
(8.46) says. In addition, invoking the unbiasedness of $\hat{Y}$ shows that $Y - \hat{Y}$ must
be orthogonal to $\mu_X$ (or any other constant), so $Y - \hat{Y}$ is therefore orthogonal to
X itself:

$$E[(Y - \hat{Y})X] = 0 . \qquad (8.49)$$

In other words, the optimal LMMSE estimator is unbiased and such that the
estimation error is orthogonal to the random variable on which the estimator is based.
(Note that the statement in the case of the MMSE estimator in the previous section
was considerably stronger, namely that the error was orthogonal to any function
h(X) of the measured random variable, not just to the random variable itself.)
The preceding development shows that the properties of (i) unbiasedness of the
estimator, and (ii) orthogonality of the error to the measured random variable,
completely characterize the LMMSE estimator. Invoking these properties yields
the LMMSE estimator.
Going a step further with the geometric reasoning, we find from Pythagoras’s the-
orem applied to the triangle in Figure
8.8 that the minimum mean square error
(MMSE) obtained through use of the LMMSE estimator is
$$\text{MMSE} = E[(\widetilde{Y} - a\widetilde{X})^2] = E[\widetilde{Y}^2](1 - \rho^2) = \sigma_Y^2 (1 - \rho^2) . \qquad (8.50)$$

This result could also be obtained purely analytically, of course, without recourse
to the geometric interpretation. The result shows that the mean square error $\sigma_Y^2$
that we had prior to estimation in terms of X is reduced by the factor $1 - \rho^2$ when
we use X in an LMMSE estimator. The closer that $\rho$ is to $+1$ or $-1$ (corresponding
to strong positive or negative correlation respectively), the more our uncertainty
about Y is reduced by using an LMMSE estimator to extract information that X
carries about Y.
Our results on the LMMSE estimator can now be summarized in the following
expressions for the estimator, with the associated minimum mean square error
being given by (
8.50):
$$\hat{Y} = \hat{y}_{\ell}(X) = \mu_Y + \frac{\sigma_{YX}}{\sigma_X^2}(X - \mu_X) = \mu_Y + \rho\, \frac{\sigma_Y}{\sigma_X}(X - \mu_X) , \qquad (8.51)$$

or the equivalent but perhaps more suggestive form

$$\frac{\hat{Y} - \mu_Y}{\sigma_Y} = \rho\, \frac{X - \mu_X}{\sigma_X} . \qquad (8.52)$$
The latter expression states that the normalized deviation of the estimator from its
mean is ρ times the normalized deviation of the observed variable from its mean; the
more highly correlated Y and X are, the more closely we match the two normalized
deviations.
Note that our expressions for the LMMSE estimator and its mean square error are
the same as those obtained in Example 8.4
for the MMSE estimator in the bivariate
Gaussian case. The reason is that the MMSE estimator in that case turned out to
be linear (actually, affine), as already noted in the example.
EXAMPLE 8.6 LMMSE Estimator for Signal in Additive Noise
We return to Example 8.5, for which we have already computed the MMSE
estimator, and we now design an LMMSE estimator. Recall that the random variable
X denotes a noisy measurement of the angular position Y of an antenna, so
X = Y + W, where W denotes the additive noise. We assume the noise is
independent of the angular position, i.e., Y and W are independent random variables,
with Y uniformly distributed in the interval $[-1, 1]$ and W uniformly distributed
in the interval $[-2, 2]$.
For the LMMSE estimator of Y in terms of X, we need to determine the respective
means and variances, as well as the covariance, of these random variables. It is easy
to see that
$$\mu_Y = 0 , \quad \mu_W = 0 , \quad \mu_X = 0 , \quad \sigma_Y^2 = \frac{1}{3} , \quad \sigma_W^2 = \frac{4}{3} ,$$

$$\sigma_X^2 = \sigma_Y^2 + \sigma_W^2 = \frac{5}{3} , \quad \sigma_{YX} = \sigma_Y^2 = \frac{1}{3} , \quad \rho_{YX} = \frac{\sigma_{YX}}{\sigma_Y \sigma_X} = \frac{1}{\sqrt{5}} .$$

The LMMSE estimator is accordingly

$$\hat{Y} = \frac{1}{5} X ,$$

and the associated MMSE is

$$\sigma_Y^2 (1 - \rho^2) = \frac{4}{15} .$$

This MMSE should be compared with the (larger) mean square error of $\frac{1}{3}$ obtained
if we simply use $\mu_Y = 0$ as our estimator for Y, and the (smaller) value $\frac{1}{4}$ obtained
using the MMSE estimator in Example 8.5.
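As an added check, the same numbers can be recovered from simulated data by estimating the required moments and applying (8.51); the sample covariance below plays the role of $\sigma_{YX}$ and $\sigma_X^2$.

```python
# Numerical confirmation (added) of Example 8.6: the LMMSE coefficient is
# sigma_YX / sigma_X^2 = 1/5 and the resulting mean square error is 4/15.
import numpy as np

rng = np.random.default_rng(6)
n = 2_000_000
Y = rng.uniform(-1.0, 1.0, n)
X = Y + rng.uniform(-2.0, 2.0, n)

C = np.cov(X, Y)                      # 2x2 sample covariance matrix
a = C[0, 1] / C[0, 0]                 # sigma_YX / sigma_X^2, about 1/5
b = Y.mean() - a * X.mean()           # about 0
Y_hat = a * X + b
print("a:", a, " b:", b)
print("LMMSE mean square error:", np.mean((Y - Y_hat) ** 2))   # ~4/15
```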
EXAMPLE 8.7 Single-Point LMMSE Estimator for Sinusoidal Random Process
Consider a sinusoidal signal of the form
$$X(t) = A \cos(\omega_0 t + \Theta) \qquad (8.53)$$

where $\omega_0$ is assumed known, while A and $\Theta$ are statistically independent random
variables, with the PDF of $\Theta$ being uniform in the interval $[0, 2\pi]$. Thus X(t) is a
random signal, or equivalently a set or "ensemble" of signals corresponding to the
various possible outcomes for A and $\Theta$ in the underlying probabilistic experiment.
We will discuss such signals in more detail in the next chapter, where we will refer
to them as random processes. The value that X(t) takes at some particular time
$t = t_0$ is simply a random variable, whose specific value will depend on which
outcomes for A and $\Theta$ are produced by the underlying probabilistic experiment.

Suppose we are interested in determining the LMMSE estimator for $X(t_1)$ based
on a measurement of $X(t_0)$, where $t_0$ and $t_1$ are specified sampling times. In other
words, we want to choose a and b in

$$\hat{X}(t_1) = aX(t_0) + b \qquad (8.54)$$

so as to minimize the mean square error between $X(t_1)$ and $\hat{X}(t_1)$.
We have established that b must be chosen to ensure the estimator is unbiased:
$$E[\hat{X}(t_1)] = aE[X(t_0)] + b = E[X(t_1)] .$$

Since A and $\Theta$ are independent,

$$E[X(t_0)] = E\{A\} \int_0^{2\pi} \frac{1}{2\pi} \cos(\omega_0 t_0 + \theta)\, d\theta = 0$$

and similarly $E[X(t_1)] = 0$, so we choose $b = 0$.
Next we use the fact that the error of the LMMSE estimator is orthogonal to the
data:
$$E[(\hat{X}(t_1) - X(t_1))X(t_0)] = 0$$

and consequently

$$aE[X^2(t_0)] = E[X(t_1)X(t_0)]$$

or

$$a = \frac{E[X(t_1)X(t_0)]}{E[X^2(t_0)]} . \qquad (8.55)$$

The numerator and denominator in (8.55) are respectively

$$E[X(t_1)X(t_0)] = E[A^2] \int_0^{2\pi} \frac{1}{2\pi} \cos(\omega_0 t_1 + \theta) \cos(\omega_0 t_0 + \theta)\, d\theta
= \frac{E[A^2]}{2} \cos\{\omega_0(t_1 - t_0)\}$$

and $E[X^2(t_0)] = \frac{E[A^2]}{2}$. Thus $a = \cos\{\omega_0(t_1 - t_0)\}$, so the LMMSE estimator is

$$\hat{X}(t_1) = X(t_0) \cos\{\omega_0(t_1 - t_0)\} . \qquad (8.56)$$

It is interesting to observe that the distribution of A doesn't play a role in this
equation.

To evaluate the mean square error associated with the LMMSE estimator, we
compute the correlation coefficient between the samples of the random signal at $t_0$ and
$t_1$. It is easily seen that $\rho = a = \cos\{\omega_0(t_1 - t_0)\}$, so the mean square error is

$$\frac{E[A^2]}{2}\Big( 1 - \cos^2\{\omega_0(t_1 - t_0)\} \Big) = \frac{E[A^2]}{2} \sin^2\{\omega_0(t_1 - t_0)\} . \qquad (8.57)$$
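A simulation, added here, illustrates that the slope of the best linear predictor depends only on $\cos\{\omega_0(t_1 - t_0)\}$ and not on the amplitude distribution; the Rayleigh amplitude and the time values below are arbitrary choices.

```python
# Simulation sketch (added) for Example 8.7.
import numpy as np

rng = np.random.default_rng(7)
n = 2_000_000
w0, t0, t1 = 2.0, 0.3, 1.1
A = rng.rayleigh(scale=1.5, size=n)            # any amplitude distribution works
theta = rng.uniform(0.0, 2 * np.pi, n)

X0 = A * np.cos(w0 * t0 + theta)
X1 = A * np.cos(w0 * t1 + theta)

a = np.mean(X1 * X0) / np.mean(X0 ** 2)        # both means are zero, so b = 0
print("empirical a:", a, "  cos(w0*(t1 - t0)):", np.cos(w0 * (t1 - t0)))
print("empirical MSE:", np.mean((X1 - a * X0) ** 2),
      " formula (8.57):", 0.5 * np.mean(A ** 2) * np.sin(w0 * (t1 - t0)) ** 2)
```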
We now extend the LMMSE estimator to the case where our estimation of a random
variable Y is based on observations of multiple random variables, say $X_1, \ldots, X_L$,
gathered in the vector X. The affine estimator may then be written in the form

$$\hat{Y} = \hat{y}_{\ell}(X) = a_0 + \sum_{j=1}^{L} a_j X_j . \qquad (8.58)$$

As we shall see, the coefficients $a_i$ of this LMMSE estimator can be found by solving
a linear system of equations that is completely defined by the first and second
moments (i.e., means, variances and covariances) of the random variables Y and
$X_j$. The fact that the model (8.58) is linear in the parameters $a_i$ is what results in a
linear system of equations; the fact that the model is affine in the random variables
is what makes the solution only depend on their first and second moments. Linear
equations are easy to solve, and first and second moments are generally easy to
determine, hence the popularity of LMMSE estimation.
The development below follows along the same lines as that done earlier in this
section for the case where we just had a single observed random variable X, but
we use the opportunity to review the logic of the development and to provide a few
additional insights.
We want to minimize the mean square error
$$E\Big[ \Big( Y - \big( a_0 + \sum_{j=1}^{L} a_j X_j \big) \Big)^2 \Big] , \qquad (8.59)$$
where the expectation is computed using the joint density of Y and X. We use the
joint density rather than the conditional because the parameters are not going to
be picked to be best for a particular set of measured values x otherwise we could
do
as well as the nonlinear estimate in this case, by setting a
0
= E[Y X = x] and |
setting all the other a
i
to zero. Instead, we are picking the parameters to be the best
averaged over all possible X. The linear estimator will in general not be as good
°Alan V. Oppenheim and George C. Verghese, 2010
c
Section 8.3 Linear Minimum Mean Square Error Estimation 157
as the unconstrained estimator, except in special cases (some of them important,
as in the case of bivariate Gaussian random variables) but this estimator has the
advantage that it is easy to solve for, as we now show.
To minimize the expression in (8.59), we differentiate it with respect to $a_i$ for
$i = 0, 1, \cdots, L$, and set each of the derivatives to 0. (Again, calculations involving
second derivatives establish that we do indeed obtain minimizing values, but we
omit these calculations here.) First differentiating with respect to $a_0$ and setting
the result to 0, we conclude that

$$E[Y] = E\Big[ a_0 + \sum_{j=1}^{L} a_j X_j \Big] = E[\hat{Y}] \qquad (8.60)$$

or

$$a_0 = \mu_Y - \sum_{j=1}^{L} a_j \mu_{X_j} , \qquad (8.61)$$

where $\mu_Y = E[Y]$ and $\mu_{X_j} = E[X_j]$. This optimum value of $a_0$ serves to make the
linear estimator unbiased, in the sense that (8.60) holds, i.e., the expected value of
the estimator is the expected value of the random variable we are trying to estimate.

Using (8.61) to substitute for $a_0$ in (8.58), it follows that

$$\hat{Y} = \mu_Y + \sum_{j=1}^{L} a_j (X_j - \mu_{X_j}) . \qquad (8.62)$$
In other words, the estimator corrects the expected value $\mu_Y$ of the variable we
are estimating, by a linear combination of the deviations $X_j - \mu_{X_j}$ between the
measured random variables and their respective expected values.
Taking account of (
8.62), we can rewrite our mean square error criterion (8.59) as
$$E[\{(Y - \mu_Y) - (\hat{Y} - \mu_Y)\}^2] = E\Big[ \Big( \widetilde{Y} - \sum_{j=1}^{L} a_j \widetilde{X}_j \Big)^2 \Big] , \qquad (8.63)$$

where

$$\widetilde{Y} = Y - \mu_Y \quad \text{and} \quad \widetilde{X}_j = X_j - \mu_{X_j} . \qquad (8.64)$$

Differentiating this with respect to each of the remaining coefficients $a_i$, $i = 1, 2, \ldots, L$,
and setting the result to zero produces the equations

$$E\Big[ \Big( \widetilde{Y} - \sum_{j=1}^{L} a_j \widetilde{X}_j \Big) \widetilde{X}_i \Big] = 0 , \qquad i = 1, 2, \ldots, L , \qquad (8.65)$$

or equivalently, if we again take account of (8.62),

$$E[(Y - \hat{Y})\widetilde{X}_i] = 0 , \qquad i = 1, 2, \ldots, L . \qquad (8.66)$$

Yet another version follows on noting from (8.60) that $Y - \hat{Y}$ is orthogonal to all
constants, in particular to $\mu_{X_i}$, so

$$E[(Y - \hat{Y})X_i] = 0 , \qquad i = 1, 2, \ldots, L . \qquad (8.67)$$
All three of the preceding sets of equations express, in slightly different forms, the
orthogonality of the estimation error to the random variables used in the estimator.
One
moves between these forms by invoking the unbiasedness of the estimator.
The last of these, (
8.67), is the usual statement of the orthogonality condition that
governs the LMMSE estimator. (Note once more that the statement in the case of
the MMSE estimator in the previous section was considerably stronger, namely that
the error was orthogonal to any function h(X) of the measured random variables,
not just to the random variables themselves.) Rewriting this last equation as
$$E[Y X_i] = E[\hat{Y} X_i] , \qquad i = 1, 2, \ldots, L , \qquad (8.68)$$

yields an equivalent statement of the orthogonality condition, namely that the
LMMSE estimator $\hat{Y}$ has the same correlations as Y with the measured variables
$X_i$.

The orthogonality and unbiasedness conditions together determine the LMMSE
estimator completely. Also, the preceding development shows that the first and
second moments of Y and the $X_i$ are exactly matched by the corresponding first
and second moments of $\hat{Y}$ and the $X_i$. It follows that Y and $\hat{Y}$ cannot be told
apart on the basis of only first and second moments with the measured variables
$X_i$.
We focus now on (8.65), because it provides the best route to a solution for the
coefficients $a_j$, $j = 1, \ldots, L$. This set of equations can be expressed as

$$\sum_{j=1}^{L} \sigma_{X_i X_j} a_j = \sigma_{X_i Y} , \qquad (8.69)$$

where $\sigma_{X_i X_j}$ is the covariance of $X_i$ and $X_j$ (so $\sigma_{X_i X_i}$ is just the variance $\sigma_{X_i}^2$),
and $\sigma_{X_i Y}$ is the covariance of $X_i$ and Y. Collecting these equations in matrix form,
we obtain
$$\begin{bmatrix} \sigma_{X_1 X_1} & \sigma_{X_1 X_2} & \cdots & \sigma_{X_1 X_L} \\ \sigma_{X_2 X_1} & \sigma_{X_2 X_2} & \cdots & \sigma_{X_2 X_L} \\ \vdots & \vdots & & \vdots \\ \sigma_{X_L X_1} & \sigma_{X_L X_2} & \cdots & \sigma_{X_L X_L} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_L \end{bmatrix} = \begin{bmatrix} \sigma_{X_1 Y} \\ \sigma_{X_2 Y} \\ \vdots \\ \sigma_{X_L Y} \end{bmatrix} . \qquad (8.70)$$
This set of equations is referred to as the normal equations. We can rewrite the
normal equations in more compact matrix notation:
$$(C_{XX})\, a = C_{XY} , \qquad (8.71)$$

where the definitions of $C_{XX}$, $a$, and $C_{XY}$ should be evident on comparing the last
two equations. The solution of this set of L equations in L unknowns yields the
$\{a_j\}$ for $j = 1, \cdots, L$, and these values may be substituted in (8.62) to completely
specify the estimator. In matrix notation, the solution is

$$a = (C_{XX})^{-1} C_{XY} . \qquad (8.72)$$

It can be shown quite straightforwardly (though we omit the demonstration) that
the minimum mean square error obtained with the LMMSE estimator is

$$\sigma_Y^2 - C_{YX} (C_{XX})^{-1} C_{XY} = \sigma_Y^2 - C_{YX}\, a , \qquad (8.73)$$

where $C_{YX}$ is the transpose of $C_{XY}$.
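In practice the normal equations are solved numerically. The sketch below is an added illustration (the joint distribution used to generate the data is arbitrary): it estimates the covariances from samples, solves (8.71) for the coefficients, and compares the MMSE formula (8.73) with the empirical squared error.

```python
# A minimal sketch (added) of LMMSE estimation via the normal equations
# (8.70)-(8.73), using numpy. Only first and second moments of (Y, X1..XL) matter.
import numpy as np

rng = np.random.default_rng(8)
n, L = 500_000, 3
Xs = rng.standard_normal((n, L)) @ rng.standard_normal((L, L))  # correlated X's
Y = Xs @ np.array([0.5, -1.0, 2.0]) + 0.3 * rng.standard_normal(n) + 4.0

C = np.cov(np.column_stack([Xs, Y]), rowvar=False)
C_XX, C_XY = C[:L, :L], C[:L, L]
a = np.linalg.solve(C_XX, C_XY)                      # (8.72)
a0 = Y.mean() - Xs.mean(axis=0) @ a                  # (8.61)

mmse_formula = C[L, L] - C_XY @ a                    # (8.73)
mmse_empirical = np.mean((Y - (a0 + Xs @ a)) ** 2)
print(a0, a)
print(mmse_formula, mmse_empirical)                  # both ~0.09
```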
EXAMPLE 8.8 Estimation from Two Noisy Measurements
[FIGURE 8.9: Illustration of relationship between random variables from Eq. (8.75)
for Example 8.8.]
Assume that Y, $R_1$ and $R_2$ are mutually uncorrelated, and that $R_1$ and $R_2$ have zero
means and equal variances. We wish to find the linear MMSE estimator for Y, given
measurements of $X_1$ and $X_2$. This estimator takes the form $\hat{Y} = a_0 + a_1 X_1 + a_2 X_2$.
Our requirement that $\hat{Y}$ be unbiased results in the constraint

$$a_0 = \mu_Y - a_1 \mu_{X_1} - a_2 \mu_{X_2} = \mu_Y (1 - a_1 - a_2) . \qquad (8.74)$$

Next, we need to write down the normal equations, for which some preliminary
calculations are required. Since

$$X_1 = Y + R_1 , \qquad X_2 = Y + R_2 , \qquad (8.75)$$

and Y, $R_1$ and $R_2$ are mutually uncorrelated, we find

$$E[X_i^2] = E[Y^2] + E[R_i^2] ,$$
$$E[X_1 X_2] = E[Y^2] ,$$
$$E[X_i Y] = E[Y^2] . \qquad (8.76)$$
The normal equations for this case thus become

$$\begin{bmatrix} \sigma_Y^2 + \sigma_R^2 & \sigma_Y^2 \\ \sigma_Y^2 & \sigma_Y^2 + \sigma_R^2 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sigma_Y^2 \\ \sigma_Y^2 \end{bmatrix} , \qquad (8.77)$$

from which we conclude that

$$\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \frac{1}{(\sigma_Y^2 + \sigma_R^2)^2 - \sigma_Y^4} \begin{bmatrix} \sigma_Y^2 + \sigma_R^2 & -\sigma_Y^2 \\ -\sigma_Y^2 & \sigma_Y^2 + \sigma_R^2 \end{bmatrix} \begin{bmatrix} \sigma_Y^2 \\ \sigma_Y^2 \end{bmatrix}
= \frac{\sigma_Y^2}{2\sigma_Y^2 + \sigma_R^2} \begin{bmatrix} 1 \\ 1 \end{bmatrix} . \qquad (8.78)$$

Finally, therefore,

$$\hat{Y} = \frac{\sigma_R^2\, \mu_Y + \sigma_Y^2 (X_1 + X_2)}{2\sigma_Y^2 + \sigma_R^2} , \qquad (8.79)$$

and applying (8.73) we get that the associated minimum mean square error (MMSE)
is

$$\frac{\sigma_Y^2\, \sigma_R^2}{2\sigma_Y^2 + \sigma_R^2} . \qquad (8.80)$$

One can easily check that both the estimator and the associated MMSE take
reasonable values at extreme ranges of the signal-to-noise ratio $\sigma_Y^2 / \sigma_R^2$.
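As an added numerical check of (8.79) and (8.80), for one arbitrary choice of parameters ($\mu_Y = 2$, $\sigma_Y^2 = 4$, $\sigma_R^2 = 1$), the sketch below simulates the two noisy measurements and evaluates the estimator; the Gaussian shapes are just a convenient way to generate variables with the stated means and variances.

```python
# Numerical check (added) of Example 8.8: with mu_Y = 2, sigma_Y^2 = 4 and
# sigma_R^2 = 1, formula (8.80) gives an MMSE of 4/9.
import numpy as np

rng = np.random.default_rng(9)
n = 2_000_000
mu_Y, sig2_Y, sig2_R = 2.0, 4.0, 1.0
Y = mu_Y + np.sqrt(sig2_Y) * rng.standard_normal(n)
X1 = Y + np.sqrt(sig2_R) * rng.standard_normal(n)
X2 = Y + np.sqrt(sig2_R) * rng.standard_normal(n)

Y_hat = (sig2_R * mu_Y + sig2_Y * (X1 + X2)) / (2 * sig2_Y + sig2_R)   # (8.79)
print("empirical MSE: ", np.mean((Y - Y_hat) ** 2))
print("formula (8.80):", sig2_Y * sig2_R / (2 * sig2_Y + sig2_R))      # 4/9
```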
MIT OpenCourseWare (http://ocw.mit.edu)
6.011 Introduction to Communication, Control, and Signal Processing, Spring 2010
© Alan V. Oppenheim and George C. Verghese, 2010
For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.