Principles of Statistics 2016-2017 Example Sheet 4
University of Cambridge
PRINCIPLES OF STATISTICS – EXAMPLES 4/4
Part II, Michaelmas 2016, Quentin Berthet (email: q.berthetl@statslab.cam.ac.uk)
1. Consider classifying an observation of a random vector $X$ in $\mathbb{R}^p$ into either a $N(\mu_1,\Sigma)$ or a $N(\mu_2,\Sigma)$ population, where $\Sigma$ is a known nonsingular covariance matrix and where $\mu_1 \neq \mu_2$ are two distinct known mean vectors.

a) For a prior $\pi$ assigning probability $q$ to $\mu_1$ and $1-q$ to $\mu_2$, show that the Bayes classifier is unique and assigns $X$ to $N(\mu_1,\Sigma)$ whenever
$$U \equiv D - \tfrac{1}{2}(\mu_1+\mu_2)^T \Sigma^{-1} (\mu_1-\mu_2)$$
exceeds $\log((1-q)/q)$, where $D = X^T \Sigma^{-1}(\mu_1-\mu_2)$ is the discriminant function.
b) Show that $U \sim N(\Delta^2/2, \Delta^2)$ whenever $X \sim N(\mu_1,\Sigma)$, and that $U \sim N(-\Delta^2/2, \Delta^2)$ whenever $X \sim N(\mu_2,\Sigma)$, where $\Delta$ is the Mahalanobis distance between $\mu_1$ and $\mu_2$, given by
$$\Delta^2 = (\mu_1-\mu_2)^T \Sigma^{-1} (\mu_1-\mu_2).$$

c) Show that a minimax classifier is obtained by selecting $N(\mu_1,\Sigma)$ whenever $U \geq 0$.
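As a numerical illustration (not part of the question), the statistic $U$ from part a) can be evaluated directly; the mean vectors and covariance below are made-up example values:

```python
import numpy as np

# Illustrative sketch only: the statistic U from part a), with
# made-up example values for mu1, mu2 and Sigma.
mu1 = np.array([1.0, 0.0])
mu2 = np.array([-1.0, 0.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def U(x):
    # D = x^T Sigma^{-1} (mu1 - mu2), shifted by the midpoint term
    D = x @ Sigma_inv @ (mu1 - mu2)
    return D - 0.5 * (mu1 + mu2) @ Sigma_inv @ (mu1 - mu2)

q = 0.5  # prior probability assigned to the N(mu1, Sigma) population
rng = np.random.default_rng(0)
x = rng.multivariate_normal(mu1, Sigma)
# Bayes rule from part a): assign to population 1 when U exceeds log((1-q)/q)
assign_to_pop1 = U(x) > np.log((1 - q) / q)
```

With $q = 1/2$ the threshold $\log((1-q)/q)$ is $0$, matching the minimax rule in part c).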
2. Consider classification of an observation $X$ into a population described by a probability density equal to either $f_1$ or $f_2$. Assume $P_{f_i}(f_1(X)/f_2(X) = k) = 0$ for all $k \in [0,\infty]$, $i \in \{1,2\}$. Show that any admissible classification rule is a Bayes classification rule for some prior $\pi$.
3. Based on an i.i.d. sample $X_1, \ldots, X_n$, consider an estimator $T_n = T(X_1, \ldots, X_n)$ of a parameter $\theta \in \mathbb{R}$. Suppose the bias function $B_n(\theta) = E T_n - \theta$ can be approximated as
$$B_n(\theta) = \frac{a}{n} + \frac{b}{n^2} + O(n^{-3})$$
for some real numbers $a, b$. Show that the jackknife bias-corrected estimate $\tilde{T}_n$ of $\theta$ based on $T_n$ satisfies
$$E\tilde{T}_n - \theta = O(n^{-2}).$$
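A sketch of the leave-one-out jackknife correction $\tilde{T}_n = n T_n - \frac{n-1}{n}\sum_{i=1}^n T_{n-1}^{(i)}$ (standard form, not stated in the question), applied to the biased plug-in variance estimator, for which the correction is known to recover the unbiased sample variance exactly:

```python
import numpy as np

# Jackknife bias correction: T_tilde = n*T_n - (n-1)*mean_i T^{(i)},
# where T^{(i)} is the estimator recomputed with X_i left out.
def jackknife(T, x):
    n = len(x)
    Tn = T(x)
    loo = np.array([T(np.delete(x, i)) for i in range(n)])
    return n * Tn - (n - 1) * loo.mean()

# Example estimator: the plug-in variance, whose bias is exactly
# -sigma^2/n (so a = -sigma^2 and b = 0 in the notation above).
biased_var = lambda x: np.mean((x - np.mean(x)) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(size=50)
corrected = jackknife(biased_var, x)
```

For this particular estimator the jackknifed value coincides with the unbiased sample variance $s^2$, illustrating the removal of the $a/n$ bias term.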
4. For $F : \mathbb{R} \to [0,1]$ a probability distribution function, define its generalised inverse
$$F^{-}(u) = \inf\{x : F(x) \geq u\}, \quad u \in (0,1).$$
If $U$ is a uniform $U[0,1]$ random variable, show that the random variable $F^{-}(U)$ has distribution function $F$.
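This is the inverse-transform sampling method. As a quick sanity check (not part of the question), take $F$ the $\mathrm{Exp}(1)$ distribution function $F(x) = 1 - e^{-x}$, for which $F^{-}(u) = -\log(1-u)$ in closed form:

```python
import numpy as np

# Inverse-transform sampling for Exp(1): F(x) = 1 - exp(-x),
# so F^-(u) = -log(1 - u), and F^-(U) should be Exp(1).
rng = np.random.default_rng(2)
u = rng.uniform(size=100_000)
samples = -np.log1p(-u)   # log1p(-u) = log(1 - u), numerically stable
# The Exp(1) distribution has mean 1, so samples.mean() should be near 1.
```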
5. Let $f, g : \mathbb{R} \to [0,\infty)$ be bounded probability density functions such that $f(x) \leq M g(x)$ for all $x \in \mathbb{R}$ and some constant $M > 0$. Suppose you can simulate a random variable $X$ of density $g$ and a random variable $U$ from a uniform $U[0,1]$ distribution. Consider the following `accept-reject' algorithm:

Step 1. Draw $X \sim g$, $U \sim U[0,1]$.
Step 2. Accept $Y = X$ if $U \leq f(X)/(M g(X))$, and return to Step 1 otherwise.

Show that $Y$ has density $f$.
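The two steps above can be sketched directly. As an illustrative choice (not from the question), take $f$ the $\mathrm{Beta}(2,2)$ density $f(x) = 6x(1-x)$ on $[0,1]$ and $g$ the $U[0,1]$ density, so $f \leq Mg$ with $M = 3/2$ (the maximum of $f$, attained at $x = 1/2$):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 1.5
f = lambda x: 6.0 * x * (1.0 - x)   # Beta(2,2) density on [0,1]

def accept_reject(n):
    out = []
    while len(out) < n:
        x = rng.uniform()        # Step 1: X ~ g = U[0,1]
        u = rng.uniform()        # Step 1: U ~ U[0,1]
        if u <= f(x) / M:        # Step 2: accept when U <= f(X)/(M g(X)); g = 1 here
            out.append(x)
    return np.array(out)

y = accept_reject(50_000)
# Beta(2,2) has mean 1/2 and variance 1/20, which y should match closely.
```

Each proposal is accepted with probability $1/M$, so smaller $M$ (a tighter envelope) wastes fewer draws.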
6. Let $U_1, U_2$ be i.i.d. uniform $U[0,1]$ and define
$$X_1 = \sqrt{-2\log(U_1)}\,\cos(2\pi U_2), \qquad X_2 = \sqrt{-2\log(U_1)}\,\sin(2\pi U_2).$$
Show that $X_1, X_2$ are i.i.d. $N(0,1)$.
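This is the Box-Muller transform. A minimal sketch checking the claim empirically (not part of the question):

```python
import numpy as np

# Box-Muller: map two independent U[0,1] variables to two
# independent N(0,1) variables via polar coordinates.
rng = np.random.default_rng(4)
n = 100_000
u1 = rng.uniform(size=n)
u2 = rng.uniform(size=n)
r = np.sqrt(-2.0 * np.log(u1))       # radius: R^2 ~ Exp(1/2) = chi^2_2
theta = 2.0 * np.pi * u2             # angle: uniform on [0, 2*pi)
x1 = r * np.cos(theta)
x2 = r * np.sin(theta)
# x1 and x2 should each be standard normal and uncorrelated.
```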