PRINCIPLES OF STATISTICS – EXAMPLES 4/4

Part II, Michaelmas 2016, Quentin Berthet (email: q.berthet@statslab.cam.ac.uk)

1. Consider classifying an observation of a random vector $X$ in $\mathbb{R}^p$ into either a $N(\mu_1, \Sigma)$ or a $N(\mu_2, \Sigma)$ population, where $\Sigma$ is a known nonsingular covariance matrix and where $\mu_1 \neq \mu_2$ are two distinct known mean vectors.

a) For a prior $\pi$ assigning probability $q$ to $\mu_1$ and $1-q$ to $\mu_2$, show that the Bayes classifier is unique and assigns $X$ to $N(\mu_1, \Sigma)$ whenever
\[
U \equiv D - \tfrac{1}{2}(\mu_1 + \mu_2)^T \Sigma^{-1} (\mu_1 - \mu_2)
\]
exceeds $\log((1-q)/q)$, where $D = X^T \Sigma^{-1} (\mu_1 - \mu_2)$ is the discriminant function.

b) Show that $U \sim N(\Delta^2/2, \Delta^2)$ whenever $X \sim N(\mu_1, \Sigma)$, and that $U \sim N(-\Delta^2/2, \Delta^2)$ whenever $X \sim N(\mu_2, \Sigma)$, where $\Delta$ is the Mahalanobis distance between $\mu_1$ and $\mu_2$ given by
\[
\Delta^2 = (\mu_1 - \mu_2)^T \Sigma^{-1} (\mu_1 - \mu_2).
\]

c) Show that a minimax classifier is obtained from selecting $N(\mu_1, \Sigma)$ whenever $U \geq 0$.
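As a quick numerical check of parts a) to c), the sketch below classifies simulated observations using the discriminant $U$ and compares the empirical error rate under $N(\mu_1, \Sigma)$ with $\Phi(-\Delta/2)$, which is what part b) predicts when $q = 1/2$. The particular values of $\mu_1$, $\mu_2$, $\Sigma$ and the sample size are arbitrary choices made for the illustration, not part of the problem.

```python
import numpy as np
from math import erf, sqrt, log

# Illustrative parameters (not specified by the problem).
rng = np.random.default_rng(0)
mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
q = 0.5  # prior weight on mu1; q = 1/2 gives the minimax rule U >= 0 of part c)

def classify(x):
    # D = X^T Sigma^{-1} (mu1 - mu2); U = D - (1/2)(mu1 + mu2)^T Sigma^{-1} (mu1 - mu2)
    D = x @ Sigma_inv @ (mu1 - mu2)
    U = D - 0.5 * (mu1 + mu2) @ Sigma_inv @ (mu1 - mu2)
    return 1 if U > log((1 - q) / q) else 2

# Empirical misclassification rate when X really comes from N(mu1, Sigma).
X = rng.multivariate_normal(mu1, Sigma, size=10_000)
err = np.mean([classify(x) != 1 for x in X])

Delta = sqrt((mu1 - mu2) @ Sigma_inv @ (mu1 - mu2))   # Mahalanobis distance
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))          # standard normal cdf
print(err, Phi(-Delta / 2))                           # the two should roughly agree
```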

2. Consider classification of an observation $X$ into a population described by a probability density equal to either $f_1$ or $f_2$. Assume $P_{f_i}(f_1(X)/f_2(X) = k) = 0$ for all $k \in [0, \infty]$, $i \in \{1, 2\}$. Show that any admissible classification rule is a Bayes classification rule for some prior $\pi$.

3. Based on an i.i.d. sample $X_1, \dots, X_n$, consider an estimator $T_n = T(X_1, \dots, X_n)$ of a parameter $\theta \in \mathbb{R}$. Suppose the bias function $B_n(\theta) = E T_n - \theta$ can be approximated as
\[
B_n(\theta) = \frac{a}{n} + \frac{b}{n^2} + O(n^{-3})
\]
for some real numbers $a, b$. Show that the jackknife bias-corrected estimate $\tilde{T}_n$ of $\theta$ based on $T_n$ satisfies $E\tilde{T}_n - \theta = O(n^{-2})$.
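A minimal numerical sketch of the standard jackknife correction $\tilde{T}_n = n T_n - (n-1)\bar{T}_{(\cdot)}$, where $\bar{T}_{(\cdot)}$ is the average of the leave-one-out estimates. Taking $T_n$ to be the plug-in variance estimator (whose bias is exactly of the $a/n$ form) is an assumption made for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=50)
n = len(x)

def T(sample):
    return np.mean((sample - sample.mean()) ** 2)   # plug-in variance, bias -sigma^2/n

Tn = T(x)
loo = np.array([T(np.delete(x, i)) for i in range(n)])   # leave-one-out estimates
T_jack = n * Tn - (n - 1) * loo.mean()                   # jackknife bias-corrected estimate

# For this particular T_n the correction recovers the unbiased sample variance.
print(Tn, T_jack, x.var(ddof=1))
```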

4. For $F : \mathbb{R} \to [0,1]$ a probability distribution function, define its generalised inverse $F^{-}(u) = \inf\{x : F(x) \geq u\}$, $u \in [0,1]$. If $U$ is a uniform $U[0,1]$ random variable, show that the random variable $F^{-}(U)$ has distribution function $F$.
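This construction is the basis of inverse-transform sampling. A minimal sketch, assuming $F$ is the $\mathrm{Exp}(1)$ distribution function so that $F^{-}(u) = -\log(1-u)$ is available in closed form:

```python
import numpy as np

# Inverse-transform sampling: F^-(U) with U ~ U[0,1] has distribution F.
rng = np.random.default_rng(2)
u = rng.uniform(size=100_000)
x = -np.log(1 - u)                       # F^-(u) for F(t) = 1 - exp(-t), t >= 0

t = 1.0
print(np.mean(x <= t), 1 - np.exp(-t))   # empirical cdf at t vs the true F(t)
```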

5. Let $f, g : \mathbb{R} \to [0, \infty)$ be bounded probability density functions such that $f(x) \leq M g(x)$ for all $x \in \mathbb{R}$ and some constant $M > 0$. Suppose you can simulate a random variable $X$ of density $g$ and a random variable $U$ from a uniform $U[0,1]$ distribution. Consider the following 'accept-reject' algorithm:

Step 1. Draw $X \sim g$, $U \sim U[0,1]$.
Step 2. Accept $Y = X$ if $U \leq f(X)/(M g(X))$, and return to Step 1 otherwise.

Show that $Y$ has density $f$.
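A minimal sketch of the accept-reject scheme, assuming $f$ is the Beta(2,2) density, $g$ the $U[0,1]$ density and $M = 3/2$ (so that $f \leq Mg$ on $[0,1]$); these concrete choices are made only for the illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return 6.0 * x * (1.0 - x)       # Beta(2,2) density on [0,1]

M = 1.5                              # sup of f/g, with g the U[0,1] density
samples = []
while len(samples) < 10_000:
    X = rng.uniform()                # Step 1: draw X ~ g and U ~ U[0,1]
    U = rng.uniform()
    if U <= f(X) / (M * 1.0):        # Step 2: accept Y = X if U <= f(X)/(M g(X))
        samples.append(X)

samples = np.array(samples)
print(samples.mean(), samples.var())  # should be close to 1/2 and 1/20 for Beta(2,2)
```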

6. Let $U_1, U_2$ be i.i.d. uniform $U[0,1]$ and define
\[
X_1 = \sqrt{-2\log(U_1)}\,\cos(2\pi U_2), \qquad X_2 = \sqrt{-2\log(U_1)}\,\sin(2\pi U_2).
\]
Show that $X_1, X_2$ are i.i.d. $N(0,1)$.
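This is the Box-Muller transform. A quick empirical check (the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
U1, U2 = rng.uniform(size=100_000), rng.uniform(size=100_000)
X1 = np.sqrt(-2 * np.log(U1)) * np.cos(2 * np.pi * U2)
X2 = np.sqrt(-2 * np.log(U1)) * np.sin(2 * np.pi * U2)

# Means ~ 0, variances ~ 1, empirical correlation ~ 0, as expected if X1, X2 are i.i.d. N(0,1).
print(X1.mean(), X2.mean(), X1.var(), X2.var(), np.corrcoef(X1, X2)[0, 1])
```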


7. Let $X_1, \dots, X_n$ be i.i.d. random variables drawn from a distribution $P$ with unknown mean $\mu$ and variance $\sigma^2$. Write $\bar{X}_n = n^{-1}\sum_{i=1}^n X_i$ for the sample mean, and let $\bar{X}_n^{b} = (1/n)\sum_{i=1}^n X_{ni}^{b}$ be the mean of a bootstrap sample $(X_{ni}^{b} : i = 1, \dots, n)$ drawn i.i.d. from the $X_i$'s. Choosing roots $R_n$ such that
\[
P_n\Big( |\bar{X}_n^{b} - \bar{X}_n| \leq \frac{R_n}{\sqrt{n}} \Big) = 1 - \alpha
\]
for some $0 < \alpha < 1$, let
\[
C_n^{b} = \Big\{ v \in \mathbb{R} : |\bar{X}_n - v| \leq \frac{R_n}{\sqrt{n}} \Big\}
\]
be the corresponding bootstrap confidence interval. Show that $R_n$ converges to a constant in $P^{\mathbb{N}}$-probability, and deduce further that $C_n^{b}$ is an exact asymptotic level $1-\alpha$ confidence set, that is, show that, as $n \to \infty$, $P^{\mathbb{N}}(\mu \in C_n^{b}) \to 1 - \alpha$.
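A sketch of this construction, where $R_n$ is approximated by the empirical $(1-\alpha)$-quantile of $\sqrt{n}\,|\bar{X}_n^{b} - \bar{X}_n|$ over Monte Carlo bootstrap resamples; the exponential data-generating distribution and the number of resamples are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, n, B = 0.05, 200, 2000
x = rng.exponential(scale=2.0, size=n)   # sample from an (assumed unknown) P with mean mu = 2
xbar = x.mean()

# Monte Carlo approximation of R_n: (1 - alpha)-quantile of sqrt(n) |Xbar_n^b - Xbar_n|.
boot_means = np.array([rng.choice(x, size=n, replace=True).mean() for _ in range(B)])
Rn = np.quantile(np.sqrt(n) * np.abs(boot_means - xbar), 1 - alpha)

ci = (xbar - Rn / np.sqrt(n), xbar + Rn / np.sqrt(n))   # the interval C_n^b
print(Rn, ci)
```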

8. Let $X_1, \dots, X_n$ be drawn i.i.d. from a continuous distribution function $F : \mathbb{R} \to [0,1]$, and let $F_n(t) = (1/n)\sum_{i=1}^n 1_{(-\infty, t]}(X_i)$ be the empirical distribution function. Use the Kolmogorov-Smirnov theorem to construct a confidence band for the unknown function $F$ of the form
\[
\{ C_n(x) = [F_n(x) - R_n, \, F_n(x) + R_n] : x \in \mathbb{R} \}
\]
that satisfies $P_F^{\mathbb{N}}(F(x) \in C_n(x)\ \forall x \in \mathbb{R}) \to 1 - \alpha$ as $n \to \infty$, and where $R_n = R/\sqrt{n}$ for some fixed quantile constant $R > 0$.
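A sketch of the resulting band, taking $R$ to be the $(1-\alpha)$-quantile of the Kolmogorov distribution (approximately $1.358$ for $\alpha = 0.05$); the $N(0,1)$ data are an assumption made for the example:

```python
import numpy as np

rng = np.random.default_rng(6)
n, R = 500, 1.358                 # R ~ (1 - alpha)-quantile of the Kolmogorov distribution, alpha = 0.05
x = np.sort(rng.normal(size=n))
Fn = np.arange(1, n + 1) / n      # empirical distribution function at the order statistics

lower = Fn - R / np.sqrt(n)
upper = Fn + R / np.sqrt(n)
# [lower[i], upper[i]] is the band C_n at x[i]; for large n the true F lies inside
# the band simultaneously for all x with probability about 1 - alpha.
print(lower[:3], upper[:3])
```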

9. Let $X_1, \dots, X_n$ be drawn i.i.d. from a differentiable probability density $f : \mathbb{R} \to [0, \infty)$, and assume that $\sup_{x \in \mathbb{R}}(|f(x)| + |f'(x)|) \leq 1$. Define the density estimator
\[
\hat{f}_n(x) = \frac{1}{n^{2/3}} \sum_{i=1}^n 1\{-1/2 \leq n^{1/3}(x - X_i) \leq 1/2\}, \qquad x \in \mathbb{R}.
\]
Show that, for every $x \in \mathbb{R}$ and every $n \in \mathbb{N}$,
\[
E|\hat{f}_n(x) - f(x)| \leq \frac{2}{n^{1/3}}.
\]
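A quick numerical illustration of this histogram-type estimator and the bound, evaluating $\hat{f}_n$ at a single point; the $N(0,1)$ density (which satisfies $\sup_x(|f(x)| + |f'(x)|) \leq 1$) and the sample size are choices made for the example:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
X = rng.normal(size=n)                    # f is the N(0,1) density here
x, h = 0.3, n ** (-1 / 3)                 # evaluation point and bin width h = n^{-1/3}

# fhat_n(x) = n^{-2/3} * #{i : -1/2 <= n^{1/3}(x - X_i) <= 1/2} = (1/(n h)) * #{i : |x - X_i| <= h/2}
fhat = np.sum(np.abs(x - X) <= h / 2) / (n * h)
f_true = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

print(abs(fhat - f_true), 2 / n ** (1 / 3))   # observed error vs the bound 2 / n^{1/3}
```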
