Principles of Statistics 2016-2017 Example Sheet 3

PRINCIPLES OF STATISTICS – EXAMPLES 3/4

Part II, Michaelmas 2016, Quentin Berthet (email: q.berthet@statslab.cam.ac.uk)

Throughout, for observations X arising from a parametric model {f(·, θ) : θ ∈ Θ}, Θ ⊆ ℝ, the quadratic risk of a decision rule δ(X) is defined to be R(δ, θ) = Eθ(δ(X) − θ)².

1. Consider X|θ ∼ Poisson(θ), θ ∈ Θ = [0, ∞), and suppose the prior for θ is a Gamma distribution with parameters α, λ. Show that the posterior distribution θ|X is also a Gamma distribution and find its parameters.
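[A numerical sanity check of the conjugacy claimed in Question 1 — a sketch, not part of the sheet. The parameter values α = 2, λ = 3 and observation x = 4 are hypothetical, and the shape/rate parametrisation of the Gamma distribution is an assumption; the grid-normalised product of prior and likelihood should then match the Gamma(α + x, λ + 1) density.]

```python
import math

# Hypothetical values for the prior parameters and the observed count.
alpha, lam, x = 2.0, 3.0, 4

def gamma_logpdf(t, a, l):
    # Gamma log-density in the shape/rate parametrisation (an assumption;
    # the sheet does not say which parametrisation alpha, lambda refer to).
    return a * math.log(l) - math.lgamma(a) + (a - 1) * math.log(t) - l * t

def poisson_logpmf(k, t):
    return -t + k * math.log(t) - math.lgamma(k + 1)

# Unnormalised posterior (prior x likelihood) on a grid, renormalised.
dt = 0.001
grid = [dt * (i + 1) for i in range(20000)]          # theta in (0, 20]
log_post = [gamma_logpdf(t, alpha, lam) + poisson_logpmf(x, t) for t in grid]
m = max(log_post)
w = [math.exp(v - m) for v in log_post]
Z = sum(w) * dt

# Candidate closed form: Gamma(alpha + x, lambda + 1).
max_err = max(abs(wi / Z - math.exp(gamma_logpdf(t, alpha + x, lam + 1)))
              for wi, t in zip(w, grid))
print(max_err)
```

The grid renormalisation never uses the Gamma(α + x, λ + 1) normalising constant, so the pointwise agreement really does test the conjugate update rather than assume it.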

2. For n ∈ ℕ fixed, suppose X is binomially Bin(n, θ)-distributed where θ ∈ Θ = [0, 1].
a) Consider a prior for θ from a Beta(a, b), a, b > 0, distribution. Show that the posterior distribution is Beta(a + X, b + n − X) and compute the posterior mean θ̄n(X) = E(θ|X).
b) Show that the maximum likelihood estimator for θ is not identical to the posterior mean with 'ignorant' uniform prior θ ∼ U[0, 1].
c) Assuming that X is sampled from a fixed Bin(n, θ0), θ0 ∈ (0, 1), distribution, derive the asymptotic distribution of

√n(θ̄n(X) − θ0) as n → ∞.

3. Let X1, ..., Xn be i.i.d. copies of a random variable X and consider the Bayesian model X|θ ∼ N(θ, 1) with prior π as θ ∼ N(μ, v²). For 0 < α < 1, consider the credible interval

Cn = {θ ∈ ℝ : |θ − Eπ(θ|X1, ..., Xn)| ≤ Rn}

where Rn is chosen such that π(Cn|X1, ..., Xn) = 1 − α. Now assume X ∼ N(θ0, 1) for some fixed θ0 ∈ ℝ, and show that, as n → ∞, P^ℕ_θ0(θ0 ∈ Cn) → 1 − α.
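[A simulation sketch of the frequentist-coverage statement in Question 3, under hypothetical values θ0 = 0.7, μ = 0, v² = 1, α = 0.05: for the normal-normal model the posterior is Gaussian, so Cn is the posterior mean ± z·(posterior sd), and its coverage under the true θ0 should be close to 1 − α for large n.]

```python
import random, math

random.seed(0)
theta0, mu, v2 = 0.7, 0.0, 1.0    # hypothetical true value and prior
n, reps = 200, 2000
z = 1.959964                      # N(0,1) two-sided quantile for alpha = 0.05

hits = 0
for _ in range(reps):
    xbar = theta0 + random.gauss(0, 1) / math.sqrt(n)   # Xbar_n ~ N(theta0, 1/n)
    post_var = 1.0 / (n + 1.0 / v2)
    post_mean = (n * xbar + mu / v2) * post_var
    # Credible interval Cn: posterior mean ± z * posterior sd
    hits += abs(theta0 - post_mean) <= z * math.sqrt(post_var)
coverage = hits / reps
print(coverage)
```

Only the sample mean needs to be simulated, since the posterior mean and variance depend on the data through X̄n alone.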

4. In a general decision problem, show that a) a decision rule δ that has constant risk and is admissible is also minimax; b) any unique Bayes rule is admissible.

5. Consider an observation X from a parametric model {f(·, θ) : θ ∈ Θ} with prior π on Θ ⊆ ℝ and a general risk function R(δ, θ) = EθL(δ(X), θ). Assume that there exists some decision rule δ0 such that R(δ0, θ) < ∞ for all θ ∈ Θ.
a) For the loss function L(a, θ) = |a − θ| show that the Bayes rule associated to π equals any median of the posterior distribution π(·|X).
b) For weight function w : Θ → [0, ∞) and loss function L(a, θ) = w(θ)[a − θ]² show that the Bayes rule δπ associated to π is unique and equals

δπ(X) = Eπ[w(θ)θ|X] / Eπ[w(θ)|X],

assuming that the expectations in the last ratio exist, are finite, and that Eπ[w(θ)|X] > 0.
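[A toy check of the ratio formula in Question 5 b). The four-point posterior and the weight function w(θ) = 1 + θ² below are arbitrary choices for illustration: a grid search over actions a should locate its minimiser of the posterior expected weighted loss at the ratio of weighted posterior moments.]

```python
# Hypothetical discrete posterior over a few parameter values.
thetas = [0.0, 0.5, 1.0, 2.0]
post = [0.1, 0.4, 0.3, 0.2]      # posterior probabilities pi(theta | X)

def w(t):
    return 1.0 + t * t           # an arbitrary nonnegative weight function

# Candidate Bayes rule from the exercise: ratio of weighted posterior moments.
num = sum(p * w(t) * t for p, t in zip(post, thetas))
den = sum(p * w(t) for p, t in zip(post, thetas))
a_star = num / den

def expected_loss(a):
    # Posterior expected loss E_pi[ w(theta) (a - theta)^2 | X ].
    return sum(p * w(t) * (a - t) ** 2 for p, t in zip(post, thetas))

# Grid search over actions confirms a_star minimises the expected loss.
grid = [i / 1000 for i in range(-1000, 3001)]
a_grid = min(grid, key=expected_loss)
print(a_star, a_grid)
```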

6. a) Considering X1, ..., Xn i.i.d. from a N(θ, 1)-model with θ ∈ Θ = ℝ, show that the maximum likelihood estimator is not a Bayes rule for estimating θ in quadratic risk for any prior distribution π.
b) Let X ∼ Bin(n, θ) where θ ∈ Θ = [0, 1]. Find all prior distributions π on Θ for which the maximum likelihood estimator is a Bayes rule for estimating θ in quadratic risk.

7. Consider estimation of θ ∈ Θ = [0, 1] in a Bin(n, θ) model under quadratic risk.
a) Find the unique minimax estimator θ̃n of θ and deduce that the maximum likelihood estimator θ̂n of θ is not minimax for fixed sample size n ∈ ℕ. [Hint: Find first a Bayes rule of risk constant in θ ∈ Θ.]


b) Show, however, that the maximum likelihood estimator dominates θ̃n in the large sample limit by proving that, as n → ∞,

lim_n supθ R(θ̂n, θ) / supθ R(θ̃n, θ) = 1

and that

lim_n R(θ̂n, θ) / R(θ̃n, θ) < 1 for all θ ∈ [0, 1], θ ≠ 1/2.
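[An exact numerical illustration for Question 7 (a sketch giving part of the answer away, so skip it if attempting the exercise first). Following the hint, a natural candidate is the posterior mean under a Beta(√n/2, √n/2) prior, θ̃n(X) = (X + √n/2)/(n + √n); its quadratic risk, computed exactly by summing over the binomial pmf, is constant in θ, while the MLE's risk is larger at θ = 1/2 and smaller away from it.]

```python
import math

n = 50
rootn = math.sqrt(n)

def binom_pmf(k, n, p):
    if p == 0.0:
        return 1.0 if k == 0 else 0.0
    if p == 1.0:
        return 1.0 if k == n else 0.0
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def risk(est, theta):
    # Exact quadratic risk E_theta (est(X) - theta)^2 under Bin(n, theta).
    return sum(binom_pmf(k, n, theta) * (est(k) - theta) ** 2
               for k in range(n + 1))

def mle(k):
    return k / n

def tilde(k):
    # Posterior mean under a Beta(sqrt(n)/2, sqrt(n)/2) prior -- the
    # hinted constant-risk candidate.
    return (k + rootn / 2) / (n + rootn)

thetas = [i / 20 for i in range(21)]
tilde_risks = [risk(tilde, t) for t in thetas]
spread = max(tilde_risks) - min(tilde_risks)       # should be ~0 (constant risk)

# Risk ratios: MLE worse at theta = 1/2, better away from 1/2.
r_half = risk(mle, 0.5) / risk(tilde, 0.5)
r_off = risk(mle, 0.2) / risk(tilde, 0.2)
print(spread, r_half, r_off)
```

The constant value of the risk here is n / (4(n + √n)²), which can be checked against `tilde_risks` directly.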

8. Consider X1, ..., Xn i.i.d. from a N(θ, 1)-model where θ ∈ Θ = [0, ∞). Show that the sample mean X̄n is inadmissible for quadratic risk, but that it is still minimax. What happens if Θ = [a, b] for some 0 < a < b < ∞?
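[A Monte Carlo hint for the inadmissibility claim in Question 8: projecting X̄n onto the parameter space, i.e. using max(X̄n, 0), is a natural dominating candidate. The sketch below estimates both risks at the boundary point θ = 0, where the improvement is largest; n = 20 and the replication count are arbitrary.]

```python
import random, math

random.seed(1)
n, reps = 20, 100000

def risks(theta):
    # Monte Carlo quadratic risk of Xbar_n and of its projection
    # max(Xbar_n, 0) onto the parameter space [0, infinity).
    se_mean = se_proj = 0.0
    for _ in range(reps):
        xbar = theta + random.gauss(0, 1) / math.sqrt(n)
        se_mean += (xbar - theta) ** 2
        se_proj += (max(xbar, 0.0) - theta) ** 2
    return se_mean / reps, se_proj / reps

r_mean0, r_proj0 = risks(0.0)
print(r_mean0, r_proj0)   # r_mean0 ~ 1/n; r_proj0 is strictly smaller
```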

9. Consider X from a multivariate normal N(θ, I) where θ ∈ Θ = ℝ^p, p ≥ 3, and where I is the p × p identity matrix. Consider estimators

θ̃(c) = (1 − c(p − 2)/‖X‖²) X,  0 < c < 2,

for θ, under the risk function R(δ, θ) = Eθ‖δ(X) − θ‖², where ‖·‖ is the standard Euclidean norm on ℝ^p.
a) Show that the James-Stein estimator θ̃(1) dominates all estimators θ̃(c), c ≠ 1.
b) Let θ̂ be the maximum likelihood estimator of θ. Show that, for any 0 < c < 2,

supθ∈Θ R(θ̃(c), θ) = supθ∈Θ R(θ̂, θ).
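[A Monte Carlo sketch of the Stein phenomenon in Question 9, at the hypothetical choices p = 10 and θ = (1, ..., 1): the estimated risk of θ̃(1) should fall below that of θ̃(1/2), which in turn falls below the MLE's risk p.]

```python
import random

random.seed(2)
p, reps = 10, 20000
theta = [1.0] * p                 # a hypothetical true mean vector

def shrink(x, c):
    # theta_tilde(c) = (1 - c (p - 2) / ||x||^2) x
    s = 1.0 - c * (p - 2) / sum(xi * xi for xi in x)
    return [s * xi for xi in x]

def sq_err(a):
    return sum((ai - ti) ** 2 for ai, ti in zip(a, theta))

r_mle = r_js = r_half = 0.0
for _ in range(reps):
    x = [t + random.gauss(0, 1) for t in theta]   # X ~ N(theta, I)
    r_mle += sq_err(x)                            # MLE: theta_hat = X
    r_js += sq_err(shrink(x, 1.0))                # James-Stein, c = 1
    r_half += sq_err(shrink(x, 0.5))              # a suboptimal c
r_mle, r_js, r_half = r_mle / reps, r_js / reps, r_half / reps
print(r_mle, r_js, r_half)
```

This only probes one point of the parameter space, of course; part a) asks for dominance at every θ, and part b) explains why the improvement vanishes in the worst case over Θ.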

10. For σ² a fixed positive constant, consider X1, ..., Xn|θ i.i.d. N(θ, σ²) with prior distribution θ ∼ N(μ, v²), μ ∈ ℝ, v² > 0. Show that the posterior distribution of θ given the observations is

θ|X1, ..., Xn ∼ N( (nX̄/σ² + μ/v²) / (n/σ² + 1/v²), 1 / (n/σ² + 1/v²) ),

where X̄ = (1/n) ∑_{i=1}^n Xi.
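[A grid-based check of the posterior formula in Question 10, with hypothetical hyperparameters μ = 0.5, v² = 2, σ² = 1.5 and a simulated sample of size n = 30: the mean and variance of the numerically normalised prior-times-likelihood should match the closed form.]

```python
import math, random

random.seed(3)
mu, v2, sigma2, n = 0.5, 2.0, 1.5, 30    # hypothetical hyperparameters
theta_true = 1.0
xs = [theta_true + random.gauss(0, math.sqrt(sigma2)) for _ in range(n)]
xbar = sum(xs) / n

# Closed form from the exercise: precision-weighted combination.
prec = n / sigma2 + 1 / v2
post_mean = (n * xbar / sigma2 + mu / v2) / prec
post_var = 1 / prec

# Grid check: prior x likelihood, renormalised, compared moment-wise.
dt = 0.001
grid = [-5 + i * dt for i in range(10001)]

def log_post(t):
    lp = -(t - mu) ** 2 / (2 * v2)                       # N(mu, v2) prior
    lp -= sum((x - t) ** 2 for x in xs) / (2 * sigma2)   # N(t, sigma2) likelihood
    return lp

m = max(log_post(t) for t in grid)
w = [math.exp(log_post(t) - m) for t in grid]
Z = sum(w)
g_mean = sum(t * wi for t, wi in zip(grid, w)) / Z
g_var = sum((t - g_mean) ** 2 * wi for t, wi in zip(grid, w)) / Z
print(post_mean, g_mean, post_var, g_var)
```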

11. Consider X1, ..., Xn|μ, σ² i.i.d. N(μ, σ²) with improper prior density π(μ, σ) proportional to σ⁻² (constant in μ). Argue that the resulting 'posterior distribution' has a density proportional to

σ^{−(n+2)} exp{ −(1/(2σ²)) ∑_{i=1}^n (Xi − μ)² },

and that thus the distribution of μ|σ², X1, ..., Xn is N(X̄, σ²/n), where X̄ = (1/n) ∑_{i=1}^n Xi. For 0 < α < 1 and assuming σ² is known, construct a level 1 − α credible set for the posterior distribution μ|σ², X1, ..., Xn that is also an exact level 1 − α (frequentist) confidence set.
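[A simulation sketch for Question 11. Since μ|σ², X1, ..., Xn is N(X̄, σ²/n), the interval X̄ ± z·σ/√n carries posterior probability 1 − α and is also the classical z-interval, so (unlike the asymptotic statement in Question 3) its frequentist coverage is exact at every n. The values μ0 = 2, σ = 1.5, n = 25 are hypothetical.]

```python
import random, math

random.seed(4)
mu0, sigma, n, reps = 2.0, 1.5, 25, 4000
z = 1.959964                     # N(0,1) two-sided quantile, alpha = 0.05

hits = 0
for _ in range(reps):
    xbar = mu0 + random.gauss(0, sigma) / math.sqrt(n)
    # Interval with posterior probability 1 - alpha under N(xbar, sigma^2/n);
    # identical to the exact frequentist z-interval.
    hits += abs(xbar - mu0) <= z * sigma / math.sqrt(n)
coverage = hits / reps
print(coverage)
```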

12. Consider X1, ..., Xn i.i.d. from a N(θ, 1)-model with θ ∈ Θ = ℝ and recall the Hodges' estimator θ̃n, equal to the maximum likelihood estimator X̄n of θ whenever |X̄n| ≥ n^{−1/4} and zero otherwise. Recall the asymptotic distribution of √n(θ̃n − θ) as n → ∞ under Pθ for every θ ∈ Θ, and compare it to the asymptotic distribution of √n(X̄n − θ). Now compute the asymptotic maximal risk

lim_n supθ∈Θ Eθ[√n(Tn − θ)]²

for both Tn = X̄n and Tn = θ̃n.
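[A Monte Carlo illustration of where the maximal-risk blow-up in Question 12 comes from: at parameters of the order θ ≈ n^{−1/4}, the Hodges' estimator is thresholded to zero roughly half the time, contributing n·θ² = √n to the scaled risk, while X̄n keeps scaled risk 1. The sketch evaluates a single such troublesome θ at n = 10000.]

```python
import random, math

random.seed(5)
n, reps = 10000, 20000
theta = n ** -0.25               # the troublesome region: theta ~ n^(-1/4)

def hodges(xbar):
    # Hodges' estimator: threshold the sample mean at n^(-1/4).
    return xbar if abs(xbar) >= n ** -0.25 else 0.0

se_mle = se_hod = 0.0
for _ in range(reps):
    xbar = theta + random.gauss(0, 1) / math.sqrt(n)
    se_mle += n * (xbar - theta) ** 2          # scaled loss of Xbar_n
    se_hod += n * (hodges(xbar) - theta) ** 2  # scaled loss of Hodges'
r_mle, r_hod = se_mle / reps, se_hod / reps
print(r_mle, r_hod)   # scaled risk of Xbar_n stays near 1; Hodges' blows up
```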

