PRINCIPLES OF STATISTICS – EXAMPLES 3/4
Part II, Michaelmas 2016, Quentin Berthet (email: q.berthet@statslab.cam.ac.uk)
Throughout, for observations X arising from a parametric model {f(·, θ) : θ ∈ Θ}, Θ ⊆ R, the quadratic risk of a decision rule δ(X) is defined to be R(δ, θ) = Eθ[(δ(X) − θ)²].
1. Consider X | θ ∼ Poisson(θ), θ ∈ Θ = [0, ∞), and suppose the prior for θ is a Gamma distribution with parameters α, λ. Show that the posterior distribution of θ | X is also a Gamma distribution and find its parameters.
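As an aside, a Monte Carlo sanity check can accompany this exercise: weighting prior draws by the Poisson likelihood approximates posterior moments, which can then be compared with those of whatever Gamma posterior is derived. The rate parametrisation of Gamma(α, λ) and the particular values of α, λ and X below are assumptions of this sketch, not part of the sheet.

```python
import numpy as np
from scipy import stats

# Sketch: likelihood-weighted prior draws approximate the posterior of theta | X = x_obs.
rng = np.random.default_rng(0)
alpha, lam, x_obs = 2.0, 3.0, 4        # assumed prior parameters and observation

thetas = rng.gamma(shape=alpha, scale=1.0 / lam, size=500_000)  # theta ~ Gamma(alpha, lam), rate lam
weights = stats.poisson.pmf(x_obs, thetas)                      # likelihood f(x_obs | theta)
weights /= weights.sum()

post_mean = np.sum(weights * thetas)
post_var = np.sum(weights * thetas**2) - post_mean**2
# Compare with the mean and variance of the Gamma posterior derived in the exercise.
print(post_mean, post_var)
```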
2. For n ∈ N fixed, suppose X is binomially Bin(n, θ)-distributed, where θ ∈ Θ = [0, 1].
a) Consider a prior for θ from a Beta(a, b), a, b > 0, distribution. Show that the posterior distribution is Beta(a + X, b + n − X) and compute the posterior mean θ̄n(X) = E(θ | X).
b) Show that the maximum likelihood estimator for θ is not identical to the posterior mean with the ‘ignorant’ uniform prior θ ∼ U[0, 1].
c) Assuming that X is sampled from a fixed Bin(n, θ0), θ0 ∈ (0, 1), distribution, derive the asymptotic distribution of √n(θ̄n(X) − θ0) as n → ∞.
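For part c), a simulation can be used to check a derived limit: the sketch below (with made-up values of a, b, θ0 and n) draws X ∼ Bin(n, θ0) repeatedly and records √n(θ̄n(X) − θ0), whose empirical mean and variance can be compared with the asymptotic distribution obtained on paper.

```python
import numpy as np

# Sketch for 2(c): empirical law of sqrt(n) * (posterior mean - theta0).
rng = np.random.default_rng(1)
a, b = 2.0, 3.0                      # assumed Beta prior hyperparameters
theta0, n, reps = 0.3, 5_000, 100_000

x = rng.binomial(n, theta0, size=reps)
post_mean = (a + x) / (a + b + n)    # posterior mean from part a)
z = np.sqrt(n) * (post_mean - theta0)

# Compare with the limiting mean and variance derived in the exercise.
print(z.mean(), z.var())
```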
3. Let X1, . . . , Xn be i.i.d. copies of a random variable X, and consider the Bayesian model X | θ ∼ N(θ, 1) with prior π given by θ ∼ N(µ, v²). For 0 < α < 1, consider the credible interval
Cn = {θ ∈ R : |θ − Eπ(θ | X1, . . . , Xn)| ≤ Rn},
where Rn is chosen such that π(Cn | X1, . . . , Xn) = 1 − α. Now assume X ∼ N(θ0, 1) for some fixed θ0 ∈ R, and show that, as n → ∞, P^N_θ0(θ0 ∈ Cn) → 1 − α.
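The limiting frequentist coverage can also be explored numerically. The sketch below uses the standard Normal–Normal conjugate posterior, N((µ/v² + nX̄n)/(1/v² + n), (1/v² + n)⁻¹); the values of µ, v², θ0, n and α are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

# Sketch for problem 3: frequentist coverage of the level-(1 - alpha) credible interval.
rng = np.random.default_rng(2)
mu, v2 = 0.0, 4.0                     # assumed prior N(mu, v2)
theta0, n, alpha, reps = 1.5, 2_000, 0.05, 100_000

xbar = rng.normal(theta0, 1.0 / np.sqrt(n), size=reps)    # sufficient statistic X-bar | theta0
post_var = 1.0 / (1.0 / v2 + n)                           # conjugate posterior variance
post_mean = post_var * (mu / v2 + n * xbar)               # E_pi(theta | X_1, ..., X_n)
r_n = stats.norm.ppf(1 - alpha / 2) * np.sqrt(post_var)   # radius with posterior mass 1 - alpha

coverage = np.mean(np.abs(theta0 - post_mean) <= r_n)
print(coverage)   # should approach 1 - alpha = 0.95 as n grows
```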
4. In a general decision problem, show that: a) a decision rule δ that has constant risk and is admissible is also minimax; b) any unique Bayes rule is admissible.
5. Consider an observation X from a parametric model {f(·, θ) : θ ∈ Θ} with prior π on Θ ⊆ R, and a general risk function R(δ, θ) = Eθ[L(δ(X), θ)]. Assume that there exists some decision rule δ0 such that R(δ0, θ) < ∞ for all θ ∈ Θ.
a) For the loss function L(a, θ) = |a − θ|, show that the Bayes rule associated to π equals any median of the posterior distribution π(· | X).
b) For a weight function w : Θ → [0, ∞) and loss function L(a, θ) = w(θ)[a − θ]², show that the Bayes rule δπ associated to π is unique and equals
δπ(X) = Eπ[w(θ)θ | X] / Eπ[w(θ) | X],
assuming that the expectations in the last ratio exist, are finite, and that Eπ[w(θ) | X] > 0.
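Part b)'s formula lends itself to a grid check: for a discretised (entirely made-up) posterior density and weight function, the stated ratio should minimise a ↦ Eπ[w(θ)(a − θ)² | X] over a grid of actions.

```python
import numpy as np

# Sketch for 5(b): the weighted posterior mean minimises the posterior expected weighted loss.
theta = np.linspace(0.0, 1.0, 2001)
post = np.exp(-80 * (theta - 0.4) ** 2)     # unnormalised, made-up posterior density
post /= post.sum()
w = 1.0 + theta ** 2                        # made-up weight function

delta_pi = np.sum(post * w * theta) / np.sum(post * w)   # the ratio from part b)

a_grid = np.linspace(0.0, 1.0, 2001)
risk = ((a_grid[:, None] - theta[None, :]) ** 2 * (w * post)[None, :]).sum(axis=1)
print(delta_pi, a_grid[np.argmin(risk)])    # the two values should agree to grid accuracy
```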
6. a) Considering X1, . . . , Xn i.i.d. from a N(θ, 1)-model with θ ∈ Θ = R, show that the maximum likelihood estimator is not a Bayes rule for estimating θ in quadratic risk for any prior distribution π.
b) Let X ∼ Bin(n, θ), where θ ∈ Θ = [0, 1]. Find all prior distributions π on Θ for which the maximum likelihood estimator is a Bayes rule for estimating θ in quadratic risk.
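A numerical illustration of the phenomenon in a): since the Bayes rule in quadratic risk is the posterior mean, the MLE's Bayes risk can be compared with that of the posterior mean under one concrete prior (a made-up N(0, v²) prior below); the strict gap for this prior is consistent with the claim, though of course it does not replace the proof.

```python
import numpy as np

# Sketch for 6(a): Bayes risks of the MLE (sample mean) and the posterior mean
# under an assumed N(0, v2) prior, estimated by simulation.
rng = np.random.default_rng(3)
n, v2, reps = 20, 2.0, 200_000

theta = rng.normal(0.0, np.sqrt(v2), size=reps)       # theta ~ prior
xbar = rng.normal(theta, 1.0 / np.sqrt(n))            # sample mean | theta
post_mean = n * xbar / (n + 1.0 / v2)                 # posterior mean for prior N(0, v2)

print(np.mean((xbar - theta) ** 2),                   # Bayes risk of the MLE
      np.mean((post_mean - theta) ** 2))              # strictly smaller Bayes risk
```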
7. Consider estimation of θ ∈ Θ = [0, 1] in a Bin(n, θ) model under quadratic risk.
a) Find the unique minimax estimator θ̃n of θ and deduce that the maximum likelihood estimator θ̂n of θ is not minimax for fixed sample size n ∈ N. [Hint: First find a Bayes rule whose risk is constant in θ ∈ Θ.]
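For readers who have already solved the problem, a risk computation can confirm the hint's strategy. The sketch below takes the classical constant-risk candidate δ(X) = (X + √n/2)/(n + √n) as given (deriving it is the content of the exercise) and compares exact quadratic risks with the MLE's, using E(X) = nθ and Var(X) = nθ(1 − θ).

```python
import numpy as np

# Sketch for 7: exact risk curves of the MLE and the constant-risk candidate.
n = 25
theta = np.linspace(0.0, 1.0, 501)

risk_mle = theta * (1 - theta) / n               # R(X/n, theta), maximal at theta = 1/2

c = np.sqrt(n)                                   # bias-variance split of (X + c/2) / (n + c)
bias = (n * theta + c / 2) / (n + c) - theta
var = n * theta * (1 - theta) / (n + c) ** 2
risk_tilde = bias ** 2 + var

print(risk_tilde.min(), risk_tilde.max())        # flat risk profile, constant in theta
print(risk_mle.max() > risk_tilde.max())         # True: the MLE has a larger maximum risk
```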