24 Birthday Problems
Q1
Problem
[4 marks]
Let \(A\) and \(B\) be non-empty sets.
Part
[1 mark]
Give an expression for the conditional probability \(P(A|B)\)
Part
[1 mark]
Hence or otherwise derive Bayes' (first) rule for conditional probability:
\[P(A|B) = P(B|A) \times \frac{P(A)}{P(B)}\]
Part
[2 marks]
Prove that the symmetric difference \(A\Delta B = (A - B) \cup (B - A)\) is the same as \((A\cup B) - (A\cap B)\).
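Hint
Before proving the identity, it can be sanity-checked on small finite sets; a minimal Python sketch (the particular sets below are arbitrary examples):
```python
# Finite-set check of A Δ B = (A − B) ∪ (B − A) = (A ∪ B) − (A ∩ B).
A, B = {1, 2, 3, 4}, {3, 4, 5}

lhs = (A - B) | (B - A)   # (A − B) ∪ (B − A)
rhs = (A | B) - (A & B)   # (A ∪ B) − (A ∩ B)
assert lhs == rhs == A.symmetric_difference(B)   # {1, 2, 5}
```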
Q2
Problem
[7 marks]
The Gamma Function
Part
[1 mark]
Recall the integral \begin{equation}\label{eq:1}\int e^{-x}\; \mathrm{d}x\end{equation}
Part
[2 marks]
Apply integration by parts to \[\int x e^{-x}\;\mathrm{d}x.\] Factorise your result.
Part
[1 mark]
The above are the special cases $\alpha=1$ and $\alpha=2$.
More generally, apply integration by parts to \begin{equation}\label{eq:33}\int x^{\alpha-1} e^{-x}\;\mathrm{d}x\end{equation} and simplify the result.
Part
[3 marks]
Fixing the limits of integration leads to the Gamma function, defined by \[\Gamma(\alpha) = \int^\infty_0 x^{\alpha-1}e^{-x} \mathrm{d}x, \quad\text{for }\alpha > 0\]
Using $\eqref{eq:33}$, show that \[\Gamma(\alpha) = (\alpha-1)\times\Gamma(\alpha-1).\]
So, for any positive integer $k$:
\begin{align*}\Gamma(k) &= (k-1)\times(k-2)\times\ldots\times 2\times\Gamma(1)\\&=(k-1)!\end{align*}
where $\Gamma(1) = 1$ by $\eqref{eq:1}$. The factorial function thus has a recursive structure intimately linked with the number $e$.
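Hint
As a quick numerical sanity check of the recursion (no substitute for the integration-by-parts proof), Python's standard library exposes $\Gamma$ directly:
```python
import math

# Γ(k) = (k − 1)! for positive integers k, via Γ(α) = (α − 1)Γ(α − 1).
for k in range(1, 8):
    assert math.isclose(math.gamma(k), math.factorial(k - 1))

# The recursion also holds away from the integers, e.g. Γ(4.5) = 3.5 · Γ(3.5).
assert math.isclose(math.gamma(4.5), 3.5 * math.gamma(3.5))
```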
Q3
Problem
[5 marks]
Monty Hall Problem:
Part
[2 marks]
Suppose there are three curtains.
Behind one curtain there is a nice prize, while behind the other two there are worthless prizes (perhaps 1 car and 2 goats).
A contestant selects one curtain at random, and then Monty Hall opens one of the other two curtains to reveal a worthless prize (a goat).
Hall then offers to trade the curtain that the contestant has chosen for the other curtain that has not been opened.
Should the contestant switch curtains or stick with the one that she has?
Part
[3 marks]
Now consider the variant where the host does not know which curtain hides the car, and the curtain he opens happens to reveal a goat. What is the posterior probability of winning the car if our contestant switches?
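Hint
Both variants can be estimated by simulation before any Bayesian argument; a minimal Monte Carlo sketch (curtain labels and trial count are arbitrary choices):
```python
import random

def trial(host_knows: bool):
    """One game; returns (switch_wins, stick_wins, valid).

    In the ignorant-host variant, games where the host accidentally reveals
    the car are discarded, since the question conditions on a goat showing.
    """
    car, pick = random.randrange(3), random.randrange(3)
    if host_knows:
        opened = random.choice([c for c in range(3) if c not in (pick, car)])
    else:
        opened = random.choice([c for c in range(3) if c != pick])
        if opened == car:
            return 0, 0, False
    other = next(c for c in range(3) if c not in (pick, opened))
    return int(other == car), int(pick == car), True

for knows in (True, False):
    games = [trial(knows) for _ in range(200_000)]
    valid = [g for g in games if g[2]]
    print(knows, sum(g[0] for g in valid) / len(valid))
# ≈ 2/3 when the host knows; ≈ 1/2 when he merely got lucky.
```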
Q4
Problem
[7 marks]
Part
[2 marks]
Prove that there are equally as many natural numbers as there are integers, i.e. that \(|\mathbb{Z}| = |\mathbb{N}|\).
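Hint
One standard bijection interleaves the non-negative and negative integers (this enumeration is one of many valid choices); a sketch:
```python
# f: Z → N sending 0, −1, 1, −2, 2, … to 0, 1, 2, 3, 4, …
def f(z: int) -> int:
    return 2 * z if z >= 0 else -2 * z - 1

def f_inv(n: int) -> int:
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

assert all(f_inv(f(z)) == z for z in range(-1000, 1000))
assert sorted(f(z) for z in range(-500, 500)) == list(range(1000))
```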
Part
[2 marks]
Prove that there are equally as many integers as there are rational numbers, i.e. that \(|\mathbb{Z}| = |\mathbb{Q}|\).
Part
[3 marks]
Prove that the real numbers are uncountable; i.e. that \(|\mathbb{R}| \neq |\mathbb{N}|\).
Q5
Problem
[4 marks]
Part
[2 marks]
What does the series \(1+\tfrac14+\tfrac19+\tfrac1{16}+\ldots\) converge to, if it converges at all?
Part
[2 marks]
What does the series \(1+\tfrac12+\tfrac13+\tfrac14+\ldots\) converge to, if it converges at all?
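Hint
Partial sums suggest the behaviour of both series (the cut-offs below are arbitrary); the first is the Basel problem, the second the harmonic series:
```python
import math

# Σ 1/k² converges: partial sums approach π²/6 ≈ 1.644934.
print(sum(1 / k**2 for k in range(1, 10**6)), math.pi**2 / 6)

# Σ 1/k diverges: partial sums track ln(n) + γ (γ ≈ 0.5772) without bound.
for n in (10**3, 10**6):
    print(n, sum(1 / k for k in range(1, n + 1)), math.log(n))
```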
Q6
Problem
[4 marks]
Which grows faster asymptotically as \(n \rightarrow \infty\)? That is, is it that \[2^n \ll n!\] or that \[2^n \gg n!\,?\] Give a proof by induction.
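Hint
A short table motivates the base case before the induction (sketch only):
```python
from math import factorial

# n! overtakes 2^n at n = 4; thereafter the ratio n!/2^n grows without
# bound, since each increment multiplies it by (n + 1)/2 > 1.
for n in range(1, 11):
    print(n, 2**n, factorial(n), factorial(n) / 2**n)
```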
Q7
Problem
[12 marks]
Part
[4 marks]
Find the eigenspaces of the following matrices (a numerical cross-check is sketched after the list):
- \(\begin{bmatrix}1&0\\1&1\end{bmatrix}\)
- \(\begin{bmatrix}-2&2\\2&1\end{bmatrix}\)
- \(\begin{bmatrix}2&3&0\\1&4&3\\0&0&1\end{bmatrix}\)
- \(\begin{bmatrix}1&1&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{bmatrix}\)
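Hint
A numerical cross-check of the hand computations (NumPy returns eigenvectors as columns; this is a check, not a derivation of the eigenspaces):
```python
import numpy as np

matrices = [
    np.array([[1, 0], [1, 1]]),
    np.array([[-2, 2], [2, 1]]),
    np.array([[2, 3, 0], [1, 4, 3], [0, 0, 1]]),
    np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]),
]

for A in matrices:
    vals, vecs = np.linalg.eig(A)
    # Columns of `vecs` are eigenvectors. For defective matrices (e.g. the
    # first) columns repeat, signalling an eigenspace of dimension smaller
    # than the eigenvalue's algebraic multiplicity.
    print(np.round(vals, 4))
```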
Part
[8 marks]
Determine whether the following matrices are diagonalisable. If so, determine their diagonal form and a basis with respect to which the transformation matrices are diagonal.
If they are not diagonalisable, give reasons why not. (A numerical screen is sketched after the list.)
- \(\begin{bmatrix}0&1\\-8&4\end{bmatrix}\)
- \(\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}\)
- \(\begin{bmatrix}5&-6&-6\\-1&4&2\\3&-6&-4\end{bmatrix}\)
- \(\begin{bmatrix}5&4&2&1\\0&1&-1&-1\\-1&-1&3&0\\1&1&-1&2\end{bmatrix}\)
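Hint
A matrix is diagonalisable exactly when its eigenvectors span the whole space; numerically this can be screened via the rank of the eigenvector matrix (a floating-point heuristic over $\mathbb{C}$, not a proof):
```python
import numpy as np

def seems_diagonalisable(A, tol=1e-8):
    """Heuristic: A is diagonalisable iff its eigenvectors span C^n."""
    vals, vecs = np.linalg.eig(A)
    return np.linalg.matrix_rank(vecs, tol=tol) == A.shape[0]

# Real symmetric matrices are always diagonalisable (spectral theorem).
print(seems_diagonalisable(np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])))
```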
Q8
Problem
[4 marks]
Consider the following bivariate distribution $p(x,y)$ of two discrete random variables $X$ and $Y$.
Part
[2 marks]
Compute the marginal distributions $p(x)$ and $p(y)$.
Part
[2 marks]
Compute the conditional distributions $p(x|Y = y_1)$ and $p(y|X = x_3)$.
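Hint
The joint table is processed row- and column-wise; since the table itself has not survived here, the 3×2 array below is a hypothetical stand-in, not the original, to show the mechanics:
```python
import numpy as np

# Hypothetical joint table p(x, y): rows x₁..x₃, columns y₁..y₂.
p = np.array([[0.10, 0.20],
              [0.05, 0.25],
              [0.30, 0.10]])

p_x = p.sum(axis=1)               # marginal p(x): sum out y along rows
p_y = p.sum(axis=0)               # marginal p(y): sum out x along columns

p_x_given_y1 = p[:, 0] / p_y[0]   # p(x | Y = y₁): renormalise column y₁
p_y_given_x3 = p[2, :] / p_x[2]   # p(y | X = x₃): renormalise row x₃
print(p_x, p_y, p_x_given_y1, p_y_given_x3)
```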
Q9
Problem
(Law of Iterated Expectations)
[3 marks]
Consider two random variables, $x$, $y$ with joint distribution $p(x,y)$. Show that
\[\mathbb{E}_X[x] = \mathbb{E}_Y[\mathbb{E}_X[x|y]]\]
Here, $\mathbb{E}_X[x|y]$ denotes the expected value of $x$ under the conditional distribution $p(x|y)$.
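Hint
The identity can be verified mechanically on any small discrete joint table; a sketch with an arbitrary made-up $p(x,y)$:
```python
import numpy as np

p = np.array([[0.10, 0.20],          # arbitrary joint table: rows are
              [0.05, 0.25],          # x-values, columns are y-values
              [0.30, 0.10]])
xs = np.array([1.0, 2.0, 3.0])

p_y = p.sum(axis=0)                  # marginal p(y)
e_x = xs @ p.sum(axis=1)             # E_X[x] from the marginal p(x)

e_x_given_y = xs @ (p / p_y)         # entry j: Σ_x x · p(x | y_j)
assert np.isclose(e_x, e_x_given_y @ p_y)   # the tower property holds
```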
Q10
Problem
[7 marks]
Last year we qualitatively described a number of Probability Distributions; this year we shall derive more results.
To begin, find the Probability Mass/Density Functions for the following distributions:
- Bernoulli
- Binomial
- Geometric
- Poisson
- Exponential
- Gamma
- Normal
Q11
Problem
[23 marks]
Part
[2 marks]
What is the formula for the Moment Generating Function? What about the \(k\textup{th}\) (central) moment?
Hint
\[\large\varphi_X (s) = \mathbb{E}\left( ? \right) = \int ? \]
Part
[21 marks]
Hence, or otherwise, derive the Expectation, Variance and Moment Generating Function for all 7 distributions in Q10. Show minimal working.
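Hint
One worked instance, the exponential distribution, done symbolically (sympy's `conds='none'` suppresses the convergence condition $s < \lambda$, which must hold for the integral to exist):
```python
import sympy as sp

x = sp.symbols('x', positive=True)
s = sp.symbols('s')
lam = sp.symbols('lambda', positive=True)

# φ(s) = E[e^{sX}] = ∫₀^∞ e^{sx} · λe^{−λx} dx = λ/(λ − s)  for s < λ.
mgf = sp.simplify(sp.integrate(sp.exp(s*x) * lam * sp.exp(-lam*x),
                               (x, 0, sp.oo), conds='none'))

mean = sp.diff(mgf, s).subs(s, 0)                            # E[X]  = 1/λ
var = sp.simplify(sp.diff(mgf, s, 2).subs(s, 0) - mean**2)   # Var[X] = 1/λ²
print(mgf, mean, var)
```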
Q12
Problem
[4 marks]
Find the shortest distance between an arbitrary point $\mathbf{x}_0 \in \mathbb{R}^n$ and the hyperplane given by $\mathbf{w}^T \mathbf{x} + b = 0$.
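Hint
The closed form is $d = |\mathbf{w}^T\mathbf{x}_0 + b| / \|\mathbf{w}\|_2$, obtained by projecting onto the unit normal; a numerical sketch (the $\mathbf{w}$, $b$, $\mathbf{x}_0$ below are arbitrary):
```python
import numpy as np

def dist_to_hyperplane(x0, w, b):
    """Perpendicular distance from x0 to {x : wᵀx + b = 0}."""
    return abs(w @ x0 + b) / np.linalg.norm(w)

w, b, x0 = np.array([3.0, 4.0]), -5.0, np.array([1.0, 1.0])
print(dist_to_hyperplane(x0, w, b))   # |3 + 4 − 5| / 5 = 0.4
```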
Q13
Problem
[4 marks]
Consider minimising the strictly convex quadratic function
\[ q(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\mathbf{G}\mathbf{x} + \mathbf{d}^T\mathbf{x} + c \] on $\mathbb{R}^n$, where $n > 1$, $\mathbf{d}$ is a constant $n \times 1$ vector, $\mathbf{G}$ is a constant $n \times n$ symmetric positive definite matrix and $c$ is a scalar. Let $\mathbf{x}^* \in \mathbb{R}^n$ be the global minimiser for $q(\mathbf{x})$, $\mathbf{x}^{(1)} \in \mathbb{R}^n$ and $\mathbf{x}^{(1)} \neq \mathbf{x}^*$.
Consider applying Newton's method to $q(\mathbf{x})$ starting at $\mathbf{x}^{(1)}$.
Part
[2 marks]
Write down the Newton direction at $\mathbf{x}^{(1)}$ and show that it is a descent direction.
Part
[2 marks]
How many iteration(s) will Newton's method take to reach the minimiser $\mathbf{x}^*$ of $q(\mathbf{x})$? Give reasons for your answer.
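Hint
A numerical sketch (with an arbitrary positive definite $\mathbf{G}$, $\mathbf{d}$ and start) showing why the quadratic case is special: $\nabla q = \mathbf{G}\mathbf{x} + \mathbf{d}$ and $\nabla^2 q = \mathbf{G}$, so the Newton step solves the first-order condition exactly.
```python
import numpy as np

G = np.array([[4.0, 1.0], [1.0, 3.0]])        # symmetric positive definite
d = np.array([1.0, -2.0])
x = np.array([10.0, -7.0])                    # arbitrary start x^(1)

newton_dir = -np.linalg.solve(G, G @ x + d)   # p = −G⁻¹∇q(x)
x_star = -np.linalg.solve(G, d)               # minimiser solves Gx* = −d
assert np.allclose(x + newton_dir, x_star)    # reached in one full step
```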
Q14
Problem
[15 marks]
Consider the following inequality constrained optimisation problem
\begin{align}
\text{(IP)} \quad \underset{\mathbf{x}\in\mathbb{R}^n}{\text{minimise}} \quad & \mathbf{a}^T\mathbf{x}\notag \\
\text{subject to} \quad & \mathbf{x}^TQ\mathbf{x} - 1 \leq 0,
\label{eq:ip}
\end{align}
where $n\ge 1$, $Q$ is a symmetric and positive definite $n\times n$ matrix, $\mathbf{a}\in\mathbb{R}^n$, $\mathbf{a} \neq \mathbf{0}$, and the constraint function is $c(\mathbf{x}) = \mathbf{x}^TQ\mathbf{x} - 1$.
Part
[2 marks]
Write down the gradient $\nabla c(\mathbf{x})$ and the Hessian $\nabla^2 c(\mathbf{x})$.
Part
[2 marks]
Show that $c(\mathbf{x})$ is a convex function.
Part
[2 marks]
Show that the problem (IP) is a convex optimisation problem.
Part
[3 marks]
Show that \(\large \mathbf{x}^* = \frac{-Q^{-1}\mathbf{a}}{\sqrt{\mathbf{a}^TQ^{-1}\mathbf{a}}}\) is a constrained stationary point for the problem (IP).
Part
[2 marks]
Determine whether \(\large \mathbf{x}^* = \frac{-Q^{-1}\mathbf{a}}{\sqrt{\mathbf{a}^TQ^{-1}\mathbf{a}}}\) is a local minimiser, global minimiser or neither for the problem (IP).
Part
[2 marks]
Write down the Wolfe dual problem for (IP).
Part
[2 marks]
What is the optimal objective value of the Wolfe dual problem in part f)? Give reasons for your answer. You do not need to solve the Wolfe dual problem.
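Hint
The stationarity claim in part d) can be cross-checked numerically before attempting the algebra (the particular $Q$ and $\mathbf{a}$ below are arbitrary; the KKT conditions for (IP) are $\mathbf{a} + 2\mu Q\mathbf{x} = \mathbf{0}$, $\mu \ge 0$, $\mu\, c(\mathbf{x}) = 0$):
```python
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # arbitrary symmetric PD matrix
a = np.array([1.0, -3.0])

Qinv_a = np.linalg.solve(Q, a)
x_star = -Qinv_a / np.sqrt(a @ Qinv_a)

c = x_star @ Q @ x_star - 1              # constraint is active: c(x*) = 0
mu = np.sqrt(a @ Qinv_a) / 2             # multiplier solving a + 2μQx* = 0
print(np.isclose(c, 0), np.allclose(a + 2 * mu * (Q @ x_star), 0), mu > 0)
```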
Q15
Problem
[15 marks]
Consider the following equality constrained optimisation problem
\begin{align*} \text{(EP)} \quad \min_{\mathbf{x}\in\mathbb{R}^n} \quad & \mathbf{a}^T\mathbf{x} \\ \text{s.t.} \quad & \mathbf{x}^T\mathbf{x} - 1 = 0, \end{align*} where $n \geq 1$, $\mathbf{a} \in \mathbb{R}^n$, $\mathbf{a} \neq \mathbf{0}$ and $\|\mathbf{a}\|_2 = \sqrt{\mathbf{a}^T\mathbf{a}}$.
Let $\mathbf{z}^* = \frac{-\mathbf{a}}{\|\mathbf{a}\|_2}$.
Part
[2 marks]
Show that global minima of (EP) must exist.
Part
[2 marks]
Show that $\mathbf{z}^*$ is a regular feasible point for (EP).
Part
[2 marks]
Show rigorously that the feasible region $\Omega = \{\mathbf{x} \in \mathbb{R}^n : \mathbf{x}^T\mathbf{x} - 1 = 0\}$ of (EP) is not a convex set.
Part
[3 marks]
Verify that $\mathbf{z}^*$ is a constrained stationary point for (EP).
Part
[3 marks]
Using the second-order sufficient optimality conditions, show that $\mathbf{z}^*$ is a strict local minimiser for (EP).
Part
[3 marks]
Show that $\mathbf{z}^*$ is a global minimiser for (EP).
Q16
Problem
[6 marks]
Part
[2 marks]
Prove that
Proposition
\[|a| \leq b \iff -b \leq a \leq b\]
Proof
Part
[2 marks]
Prove that
Proposition
\[\forall a, b \in \mathbb{R}, \quad |a|\cdot |b| = |a\cdot b |\]
Proof
Part
[2 marks]
Complete the following
Proposition
Two real numbers $a$ and $b$ are equal if and only if, for every $\epsilon > 0$, $|a-b| < \epsilon$.
Proof
Q17
Problem
[3 marks]
Complete the following
Proposition
Let $\mathcal{A}\subseteq \mathbb{R}$ and let $m\in \mathcal{A}$ be the minimum of $\mathcal{A}$. Then $\inf \mathcal{A} = m$.
Proof
Q18
Problem
[7 marks]
Part
[4 marks]
State and prove the Cauchy-Schwarz inequality in $\mathbb{R}^n$.
Proposition
Proof
Part
[3 marks]
Hence or otherwise, state and prove the Triangle Inequality in $\mathbb{R}^n$.
Proposition
Proof
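Hint
Random vectors furnish evidence (never a proof) for both inequalities; a sketch:
```python
import numpy as np

rng = np.random.default_rng(1)
# |⟨a, b⟩| ≤ ‖a‖‖b‖ (Cauchy–Schwarz) and ‖a + b‖ ≤ ‖a‖ + ‖b‖ (triangle).
for _ in range(1_000):
    a, b = rng.normal(size=5), rng.normal(size=5)
    assert abs(a @ b) <= np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    assert np.linalg.norm(a + b) <= np.linalg.norm(a) + np.linalg.norm(b) + 1e-12
```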
Q19
Problem
[14 marks]
Part
[8 marks]
Give the definitions of these elementary concepts from analysis:
Definition
(Metric Space)
Definition
(Epsilon Ball)
Definition
(Interior Point)
Definition
(Open Set)
Part
[6 marks]
Complete the following
Proposition
Every $\epsilon$-ball in a metric space is open.
Proof
Q20
Problem
[12 marks]
Infinite-dimensional vector spaces
Part
[8 marks]
What are the sets $c_{00}, c_0, \ell^p$ and $\ell^\infty$?
Give examples of sequences in each.
Definition
($c_{00}$)
Example
($c_{00}$)
Definition
($c_{0}$)
Example
($c_{0}$)
Definition
($\ell^p$)
Example
($\ell^{p}$)
Definition
($\ell^\infty$)
Example
($\ell^{\infty}$)
Remark
Relationships:
\[
c_{00} \subsetneq \ell^p \subsetneq c_0 \subsetneq \ell^\infty \quad \text{(for any } 1 \leq p < \infty\text{)}
\]
Part
[4 marks]
Give the definition and at least 2 examples for each:
Definition
(Hilbert Space)
Example
(Hilbert Space)
Definition
(Banach Space)
Example
(Banach Space)
Q21
Problem
[8 marks]
Part
[2 marks]
State pointwise convergence for a sequence of functions.
Part
[2 marks]
State uniform convergence for a sequence of functions.
Part
[2 marks]
Does the following sequence of functions converge pointwise? What about uniformly?
For each integer $k\ge1$ define \[ f_k : [0,1] \longrightarrow \mathbb{R}, \qquad f_k(x)=\max\{0,\,1-4k^{2}\,|x-\tfrac{1}{k^{2}}|\}.\]
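Hint
A grid evaluation suggests the answer (a finite grid can miss the peak near $x = 1/k^2$, so this is only evidence): $f_k \to 0$ pointwise, yet $\sup_x f_k(x) = 1$ for every $k$.
```python
import numpy as np

def f(k, x):
    return np.maximum(0.0, 1.0 - 4 * k**2 * np.abs(x - 1.0 / k**2))

# The bump's support (3/(4k²), 5/(4k²)) slides towards 0, so any fixed x
# is eventually missed; the peak value f_k(1/k²) = 1 never shrinks.
for k in (1, 5, 50, 500):
    print(k, f(k, 0.3), f(k, 1.0 / k**2))
```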
Part
[2 marks]
What does uniform convergence imply about a series?
Q22
Problem
[7 marks]
Part
[2 marks]
Give the definition of a topological space $(X, \tau)$.
Part
[3 marks]
Give the definitions of these Topological spaces:
Definition
(Co-countable Topology)
Definition
(Co-finite Topology)
Definition
(Discrete Topology)
Part
[2 marks]
Are limits unique in a topological space? What about in a metric space?
Q23
Problem
[14 marks]
Let $(X_1, \ldots, X_n) \stackrel{\text{i.i.d.}}{\sim} \mathcal{P}(\lambda)$, where $\mathcal{P}(\lambda)$ denotes the Poisson distribution with rate $\lambda$.
Part
[3 marks]
Let \[\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i\] be an estimator of $\lambda$.
Find the bias, standard error and Mean Squared Error of $\bar{X}_n$.
Part
[2 marks]
Let \[\widehat{T}_n = \frac{X_1 + 2X_2 + \ldots + nX_n}{1+2+\dots+n}.\]
Compute the bias and MSE of $\widehat{T}_n$ as an estimator of $\lambda$.
Part
[2 marks]
Compare $\widehat{T}_n$ to $\bar{X}_n$ in terms of MSE.
Which estimator is preferable? Intuitively, why is one better than the other?
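Hint
A simulation makes the comparison concrete (sample size, rate, and replication count below are arbitrary):
```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, reps = 50, 3.0, 100_000
w = np.arange(1, n + 1)

X = rng.poisson(lam, size=(reps, n))
xbar = X.mean(axis=1)
that = (X @ w) / w.sum()                     # the weighted estimator T̂_n

print(np.mean((xbar - lam) ** 2), lam / n)   # MSE(X̄) ≈ λ/n
print(np.mean((that - lam) ** 2))            # larger: unequal weights waste data
```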
Part
[2 marks]
Find the moment estimator for $\lambda$.
Part
[3 marks]
Find the maximum likelihood estimator for $\lambda$.
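Hint
The stationarity condition of the log-likelihood can be solved symbolically (the $\lambda$-free term $-\sum_i \ln(x_i!)$ is dropped since it does not affect the maximiser):
```python
import sympy as sp

lam, n = sp.symbols('lambda n', positive=True)
xbar = sp.symbols('xbar', positive=True)   # sample mean Σxᵢ/n

ell = n * xbar * sp.log(lam) - n * lam     # Poisson log-likelihood (up to const.)
print(sp.solve(sp.diff(ell, lam), lam))    # [xbar]: the MLE is the sample mean
```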
Part
[2 marks]
Find the Fisher information $\mathcal{I}(\lambda)$.
Q24
Problem
[3 marks]
What is the maximum straight-line distance in an $N$-dimensional unit hypercube?
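Hint
The extremes are opposite corners, so the question reduces to the norm of the all-ones vector; a one-line check (dimensions arbitrary):
```python
import numpy as np

# Farthest pair in [0, 1]^N: the corners 0 and (1, …, 1), at distance √N.
for N in (1, 2, 3, 100):
    print(N, np.linalg.norm(np.ones(N)))
```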