MLE

23 Birthday Problems

Solutions

Q1

a)

We use Newton's method update rule \[x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.\]
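As a sketch, the update rule above translates directly into a short iteration. The target function \(f(x) = x^2 - 2\), the starting point, and the tolerance below are illustrative assumptions, not part of the original problem:

```python
# Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)
# Illustrative target: f(x) = x^2 - 2, whose positive root is sqrt(2).
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:  # stop once the update is negligible
            break
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)
```

Convergence is quadratic near a simple root, which is why so few iterations suffice.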


Linear Regression

This is a page for closed-form and approximation methods to the Linear Regression problem.

The derivation will take the form of assuming a normal distribution on \(Y\) conditional on \(X\), with conditional expectation \[\mathbb{E}[Y \mid X = x] = x^T \beta.\]
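To spell out the standard connecting step: if \(y_i = x_i^T\beta + \varepsilon_i\) with \(\varepsilon_i \sim \mathcal{N}(0, \sigma^2)\) i.i.d., then maximizing the likelihood is equivalent to minimizing the squared error, which is what justifies the least-squares objective below:

\[\begin{align*}
\hat{\beta}_{\text{MLE}} &= \arg\max_{\beta} \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - x_i^T\beta)^2}{2\sigma^2}\right)\\
&= \arg\min_{\beta} \sum_{i=1}^{n} (y_i - x_i^T\beta)^2 = \arg\min_{\beta} \|y - X\beta\|_2^2.
\end{align*}\]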

Closed form of OLS

Taking the gradient of \[\hat{\beta} = \arg \min_{\beta} \frac{1}{n}\|y-X \beta\|^2_2\] with respect to \(\beta\) and equating it to 0 gives the normal equations, which (when \(X^TX\) is invertible) yield the closed form: \[\begin{align*} \nabla_{\beta} \frac{1}{n}\|y-X\beta\|^2_2 &= -\frac{2}{n}X^T(y - X\beta) = 0\\ \Rightarrow \quad X^TX\hat{\beta} &= X^Ty\\ \Rightarrow \quad \hat{\beta} &= (X^TX)^{-1}X^Ty \end{align*}\]

Example: Closed-Form Fit

import numpy as np
# synthetic data for the rest of the linear models:
np.random.seed(5)
n = 100 # samples
p = 5 # features
sigma = 0.2 # std
X = np.random.normal(0, 1, size=(n,p))
beta_true = np.random.randint(-4, 2, p)
noise = np.random.normal(0, sigma, size=(n))
y = X @ beta_true + noise

# solve the normal equations; np.linalg.solve is more stable than
# explicitly inverting X.T @ X
betahat = np.linalg.solve(X.T @ X, X.T @ y)
print("betahat: ", betahat)
print("beta true:", beta_true)

Iterative Approach

An idea that permeates all of Machine Learning is that
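The preview is cut off here, so the intended method is not stated. Assuming the iterative approach meant is gradient descent on the least-squares objective (an assumption on my part), a minimal sketch, mirroring the synthetic-data setup above, would look like this; the step size and iteration count are hypothetical choices:

```python
import numpy as np

# Assumed setup: gradient descent on (1/n) * ||y - X @ beta||^2.
# Data generation mirrors the earlier synthetic-data snippet.
np.random.seed(5)
n, p = 100, 5
X = np.random.normal(0, 1, size=(n, p))
beta_true = np.random.randint(-4, 2, p)
y = X @ beta_true + np.random.normal(0, 0.2, size=n)

beta = np.zeros(p)
lr = 0.1  # step size (hypothetical choice)
for _ in range(500):
    grad = -2 / n * X.T @ (y - X @ beta)  # gradient of the mean squared error
    beta -= lr * grad

# the iterates should approach the closed-form solution
closed_form = np.linalg.solve(X.T @ X, X.T @ y)
print(np.max(np.abs(beta - closed_form)))
```

The appeal of the iterative route is that each step only needs the gradient, which scales to settings where forming or inverting \(X^TX\) is impractical.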
