Machine Learning

Non-parametric Models

We also have models that walk around with the dataset in their carry-on: instead of compressing the training data into a fixed set of parameters, they keep (some of) it around at prediction time. These are models such as:

  1. Decision Trees
  2. SVM
  3. Non-parametric Regressions: k-nearest neighbours, Locally Weighted Regression
  4. Random Forests
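To make the "carry-on" point concrete, here is a minimal k-nearest-neighbours classifier (a sketch with hypothetical toy data): notice that "training" is nothing more than storing the dataset, and every prediction consults it directly.

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    # Rank every training point by squared Euclidean distance to the query x.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(p, x)), label)
        for p, label in zip(train_X, train_y)
    )
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical toy data: two clusters, labelled "a" and "b".
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
train_y = ["a", "a", "b", "b"]
print(knn_predict(train_X, train_y, (0.2, 0.1)))  # -> "a"
```

There is no fixed parameter vector here: the model's "size" grows with the training set, which is exactly what makes it non-parametric.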

Optimiser Paradigms in Machine Learning

deep learning pipeline

Recall that training a Neural Network follows this loop:

  1. Pass data (forward) through model to get predicted values
  2. Calculate loss with predicted values against labels
  3. Perform backpropagation to get the gradient of the loss with respect to each weight / bias — the direction in which to move that parameter to decrease the loss
  4. Update parameters with gradients using an optimiser.
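The four steps above can be sketched for the smallest possible "network" — a single weight with squared-error loss, trained by plain gradient descent (toy data is hypothetical; the true relationship is y = 2x):

```python
# Toy data: y = 2x exactly.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w, lr = 0.0, 0.05
for _ in range(200):
    # 1. Forward pass: predicted values.
    preds = [w * x for x in xs]
    # 2. Loss: mean squared error against the labels.
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
    # 3. Backprop: dL/dw = 2 * mean((w*x - y) * x).
    grad = 2 * sum((p - y) * x for p, x, y in zip(preds, xs, ys)) / len(xs)
    # 4. Optimiser step (vanilla gradient descent).
    w -= lr * grad

print(round(w, 3))  # -> 2.0
```

A real framework does exactly this, just with many parameters and automatic differentiation standing in for the hand-derived gradient.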

momentum

The ball's pace slows down when successive gradients disagree — this makes total sense! If the gradient signs are the same, you increase your confidence in that direction and move further; overall, you want to take fewer steps.
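The standard momentum update captures this rolling-ball behaviour: a velocity term accumulates gradients that agree in sign and is damped when the sign flips. A minimal sketch (learning rate and decay values are illustrative):

```python
def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    # Velocity: exponentially decaying sum of past gradients.
    v = beta * v + grad
    # Move the parameter along the accumulated velocity.
    w = w - lr * v
    return w, v

# Minimise f(w) = w^2, whose gradient is 2w.
w, v = 5.0, 0.0
for _ in range(3):
    grad = 2 * w
    w, v = momentum_step(w, v, grad)
# w heads towards the minimum at 0, faster each step while the
# gradient keeps the same sign.
```

Because successive gradients all point the same way here, the velocity grows step over step — that is the "increasing confidence" described above.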


Parametric Modelling

This covers the majority of the classical Machine Learning methods we have.

By definition, a parametric model is one that has a fixed number of parameters to learn, e.g. the weights in Linear Regression: $w_0, w_1, ..., w_n$. Conversely, a non-parametric model does not have a fixed number of parameters to learn: k-nearest neighbours, for example, keeps the entire training set around, so its effective complexity grows with the data.
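The parametric side of the contrast can be seen with simple linear regression: no matter how many data points we fit, the learned model is fully described by two numbers, $w_0$ and $w_1$. A closed-form least-squares sketch on hypothetical toy data:

```python
# Toy data lying exactly on y = 1 + 2x.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Ordinary least squares: slope, then intercept.
w1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
      / sum((x - mean_x) ** 2 for x in xs))
w0 = mean_y - w1 * mean_x

# Once (w0, w1) are learned, the training data can be thrown away.
print(w0, w1)  # -> 1.0 2.0
```

That "throw the data away" property is exactly what non-parametric models like k-nearest neighbours give up.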

We can list some more models:

  1. Linear Regression
  2. Ridge Regression
  3. Lasso Regression
  4. Logistic Regression
  5. Neural Networks
  6. Perceptron
  7. Naive Bayes