{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Logistic Regression\n", "\n", "In this exercise, you will implement logistic regression and apply it to two different datasets. \n", "\n", "\n", "# Outline\n", "- [ 1 - Packages ](#1)\n", "- [ 2 - Logistic Regression](#2)\n", " - [ 2.1 Problem Statement](#2.1)\n", " - [ 2.2 Loading and visualizing the data](#2.2)\n", " - [ 2.3 Sigmoid function](#2.3)\n", " - [ 2.4 Cost function for logistic regression](#2.4)\n", " - [ 2.5 Gradient for logistic regression](#2.5)\n", " - [ 2.6 Learning parameters using gradient descent ](#2.6)\n", " - [ 2.7 Plotting the decision boundary](#2.7)\n", " - [ 2.8 Evaluating logistic regression](#2.8)\n", "- [ 3 - Regularized Logistic Regression](#3)\n", " - [ 3.1 Problem Statement](#3.1)\n", " - [ 3.2 Loading and visualizing the data](#3.2)\n", " - [ 3.3 Feature mapping](#3.3)\n", " - [ 3.4 Cost function for regularized logistic regression](#3.4)\n", " - [ 3.5 Gradient for regularized logistic regression](#3.5)\n", " - [ 3.6 Learning parameters using gradient descent](#3.6)\n", " - [ 3.7 Plotting the decision boundary](#3.7)\n", " - [ 3.8 Evaluating regularized logistic regression model](#3.8)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "_**NOTE:** To prevent errors from the autograder, you are not allowed to edit or delete non-graded cells in this lab. Please also refrain from adding any new cells. \n", "**Once you have passed this assignment** and want to experiment with any of the non-graded code, you may follow the instructions at the bottom of this notebook._" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 1 - Packages \n", "\n", "First, let's run the cell below to import all the packages that you will need during this assignment.\n", "- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.\n", "- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.\n", "- ``utils.py`` contains helper functions for this assignment. You do not need to modify code in this file." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "deletable": false, "editable": false }, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from utils import *\n", "import copy\n", "import math\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 2 - Logistic Regression\n", "\n", "In this part of the exercise, you will build a logistic regression model to predict whether a student gets admitted into a university.\n", "\n", "\n", "### 2.1 Problem Statement\n", "\n", "Suppose that you are the administrator of a university department and you want to determine each applicant’s chance of admission based on their results on two exams. \n", "* You have historical data from previous applicants that you can use as a training set for logistic regression. \n", "* For each training example, you have the applicant’s scores on two exams and the admissions decision. \n", "* Your task is to build a classification model that estimates an applicant’s probability of admission based on the scores from those two exams. \n", "\n", "\n", "### 2.2 Loading and visualizing the data\n", "\n", "You will start by loading the dataset for this task. 
\n", "- The `load_dataset()` function shown below loads the data into variables `X_train` and `y_train`\n", " - `X_train` contains exam scores on two exams for a student\n", " - `y_train` is the admission decision \n", " - `y_train = 1` if the student was admitted \n", " - `y_train = 0` if the student was not admitted \n", " - Both `X_train` and `y_train` are numpy arrays.\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "deletable": false, "editable": false }, "outputs": [], "source": [ "# load dataset\n", "X_train, y_train = load_data(\"data/ex2data1.txt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### View the variables\n", "Let's get more familiar with your dataset. \n", "- A good place to start is to just print out each variable and see what it contains.\n", "\n", "The code below prints the first five values of `X_train` and the type of the variable." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "First five elements in X_train are:\n", " [[34.62365962 78.02469282]\n", " [30.28671077 43.89499752]\n", " [35.84740877 72.90219803]\n", " [60.18259939 86.3085521 ]\n", " [79.03273605 75.34437644]]\n", "Type of X_train: \n" ] } ], "source": [ "print(\"First five elements in X_train are:\\n\", X_train[:5])\n", "print(\"Type of X_train:\",type(X_train))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now print the first five values of `y_train`" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "First five elements in y_train are:\n", " [0. 0. 0. 1. 1.]\n", "Type of y_train: \n" ] } ], "source": [ "print(\"First five elements in y_train are:\\n\", y_train[:5])\n", "print(\"Type of y_train:\",type(y_train))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Check the dimensions of your variables\n", "\n", "Another useful way to get familiar with your data is to view its dimensions. Let's print the shape of `X_train` and `y_train` and see how many training examples we have in our dataset." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The shape of X_train is: (100, 2)\n", "The shape of y_train is: (100,)\n", "We have m = 100 training examples\n" ] } ], "source": [ "print ('The shape of X_train is: ' + str(X_train.shape))\n", "print ('The shape of y_train is: ' + str(y_train.shape))\n", "print ('We have m = %d training examples' % (len(y_train)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Visualize your data\n", "\n", "Before starting to implement any learning algorithm, it is always good to visualize the data if possible.\n", "- The code below displays the data on a 2D plot (as shown below), where the axes are the two exam scores, and the positive and negative examples are shown with different markers.\n", "- We use a helper function in the ``utils.py`` file to generate this plot. 
\n", "\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYUAAAEGCAYAAACKB4k+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO3df5QU9Znv8fczgIMYjaA4AYkMZAkxyE8nxolG2KDRRFeTm6h40RBusmhijMa4iqsbhj3HPSS6d1d3c7OrMcomHIy6/oq7cSXqxCQQ42CQoIRFcVBWHEZU4k8E5rl/VHXbDD093T1dXVXdn9c5c7q7pnvq6Z6Zeur7fH+UuTsiIiIADXEHICIiyaGkICIiWUoKIiKSpaQgIiJZSgoiIpI1OO4ABuLQQw/15ubmuMMQEUmV1atXv+zuI/N9L9VJobm5mY6OjrjDEBFJFTPb3Nf3VD4SEZEsJQUREcmKLCmY2Y/MbJuZrcvZNsLMVpjZxvB2eM73rjSzZ8xsg5mdHFVcIiLStyj7FG4F/hn4t5xtC4GH3H2JmS0MH19hZh8F5gCTgNHAL8zsw+6+J8L4RKSCdu3axZYtW3jnnXfiDkVCQ4cOZcyYMQwZMqTo10SWFNz9UTNr7rX5DGBWeH8p0A5cEW6/zd13As+Z2TPAMcCqqOITkcrasmULBx54IM3NzZhZ3OHUPXdn+/btbNmyhXHjxhX9umr3KTS5+1aA8PawcPvhwAs5z9sSbtuHmS0wsw4z6+ju7o402L50dS1j1apm2tsbWLWqma6uZbHEIZIk77zzDocccogSQkKYGYccckjJLbekdDTn+yvKu3yru9/o7i3u3jJyZN5htpHq6lrGhg0L2LlzM+Ds3LmZDRsWKDGIgBJCwpTz+6h2Uugys1EA4e22cPsW4IM5zxsDvFjl2IqyadNV9PS8tde2np632LTpqpgiEhGpnGonhfuAeeH9ecC9OdvnmFmjmY0DJgC/q3JsRdm58/mStotIdd19992YGX/84x/zfn/WrFklTXrt6Ojgm9/8JgDt7e2sXLky+7177rmHp59+uuQY3/e+95X8mmqJckjqcoKO4olmtsXMvgIsAU4ys43ASeFj3P0p4HbgaeAB4MKkjjxqbDyipO2V1tbWVpX9SLLU8u+90u9t+fLlHH/88dx2220V+XktLS3ccMMNQOWSQqK5e2q/jj76aK+2l176if/yl8P8kUfIfv3yl8P8pZd+UpX9B78yqTdp+L0//fTTZb2uku/t9ddf99GjR/uGDRt84sSJ7u7+1ltv+dlnn+2TJ0/2s846y4855hh//PHH3d39gAMO8Msvv9xnzJjhs2fP9scee8xnzpzp48aN83vvvdfd3R955BE/9dRT/bnnnvOmpiYfPXq0T5061dvb23348OHe3NzsU6dO9WeeecafeeYZP/nkk33GjBl+/PHH+/r1693dfdOmTX7sscd6S0uLX3311X7AAQdU7D33J9/vBejwPo6rSeloTo2mprlMnHgjjY1jAaOxcSwTJ95IU9PcuEOTCNXymXotueeeezjllFP48Ic/zIgRI3jiiSf4wQ9+wLBhw1i7di1XXXUVq1evzj7/zTffZNasWaxevZoDDzyQq6++mhUrVnD33Xfzne98Z6+f3dzczAUXXMC3vvUt1qxZw8yZMzn99NO59tprWbNmDR/60IdYsGAB//RP/8Tq1au57rrr+PrXvw7AxRdfzNe+9jUef/xxPvCBD1T1MymVkkIZmprm0trayaxZPbS2dkaeENra2jCz7EiCzH0dqKLT+7NdvHhxLDGk5fdeakxRvbfly5czZ84cAObMmcPy5ct59NFHOffccwGYMmUKU6ZMyT5/v/3245RTTgFg8uTJzJw5kyFDhjB58mQ6OztL2vcbb7zBypUrOfPMM5k2bRrnn38+W7duBeA3v/kN55xzDgDnnXfegN5j1FK9Smq9aGtry/6zmBlB60+itHjx4tgPvgP9vee+PmqLFy/m7LPPLvr5UfxNb9++nYcffph169ZhZuzZswczY/r06X0OzRwyZEj2ew0NDTQ2Nmbv7969u6T99/T0cPDBB7NmzZq830/LcF21FET6kKYz9XziaN3E6c477+RLX/oSmzdvprOzkxdeeIFx48YxY8YMli0L5hGtW7eOtWvXlr2PAw88kNdffz3v44MOOohx48Zxxx13AEF/7ZNPPgnAcccdl+34zsSSVEoK/Uja7OVFixbFuv9a1jsJZA6qmc880xEXR1JI4u+99+e1efNmOjo6ePHF0qYYVeq9LV++nM9//vN7bfvCF75AZ2cnb7zxBlOmTOF73/sexxxzTNn7+Iu/+Avuvvtupk2bxq9+9SvmzJnDtddey/Tp03n22WdZtmwZN998M1OnTmXSpEnce28w6v7666/n+9//Ph/72MfYsWPHgN5n1CzNpYiWlhaP8iI7mdnLuZPVGhqGqWO5DvQuaaSlbNfW1pa3hbBo0aJIk5mZ8fTTT3PkkUdGtg8pz/r16/f5vZjZandvyfd8tRQK0OxlyUjimXo+bW1t2RYNxNu6kXRSUihAs5frV+8koINqYWlJmtI/JYUC4p69LPGphSRQzQN1LXxeElBSKGD8+GtoaBi217aGhmGMH39NTBGJFE8HaimHkkIBmr0sIvVGk9f60dQ0V0lAEquaE9SkPqilIJJi9TZBrT9mxre//e3s4+uuu67fpFmJlU6bm5t5+eWXi37+fffdx5IlS/Lu/9Zbby15rkdnZydHHXVUSa/pi5KCiMQiiomhjY2N3HXXXSUdoONY/vr0009n4cKFefdfTlKoJCUFkZRJ4/IbvQ9yUV3WdvDgwSxYsIB/+Id/2Od7mzdvZvbs2UyZMoXZs2fz/PPPs3LlSu677z7+6q/+imnTpvHss8/u9Zqf/exnfPzjH2f69OmceOKJdHV1AcE6S5/+9KeZPn06559/fnZeSGdnJx/5yEf46le/ylFHHcXcuXP5xS9+wXHHHceECRP43e+Ca4fdeuutfOMb39hn/9/97nfp6Ohg7ty5TJs2jbfffpvVq1czc+ZMjj76aE4++eTsInurV69m6tSptLa28v3vf39An9te+lpTOw1fcVxPQSRJSNB1FgpdTyFz/YKMlSvH7nVNkszXypVjBxTDAQcc4Dt27PCxY8f6a6+95tdee60vWrTI3d1PO+00v/XWW93d/eabb/YzzjjD3d3nzZvnd9xxR96f98orr3hPT4+7u990001+6aWXurv7RRdd5IsXL3Z39/vvv98B7+7u9ueee84HDRrka9eu9T179viMGTN8/vz53tPT4/fcc092n7fccotfeOGFefc/c+bM7Of17rvvemtrq2/bts3d3
W+77TafP3++u7tPnjzZ29vb3d39sssu80mTJuV9D6VeT0EdzSJSdVFODD3ooIP40pe+xA033MD++++f3b5q1SruuusuIFi++vLLL+/3Z23ZsoWzzz6brVu38u677zJu3DgAHn300ezPOvXUUxk+fHj2NePGjWPy5MkATJo0idmzZ2NmZS3HvWHDBtatW8dJJ50EwJ49exg1ahQ7duzgtddeY+bMmdn38/Of/7ykn90XJQWRFEvyTOIXX3xxr7JRZp2y0aNH09h4RFg62lulJoZecsklzJgxg/nz5/f5nGKWsr7ooou49NJLOf3002lvb9+rRNfX6zPLb8PAl+N2dyZNmsSqVav22v7aa69FthS3+hREUizJ/QijR4+mpaWFlpZg3bXM/dGjR0c+MXTEiBGcddZZ3Hzzzdltn/jEJ/Zavvr4448H9l0OO9eOHTs4/PDDAVi6dGl2+wknnJBdAvvnP/85r776atmxFlqOe+LEiXR3d2eTwq5du3jqqac4+OCDef/738+vf/3r7PupFCUFEdlLNRJNNSaGfvvb395rFNINN9zALbfcwpQpU/jxj3/M9ddfD7DP8te52traOPPMM/nkJz/JoYcemt2+aNEiHn30UWbMmMGDDz7IEUeU38Lpvf8vf/nLXHDBBUybNo09e/Zw5513csUVVzB16lSmTZvGypUrAbjlllu48MILaW1t3atMNlCxLJ1tZhcDfwkYcJO7/6OZjQB+CjQDncBZ7l4w/Ua9dLZIPSp3mfDeSzT3Lh9ljB49mtGjRw8oRile4pfONrOjCBLCMcBU4DQzmwAsBB5y9wnAQ+FjEamCKFoHhcpHklxxlI+OBH7r7m+5+27gl8DngTOATNFuKfC5GGKTBElyvbzWLF68OHVzHyQacSSFdcAJZnaImQ0DPgt8EGhy960A4e1h+V5sZgvMrMPMOrq7u6sWtFSflnCorsw49dz7pSaFvspOah3Eo5wyYNWTgruvB74LrAAeAJ4Eih6n5e43unuLu7eMHDkyoiilN50x1p6+ZkaXa+jQoWzfvj3vgagWkkKcS0+Uw93Zvn07Q4cOLel1sV+j2cz+DtgCXAzMcvetZjYKaHf3iYVeq47m6qnWNYrjusZwvcv9/Za78uquXbvYsmUL77zzToWjS4bNmzczduzYuMMoydChQxkzZgxDhgzZa3uhjua4Rh8d5u7bzOwI4EGgFfhrYLu7LzGzhcAIdy845VBJoXriuHB9HPusV/qs+1dLn1GiRh+F/t3MngZ+BlwYDj1dApxkZhuBk8LHEqM0Lrwm5ck3M7qef8+Z916P/wOxl48GQi2FfUV10ZU4zpJ0AZl41dKZcanyvfda+jyS2FKQiNTSiB0lBKlXcf7tKylIUZK88JpUTj2WSzL6e+/V/B+I8+RO5aMaoBE7A6dS1b5qqVxSqrjfe9T7V/moxrW1tVVk4lE9q6Wy20DobyY+SWmlKSmIJFBcB+fc5FjPJcM43ntSTu6UFGpMPf8jlyopZ2b5JKHlkoTPIS71/N6VFGpMPf8xlyopZ2ZxyvSlJDU51qs4T+7U0SxC/B2LEM+Agd7vOwmfg0SvUEezrtEsQjLKbrkjoHRwlriofCRCfZXdCpWLkpAcJV5qKYgkUJQHZ7VIpBC1FEQSqJ5aLpIsSgoidUzlIulNSUGkjqlFIr0pKYiISJaSgoiIZCkpiIhIlpKCSIWoPi+1QElBpEKSsIidyEApKYiISFYsScHMvmVmT5nZOjNbbmZDzWyEma0ws43h7fA4YhMphVYYlVpT9VVSzexw4NfAR939bTO7HfhP4KPAK+6+xMwWAsPd/YpCP0urpEqSaMkISYskXo5zMLC/mQ0GhgEvAmcAS8PvLwU+F1NsIiJ1q+pJwd3/B7gOeB7YCuxw9weBJnffGj5nK3BYvteb2QIz6zCzju7u7mqFnWhdXctYtaqZ9vYGVq1qpqtrWdwh1SUtGSG1oOpJIewrOAMYB4wGDjCzc4t9vbvf6O4t7t4ycuTIqMJMja6uZWzYsICdOzcDzs6dm9mwYYESQwyS0o+QlDgkneIoH50IPOfu3e6+C7gL+ATQZWajAMLbbTHEljqbNl1FT89be23r6XmLTZuuiikiiZuGxspAxJEUngeONbNhFgzZmA2sB+4D5oXPmQfcG0NsJYu7dLNz5/MlbZfo6AxdakEcfQqPAXcCTwB/CGO4EVgCnGRmG4GTwseJloTSTWPjESVtl+jEeYauobFSKVUfklpJcQ9JXbWqOUwIe2tsHEtra2dVYsgkptwSUkPDMCZOvJGmprlViUECSRmSmpQ46lnu1e2SKIlDUmtCEko3TU1zmTjxRhobxwJGY+NYJYQq0hm65JPmfh0lhQFISummqWkura2dzJrVQ2trpxJCFbW1teHu2TPzzP04k0Iah8YqiSaHksIAjB9/DQ0Nw/ba1tAwjPHjrxnwz467A1vSK40H2DSfWWfUSqtRSWEAoirdJKEDu1KS9g8RZTxpPEOXykliq7Ec6mhOoCR0YFdK0jo9kxZPrSmlg7WtrS1vC2HRokWpO5D2lvS/s0IdzUoKA9DVtYxNm65i587naWw8gvHjr6lIPb+9vQHI93sxZs3qGfDPr6ak/XMkLZ5aU+7nW2u/F40+qkNRlniS0oFdrqTVVpMWj9S+NP9t9ZsUwpnHf2NmN4WPJ5jZadGHlmxRLi8RZQd2NSSttpq0eGpNJZKu+mOSo9/ykZn9FFgNfMndjzKz/YFV7j6tGgEWEmf5KOoST1SlqWpLWlkgafHUmt6fb9LLKPVqoOWjD7n794BdAO7+NmAVjC+Voi7x1Mrcg6SdASYtnlpXC0NN600xSeHdsHXgAGb2IWBnpFGlQNpLPNWStLPEpMVTa5R006+YpLAIeAD4oJktAx4CLo80qhTob46CJp+ljxLGwGXKRerYT6+CfQpm1gB8kSARHEtQNvqtu79cnfAKi3tIal+0SF06qR5eeerDSaay+xTcvQf4hrtvd/f/cPf7k5IQkkwXvqkNqodLkkV1wlJM+WiFmV1mZh80sxGZr0iiqRFJWD1ViqNSR7TUxxCdqE5aihmS+lyeze7u4yOJqARJLR/V0jIV9cTMWLRoUc0uvSC1ZSCluQENSXX3cXm+Yk8ISVbpkUnqtK4eTXSTJKtGy3Zwf08wsyHA14ATwk3twL+6+66KRVFjMp3JlZh81rvTOrOcRu5+pDJU6pCkyx38EFUnfjHlox8CQ4Cl4abzgD3u/tWKR1OipJaPKkmlqPho9JEkWWzlI+Bj7j7P3R8Ov+YDHysrEimZOq3jo4SwL30myRFVy7aYpLAnnMUMgJmNB/aUu0Mzm2hma3K+/mRml4SjmlaY2cbwdni5+6glaV8xtZbU2gGxnPdTS8N00/77jCr+YspHs4FbgE0Ek9fGAvPd/ZEB79xsEPA/wMeBC4FX3H2JmS0Ehrv7FYVeXw/lo1qdCJfG0kytTcQq5/3U0mdQS++lVAMdffQQMAH4Zvg1sRIJITQbeNbdNwNn8F6/xVLgcxXaR6pFdcnPuNXSGWet01yO
+lLM9RQuBPZ397Xu/iQwzMy+XqH9zwGWh/eb3H0rQHh7WB/xLDCzDjPr6O7urlAYyVYrK6amUa0dEMt5P30N002jNPw++4sl6liLKR+t6X3tBDP7vbtPH9COzfYDXgQmuXuXmb3m7gfnfP9Vdy/Yr1AP5aNakvZr8tZauWGg5aO0fx5Jjb+/uCoR90BHHzVYJq2S7QfYb0ARBT4DPOHuXeHjLjMbFe5jFLCtAvuQBNHEsPTTXI7aV0xS+C/gdjObbWafIij3PFCBfZ/De6UjgPuAeeH9ecC9FdiHSMXU2gGx3PeT9PJLsZL0++yvrFXNslcx5aMGYAFwIsHooweBH7r7QIalDgNeAMa7+45w2yHA7cARwPPAme7+SqGfo/JReqVx9JHsLanll7SLu3zUb1Lo9YNGAGPcfe2AIqoQJQWR+CgpRCPupFDM6KN2MzsoTAhrgFvM7P8OKCIRSb0klV9qSX+fa9SfezHlo9+7+3Qz+yrwQXdfZGZr3X1KpJEVQS2F0nV1LavIQn1SOSqlSbUNdPTR4HA00FnA/RWNTKoqMzs6WGDPsyuuainueGkinyRJMUnhbwlGID3j7o+Hax9tjDYsiYIuEyrSN7XWAsUsc3GHu09x96+Hjze5+xeiD00qTSuuJkcaZtbWm0q32NL6uyxp9FHS1FqfQtT1fl2bIZk0iicZKv17SPLvdaB9ClIF1aj3V/oyoXFJ6xmYJI9abPtSUkiIatT7a2XF1VrrmNXQzvhUeumVWkgyBctHZvYR4HDgMXd/I2f7Ke5eiaUuBqSWykft7Q1Avt+FMWtWT7XDSbQkN8slvVQ+CvTZUjCzbxKsP3QRsM7Mzsj59t9VNkTRFdYKq4UzMEk2tdgCfbYUzOwPQKu7v2FmzcCdwI/d/fpKLJ1dCbXUUqjVK6xFIclnYCIZSZ6UWKilMLjA6wZlSkbu3mlms4A7zSwoSEtFZQ789TDbWLOqpR4kNSH0p1BSeMnMprn7GoCwxXAa8CNgclWiqzNNTXNr/uDYu0WUGWUFFP3e1cwXiU6h8tEYYLe7v5Tne8e5+2+iDq4/tVQ+qheaKyESv7LKR+6+pcD3Yk8Ikk6aVS2SbJqnIFWlUVYiyaakIFVVK7OqRWpV0Ukhc6GdzFeUQUntqpVZ1ZWQ1tEpEp9q/M0Uc5Gd8wmWz36b96bcuruPjzi2fqmjOb00LFXzLaR0lfqbKXeeQsZlwCR3f3nAkdQgHdxKV4lhqSISjWLKR88Cb/X7rBKY2cFmdqeZ/dHM1ptZa1iWWmFmG8Pb4ZXcZxR0JbP3dHUtY9WqZtrbG1i1qrngZ1DPF/vRch1Sqmr/zRRTPpoO3AI8BuzMbHf3b5a9U7OlwK/c/Ydmth8wDPhr4BV3X2JmC4Hh7n5FoZ8Td/lIY+4DpS7REfXif0leXiCXykdSqmqUj4ppKfwr8DDwW2B1zle5wRwEnADcDODu77r7a8AZwNLwaUuBz5W7j2rRmPtAqWf+UQ9LrbWltUWqqZg+hd3ufmkF9zke6AZuMbOpBAnmYqDJ3bcCuPtWMzss34vNbAGwAOCII+Id297YeEQfLYX6GnNfanIcP/6avC2LehuWquU6pFTV+JsppqXwiJktMLNRFRqSOhiYAfwgXGn1TWBhsS929xvdvcXdW0aOHDmAMAZOY+4DpZ75RzEsNY21+iTHJsmUlCGpz+XZXPaQVDP7APBbd28OH3+SICn8GTArbCWMAtrdfWKhnxV3nwJo9BEkb9lv1epFChvQkFR3H1fJYNz9JTN7wcwmuvsGYDbwdPg1D1gS3t5byf1GpR5WNu1PPS37LVLriulTwMyOAj4KDM1sc/d/G8B+LwKWhSOPNgHzCUpZt5vZV4DngTMH8POlypKUHFWrFylfMeWjRcAsgqTwn8BngF+7+xcjj64fSSgfiUh80jL8OGkGOiT1iwQlnpfcfT4wFWisYHwiImXR8OPKKyYpvO3uPcDucI7BNoJhpZJipcxAFomTWgLVVUxS6DCzg4GbCOYUPAH8LtKoJFJankPSpHdrII3Dj9Ok3z6FvZ5s1gwc5O5rowqoFOpTKI+W55A0KTTEWMOPyzOgPoVwNBAA7t4JPBV2PktKaXkOSTq1BuJTTPlotpn9Zzij+SiCNZAOjDguiZAuiSlJ19bWhrtnWwGZ+72TgoYfV16/ScHd/zfBAnV/IBiSeom7XxZ1YBIdLc8htUIth8orpnw0gWDBun8HOoHzzGxYwRdJoumSmJImag1UVzGT1/4IXOjuD1lQ4LsU+D/uPqkaARaijmYRkdIN9HKcx7j7nyBYBQ/4ezO7r5IBiohIMvRZPjKzywHc/U9m1nsdovmRRiUiIrEo1KcwJ+f+lb2+d0oEsYiISMwKJQXr436+xyISEY2wkWoqlBS8j/v5HotEQms0adE3qa5CHc1TzexPBK2C/cP7hI+H9v0yqRdRX3Wu9xXdMms0AVUdPqur60k96bOl4O6D3P0gdz/Q3QeH9zOPh1QzSEmeaiyqt2nTVXtd4hOgp+ctNm26qmL76E9ciwdqmQeJS0kL4iWN5inEpxqL6rW3N5C/UmnMmtVTkX30JwmLB2rRN6m0gV5kR2Qf1VhULwlrNGnxQKk3SgpSlmocsJOwRlMSEpOWeZBqUlKQslTjgB33Gk1dXcvYvfuNfbZXOzGpH0GqqZhlLirOzDqB14E9wG53bzGzEcBPgWaChffOcvdX44hP+pc5MOcblVPJ0TpNTXNjGenTe+RTxuDBhzBhwvUafSQ1K5akEPpzd3855/FC4CF3X2JmC8PHV0SxYw0xrIx8B+ykDCMdqHwjnwAGDXpfqt6HSKmSVD46g+C6DYS3n4tiJ7o+cbSSMIy0EtTBLPUqrqTgwINmttrMFoTbmtx9K0B4e1i+F5rZAjPrMLOO7u7ukndcKwetpOr7YLrvsM4kS0IHs0gc4koKx7n7DOAzwIVmdkKxL3T3G929xd1bRo4cWfKOdQYYrb4Pmpaq1lgSRj6JOtnjEEtScPcXw9ttwN3AMUCXmY0CCG+3RbFvnQFGKzho5lsv0VPVGot75JMEtO5T9VU9KZjZAWZ2YOY+8GlgHXAfMC982jzg3ij2rzPAaAUHzfyzb9PWGmtqmktrayezZvXQ2tqphCB1IY6WQhPwazN7Evgd8B/u/gCwBDjJzDYCJ4WPK79znQFGLvhs821XayzNqlXK0bpP8dLaRxWkoa6BfGP8GxqGKfmmXBxrMGndp2ho7aMq0FDX9+RrjX3gA/PYtOmqur4uQiG6boQkhZJChWio695y6/Hjx1/DSy8tVcLsQ1JOKPKVZ+Iu5Wjdp+pT+ahCkrDMc1IlYfnpJEvK59NfqUalnNqh8lEVaKhrfl1dy/qcuJa20UhR0dyZ2pa2DnIlhQrRUNd9Zcoifan3hJkR5wlFKeWhWi/lRHXwTttcC5WPKkijj/bWV1kENBopV1JGa9V7eSiq95/Ez1XloyrRZKe9FSp/KCG8p9bnzqStfFIJcXfQD4RaChKZpHSgSnHa2toiOWgl8Uw5o62tLW95Z9GiRRX7LJL
4/gu1FOo2KVSi1KNyUWFJKYtIvJJ4UMxH5aNAXZaPKjEuPCljy5Os1ssi0reoyidpKL/0lrYO+rpsKZRS1uirNaDSiEhxKnmmHOVZd275LKpSWlKofNRLsRPNCpU/1q8/r6ifIdKXeik/piUpxLGfuKh81Eux48ILLV2hyWq1I451h+qp/FiofFLMZ5/mkTxpVJdJodiJZoVmmmqyWm2I6+BcT2tl9XXwLvazb2trw92zZ+6Z+5VOCko+gbpMCsV2gBZqDagTtTbEdXDW0hbJS4zVSj5JNzjuAOLS1DS33wP4+PHX5O1TyLQGivkZkmxxHZwbG4/oY6BCdcuPcfZrlPPZp20kTxrVZUuhWGoN1L64+oaSUH6Mu1+jnM9eS3ZHry5HH4lkVHuCXe6Z+eDBI3CHPXteiWX0UdzDqjW5MT6FRh/VbflIBMgefKpRQul9ENy9ezsNDcM48sgfx3IQjLtfI/OeN268mN27twNgtn9V9i19U1KQuletvqFCHatxJIWk9Gv09Lydvb9nz/bscutqLcQjtj4FMxtkZr83s/vDxyPMbIWZbQxvh8cVm0gU4j4z7y0J/RpJG4Ek8XY0Xwysz3m8EHjI3ScAD4WPRWx2Ss4AAAoBSURBVGpG0iY8JmEgRdISpcSUFMxsDHAq8MOczWcAS8P7S4HPVTsuSZc4ZiIPRBLOzHuL+xogSUuUEl9L4R+By4HcRYKa3H0rQHh7WL4XmtkCM+sws47u7u7oI5VEins4ZTmScGaeNElMlKVI24lJMao+JNXMTgM+6+5fN7NZwGXufpqZvebuB+c871V3L9ivoCGp9Svu4ZRSOWldGDDNQ2qTNiT1OOB0M/ssMBQ4yMx+AnSZ2Sh332pmo4BtMcQmKaFadO1I68oA5YwmS0MCrHr5yN2vdPcx7t4MzAEedvdzgfuAeeHT5gH3Vjs2SQ/VoiVupZ6YpKXkmaRlLpYAJ5nZRuCk8LFIXmmvRUv6lXpikpbht7EmBXdvd/fTwvvb3X22u08Ib1+JMzZJNnXaStxKPTFJS8lTM5oltdJai5baUOoSKUmZQd4fJQURkTKVcmLS31L8SZGkPgURkZqVlpKnWgoiIlWShpKnWgqSGrU4e1QkadRSkFToPXs0M8YbtMSySCWppSCpkJYx3iJpp6QgqZCWMd4iaaekIKmgZS1EqkNJQVJBy1qIVIeSgqRCWsZ4i6SdRh9JaqRhjLdI2qmlICIiWUoKIiKSpaQgIiJZSgoiIpKlpCAiIllKCiIikqWkIFLntPqs5NI8BZE6ptVnpbeqtxTMbKiZ/c7MnjSzp8xscbh9hJmtMLON4e3wascmUm+0+qz0Fkf5aCfwKXefCkwDTjGzY4GFwEPuPgF4KHwsIhHS6rPSW9WTggfeCB8OCb8cOANYGm5fCnyu2rGJ1ButPiu9xdLRbGaDzGwNsA1Y4e6PAU3uvhUgvD2sj9cuMLMOM+vo7u6uXtAiNUirz0pvsSQFd9/j7tOAMcAxZnZUCa+90d1b3L1l5MiR0QUpUge0+qz0FuvoI3d/zczagVOALjMb5e5bzWwUQStCRCKm1WclVxyjj0aa2cHh/f2BE4E/AvcB88KnzQPurXZsIiL1Lo6WwihgqZkNIkhKt7v7/Wa2CrjdzL4CPA+cGUNsIiJ1repJwd3XAtPzbN8OzK52PCIi8h4tcyEiIllKCiIikmXuHncMZTOzbmBzmS8/FHi5guFETfFGJ02xQrriTVOsUD/xjnX3vGP6U50UBsLMOty9Je44iqV4o5OmWCFd8aYpVlC8oPKRiIjkUFIQEZGsek4KN8YdQIkUb3TSFCukK940xQqKt377FEREZF/13FIQEZFelBRERCSrLpJCGi8BGl5z4vdmdn/4OMmxdprZH8xsjZl1hNuSHO/BZnanmf3RzNabWWsS4zWzieFnmvn6k5ldksRYM8zsW+H/2DozWx7+7yUyXjO7OIzzKTO7JNyWmFjN7Edmts3M1uVs6zM+M7vSzJ4xsw1mdnK5+62LpEA6LwF6MbA+53GSYwX4c3efljNmOsnxXg884O4fAaYSfM6Ji9fdN4Sf6TTgaOAt4G4SGCuAmR0OfBNocfejgEHAHBIYb3gNl78EjiH4GzjNzCaQrFhvJbisQK688ZnZRwk+60nha/5fuOho6dy9rr6AYcATwMeBDcCocPsoYEPc8YWxjAl/4Z8C7g+3JTLWMJ5O4NBe2xIZL3AQ8BzhIIukx5sT36eB3yQ5VuBw4AVgBMFim/eHcScuXoJVmH+Y8/hvgMuTFivQDKzLeZw3PuBK4Mqc5/0X0FrOPuulpTCgS4DG4B8J/kB7crYlNVYIrrH9oJmtNrMF4bakxjse6AZuCctzPzSzA0huvBlzgOXh/UTG6u7/A1xHsPT9VmCHuz9IMuNdB5xgZoeY2TDgs8AHSWasufqKL5OQM7aE20pWN0nBB3AJ0Goys9OAbe6+Ou5YSnCcu88APgNcaGYnxB1QAYOBGcAP3H068CYJKGcUYmb7AacDd8QdSyFhffsMYBwwGjjAzM6NN6r83H098F1gBfAA8CSwO9agBsbybCtrvkHdJIUMd38NaCfnEqAACboE6HHA6WbWCdwGfMrMfkIyYwXA3V8Mb7cR1LyPIbnxbgG2hC1FgDsJkkRS44Ug2T7h7l3h46TGeiLwnLt3u/su4C7gEyQ0Xne/2d1nuPsJwCvARhIaa46+4ttC0NLJGAO8WM4O6iIpWIouAeruV7r7GHdvJigZPOzu55LAWAHM7AAzOzBzn6CGvI6ExuvuLwEvmNnEcNNs4GkSGm/oHN4rHUFyY30eONbMhpmZEXy260lovGZ2WHh7BPC/CD7jRMaao6/47gPmmFmjmY0DJgC/K2sPcXf4VKmzZgrwe2AtwQHrO+H2Qwg6dDeGtyPijrVX3LN4r6M5kbES1OifDL+eAq5KcrxhbNOAjvDv4R5geFLjJRgYsR14f862RMYaxraY4IRrHfBjoDGp8QK/IjgheBKYnbTPliBJbQV2EbQEvlIoPuAq4FmCzujPlLtfLXMhIiJZdVE+EhGR4igpiIhIlpKCiIhkKSmIiEiWkoKIiGQpKUhNMrM9vVYYrdqs5XyrW4qkhYakSk0yszfc/X0x7fsE4A3g3zxYLbQa+xzk7nuqsS+pbWopSN0ws/eHa81PDB8vN7O/DO//wMw6LOd6G+H2TjP7OzNbFX5/hpn9l5k9a2YX5NuPuz9KsGxCoVjODNfyf9LMHg23DTKz6yy4NsVaM7so3D47XLzvD2ErpDEntu+Y2a+BM83s02GcT5jZHWYWS1KUdFNSkFq1f6/y0dnuvgP4BnCrmc0Bhrv7TeHzr/LgWhBTgJlmNiXnZ73g7q0EM2BvBb4IHAv87QDi+w5wsgfX+Dg93LaAYDG56e4+BVhmZkPDfZ7t7pMJFvT7Ws7Pecfdjwd+AVwNnOjB4oQdwKUDiE/q1OC4AxCJyNserIq7F3dfYWZnAt8nuLhKxlnhst+DCdap/yjBMhgQrCsD8Afgfe7+OvC6mb1jZgd7sMhiqX5DkJxuJ1
g4DoI1uf7F3XeHsb5iZlMJFpn77/A5S4ELCZZXB/hpeHtsGPNvgmWH2A9YVUZcUueUFKSumFkDcCTwNsHFYLaEC4hdBnzM3V81s1uBoTkv2xne9uTczzwu63/I3S8ws48DpwJrzGwawfLHvTv58i2JnOvNnOetcPdzyolHJEPlI6k33yJYufMc4EdmNoTgamxvAjvMrIlgqepImdmH3P0xd/8O8DLBsscPAheY2eDwOSMIFpdrNrM/C196HvDLPD/yt8BxmeeFK5V+OOr3IbVHLQWpVfuHV9rLeAD4EfBV4Bh3fz3s4L3a3ReZ2e8JVnndRFDaKZuZLSdY4fZQM9sCLHL3m3s97VoLrglsBKtdPkmwsuiHgbVmtgu4yd3/2czmA3eEyeJx4F9679Pdu83sy8DyTEc0QR/Df/d+rkghGpIqIiJZKh+JiEiWkoKIiGQpKYiISJaSgoiIZCkpiIhIlpKCiIhkKSmIiEjW/wf7zgwS1Yx1fAAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "# Plot examples\n", "plot_data(X_train, y_train[:], pos_label=\"Admitted\", neg_label=\"Not admitted\")\n", "\n", "# Set the y-axis label\n", "plt.ylabel('Exam 2 score') \n", "# Set the x-axis label\n", "plt.xlabel('Exam 1 score') \n", "plt.legend(loc=\"upper right\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Your goal is to build a logistic regression model to fit this data.\n", "- With this model, you can then predict if a new student will be admitted based on their scores on the two exams." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.3 Sigmoid function\n", "\n", "Recall that for logistic regression, the model is represented as\n", "\n", "$$ f_{\\mathbf{w},b}(x) = g(\\mathbf{w}\\cdot \\mathbf{x} + b)$$\n", "where function $g$ is the sigmoid function. The sigmoid function is defined as:\n", "\n", "$$g(z) = \\frac{1}{1+e^{-z}}$$\n", "\n", "Let's implement the sigmoid function first, so it can be used by the rest of this assignment.\n", "\n", "\n", "### Exercise 1\n", "Please complete the `sigmoid` function to calculate\n", "\n", "$$g(z) = \\frac{1}{1+e^{-z}}$$\n", "\n", "Note that \n", "- `z` is not always a single number, but can also be an array of numbers. \n", "- If the input is an array of numbers, we'd like to apply the sigmoid function to each value in the input array.\n", "\n", "If you get stuck, you can check out the hints presented after the cell below to help you with the implementation." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# UNQ_C1\n", "# GRADED FUNCTION: sigmoid\n", "\n", "def sigmoid(z):\n", " \"\"\"\n", " Compute the sigmoid of z\n", "\n", " Args:\n", " z (ndarray): A scalar, numpy array of any size.\n", "\n", " Returns:\n", " g (ndarray): sigmoid(z), with the same shape as z\n", " \n", " \"\"\"\n", " \n", " ### START CODE HERE ### \n", " \n", " ### END SOLUTION ### \n", " \n", " return 1/(1+np.exp(-z))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " Click for hints\n", " \n", " * `numpy` has a function called [`np.exp()`](https://numpy.org/doc/stable/reference/generated/numpy.exp.html), which offers a convinient way to calculate the exponential ( $e^{z}$) of all elements in the input array (`z`).\n", " \n", "
\n", " Click for more hints\n", " \n", " - You can translate $e^{-z}$ into code as `np.exp(-z)` \n", " \n", " - You can translate $1/e^{-z}$ into code as `1/np.exp(-z)` \n", " \n", " If you're still stuck, you can check the hints presented below to figure out how to calculate `g` \n", " \n", "
\n", " Hint to calculate g\n", " g = 1 / (1 + np.exp(-z))\n", "
\n", "\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When you are finished, try testing a few values by calling `sigmoid(x)` in the cell below. \n", "- For large positive values of x, the sigmoid should be close to 1, while for large negative values, the sigmoid should be close to 0. \n", "- Evaluating `sigmoid(0)` should give you exactly 0.5. \n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "deletable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sigmoid(0) = 0.5\n" ] } ], "source": [ "# Note: You can edit this value\n", "value = 0\n", "\n", "print (f\"sigmoid({value}) = {sigmoid(value)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", " \n", " \n", "
sigmoid(0) 0.5
\n", " \n", "- As mentioned before, your code should also work with vectors and matrices. For a matrix, your function should perform the sigmoid function on every element." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sigmoid([ -1, 0, 1, 2]) = [0.26894142 0.5 0.73105858 0.88079708]\n", "\u001b[92mAll tests passed!\n" ] } ], "source": [ "print (\"sigmoid([ -1, 0, 1, 2]) = \" + str(sigmoid(np.array([-1, 0, 1, 2]))))\n", "\n", "# UNIT TESTS\n", "from public_tests import *\n", "sigmoid_test(sigmoid)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", " \n", " \n", " \n", "
 sigmoid([-1, 0, 1, 2]) [0.26894142 0.5 0.73105858 0.88079708] 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.4 Cost function for logistic regression\n", "\n", "In this section, you will implement the cost function for logistic regression.\n", "\n", "\n", "### Exercise 2\n", "\n", "Please complete the `compute_cost` function using the equations below.\n", "\n", "Recall that for logistic regression, the cost function is of the form \n", "\n", "$$ J(\\mathbf{w},b) = \\frac{1}{m}\\sum_{i=0}^{m-1} \\left[ loss(f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}), y^{(i)}) \\right] \\tag{1}$$\n", "\n", "where\n", "* m is the number of training examples in the dataset\n", "\n", "\n", "* $loss(f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}), y^{(i)})$ is the cost for a single data point, which is - \n", "\n", " $$loss(f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}), y^{(i)}) = (-y^{(i)} \\log\\left(f_{\\mathbf{w},b}\\left( \\mathbf{x}^{(i)} \\right) \\right) - \\left( 1 - y^{(i)}\\right) \\log \\left( 1 - f_{\\mathbf{w},b}\\left( \\mathbf{x}^{(i)} \\right) \\right) \\tag{2}$$\n", " \n", " \n", "* $f_{\\mathbf{w},b}(\\mathbf{x}^{(i)})$ is the model's prediction, while $y^{(i)}$, which is the actual label\n", "\n", "* $f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) = g(\\mathbf{w} \\cdot \\mathbf{x^{(i)}} + b)$ where function $g$ is the sigmoid function.\n", " * It might be helpful to first calculate an intermediate variable $z_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) = \\mathbf{w} \\cdot \\mathbf{x^{(i)}} + b = w_0x^{(i)}_0 + ... + w_{n-1}x^{(i)}_{n-1} + b$ where $n$ is the number of features, before calculating $f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) = g(z_{\\mathbf{w},b}(\\mathbf{x}^{(i)}))$\n", "\n", "Note:\n", "* As you are doing this, remember that the variables `X_train` and `y_train` are not scalar values but matrices of shape ($m, n$) and ($𝑚$,1) respectively, where $𝑛$ is the number of features and $𝑚$ is the number of training examples.\n", "* You can use the sigmoid function that you implemented above for this part.\n", "\n", "If you get stuck, you can check out the hints presented after the cell below to help you with the implementation." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "# UNQ_C2\n", "# GRADED FUNCTION: compute_cost\n", "def compute_cost(X, y, w, b, *argv):\n", " \"\"\"\n", " Computes the cost over all examples\n", " Args:\n", " X : (ndarray Shape (m,n)) data, m examples by n features\n", " y : (ndarray Shape (m,)) target value \n", " w : (ndarray Shape (n,)) values of parameters of the model \n", " b : (scalar) value of bias parameter of the model\n", " *argv : unused, for compatibility with regularized version below\n", " Returns:\n", " total_cost : (scalar) cost \n", " \"\"\"\n", "\n", " m, n = X.shape\n", " \n", " ### START CODE HERE ###\n", " total_cost = 0\n", " for i in range(m):\n", " z = sigmoid(w@X[i] + b)\n", " total_cost += (-y[i]*np.log(z)) - (1-y[i])*np.log(1-z)\n", " ### END CODE HERE ### \n", "\n", " return total_cost / m" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "Click for hints\n", " \n", "* You can represent a summation operator eg: $h = \\sum\\limits_{i = 0}^{m-1} 2i$ in code as follows:\n", "\n", "```python\n", " h = 0\n", " for i in range(m):\n", " h = h + 2*i\n", "```\n", "
\n", "\n", "* In this case, you can iterate over all the examples in `X` using a for loop and add the `loss` from each iteration to a variable (`loss_sum`) initialized outside the loop.\n", "\n", "* Then, you can return the `total_cost` as `loss_sum` divided by `m`.\n", "\n", "* If you are new to Python, please check that your code is properly indented with consistent spaces or tabs. Otherwise, it might produce a different output or raise an `IndentationError: unexpected indent` error. You can refer to [this topic](https://community.deeplearning.ai/t/indentation-in-python-indentationerror-unexpected-indent/159398) in our community for details.\n", " \n", "
\n", " Click for more hints\n", " \n", "* Here's how you can structure the overall implementation for this function\n", " \n", "```python\n", "def compute_cost(X, y, w, b, *argv):\n", " m, n = X.shape\n", "\n", " ### START CODE HERE ###\n", " loss_sum = 0 \n", " \n", " # Loop over each training example\n", " for i in range(m): \n", " \n", " # First calculate z_wb = w[0]*X[i][0]+...+w[n-1]*X[i][n-1]+b\n", " z_wb = 0 \n", " # Loop over each feature\n", " for j in range(n): \n", " # Add the corresponding term to z_wb\n", " z_wb_ij = # Your code here to calculate w[j] * X[i][j]\n", " z_wb += z_wb_ij # equivalent to z_wb = z_wb + z_wb_ij\n", " # Add the bias term to z_wb\n", " z_wb += b # equivalent to z_wb = z_wb + b\n", " \n", " f_wb = # Your code here to calculate prediction f_wb for a training example\n", " loss = # Your code here to calculate loss for a training example\n", " \n", " loss_sum += loss # equivalent to loss_sum = loss_sum + loss\n", " \n", " total_cost = (1 / m) * loss_sum \n", " ### END CODE HERE ### \n", " \n", " return total_cost\n", "```\n", "
\n", "\n", "If you're still stuck, you can check the hints presented below to figure out how to calculate `z_wb_ij`, `f_wb` and `cost`.\n", "\n", "
\n", "Hint to calculate z_wb_ij\n", "     z_wb_ij = w[j]*X[i][j] \n", "
\n", " \n", "
\n", " Hint to calculate f_wb\n", "     $f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) = g(z_{\\mathbf{w},b}(\\mathbf{x}^{(i)}))$ where $g$ is the sigmoid function. You can simply call the `sigmoid` function implemented above.\n", "
\n", "     More hints to calculate f\n", "     You can compute f_wb as f_wb = sigmoid(z_wb) \n", "
\n", "
\n", "\n", "
\n", " Hint to calculate loss\n", "     You can use the np.log function to calculate the log\n", "
\n", "     More hints to calculate loss\n", "     You can compute loss as loss = -y[i] * np.log(f_wb) - (1 - y[i]) * np.log(1 - f_wb)\n", "
\n", "
\n", " \n", "
\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the cells below to check your implementation of the `compute_cost` function with two different initializations of the parameters $w$ and $b$" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cost at initial w and b (zeros): 0.693\n" ] } ], "source": [ "m, n = X_train.shape\n", "\n", "# Compute and display cost with w and b initialized to zeros\n", "initial_w = np.zeros(n)\n", "initial_b = 0.\n", "cost = compute_cost(X_train, y_train, initial_w, initial_b)\n", "print('Cost at initial w and b (zeros): {:.3f}'.format(cost))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", " \n", " \n", "
Cost at initial w and b (zeros) 0.693
" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cost at test w and b (non-zeros): 0.218\n", "\u001b[92mAll tests passed!\n" ] } ], "source": [ "# Compute and display cost with non-zero w and b\n", "test_w = np.array([0.2, 0.2])\n", "test_b = -24.\n", "cost = compute_cost(X_train, y_train, test_w, test_b)\n", "\n", "print('Cost at test w and b (non-zeros): {:.3f}'.format(cost))\n", "\n", "\n", "# UNIT TESTS\n", "compute_cost_test(compute_cost)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", " \n", " \n", "
Cost at test w and b (non-zeros): 0.218
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.5 Gradient for logistic regression\n", "\n", "In this section, you will implement the gradient for logistic regression.\n", "\n", "Recall that the gradient descent algorithm is:\n", "\n", "$$\\begin{align*}& \\text{repeat until convergence:} \\; \\lbrace \\newline \\; & b := b - \\alpha \\frac{\\partial J(\\mathbf{w},b)}{\\partial b} \\newline \\; & w_j := w_j - \\alpha \\frac{\\partial J(\\mathbf{w},b)}{\\partial w_j} \\tag{1} \\; & \\text{for j := 0..n-1}\\newline & \\rbrace\\end{align*}$$\n", "\n", "where, parameters $b$, $w_j$ are all updated simultaniously" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "### Exercise 3\n", "\n", "Please complete the `compute_gradient` function to compute $\\frac{\\partial J(\\mathbf{w},b)}{\\partial w}$, $\\frac{\\partial J(\\mathbf{w},b)}{\\partial b}$ from equations (2) and (3) below.\n", "\n", "$$\n", "\\frac{\\partial J(\\mathbf{w},b)}{\\partial b} = \\frac{1}{m} \\sum\\limits_{i = 0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - \\mathbf{y}^{(i)}) \\tag{2}\n", "$$\n", "$$\n", "\\frac{\\partial J(\\mathbf{w},b)}{\\partial w_j} = \\frac{1}{m} \\sum\\limits_{i = 0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - \\mathbf{y}^{(i)})x_{j}^{(i)} \\tag{3}\n", "$$\n", "* m is the number of training examples in the dataset\n", "\n", " \n", "* $f_{\\mathbf{w},b}(x^{(i)})$ is the model's prediction, while $y^{(i)}$ is the actual label\n", "\n", "\n", "- **Note**: While this gradient looks identical to the linear regression gradient, the formula is actually different because linear and logistic regression have different definitions of $f_{\\mathbf{w},b}(x)$.\n", "\n", "As before, you can use the sigmoid function that you implemented above and if you get stuck, you can check out the hints presented after the cell below to help you with the implementation." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "# UNQ_C3\n", "# GRADED FUNCTION: compute_gradient\n", "def compute_gradient(X, y, w, b, *argv): \n", " \"\"\"\n", " Computes the gradient for logistic regression \n", " \n", " Args:\n", " X : (ndarray Shape (m,n)) data, m examples by n features\n", " y : (ndarray Shape (m,)) target value \n", " w : (ndarray Shape (n,)) values of parameters of the model \n", " b : (scalar) value of bias parameter of the model\n", " *argv : unused, for compatibility with regularized version below\n", " Returns\n", " dj_dw : (ndarray Shape (n,)) The gradient of the cost w.r.t. the parameters w. \n", " dj_db : (scalar) The gradient of the cost w.r.t. the parameter b. \n", " \"\"\"\n", " m, n = X.shape\n", " dj_dw = np.zeros(w.shape)\n", " dj_db = 0.\n", "\n", " ### START CODE HERE ### \n", " for i in range(m):\n", " z_wb = 0\n", " for j in range(n): \n", " z_wb_ij = X[i, j]*w[j]\n", " z_wb += z_wb_ij\n", " z_wb += b\n", " f_wb = sigmoid(z_wb)\n", " \n", " dj_db_i = f_wb - y[i]\n", " dj_db += dj_db_i\n", " \n", " for j in range(n):\n", " dj_dw_ij = (f_wb - y[i]) * X[i][j]\n", " dj_dw[j] += dj_dw_ij\n", " \n", " dj_dw = dj_dw * 1/m\n", " dj_db = dj_db * 1/m\n", " ### END CODE HERE ###\n", "\n", " \n", " return dj_db, dj_dw" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " Click for hints\n", " \n", " \n", "* Here's how you can structure the overall implementation for this function\n", " ```python \n", " def compute_gradient(X, y, w, b, *argv): \n", " m, n = X.shape\n", " dj_dw = np.zeros(w.shape)\n", " dj_db = 0.\n", " \n", " ### START CODE HERE ### \n", " for i in range(m):\n", " # Calculate f_wb (exactly as you did in the compute_cost function above)\n", " f_wb = \n", " \n", " # Calculate the gradient for b from this example\n", " dj_db_i = # Your code here to calculate the error\n", " \n", " # add that to dj_db\n", " dj_db += dj_db_i\n", " \n", " # get dj_dw for each attribute\n", " for j in range(n):\n", " # You code here to calculate the gradient from the i-th example for j-th attribute\n", " dj_dw_ij = \n", " dj_dw[j] += dj_dw_ij\n", " \n", " # divide dj_db and dj_dw by total number of examples\n", " dj_dw = dj_dw / m\n", " dj_db = dj_db / m\n", " ### END CODE HERE ###\n", " \n", " return dj_db, dj_dw\n", " ```\n", "\n", " * If you are new to Python, please check that your code is properly indented with consistent spaces or tabs. Otherwise, it might produce a different output or raise an `IndentationError: unexpected indent` error. You can refer to [this topic](https://community.deeplearning.ai/t/indentation-in-python-indentationerror-unexpected-indent/159398) in our community for details.\n", " * If you're still stuck, you can check the hints presented below to figure out how to calculate `f_wb`, `dj_db_i` and `dj_dw_ij` \n", " \n", "
\n", " Hint to calculate f_wb\n", "     Recall that you calculated f_wb in compute_cost above — for detailed hints on how to calculate each intermediate term, check out the hints section below that exercise\n", "
\n", "     More hints to calculate f_wb\n", "     You can calculate f_wb as\n", "
\n",
    "               for i in range(m):   \n",
    "                   # Calculate f_wb (exactly how you did it in the compute_cost function above)\n",
    "                   z_wb = 0\n",
    "                   # Loop over each feature\n",
    "                   for j in range(n): \n",
    "                       # Add the corresponding term to z_wb\n",
    "                       z_wb_ij = X[i, j] * w[j]\n",
    "                       z_wb += z_wb_ij\n",
    "            \n",
    "                   # Add bias term \n",
    "                   z_wb += b\n",
    "        \n",
    "                   # Calculate the prediction from the model\n",
    "                   f_wb = sigmoid(z_wb)\n",
    "    
\n", " \n", "
\n", "
\n", " Hint to calculate dj_db_i\n", "     You can calculate dj_db_i as dj_db_i = f_wb - y[i]\n", "
\n", " \n", "
\n", " Hint to calculate dj_dw_ij\n", "     You can calculate dj_dw_ij as dj_dw_ij = (f_wb - y[i])* X[i][j]\n", "
\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the cells below to check your implementation of the `compute_gradient` function with two different initializations of the parameters $w$ and $b$" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "dj_db at initial w and b (zeros):-0.1\n", "dj_dw at initial w and b (zeros):[-12.00921658929115, -11.262842205513591]\n" ] } ], "source": [ "# Compute and display gradient with w and b initialized to zeros\n", "initial_w = np.zeros(n)\n", "initial_b = 0.\n", "\n", "dj_db, dj_dw = compute_gradient(X_train, y_train, initial_w, initial_b)\n", "print(f'dj_db at initial w and b (zeros):{dj_db}' )\n", "print(f'dj_dw at initial w and b (zeros):{dj_dw.tolist()}' )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
dj_db at initial w and b (zeros) -0.1
dj_dw at initial w and b (zeros): [-12.00921658929115, -11.262842205513591]
" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "dj_db at test w and b: -0.5999999999991071\n", "dj_dw at test w and b: [-44.831353617873795, -44.37384124953978]\n", "\u001b[92mAll tests passed!\n" ] } ], "source": [ "# Compute and display cost and gradient with non-zero w and b\n", "test_w = np.array([ 0.2, -0.5])\n", "test_b = -24\n", "dj_db, dj_dw = compute_gradient(X_train, y_train, test_w, test_b)\n", "\n", "print('dj_db at test w and b:', dj_db)\n", "print('dj_dw at test w and b:', dj_dw.tolist())\n", "\n", "# UNIT TESTS \n", "compute_gradient_test(compute_gradient)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
dj_db at test w and b (non-zeros) -0.5999999999991071
dj_dw at test w and b (non-zeros): [-44.8313536178737957, -44.37384124953978]
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.6 Learning parameters using gradient descent \n", "\n", "Similar to the previous assignment, you will now find the optimal parameters of a logistic regression model by using gradient descent. \n", "- You don't need to implement anything for this part. Simply run the cells below. \n", "\n", "- A good way to verify that gradient descent is working correctly is to look\n", "at the value of $J(\\mathbf{w},b)$ and check that it is decreasing with each step. \n", "\n", "- Assuming you have implemented the gradient and computed the cost correctly, your value of $J(\\mathbf{w},b)$ should never increase, and should converge to a steady value by the end of the algorithm." ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "deletable": false, "editable": false }, "outputs": [], "source": [ "def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters, lambda_): \n", " \"\"\"\n", " Performs batch gradient descent to learn theta. Updates theta by taking \n", " num_iters gradient steps with learning rate alpha\n", " \n", " Args:\n", " X : (ndarray Shape (m, n) data, m examples by n features\n", " y : (ndarray Shape (m,)) target value \n", " w_in : (ndarray Shape (n,)) Initial values of parameters of the model\n", " b_in : (scalar) Initial value of parameter of the model\n", " cost_function : function to compute cost\n", " gradient_function : function to compute gradient\n", " alpha : (float) Learning rate\n", " num_iters : (int) number of iterations to run gradient descent\n", " lambda_ : (scalar, float) regularization constant\n", " \n", " Returns:\n", " w : (ndarray Shape (n,)) Updated values of parameters of the model after\n", " running gradient descent\n", " b : (scalar) Updated value of parameter of the model after\n", " running gradient descent\n", " \"\"\"\n", " \n", " # number of training examples\n", " m = len(X)\n", " \n", " # An array to store cost J and w's at each iteration primarily for graphing later\n", " J_history = []\n", " w_history = []\n", " \n", " for i in range(num_iters):\n", "\n", " # Calculate the gradient and update the parameters\n", " dj_db, dj_dw = gradient_function(X, y, w_in, b_in, lambda_) \n", "\n", " # Update Parameters using w, b, alpha and gradient\n", " w_in = w_in - alpha * dj_dw \n", " b_in = b_in - alpha * dj_db \n", " \n", " # Save cost J at each iteration\n", " if i<100000: # prevent resource exhaustion \n", " cost = cost_function(X, y, w_in, b_in, lambda_)\n", " J_history.append(cost)\n", "\n", " # Print cost every at intervals 10 times or as many iterations if < 10\n", " if i% math.ceil(num_iters/10) == 0 or i == (num_iters-1):\n", " w_history.append(w_in)\n", " print(f\"Iteration {i:4}: Cost {float(J_history[-1]):8.2f} \")\n", " \n", " return w_in, b_in, J_history, w_history #return w and J,w history for graphing" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's run the gradient descent algorithm above to learn the parameters for our dataset.\n", "\n", "**Note**\n", "The code block below takes a couple of minutes to run, especially with a non-vectorized version. You can reduce the `iterations` to test your implementation and iterate faster. If you have time later, try running 100,000 iterations for better results." 
] }, { "cell_type": "code", "execution_count": 29, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Iteration 0: Cost 0.96 \n", "Iteration 1000: Cost 0.31 \n", "Iteration 2000: Cost 0.30 \n", "Iteration 3000: Cost 0.30 \n", "Iteration 4000: Cost 0.30 \n", "Iteration 5000: Cost 0.30 \n", "Iteration 6000: Cost 0.30 \n", "Iteration 7000: Cost 0.30 \n", "Iteration 8000: Cost 0.30 \n", "Iteration 9000: Cost 0.30 \n", "Iteration 9999: Cost 0.30 \n" ] } ], "source": [ "np.random.seed(1)\n", "initial_w = 0.01 * (np.random.rand(2) - 0.5)\n", "initial_b = -8\n", "\n", "# Some gradient descent settings\n", "iterations = 10000\n", "alpha = 0.001\n", "\n", "w,b, J_history,_ = gradient_descent(X_train ,y_train, initial_w, initial_b, \n", " compute_cost, compute_gradient, alpha, iterations, 0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", " Expected Output: Cost 0.30, (Click to see details):\n", "\n", "\n", " # With the following settings\n", " np.random.seed(1)\n", " initial_w = 0.01 * (np.random.rand(2) - 0.5)\n", " initial_b = -8\n", " iterations = 10000\n", " alpha = 0.001\n", " #\n", "\n", "```\n", "Iteration 0: Cost 0.96 \n", "Iteration 1000: Cost 0.31 \n", "Iteration 2000: Cost 0.30 \n", "Iteration 3000: Cost 0.30 \n", "Iteration 4000: Cost 0.30 \n", "Iteration 5000: Cost 0.30 \n", "Iteration 6000: Cost 0.30 \n", "Iteration 7000: Cost 0.30 \n", "Iteration 8000: Cost 0.30 \n", "Iteration 9000: Cost 0.30 \n", "Iteration 9999: Cost 0.30 \n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.7 Plotting the decision boundary\n", "\n", "We will now use the final parameters from gradient descent to plot the linear fit. If you implemented the previous parts correctly, you should see a plot similar to the following plot: \n", "\n", "\n", "We will use a helper function in the `utils.py` file to create this plot." ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYUAAAEGCAYAAACKB4k+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO3deXxU9dX48c8BJBSLgigoRcAoRBaRXVN9EBX6YOsjdV/QYgtaBZei/kSlkuhTKj4qKhZUxLJUxL0qdUUUF+rSgCA7KAINakQq4opAzu+P700McRImM/fOvXfmvF+veSUzmcycuZPMud/tfEVVMcYYYwDqhR2AMcaY6LCkYIwxppIlBWOMMZUsKRhjjKlkScEYY0ylBmEHkI59991X27VrF3YYxhgTKwsWLPhMVfdL9LNYJ4V27dpRUlISdhjGGBMrIrK+pp9Z95ExxphKlhSMMcZUCiwpiMhfReRTEVla5bZ9RGSOiKzxvjar8rNrReR9EVklIv8dVFzGGGNqFuSYwjTgL8CMKrddA8xV1XEico13fZSIdALOAjoDrYCXRKSDqu4MMD5jjGH79u2Ulpby3XffhR2K7xo1akTr1q3ZY489kv6dwJKCqr4mIu2q3TwI6Od9Px2YB4zybn9IVbcBH4rI+0Af4M2g4jPGGIDS0lKaNGlCu3btEJGww/GNqrJ582ZKS0s56KCDkv69TI8ptFTVjwG8ry28238G/LvK/Uq92yKprGwmb77Zjnnz6vHmm+0oK5sZdkjGmBR99913NG/ePKsSAoCI0Lx58zq3gKIyJTXRu5GwfKuIXAhcCNCmTZsgY0qorGwmq1ZdSHn5NwBs27aeVasuBKBly8EZj8cYk75sSwgVUnldmW4plInIAQDe10+920uBA6vcrzXwUaIHUNXJqtpLVXvtt1/CtReBWrt2dGVCqFBe/g1r147OeCzGGOO3TCeFp4Eh3vdDgKeq3H6WiOSJyEFAe+CdDMeWlG3bNtTpdr8VFxdn5HlMtNj7bqr7y1/+wiGHHIKI8Nlnn/n2uEFOSZ2FGyguEJFSERkKjAMGiMgaYIB3HVVdBjwCLAeeB0ZEdeZRXl7iLquabvfbDTfckJHnMdFi73v0hJ2ojzrqKF566SXatm3r6+MGlhRU9WxVPUBV91DV1qp6v6puVtXjVbW99/U/Ve4/VlUPVtUCVX0uqLjSlZ8/lnr1Gu9yW716jcnPHxtSRCYTwv4AMNHjV6K+/vrrufPOOyuvjx49mgkTJuz297p3704Qtd9sRXMdtWw5mIKCyeTltQWEvLy2FBRMDnSQubi4GBGpHDSq+N4+qIJT/diGcaYep/c9ijHFxdChQ5k+fToA5eXlPPTQQwwaNIhu3bolvCxfvjzYgFQ1tpeePXtqrnFvmQla9eMc9nFP5fmLior8D6QGYR+fdCxfvjzp+xYVFSluZuQul3SPdf/+/XXhwoX63HPP6amnnlqn323btq1u2rSpxp8nen1AidbwuWothd2wNQm5K05n6onYOIT/iouLKz884YeT6nT/JoYNG8a0adOYOnUqv/vd7/jyyy+tpZDKJeiWwiefPKCvvtpYX3mFysurrzbWTz55INDnrU0mz/5yTW1ngYR8JpzK+x50zEGdNWdaXVoKVfl5fLdt26YdOnTQgw46SHfs2FGn37WWQgZFcU1CXM5S4yios0A/JBtDJls3UT5emVBUVOTbYzVs2JBjjz2WM844g/r16yf1OxMmTKB169aUlpbStWtXhg0b5kssUvGGxlGvXr00yE125s2rR+KF1UK/fuWBPa8Jn4hQ9X+juLg4dh921V9DtjyX31asWEHHjh1DjaG8vJwePXrw6KOP0r59e18fO9HrE5EFqtor0f2tpVCLsNckmPBUPwuMW0LIND/PmnPN8uXLOeSQQzj++ON9TwipiErto0jKzx+7S50jsDUJuSIbkkAmP6iz4XiFpVOnTqxduzbsMCpZS6EWYaxJMMYv9kFtUpGzSeGDD5K7X8uWgyksXEe/fuUUFq6zhGAixT74jd9yMin885/Qvj0MHQqbNoUdjTGpi8taBEte8ZGTSeGww+DKK2HGDCgogHvvhZ2RLL9nTHaIS/IyOZoUmjSBW26BRYuga1e46CIoLIQAZ7ca45u4r7Q2/vjwww854ogjaN++PWeeeSbff/+9L4+bk0mhQufO8Mor8MADsGED9OkDw4fD55+HHZkxNYvLorFsTV5RKX0zatQoRo4cyZo1a2jWrBn333+/L4+b00kBQAQGD4ZVq+DSS11XUocOMHUqlNv6NGNSFpfkVRcV2/Fu27Ye0MrteNNJDKmUzlZVXn75ZU477TQAhgwZwpNPPplyDFX
lfFKosPfecOedsGCBG4T+3e+gb194772wIzOmZkGsRYjzh3bQgih9k0rp7M2bN9O0aVMaNHBLzVq3bs3GjRtTf2FVWFKopls3eOMNuP9+13ro0QNGjoStW8OOzJgfC+ID3K9B4WzsPgpiO9527drRvHlz3n33XV588UW6d+9O27ZtWbRoUcJLp06dEpYUqTjO6bKkkEC9eq6lsGoVDBvmWhCHHgqzZkFMy7vEUpw/POLGCuYlJ6jSN3Utnb3vvvuyZcsWduzYAUBpaSmtWrVKK4ZKNZVPjcMlU5vsvPOOas+eqqB67LGqKVbaNXVEjDduiRsSlMDGxzLYUX4v61I6O6hy+qmUzj7ttNN01qxZqqr6+9//XidOnJjwflY6OwC9e8Pbb8OkSfDuu24a66hR8NVXYUeWOXE+uzPJqfhQqPq9X+97NhTM++ijjwIrfZNK6eybb76Z8ePHc8ghh7B582aGDh2aVgyVasoWcbiEsR1nWZnq+ee7VsOBB6o+9phqeXnGw8g4MnSmly0bt8RBTcc6U+91VCTbUvjXv/4VWAw7d+7Uww8/XFevXu37Y1tLIWAtWrjpqm+8Ac2awWmnwQknwJo1YUeWHbKxHzqqajrW2XBWHydRK51tSSFFRx3lpq/ecYerpdSlC4wZA99+G3Zk/snG2SNm9yre31x+nz/66KPKryUlJVRs5lXxfcXP/VBROvu2227z7THTUlMTIg6XMLqPEvnoI9VzznFdSgcdpDp7dnixBNXNQghdCtZllDmJjnUY73kYli9fruXV+oATdRUF2X1U3caNG315nPLycus+CsMBB8DMmfDyy9CoEfzP/8CgQbBuXeZjyabCY5k8U83ls2LI7dffqFEjNm/eXNmNFgV+tERUlc2bN9OoUaM6/Z7t0eyz77+H22+HG290axpGj4arroK8vMw8f1B75cZxj+K6iPMew34qLi5OeGJRVFSUte//9u3bKS0t5ZNPPuGLL7740c/33ntvmjZtypYtW2jatGlGYlq/fj1t27ZN+3EaNWpE69at2WOPPXa5vbY9mkPvAkrnEpXuo0TWr1c99VTXpdShg+qLLwb3XDZjJ31ErKskrPeu6vNG7ZhkUhivPZP/x9TSfRT6B3s6lygnhQrPPad6yCHuSJ9+uuq//x3s8+XyP3JdRTmZhvU+Vn3eXP5bCvu1B/38tSUFG1MI2MCBsGSJ606aPduVy7jlFti+PezIjE1/rX0sIZenpubya7ekkAGNGsH118OyZXDssXD11a7w3quv+v9cufzHHHdhTAG+4YYbanzeXBb2iUGY/8c20ByCp5+Gyy6D9evh3HNdy2H//cOOKrdFbSA9UwPf1Z/HBtxzQ20DzdZSCMFJJ8Hy5W5m0iOPuH2i77oLvIKHJgRRSghBs0WJpjaWFELSuDH86U9uvOGII1zLoXdvePPNsCMzURBk90FtYynW/Wis+ygCVOGxx9xmPhs3ur0cbr4Z9t037MhMtrPuotxk3UcRJwKnnw4rVriFbjNmuH2i773X9ok2wbKWganOkkKENGniBp0XLXJ7Nlx0ERx5pCu8Z0wQbBzBVGdJIYI6d4ZXXoEHHoANG9xYw/Dh8PnnYUdmjMl2lhQiSgQGD3b7RF96qetKKiiAadOsSymq7KzbZANLChG3995w552uC+mQQ+C3v4W+feG998KOzFSXTRVqTe4KJSmIyEgRWSYiS0Vklog0EpF9RGSOiKzxvjYLI7ao6tbN7fZ2//2u9dCjh5uttHVr2JEZY7JJxpOCiPwMuAzopapdgPrAWcA1wFxVbQ/M9a6bKurVc9NVV62CYcNcC+LQQ2HWLDet1WSeLQQz2Sbj6xS8pPAWcDiwFXgSmADcBfRT1Y9F5ABgnqoW1PZY2bJOIVX/+hdcfLHrWjr2WJg4ETp2DDuq3GVz/k1cRGqdgqpuBG4FNgAfA1+o6otAS1X92LvPx0CLTMcWN717w9tvw7hx71BS8gWHHbadc8+9h7VrHw47NGNMTIXRfdQMGAQcBLQC9hSRc+vw+xeKSImIlGzatCmoMGPjs89mUlh4LNOnt2fAgAeYOfMiCgt/zv33v2ZdShkWlYVg1nVl0hHGQHN/4ENV3aSq24EngJ8DZV63Ed7XTxP9sqpOVtVeqtprv/32y1jQNSkrm8mbb7Zj3rx6vPlmO8rKZmb0+deuHU15+Tc0a7aJUaN+x4QJR/PTn/6HYcP6csIJsGZNRsMxEWCzoEw6wkgKG4AjRaSxuNG544EVwNPAEO8+Q4CnQoitTsrKZrJq1YVs27YeULZtW8+qVRdmNDFs27Zhl+uHHTafyZN7MmLESP75T+jSBYqK4NtvMxZSzrIPY5MNwhhTeBt4DFgILPFimAyMAwaIyBpggHc90irO0qsqL/+GtWtHZyyGvLw2P7qtfv2dDB78d1auhFNPdbu+de4MzzyTsbBMhtksqGiJ83EPZZ2Cqhap6qGq2kVVz1PVbaq6WVWPV9X23tf/hBFbXVQ/S9/d7UHIzx9LvXqNd7mtXr3G5OePpVUrePBBmDsX8vLgxBPh17+GdesyFl7Wi8qHsW0tGi1xbjXaiuY0JDpLr+32ILRsOZiCgsnk5bUFhLy8thQUTKZly8GV9znuOFi8GMaNgzlzoFMnGDsWtm3LWJhZyz6M/WHHKzosKaShtrP0dNVlALtly8EUFq6jX79yCgvX7ZIQKjRsCKNGufLcv/wl/PGPrhLrnDlph2oiJiqzoOoizmfWFaLSakyXJYU0JHOWnoogB7DbtHEb+jz3nCus94tfwBlnQGlp2g+dUNT+IYKMJyofxlE75rkiW1qNlhTSUFY2k7VrR7Nt2wby8tqQnz827YQAmRnAHjjQbQV6440we7Yrl3HrrbB9u29PAUTvDDDIeOL2zx+EuhyDbDmzzja2HWeKKs7mq35416vX2JeWwrx59YBE74vQr5//dbPXroXLL4d//MPNUpo4EY45xp/Hjlrph6jFk21SPb7Z9r4UFxdHOrlFqsxFtgjybD7TA9j5+a618NRT8NVX0K8fnHcefPJJao8XtTPAqMVjsl+c/7YsKaQoyOmoQQ5g1+akk2D5chg9Gh55xG3qc9ddsGNH3R4nan2rUYsn29SWdJM9xlEZjzHWfZSyN99s5w0E7yovry2FhevSfvygxiuStXo1XHKJm53UrRtMmgSFhXV/nKh1C0QtnmxT/fja8Y4m6z4KQNBn88lMMw1Shw7wwguuxbBpE/z85zB0KHz2Wd0eJ2pngLXFYy0HYywppGx301HDLpTnBxE4/XS3tuGqq2DGDJcs7r03+X2io/ZBW1s81WcmRS32OCgqKrIxnJiz7qMABDkzKUzLlsGIEfDqq24vh7vvhp49w47KP9b14T87hsFJZ4aTdR9lWBQK5QWhc2d45RV44AHYsMElhuHD4fPPw44sdXZWa+IqqDU3lhQCEIVCeUERgcGDYeVKNxB9771ultL06fHcJ7r6zKSKMYeKfzhLEumJ2piS2T1LCgHwe51BFMcnmjaFCROgpAQOPhjOPx/69oX33gs7svTY9F
V/2XHzVyZatpYUAuDnzKQobORTm+7dYf58mDLFDUj36AEjR8LWrWFHVnd2VmuiLhMnLZYUAuBnobw4jE/Uq+emq65aBcOGwZ13ulpKs2bFq0up+j+WJYkfszP/7GdJISB+rTOI0/hE8+Zwzz3w1lvQqhWccw707+9aEHFU/QMw2z4QU3k9UStwmI64v59BnbTsdkqqiDQGrgTaqOoFItIeKFDVfwQSUR1EdUqqn4JeOR2UnTth8mS47jr4+mu44gq4/nrYc0/386gXDEsk26ZXpvJ6sukYZNNrqat0p6ROBbYBFUUOSoE/+RSb2Y2w6iClq359uPhi16U0eDDcfDN07AhPPOG6lLLpjDPb1TS42a9fv3ADy1JhnywlkxQOVtX/A7YDqOq3gAQalakU1EY+mdKiBUydCq+/Ds2awamnup3f4OCwQ0tKtq1jSOX11DS4+eqrr2YiZF/F4f3c3QlT0LEm0330T+B4YL6q9hCRg4FZqton0MiSkAvdR9lkzJgb+d///Q9wI9AQuBkYR1HRqEj9U9Yk27ob0u0+ivvxiGr8u4vLj7jT7T4qAp4HDhSRmcBc4Oq0IjI56cYbx6B6Bxs37gU8ARRx0EHf0rt3cciRmWQdc8wxkT/TjqNItWAqmoKJLrikcQbQHPgVcCKwb22/k8lLz5491cQToHPnqh56qCqoDhqk+uGHYUdVu6KiorBD8FW6r8d9fMRXVN/PRMe1qKhIcdsx7nJJ9TUAJVrD52oy3UevqWrfgHJSWqz7KL4qZh99/z3cfrvbK1rVbfBz1VWQlxd2hGZ3otr9Endx6D6aIyJXiciBIrJPxSWtiExoolIyo6JZ3LAhjBrl1jL88pfwxz9C165uc59cEdeuF1vcF4ywj2syLYUPE9ysqpofTEjJs5ZC3cShpPfzz8Oll8L777u9HMaPh9atw44qWHbGberCjzU+abUUVPWgBJfQE4KpuziUzBg4EJYscd1Js2e7chm33grbt4cdmcl2frfYgmoBBt2y3G1SEJE9ROQyEXnMu1wiInsEGpUJRFxKZjRq5FY/L1sGxx4L/+//ucJ7r70WdmT+idRsEwP4v6Ayrgs0k+k+mgLsAUz3bjoP2KmqwwKObbeyrfuorGwma9eOZtu2DeTltSE/f6yv3TpxLZnx9NNw2WWwfj2cey60bHkrt956Vdhh+ca6j6LB7/chyu9rugPNvVV1iKq+7F1+C/T2N0STiRLZcS2ZcdJJsHy5q6P08MNw220XcNddsGNH2JGZuPO7xZYNLcBkWgoLgdNV9QPvej7wmKr2yEB8tcqmlkKmzuKDbo0EbfVqKCh4EfgF3bq5faKPPDLsqNITx+KA2chaCt7PkkgKx+OK4q3F1TxqC/xWVV/xO9C6yqakMG9ePdx6lOqEfv3KMx1O5BQXF1froz0NuB1ozdChMG4c7LtvSMGZrGBJwUlm9tFcoD1wmXcpiEJCyDZ+b+EZZamslfhxUbZH2bq1NVdd5faHLihwpbrLLX+aFPm9PiDs9QapSqalMAKYqapbvOvNgLNVdVIG4qtVNrUU4rCGwA9+vM7qZ2DLlsHw4W52Up8+MGkS9Ozpe+jGZI10B5ovqEgIAKr6OXCBX8EZJ+4lspPlx1qJ6mdgnTvDvHnwt7+5GUq9e8OIEfD5535EbExuSaal8B5wuFdECRGpD7ynqp0zEF+tsqmlkCuCHjvZsgXGjIGJE932oLfcAr/5DYjtAGJMpXRbCi8Aj4jI8SJyHDALV0rbmDoLeuykaVOYMAFKSuDgg+H886FvX3jvPV8e3pisl0xSGIXbQ+FiYAS2n4JJQ6bWSnTvDvPnw5Qprthejx4wciRs3err06TFpqGausrE38xuu492ubOrjtpaVSNx3mXdR/FUVjaT1asvZ+fOzQA0aNCc9u3vDGz8ZPNmV5J78mTYf3+47TY466zwu5SiPGXRRJNffzNpdR+JyDwR2ctLCIuAqSIyPs2Amnp1lFaKyAoRKfRKcs8RkTXe12bpPEemRKUUddy4rb6dHTs2+756u6rmzeGee+Ctt6BVKzjnHOjf37UgjDG7Sqb7aG9V3QqcAkxV1Z5A/zSf907geVU9FDgcWAFcA8xV1fa4Lqpr0nyOwGWiNEVc1CU5hlWttU8fePttN2V14UI4/HC45hr4+utAn3YX2VAGwWRWpv9mkpl9tAT4Ba4g3mhV/ZeIvKeqXVN6QpG9gMVAvlZ5chFZBfRT1Y9F5ABgnqoW1PZYYXcfxbXAnN/quvYg6BlIyZSN+PRTt7nPtGlw4IFwxx1w8smZ7VKy7iNTV5HoPgJuxM1Aet9LCPnAmjTiyQc24bqh3hWRKSKyJ9BSVT8G8L62SOM5MiIupaiDVtcz/6BnICVTsrhFC5g6FV5/HZo1g1NPdTu/vf++LyEYE1vJlLl4VFW7qupw7/paVT01jedsAPQA7lbV7sDX1KGrSEQuFJESESnZtGlTGmGkL5dKU9SmrskxStVajz4aFixw+0TPnw9dukBREXz77e5/N11xLYNgwpOJv5lkWgp+KwVKVfVt7/pjuCRR5nUb4X39NNEvq+pkVe2lqr3222+/jARckyh9uIWprskxiNXb6fS7NmgAf/gDrFwJp5zidn3r0gWeeSblcJKO2Zi6iNyUVN+eVOR1YJiqrhKRYmBP70ebVXWciFwD7KOqta6HCHtMAeJfitoPUavblG6/68svuzIZK1fCoEFw553Qtq2PARoTsrRKZwdBRLoBU4CGuJLcv8W1Wh4B2gAbcHs4/Ke2x4lCUjBOlJKjH4Nx33/vupRuvBFU4Y9/hCuvhLw8n4I0JkS1JYXKcsSJLsChwPHAT6vdPrC238vUpWfPnmpMdUVFRb491vr1qqecogqqHTqozpnj20MbH/j5XucSoERr+FytcUxBRC4DngIuBZaKyKAqP/6zP/nKhCWbF9352e/apg08/jg89xzs3AkDBsCZZ8LGjb49hdmN2t7PZGaambqpbaD5AqCnqv4a6AdcLyKXez+zmpMxZovu6m7gQFi6FG64AZ5+Gg491JXL2L497Miyn33wZ1ZtSaG+qn4FoKrrcInhBK/EhSWFGAtrRXHcNWrkynIvWwbHHANXXeUK7732WtiR5RZbFR6s2pLCJ96AMABegjgR2Bc4LOjATHBs0V168vNh9mx48kn46iuXIM47Dz75JOzIskdtH/w/3prVfW9JwR81zj4SkdbADlX90Z+6iBylqvODDm53bPZRaqw8h3+++QbGjnWb+fzkJ/CnP8HFF7u1D8Yftc0ms1IhqUmpzIWqliZKCN7PQk8IJnW26M4/jRu7pLB0KRxxBFx2mdsO9K23wo4sN9iqcP+FsaLZhCxX9oPOpA4d4IUX4JFHYNMmKCyEYcPgs8/Cjiz+avvgty4j/4WyeM0v1n1koujLL92itzvugL32gptucgminp2CmYhIt0pqxYPs5W2Es4+34Y4xJoEmTdwYw6JFrobS73/vWg4LFoQdmTG7l8zOa78XkTLgPWCBd7HTc2N2o3NnmDcP/vY3WL/ejTWMGAGff163x
7EuEpNJyWyyswYoVNXI9Y5a91H2i1JNpXRs2eLWOEyc6LYHveUW+M1vktvUx2bYGL+l2330AfDNbu9lck7QpTKisvLaj9fZtClMmAAlJXDwwXD++dC3LyxZ4n+8xqQjmaRwLfBPEblXRCZUXIIOzERbJj6wo7Dy2u/X2b2728xnyhRYscJdv+IK2Lp11/vZql0TlmS6j94B3gCWAJUb6Krq9GBD2z3rPgpPJhbABb2XczKCfJ2bN8Po0TB5Muy/P4wf74rtVe9Ssu4j47d0u492qOoVqjpVVadXXHyO0cRMJkplRGG70yBfZ/PmcM89bqFbq1Zw9tnQv7/b3MeYsCSTFF7x9kU+wKakmgqZ+MAOe+W16yJK/C/i5+vs0wfefhsmTYKFC6FrV7j2Wvj6a/dzW7VrMimZ7qMPE9ysqpofTEjJs+6j8NS2BSfg24yhsGYfJXp9FYLcavTTT+Hqq2H6dLeXwx13wK9/ndwsJWOSFbntOP2SalLIlmmOYUt0HIFI7decqprGEqA+HTtOD/y1vPEGDB/uZiedcIKbuXTIIYE+pckhaScFEekCdAIaVdymqjN8izBFqSSFqG0yn21q+jBt0KA5Rx8duaUuNYrCIPeOHXDXXVBU5PaMHjUKrrnGVWM1Jh1pDTSLSBFwl3c5Fvg/4CRfI8ygKExzzGY1DcDu2LE5Vju7RWGQu0EDGDnSDTyfcoqrp9SlCzzzTMZCMDkomYHm04DjgU9U9bfA4UBeoFEFyDaYCVZtH5pxSrxhD3JX1aoVPPggzJ0LDRvCiSe6cYb1iXq3soyty8i8ZJLCt6paDuwQkb2AT4HQB5lTFYUzwGxW24dmnBJvFMuLH3ccLF4M48bBnDnQsSP8+c+wbVtmnj+MD2jbnznzkpl9NAm4DjgLuBL4CljktRpCZWMK0fT66/uyc+fmH91uO7v5Z8MG17X0xBNuL4eJE90ahyCFsYjOFu4FI60xBVUdrqpbVPUeYAAwJAoJIVVBngEGXQsoLjp0uPNHXS+wBzt3fpXzx8YvbdrA44/Dc8/Bzp0wYIBbDb1xY9iRpc9KfIQrmYHmoRXfq+o6YJk3+BxbLVsOprBwHf36lVNYuM63hBCF4m1RUD3x1q/fHBFhx47N5PqxqUmqJxQDB7qtQG+4AZ5+Gg49FG67DbZvr3sMiT50w/iALi4uRlUrWwgV31tSyIxkuo8eBJoCQ4HmwFTgVVW9KvjwahelxWuZqAUUV3ZsaudXl+batW6P6GeecXs5TJrkKrEma3ddNdZ9lD3S7T46B5iOK4j3LPCHKCSEqLFZTYmVlc2sYRGYHZsKfk2Tzs+H2bPhySfhq6/gmGPcng1lZX5Gm1nZUOIjbi2cZLqP2gOXA48D64DzRKR6h3HOs1lNP1ZxBlyTXD42Vfl5QiECgwbB8uVw3XXw0ENQUAB/+Ysbe6iuLt1DYXxAZ/IDNajnitsMqmS6j1YCI1R1rri/nCuA36lq50wEWJsodR/ZrKYfq7lUhB2bqoLsXlu9Gi65xE1h7d7ddSkdeWTi+wbVVVNcXByLs+WgXn8Uu8DSLZ3dR1XngquCp6q3Ab/2M8BsEMV57WGr7Uw3149NVUEulOvQAV54AR55xBXbKyyEC6Df0rEAABLhSURBVC6AzzJYcSRuZ8p+iPMMqhqTgohcDaCqW0Xk9Go/ju2U1Ap+TB+t/hiA77Oa4qzmLrW2OX9sqgr6hEIETj/d7fR21VUwbZrrUrrvPiivUsYpG/rv6yqoD+84z6CqsftIRBaqao/q3ye6HpZ0qqSm29Vj3UW7Z8compYtcxVYX3vN7eUwaRL07OnvcxQXFydsIRQVFaX1wRhkV5R1Hzm1dR9JDd8nuh4rdZntUVOLwgrr7Z51qUVT584wbx787W+uflLv3jBiBHz+uX/PEdSZcqa6ovxMPHFrgeVkSyHZssi1nemuWHFeUo9hTE2isK/Hli0wZowrk9G8Odxyi5vG6uemPrWdKdf1GAR51l21FRLFs3s/pdpSOFxEtorIl0BX7/uK64cFEmmGJDt9tLbWgE1BzR5hlCeJygr4pk3dBj4lJXDwwXD++W7B25Il/j1HTWfKyR6DTA3axqG/PxNqTAqqWl9V91LVJqrawPu+4voemQzSb8nO9qht/niUSiub1IX14Ry17sfu3WH+fJgyxQ1Id+8OV1wBW7em/9g1fdgmewwyNWgb5xlDfkpmSmrWSbavu7bWgPWXZ4ewPpyjsgK+aivp7bfbceKJM1m1CoYOdftDH3qoWwAXRE9KVI5BhTjPGPJTg7ADCEvLloN3+wGenz824ZhCRWsgmccw0ZbpD6aKPvTE41GZ7X6sPmZW0UoqKIB77x3M0KFw8cVw9tlu+urEiS5J+CUvr00Ni/ZqPgZxG7SNo5xsKSTLWgPZL5NjQ7t2Vf1Yprsfd9dK6tMH3nnHJYOFC6FrV7j2Wvj6a3+eP1EXLAjNm/+yxt/J1Fl7Lief3Za5iLIolbkw8ZTJtRS1lf3Iy2ub8dlHyc7CA7ca+uqrYfp0t5fDHXe4LUHTnaW0evVwPvronl3isLUswUu3zEUgRKS+iLwrIv/wru8jInNEZI33tVlYsZnckcnWYM1dUhLKCvi6tJJatHAroV9/HfbeG045BX71K/jgg/Ri2Lz5WaonJlvvE64wu48uB1ZUuX4NMFdV2wNzvevG1MivqaRBbLqUSNSmMacyg+7oo11X0vjx8MYbbiFccTF8+21qMURtsNmElBREpDXwK2BKlZsH4fZtwPtqRfdMjaIyz78uojaNOdVWUoMGbn/olStdi+GGG6BLF3j22brHELVEacJrKdwBXA1U7bhsqaofA3hfWyT6RRG5UERKRKRk06ZNwUdqIilq8/yTEcWJC+m0klq1ggcfhLlzoWFD15108smudEayopYo6yob92XPeFIQkROBT1V1QSq/r6qTVbWXqvbab7/9fI7OxEVcux0y1VWVSccdB4sXw7hx8OKL0LEj3HQTfP/97n83iokyWam0VuOQRDI++0hEbgLOA3YAjYC9gCeA3kA/Vf1YRA4A5qlqQW2PZbOPcpft+xxNGza4rqUnnvhhx7f+/cOOKhh1/RuMUtXgSM0+UtVrVbW1qrYDzgJeVtVzgaeBId7dhgBPZTo2Ex9x73bIVm3awOOPw3PPwY4dMGAAnHkmbNwYdmT+q2trNS5dnlFavDYOGCAia4AB3nVjEopzt0MuGDgQli51g9BPP+1WQt92G2zfHnZk/qnrIHlcujxt8ZoxJlBr18Jll8Ezz7hZShMnukqscVfX7qAodXlGqvvIGJNb8vNh9mx48kn48ks45hi3Z0NZWdiRpaeurdW4dHlaS8HERhQ2pTHp+eYbGDvWbebTuDH86U+u6F79+mFHlhlR+RuuraVgScHEQpRmbpj0rVoFl1wCL73k9m6YNAmOPDLsqHKHdR+Z2IvLzA2TnIICt6bh4YddN1JhIVxwAXz2WdiRGUsKJhbiMnPDJE8EzjjDlcu48kqYOtUli/vug3Lb5jw0lhRMLFiNnOzVpAnceissWuRmJ114
oWs5LEip5oFJlyUFEwtxmblhUtelC8ybBzNmwLp10Lu3G3fYsiXsyHKLJQUTC7ZYLTeIwHnn/TAQfffdrktpxoxg9ok2P2azj4wxkfXuuzB8OLz1FvzXf7mFb4cdFnZU8Wezj4wxsdS9O8yfD1OmwPLl7vqVV7pFcCYYlhSMMZFWrx4MHeq6lIYOhdtvd7WUHnrIupSCYEnBGBMLzZvDvfe6rqT994ezz3ZVWFeuDDuy7GJJwRgTK336wDvvuPGFBQuga1e49lr4+uuwI8sOlhSMyXFx2A2suvr13QD0qlVwzjlu17dOneDvf7cupXRZUjAmh6WypWSUtGgB06bB66/D3nvDKae4vaI/+CDsyOLLkoIxOSxbakodfTQsXAjjx8Mbb0DnzlBcDN9+G3Zk8WNJwZgclk01pRo0cPtDr1zpWgw33OBWST/7bNiRxYslBWNyWDbWlGrVCh58EObOhYYNXXfSySfD+h9vemYSsKRgTA7L5ppSxx0Hixe7QegXX4SOHeGmm+D778OOLNosKRiTw7K9plTDhjBqFKxYAQMHwnXXuSmsL70UdmTRZbWPjDE547nn4NJL3eykM85wA9M/+1nYUWWe1T4yxhjghBNg6VI3CP3UU65cxvjxsH172JFFhyUFY0xOadQIxoxxBfaOOcYV2OvRw611MJYUjDE5Kj8fZs+GJ590VVf79oUhQ9ye0bnMkoIxJmeJwKBBrtVw3XUwa5bb1GfiRNi5M+zowmFJwRiT8xo3hrFjYcmSH7YB7d3bVWTNNZYUjDHGU1Dg1jQ8/LDrRioshAsugM2bw44scywpGGNMFSJuuurKlW4QeupU6NAB7rsPysvDji54lhSMMSaBJk3g1lth0SJXQ+nCC+HnP3eF97KZJQVjjKlFly4wbx7MmAEffvjDmMOWLWFHFgxLCsYYsxsicN55blOfESPg7rvd+MOMGdm3qY8lBWOMSVLTpjBhApSUuHUOQ4a4BXBLloQdmX8sKRhjTB117w7z58OUKW6NQ/fublD6yy/Djix9lhSMMSYF9erB0KGuS2noULj9dldL6aGH4t2lZEnBGGPS0Lw53HuvW+i2//5w9tkwYICb0hpHlhSMMcYHffrAO++4EhklJW7fhmuvha+/DjuyurGkYIwxPqlfH4YPh9Wr4Zxz3K5vnTq5ontx6VKypGCMMT5r0QKmTYPXXoO993Z7RJ94otvcJ+osKRhjTED+679gwQK3kc9rr0Hnzm6Dn+++CzuymmU8KYjIgSLyioisEJFlInK5d/s+IjJHRNZ4X5tlOjZjjPHbHnvAyJFultLJJ0NxsUsOzz4bdmSJhdFS2AFcqaodgSOBESLSCbgGmKuq7YG53nVjjMkKrVq5/RpeegkaNoRf/colifXrw45sVxlPCqr6saou9L7/ElgB/AwYBEz37jYd+HWmYzPGmKAdfzwsXgw33eTKdHfs6L7//vuwI3NCHVMQkXZAd+BtoKWqfgwucQAtavidC0WkRERKNm3alKlQjTHGNw0bwjXXwIoVMHCg2/Wta1eYOzfsyEJMCiLyU+Bx4A+qujXZ31PVyaraS1V77bfffsEFaIwxAWvTBp54wo0v7NgB/fvDWWfBxo3hxRRKUhCRPXAJYaaqPuHdXCYiB3g/PwD4NIzYjDEm0044AZYudTOTnnzSlcsYPx62b898LGHMPhLgfmCFqo6v8qOngSHe90OApzIdmzHGhKVRIxgzxhXYO+YYV2CvRw94/fXMxhFGS+Eo4DzgOBFZ5F1+CYwDBojIGmCAd90YY3JKfj7Mnu1aDF9+CX37uhLdZWWZeX7RuKy9TqBXr15aUlISdhjGGBOIb76BsWPhllugcWP3/UUXuXIa6RCRBaraK9HPbEWzMcZEVEUiWLLkh21Ae/d2FVmDYknBGGMirqDArWl4+GHXjVRY6MYcgmBJwRhjYkAEzjjD7dNw5ZVw8MHBPE+DYB7WGGNMEJo0gVtvDe7xraVgjDGmkiUFY4wxlSwpGGOMqWRJwRhjTCVLCsYYYypZUjDGGFPJkoIxxphKlhSMMcZUinVBPBHZBKS6w+m+wGc+hhM0izc4cYoV4hVvnGKF3Im3raom3KUs1kkhHSJSUlOVwCiyeIMTp1ghXvHGKVaweMG6j4wxxlRhScEYY0ylXE4Kk8MOoI4s3uDEKVaIV7xxihUs3twdUzDGGPNjudxSMMYYU40lBWOMMZVyIimISCMReUdEFovIMhG5wbt9HxGZIyJrvK/Nwo61gojUF5F3ReQf3vUox7pORJaIyCIRKfFui3K8TUXkMRFZKSIrRKQwivGKSIF3TCsuW0XkD1GMtYKIjPT+x5aKyCzvfy+S8YrI5V6cy0TkD95tkYlVRP4qIp+KyNIqt9UYn4hcKyLvi8gqEfnvVJ83J5ICsA04TlUPB7oBA0XkSOAaYK6qtgfmetej4nJgRZXrUY4V4FhV7VZlznSU470TeF5VDwUOxx3nyMWrqqu8Y9oN6Al8A/ydCMYKICI/Ay4DeqlqF6A+cBYRjFdEugAXAH1wfwMnikh7ohXrNGBgtdsSxicinXDHurP3O5NEpH5Kz6qqOXUBGgMLgSOAVcAB3u0HAKvCjs+LpbX3hh8H/MO7LZKxevGsA/atdlsk4wX2Aj7Em2QR9XirxPcLYH6UYwV+Bvwb2Ae31e8/vLgjFy9wOjClyvXrgaujFivQDlha5XrC+IBrgWur3O8FoDCV58yVlkJFd8wi4FNgjqq+DbRU1Y8BvK8twoyxijtwf6DlVW6LaqwACrwoIgtE5ELvtqjGmw9sAqZ63XNTRGRPohtvhbOAWd73kYxVVTcCtwIbgI+BL1T1RaIZ71Kgr4g0F5HGwC+BA4lmrFXVFF9FQq5Q6t1WZzmTFFR1p7pmeGugj9d8jBwRORH4VFUXhB1LHRylqj2AE4ARItI37IBq0QDoAdytqt2Br4lAd0ZtRKQhcBLwaNix1Mbr3x4EHAS0AvYUkXPDjSoxVV0B3AzMAZ4HFgM7Qg0qPZLgtpTWG+RMUqigqluAebh+tzIROQDA+/ppiKFVOAo4SUTWAQ8Bx4nIA0QzVgBU9SPv66e4Pu8+RDfeUqDUaykCPIZLElGNF1yyXaiqZd71qMbaH/hQVTep6nbgCeDnRDReVb1fVXuoal/gP8AaIhprFTXFV4pr6VRoDXyUyhPkRFIQkf1EpKn3/U9wf7wrgaeBId7dhgBPhRPhD1T1WlVtrartcF0GL6vquUQwVgAR2VNEmlR8j+tDXkpE41XVT4B/i0iBd9PxwHIiGq/nbH7oOoLoxroBOFJEGouI4I7tCiIar4i08L62AU7BHeNIxlpFTfE9DZwlInkichDQHngnpWcIe8AnQ4M1XYF3gfdwH1hjvNub4wZ013hf9wk71mpx9+OHgeZIxorro1/sXZYBo6McrxdbN6DE+3t4EmgW1XhxEyM2A3tXuS2SsXqx3YA74VoK/A3Ii2q8wOu4E4LFwPFRO7a4JPUxsB3XEhhaW3zAaOAD3GD0Cak+r5W5MMYYUyknuo+MMcYkx5KCMcaYSpY
UjDHGVLKkYIwxppIlBWOMMZUsKZisJCI7q1UYzdiq5UTVLY2JC5uSarKSiHylqj8N6bn7Al8BM9RVC83Ec9ZX1Z2ZeC6T3aylYHKGiOzt1Zov8K7PEpELvO/vFpESqbLfhnf7OhH5s4i86f28h4i8ICIfiMhFiZ5HVV/DlU2oLZbTvVr+i0XkNe+2+iJyq7i9Kd4TkUu924/3ivct8VoheVViGyMibwCni8gvvDgXisijIhJKUjTxZknBZKufVOs+OlNVvwAuAaaJyFlAM1W9z7v/aHV7QXQFjhGRrlUe69+qWohbATsNOA04ErgxjfjGAP+tbo+Pk7zbLsQVk+uuql2BmSLSyHvOM1X1MFxBv4urPM53qno08BLwR6C/uuKEJcAVacRnclSDsAMwJiDfqquKuwtVnSMipwMTcZurVDjDK/vdAFenvhOuDAa4ujIAS4CfquqXwJci8p2INFVXZLGu5uOS0yO4wnHganLdo6o7vFj/IyKH44rMrfbuMx0YgSuvDvCw9/VIL+b5ruwQDYE3U4jL5DhLCianiEg9oCPwLW4zmFKvgNhVQG9V/VxEpgGNqvzaNu9reZXvK66n9D+kqheJyBHAr4BFItINV/64+iBfopLIVX1d5X5zVPXsVOIxpoJ1H5lcMxJXufNs4K8isgduN7avgS9EpCWuVHWgRORgVX1bVccAn+HKHr8IXCQiDbz77IMrLtdORA7xfvU84NUED/kWcFTF/bxKpR2Cfh0m+1hLwWSrn3g77VV4HvgrMAzoo6pfegO8f1TVIhF5F1fldS2uaydlIjILV+F2XxEpBYpU9f5qd7tF3J7Agqt2uRhXWbQD8J6IbAfuU9W/iMhvgUe9ZPEv4J7qz6mqm0TkfGBWxUA0boxhdfX7GlMbm5JqjDGmknUfGWOMqWRJwRhjTCVLCsYYYypZUjDGGFPJkoIxxphKlhSMMcZUsqRgjDGm0v8HuN5etXZD6WgAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "plot_decision_boundary(w, b, X_train, y_train)\n", "# Set the y-axis label\n", "plt.ylabel('Exam 2 score') \n", "# Set the x-axis label\n", "plt.xlabel('Exam 1 score') \n", "plt.legend(loc=\"upper right\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 2.8 Evaluating logistic regression\n", "\n", "We can evaluate the quality of the parameters we have found by seeing how well the learned model predicts on our training set. \n", "\n", "You will implement the `predict` function below to do this.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Exercise 4\n", "\n", "Please complete the `predict` function to produce `1` or `0` predictions given a dataset and a learned parameter vector $w$ and $b$.\n", "- First you need to compute the prediction from the model $f(x^{(i)}) = g(w \\cdot x^{(i)} + b)$ for every example \n", " - You've implemented this before in the parts above\n", "- We interpret the output of the model ($f(x^{(i)})$) as the probability that $y^{(i)}=1$ given $x^{(i)}$ and parameterized by $w$.\n", "- Therefore, to get a final prediction ($y^{(i)}=0$ or $y^{(i)}=1$) from the logistic regression model, you can use the following heuristic -\n", "\n", " if $f(x^{(i)}) >= 0.5$, predict $y^{(i)}=1$\n", " \n", " if $f(x^{(i)}) < 0.5$, predict $y^{(i)}=0$\n", " \n", "If you get stuck, you can check out the hints presented after the cell below to help you with the implementation." ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [], "source": [ "# UNQ_C4\n", "# GRADED FUNCTION: predict\n", "\n", "def predict(X, w, b): \n", " \"\"\"\n", " Predict whether the label is 0 or 1 using learned logistic\n", " regression parameters w\n", " \n", " Args:\n", " X : (ndarray Shape (m,n)) data, m examples by n features\n", " w : (ndarray Shape (n,)) values of parameters of the model \n", " b : (scalar) value of bias parameter of the model\n", "\n", " Returns:\n", " p : (ndarray (m,)) The predictions for X using a threshold at 0.5\n", " \"\"\"\n", " # number of training examples\n", " m, n = X.shape \n", " p = np.zeros(m)\n", " \n", " ### START CODE HERE ### \n", " # Loop over each example\n", " for i in range(m): \n", " z_wb = 0\n", " # Loop over each feature\n", " for j in range(n): \n", " # Add the corresponding term to z_wb\n", " z_wb_ij = X[i,j]*w[j]\n", " z_wb += z_wb_ij\n", " \n", " # Add bias term \n", " z_wb += b\n", " \n", " # Calculate the prediction for this example\n", " f_wb = sigmoid(z_wb)\n", "\n", " # Apply the threshold\n", " p[i] = 1 if (f_wb >= 0.5) else 0\n", " \n", " ### END CODE HERE ### \n", " return p" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " Click for hints\n", " \n", " \n", "* Here's how you can structure the overall implementation for this function\n", " ```python \n", " def predict(X, w, b): \n", " # number of training examples\n", " m, n = X.shape \n", " p = np.zeros(m)\n", " \n", " ### START CODE HERE ### \n", " # Loop over each example\n", " for i in range(m): \n", " \n", " # Calculate f_wb (exactly how you did it in the compute_cost function above) \n", " # using a couple of lines of code\n", " f_wb = \n", "\n", " # Calculate the prediction for that training example \n", " p[i] = # Your code here to calculate the prediction based on f_wb\n", " \n", " ### END CODE HERE ### \n", " return p\n", " ```\n", " \n", " If you're still stuck, you can check the hints presented below to figure out how to calculate `f_wb` and `p[i]` \n", " \n", "
\n", " Hint to calculate f_wb\n", "     Recall that you calculated f_wb in compute_cost above — for detailed hints on how to calculate each intermediate term, check out the hints section below that exercise\n", "
\n", "     More hints to calculate f_wb\n", "     You can calculate f_wb as\n", "
\n",
    "               for i in range(m):   \n",
    "                   # Calculate f_wb (exactly how you did it in the compute_cost function above)\n",
    "                   z_wb = 0\n",
    "                   # Loop over each feature\n",
    "                   for j in range(n): \n",
    "                       # Add the corresponding term to z_wb\n",
    "                       z_wb_ij = X[i, j] * w[j]\n",
    "                       z_wb += z_wb_ij\n",
    "            \n",
    "                   # Add bias term \n",
    "                   z_wb += b\n",
    "        \n",
    "                   # Calculate the prediction from the model\n",
    "                   f_wb = sigmoid(z_wb)\n",
    "    
\n", " \n", "
\n", "
\n", " Hint to calculate p[i]\n", "     As an example, if you'd like to say x = 1 if y is less than 3 and 0 otherwise, you can express it in code as x = y < 3 . Now do the same for p[i] = 1 if f_wb >= 0.5 and 0 otherwise. \n", "
\n", "     More hints to calculate p[i]\n", "     You can compute p[i] as p[i] = f_wb >= 0.5\n", "
\n", "
\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once you have completed the function `predict`, let's run the code below to report the training accuracy of your classifier by computing the percentage of examples it got correct." ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Output of predict: shape (4,), value [0. 1. 1. 1.]\n", "\u001b[92mAll tests passed!\n" ] } ], "source": [ "# Test your predict code\n", "np.random.seed(1)\n", "tmp_w = np.random.randn(2)\n", "tmp_b = 0.3 \n", "tmp_X = np.random.randn(4, 2) - 0.5\n", "\n", "tmp_p = predict(tmp_X, tmp_w, tmp_b)\n", "print(f'Output of predict: shape {tmp_p.shape}, value {tmp_p}')\n", "\n", "# UNIT TESTS \n", "predict_test(predict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected output** \n", "\n", "\n", " \n", " \n", " \n", "
Output of predict: shape (4,), value [0. 1. 1. 1.]
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now let's use this to compute the accuracy on the training set" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Train Accuracy: 92.000000\n" ] } ], "source": [ "#Compute accuracy on our training set\n", "p = predict(X_train, w,b)\n", "print('Train Accuracy: %f'%(np.mean(p == y_train) * 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
Train Accuracy (approx): 92.00
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 3 - Regularized Logistic Regression\n", "\n", "In this part of the exercise, you will implement regularized logistic regression to predict whether microchips from a fabrication plant passes quality assurance (QA). During QA, each microchip goes through various tests to ensure it is functioning correctly. \n", "\n", "\n", "### 3.1 Problem Statement\n", "\n", "Suppose you are the product manager of the factory and you have the test results for some microchips on two different tests. \n", "- From these two tests, you would like to determine whether the microchips should be accepted or rejected. \n", "- To help you make the decision, you have a dataset of test results on past microchips, from which you can build a logistic regression model.\n", "\n", "\n", "### 3.2 Loading and visualizing the data\n", "\n", "Similar to previous parts of this exercise, let's start by loading the dataset for this task and visualizing it. \n", "\n", "- The `load_dataset()` function shown below loads the data into variables `X_train` and `y_train`\n", " - `X_train` contains the test results for the microchips from two tests\n", " - `y_train` contains the results of the QA \n", " - `y_train = 1` if the microchip was accepted \n", " - `y_train = 0` if the microchip was rejected \n", " - Both `X_train` and `y_train` are numpy arrays." ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "deletable": false, "editable": false }, "outputs": [], "source": [ "# load dataset\n", "X_train, y_train = load_data(\"data/ex2data2.txt\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### View the variables\n", "\n", "The code below prints the first five values of `X_train` and `y_train` and the type of the variables.\n" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "X_train: [[ 0.051267 0.69956 ]\n", " [-0.092742 0.68494 ]\n", " [-0.21371 0.69225 ]\n", " [-0.375 0.50219 ]\n", " [-0.51325 0.46564 ]]\n", "Type of X_train: \n", "y_train: [1. 1. 1. 1. 1.]\n", "Type of y_train: \n" ] } ], "source": [ "# print X_train\n", "print(\"X_train:\", X_train[:5])\n", "print(\"Type of X_train:\",type(X_train))\n", "\n", "# print y_train\n", "print(\"y_train:\", y_train[:5])\n", "print(\"Type of y_train:\",type(y_train))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Check the dimensions of your variables\n", "\n", "Another useful way to get familiar with your data is to view its dimensions. Let's print the shape of `X_train` and `y_train` and see how many training examples we have in our dataset." 
] }, { "cell_type": "code", "execution_count": 37, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The shape of X_train is: (118, 2)\n", "The shape of y_train is: (118,)\n", "We have m = 118 training examples\n" ] } ], "source": [ "print ('The shape of X_train is: ' + str(X_train.shape))\n", "print ('The shape of y_train is: ' + str(y_train.shape))\n", "print ('We have m = %d training examples' % (len(y_train)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Visualize your data\n", "\n", "The helper function `plot_data` (from `utils.py`) is used to generate a figure like Figure 3, where the axes are the two test scores, and the positive (y = 1, accepted) and negative (y = 0, rejected) examples are shown with different markers.\n", "\n", "" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZAAAAEGCAYAAABLgMOSAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO3deZwU9Z3/8debQ7xRcIJERCTrEQ8cAdFRo7hGBbORuJpV40bNhUTRzWY1S37+EiZrSMxqdhMNYjSaaMIPNFlRY4yiKHE9OVQUQ/BAiIgBggdgFIX5/P6oGuyZ6e7po6qrqvvzfDz6Md11dH27prs+9b1lZjjnnHPl6pF0ApxzzmWTBxDnnHMV8QDinHOuIh5AnHPOVcQDiHPOuYr0SjoBtbTbbrvZkCFDkk6Gc85lysKFC/9qZk2dlzdUABkyZAgLFixIOhnOOZcpklbkW+5FWM455yriAcQ551xFPIA455yrSEPVgTjn6tMHH3zAypUree+995JOSqZtu+22DBo0iN69e5e0vQcQ51zmrVy5kp122okhQ4YgKenkZJKZsW7dOlauXMnee+9d0j5ehOUa2urV03n88SHMnduDxx8fwurV05NOkqvAe++9R//+/T14VEES/fv3LysX5zkQ17BWr57O0qXjaWv7GwCbNq1g6dLxAAwYcHaSSXMV8OBRvXLPoedAXMNatuyyrcGjXVvb31i27LKEUuRctngAcQ1r06Y/l7U867y4Ln6zZs1CEn/6059qcrzvfe97Ze/zi1/8gokTJ0ZyfA8grmH16TO4rOVZ1l5ct2nTCsC2Ftc1ehBpbW2N9P1mzJjB0UcfzcyZMyN930IqCSBR8gDiGtbQoVPo0WP7Dst69NieoUOnJJSi+HhxXX7f+c53InuvjRs38uijj3LjjTduDSBbtmzhkksu4eCDD2bYsGFcc801AMyfP58jjzySQw45hFGjRrFhwwa2bNnCpZdeymGHHcawYcP46U9/CsDcuXM55phjOPXUUznggAOYMGECbW1tTJo0iXfffZfm5mbOPjuos/vVr37FqFGjaG5u5vzzz2fLli0A/PznP2fffffl2GOP5dFHH43sM3slumtY7RXly5ZdxqZNf6ZPn8EMHTqlLivQG624Lgl33HEHY8aMYd9996Vfv3489dRTPPnkk7zyyis8/fTT9OrVizfeeIP333+fM844g1tvvZXDDjuM9evXs91223HjjTfSt29f5s+fz6ZNmzjqqKM48cQTAZg3bx5//OMf2WuvvRgzZgy33347V1xxBT/5yU945plnAFiyZAm33norjz76KL179+aCCy5g+vTpnHDCCUyePJmFCxfSt29fjjvuOA499NBIPrPnQFxDGzDgbFpaljN6dBstLcvrMnhAYxXXdae1tRVJW1sctT+vtjhrxowZnHnmmQCceeaZzJgxgwceeIAJEybQq1dwr96vXz+WLl3KwIEDOeywwwDYeeed6dWrF7Nnz+aWW26hubmZww8/nHXr1vHiiy8CMGrUKIYOHUrPnj0566yzeOSRR7ocf86cOSxcuJDDDjuM5uZm5syZw7Jly3jyyScZPXo0TU1NbLPNNpxxxhlVfc5cngNxrgEMHTqlQ5NlqN/iuu60trZuDRaSMLOq33PdunU8+OCDLF68GEls2bIFSYwYMaJL01gzy9tc1sy45pprOOmkkzosnzt3bpftC+1/7rnn8v3vf7/D8jvuuCO2Js6eA3GuAQwYcDb77Xc9ffrsBYg+ffZiv/2ur9scV6395je/4ZxzzmHFihUsX76cV199lb333pvhw4dz3XXXsXnzZgDeeOMN9t9/f1atWsX8+fMB2LBhA5s3b+akk05i2rRpfPDBBwC88MILvPPOO0BQhPXKK6/Q1tbGrbfeytFHHw1A7969t25//PHH85vf/IY1a9ZsPdaKFSs4/PDDmTt3LuvWreODDz7g17/+dWSf23MgzjWIAQPO9oDRyeTJkyN5nxkzZjBp0qQOy0477TSWLFnC4MGDGTZsGL179+YrX/kKEydO5NZbb+Wiiy7i3XffZbvttuOBBx7gy1/+MsuXL2f48OGYGU1NTdxxxx0AtLS0MGnSJJ577rmtFeoA48ePZ9iwYQwfPpzp06fz3e9+lxNPPJG2tjZ69+7N1KlTOeKII2htbaWlpYWBAwcyfPjwrZXr1VIU2besGDlypPmEUs7VnyVLlvDxj3886WTEYu7cuVx11VXcfffdNTlevnMpaaGZjey8baJFWJJukrRG0uIC6yXpakkvSXpW0vCcdWMkLQ3XTcq3v+uedy4rj58v5z6UdB3IL4AxRdaPBfYJH+OBaQCSegJTw/UHAGdJOiDWlNahRuxcVk0AaMTz5ZI3evTomuU+ypVoADGzh4E3imwyDrjFAk8Au0gaCIwCXjKzZWb2PjAz3NaVodE6l1UbABrtfDnXnaRzIN3ZA3g15/XKcFmh5V1IGi9pgaQFa9eujS2hWdRoncuqDQCNdr6c607aA0i+xstWZHnXhWbXm9lIMxvZ1NQUaeKyLq2d
y6Ien6hdtQEgrefLuaSkPYCsBPbMeT0IWFVkuStDWseCinJ8olzVBoC0ni/nkpL2AHIXcE7YGusI4G0zex2YD+wjaW9J2wBnhtu6MjRa57JqA0CjnS9Xnp49e9Lc3MxBBx3Epz/9ad56662i21933XXccsstZR/nrbfe4tprry17v9bWVq666qqy9ysm6Wa8M4DHgf0krZT0JUkTJE0IN7kHWAa8BNwAXABgZpuBicB9wBLgNjN7vuYfoA6kZSyouMYnyhVFAEjqfHnz4WjFcT632247nnnmGRYvXky/fv2YOnVq0e0nTJjAOeecU/ZxKg0gcUi0J7qZndXNegMuLLDuHoIA4xK0evX0SEazjWN8onxq1Rs7qvPS/l4+9W50anE+W1paePbZZwF4+eWXufDCC1m7di3bb789N9xwA/vvvz+tra3suOOOXHLJJQW3Wb16NRMmTGDZsmUATJs2jauvvpqXX36Z5uZmTjjhBK688kquvPJKbrvtNjZt2sSpp566tRh4ypQp3HLLLey55540NTUxYsSISD5fOx/KxFXML2z5RX1eirUea+TzXKm4z+eWLVuYM2cOX/rSl4BguJHrrruOffbZhyeffJILLriABx98sMM+hba5+OKLOfbYY5k1axZbtmxh48aNXHHFFSxevHjrMO6zZ8/mxRdfZN68eZgZp5xyCg8//DA77LADM2fO5Omnn2bz5s0MHz7cA4irTpR3xnH9EKManygpUZ8Xbz4crbjOZ/vkTsuXL2fEiBGccMIJbNy4kccee4zPfvazOcfZ1GG/Yts8+OCDW+tJevbsSd++fXnzzTc77D979mxmz569dY6PjRs38uKLL7JhwwZOPfVUtt8+qPc75ZRTqvp8+XgAaSBR3xnH9UOMqxlvrUR9Xvr0GRx2fuy63H3ogw/WsWnTa5i9j7QNffrsQe/e/btsF9f5bK8Defvtt/mHf/gHpk6dynnnnccuu+yyNbeQT1tbW7fbFGNmfPOb3+T888/vsPxHP/pRbMO4t0t7KywXoah7Unu/iPyiPi/efLh7W7a8w3vvrSAYmALM3ue991bwwQfrumwb9/ns27cvV199NVdddRXbbbcde++999Yh1M2MRYsWddh+5513LrjN8ccfz7Rp08LPuIX169ez0047sWHDhq37n3TSSdx0001s3LgRgNdee401a9ZwzDHHMGvWLN599102bNjAb3/720g+Xy4PIA0k6jtjv7DlF/V58ebD3du8+U2grdPSNjZteq3LtrU4n4ceeiiHHHIIM2fOZPr06dx4440ccsghHHjggdx5551bt2vPIRTa5sc//jEPPfQQBx98MCNGjOD555+nf//+HHXUURx00EFceumlnHjiiXzuc5+jpaWFgw8+mNNPP50NGzYwfPhwzjjjDJqbmznttNP4xCc+Ednn25p+H869cTz++JACWfe9aGlZXtF7RlmnUk/8vNTW00/fy9/93W551+20U5dRyFPhoosuYvjw4XzhC19IOikdlDOcu9eBNJA4pjX1SYry8/NSW8EA3fmWb1PjlJTmW9/6Fk8++WTm6/u8CKuBeFGIq1e9eu1K18tZD/r0yTvGauIuv/xy5s2bR//+XSv5s8RzIA3G74xdPerZcwf69Gni/fdXddsKyxVWbpWGBxDnXFnSWL+z7bbbsn499O9/cOxNV+uVmbFu3Tq23XbbkvfxAOJcEblDrLj0jj4waNAgVq5cic/5U51tt92WQYMGlby9t8Jyrog4x+XKojha8rn0K9QKyyvRXd3xHEN8fFgVl8sDiKs71U5IVYuh5bPKRx9wuTyAuESl8aLc2tqKmW0tump/HmVao5yPopZzhfjoAy6XBxCXqKimr81SrqG9IjqoS7CtFdGVXPijfK9SeF8il8sr0V2i4qikjvI942iFFWVFdFYrtdPYFNgVlspKdEljJC2V9JKkSXnWXyrpmfCxWNIWSf3CdcslPReu86iQIVnKLcSRpigrorNYqV3rXJOLT2IBRMHgNVOBscABwFmSDsjdxsyuNLNmM2sGvgn8wczeyNnkuHB9OkdLc3nFXceQ9gmpoqyIzmKldtTTCrjkJJkDGQW8ZGbLLBjEfyYwrsj2ZwEzapIyl2lpzMnkirIiOouV2lHmmmrZgMB1lWQA2QN4Nef1ynBZF5K2B8YA/5Oz2IDZkhZKGl/oIJLGS1ogaYH3Uk2ftOcW4hBlRXQWK7WjyjV5UVjyEqtEl/RZ4CQz+3L4+vPAKDO7KM+2ZwD/bGafzln2UTNbJekjwP3ARWb2cLFjeiW6c8nrPBwKBLmmcgNfVhsQZFEaK9FXAnvmvB4ErCqw7Zl0Kr4ys1Xh3zXALIIiMUe82fo0FxmkregqbelJi6hyTVlsQFBvkgwg84F9JO2tYNaXM4G7Om8kqS9wLHBnzrIdJO3U/hw4EVhck1SnXJzZ+rQUGRS6MEfVpyQqaUtPd2oZ8AYMOJuWluWMHt1GS8vyiorcstiAoN4kFkDMbDMwEbgPWALcZmbPS5ogaULOpqcCs83snZxlA4BHJC0C5gG/M7N7a5X2asV5Fx9nC5e0tJ6J8sLsuYQPZS3gZbEBQb1JtB+Imd1jZvua2cfMbEq47Dozuy5nm1+Y2Zmd9ltmZoeEjwPb982CuO/i48zWp7HIoNo+JVFfNLPUxyVOtfi8WWxAUG98KJMai/suPs5sfZJFBoUuzEDs41aVm840pac7cQW8WuVmoigKq1aa6wXj5gGkxuK+i48zW59kkUGUF2bPJXwoawEvbdJSL5gUDyA1FvddfJzZ+rQXGRx77LElbVeri2aj9XFJa2DOap1jFvhgijUWVRv4qGVp6tZCaa1kEEWfcfBDUX4H0nJe4/69zZ3bg6BPc2di9Oi2qt8/LQr1A/EAkoA0jkSalh98NSr5DFkKnFmSlu9T3J0NG6UzYxo7EjasNFT81Ytqi00aMXjUotI3LcV3Wa5zzAIPIA0srWXW5fBK4PLUqtI3Lec/y3WOWeBFWBkVddFLWoocqlEPnyFu9VLkUmoxcFrrHLPGi7DqTNZ6DddCWopN0iyNnUHLVU4uqtFzCHHzAOKA+rj4pqXYpFpxfo5iRTpZOX/lNp31Osf4eADJkDjrLLJy8WgEceYui1X6ZiVXWw+5qHrhdSAZ5eX99Svu/22h+oOsfKfqpR4nS7wOxDWsYrmrtOS8atkiLrdI5777zmP33f85Uy3xGr3pbJp4DqRGou486B3gSlfszjqNd9350lSL/3caz0UhaeyMW8+8JzrJBRBvSpisegggtUhnGs+FSwcvwkpQow+4loRiRUJp70BZaYu4atNfDy3xXG15DqQGGmXAtbTKWg6kXWtra96WUZMnT45sMEnnSpHKHIikMZKWSnpJ0qQ860dLelvSM+Hj26XumyaNMndzWu7g64UP05K8Rp4sqhQFA4ikgyU9IelVSddL2jVn3bxqDyypJzAVGAscAJwl6YA8m/6vmTWHj/8oc99UiKPVSBq/2GntR1CsaCbrxTZpL47LskafLKoUxXIg04BW4GDgBeARSR8L1/W
O4NijgJfC+c3fB2YC42qwb81FPZyCf7HLk4VmvN0pFOjqOZeS9GfwusvuFQsgO5rZvWb2lpldBUwE7pV0BPkL9Mu1B/BqzuuV4bLOWiQtkvR7SQeWuS+SxktaIGnB2rVrI0h2ZaIcTiFNX+yo7oCTvlikXSOen6RztN7jvXvFAogk9W1/YWYPAacBvwT2iuDYyrOsc2B6CtjLzA4BrgHuKGPfYKHZ9WY20sxGNjU1VZzYNKn2ix3lxSiqO+CkLxb1IOvFcWnTKHWX1SgWQH4AfDx3gZk9CxwP3B7BsVcCe+a8HgSs6nS89Wa2MXx+D9Bb0m6l7FvPqv1i+8W6PtVDLiVNdTre4717BQOImf0/M3siz/I/m9lXIjj2fGAfSXtL2gY4E7grdwNJuyv8JkkaFaZ3XSn71rO0frHLvQNO08XCpUOSdTqdj+FDwXcv0X4gkk4GfgT0BG4ysymSJgCY2XWSJgJfBTYD7wJfN7PHCu3b3fHqaTDFcodyKLdPQa15HwbXWa2/E/4dLMyHMqG+Akg10vhDSWOaXLJqPd6bfwcLq7gjoaSjSlnmXDW8Ath1VqtiKy9GrVy3ORBJT5nZ8O6WZYHnQAI+kq9zXXkOpLBCOZBeRXZoAY4EmiR9PWfVzgT1Di6jPHhUz4Owc8WLsLYBdiQIMjvlPNYDp8efNOfSy5tC1x8vRi1fKUVYe5nZivB5D4Ie6utrkbioeRGWi4oXd7hGUs1ovN+XtLOkHYA/AkslXRp5Cp1LOa9wda6jUgLIAWGO4zPAPcBg4POxpsq5biRx0a7ngQtdctI4snapSgkgvSX1Jgggd5rZB0QzmKJzFfM6CFcPsj6ydikB5KfAcmAH4GFJexFUpDvXsLzC1UUhTSNrV6LbAGJmV5vZHmZ2sgVWAMfVIG0uhxeTpKsOwv8fLgpZHzK+lFZYA4DvAR81s7HhzH8tZnZjLRIYpSy3wvJWPx35+XD14PHHh4TFVx316bMXLS3La5+gAqpphfUL4D7go+HrF4CvRZe0dMtyBVdS/O7cudKkdWTtUpUSQHYzs9uANgAz2wxsiTVVKZF0BVeaimzKUYsKbq+DcPUg60PGFyzCktTLzDZLmkswE+H9ZjY8nNL2B2Z2bA3TGYlyi7AqyV6WO8x6qbJUZJOltDrnuldJEda88O+/EUzW9DFJjwK3ABdFn8T0KbeCK+kcS5KymltyzlWu6JzoAGa2EDiWYGDF84EDw6lt6165U8fG2SQv7UU23snONapGrictFkCaJH09HIn3YuAk4ETgok6j89atciu44myS5xdi1+jS+Bto5FIHKB5AehKMxrtTgUfVJI2RtFTSS5Im5Vl/tqRnw8djkg7JWbdc0nOSnpEUS9vcciu4ys2x1Ku055ZcNqVx9IGsdwSsVrFK9FgnjZLUk6BJ8AnASmA+cJaZ/TFnmyOBJWb2pqSxQKuZHR6uWw6MNLO/lnrMuPuBtN+N5H6hevTYPlOtKpxLqzQ2zpg7twf5R3YSo0e31To5samkEl0xpgdgFPCSmS0zs/eBmcC43A3M7DEzezN8+QQwKOY0VSXrTfKcS5u0N85o9FKHYjmQfmb2RmwHlk4HxpjZl8PXnwcON7OJBba/BNg/Z/tXgDcJwv9Pzez67o6Z5Z7ozjW6NOZAGqXUoewcSJzBI5Qvh5P32yHpOOBLwL/nLD4qLGIbC1wo6ZgC+46XtEDSgrVr11ab5oaUlrs959Km0UsdSumJHpeVwJ45rwcBqzpvJGkY8DNgnJmta19uZqvCv2uAWQRFYl2Y2fVmNtLMRjY1NUWY/OypNBCksfLSNZ60Ns4YMOBsWlqWM3p0Gy0tyxsmeEAJgykCSNqd4AJtwHwz+0vVB5Z6EVSiHw+8RlCJ/jkzez5nm8HAg8A5ZvZYzvIdgB5mtiF8fj/wH2Z2b7FjNnoRVqVFAGksOnDO1U7FgylK+jJBr/R/BE4HnpD0xWoTFI6pNZFgoMYlwG1m9rykCZImhJt9G+gPXNupue4A4BFJi8K0/a674OHKk/bKS+dc8koZzn0pcGR78ZGk/sBjZrZfDdIXqUbMgbS2tuYtgpo8eXLJwcBzIM41tmqGc18JbMh5vQF4NaqEuXjV6xAjWU+/c/WglADyGvCkpFZJkwn6Y7yUM8yJq3NprLz0in2Xj99Y1FYpAeRl4A4+bGJ7J/A6EQ5p4mqj0kDgP0qXFX5jUVsltcKqF41YB1JPoqjPcfXN6+viUXYdiKQfhX9/K+muzo84E+tcPvVan+Oq4y0Gk1NsKJMRZrZQUt6ZB83sD7GmLAaeA6kffqfp8vHvRTwK5UB6FdohnEgqk4HC1b80Vuw712hK6Uh4lKT7Jb0gaZmkVyQtq0XinCvEiydcPn5j0VWcMyaW0pHwT8C/AguBLe3Lc8elygovwnLONZKoRguupiPh22b2ezNbY2br2h8lH9k551zZosg5xD1jYsE6EEntsxE+JOlK4HZgU/t6M3sqkhQ455zroHPOoX2udaCsnMOmTX8ua3m5CgYQ4IedXudmXwz4+0hS4JxzroNiOYdyAkifPoPZtGlF3uVRKNYK67hIjuCcc64sUeUchg6dkrcOZOjQKVWlb+t7dbeBpO9J2iXn9a6SvhvJ0Z1zznUR1Vzrcc+YWEol+lgze6v9hZm9CZwcydEbVJzN6mrJm9I6F4+hQ6fQo8f2HZZVmnOIc8bEUgJIT0l92l9I2g7oU2R7V0R75VhQLmlbK8eyGESqHbjOA5Bz+WVlrvVS+oF8AzgF+DlB5fkXgbvM7D/jT1600tAP5PHHhxSo1NqLlpbltU9QFaodNsKHnXAuGyruBxIGiu8CHwcOAC7PYvBIi6ib1dX6Lt4HrnPl8u9GabJYtF1KERbA08AfgLnh80hIGiNpqaSXJE3Ks16Srg7XP5vTN6XbfdMqqsqxdrWe/6DaEXE9ADUen6Oje1kt2i6lFdY/AfOA04F/Ipid8PRqDyypJzAVGEuQszlL0gGdNhsL7BM+xgPTytg3laKsHMsiH5Ldua7i7jEel1JyIJcBh5nZuWZ2DjAK+FYExx4FvGRmy8zsfWAmMK7TNuOAWyzwBLCLpIEl7ptKUVSOpeUuPomB6zzQZENavqNZEXeP8biUUon+nJkdnPO6B7Aod1lFBw5yMWPM7Mvh688Dh5vZxJxt7gauMLNHwtdzgH8HhnS3b857jCfIvTB48OARK1Z0rcDOsixXRLe2tpZ9Qcny521UtfifrV49nWXLLmPTpj/Tp89ghg6dkroWS8WkvXFNNYMp3ivpPknnSToP+B1wTxRpyrOs87es0Dal7BssNLvezEaa2cimpqYyk+ji5HejLgpZrT/IldWi7aIBREH+82rgp8Aw4BDgejP79wiOvRLYM+f1IGBViduUsm9DaIT5D7w4JNvi/o5mtf4gV1b6fXRWShHWQjMbEfmBpV7AC8DxwGvAfOBzZvZ8zjafAiYS9Hw/HLjazEaVsm8+aegH4j5USbGDF2G5zubO7UH+AggxenRbrZNTl6opwnpC0mFRJ8jMNhMEh/uAJcBtZva8pA
mSJoSb3QMsA14CbgAuKLZv1GlMmyy2Ey+kHoodXDpE3TTela7YcO7tjgPOl7QCeIeg/sHMbFi1Bzeze+hUn2Jm1+U8N+DCUvetZ1HND5AWlQ5X3QhFdq48cY846worJYCMjT0VdS6KFiJRzQ+QFpU2W/R6D9dZ+/c/ba2wst4yrBSlBJCBwPNmtgFA0k4Enffqqz1sTLIys1itxT3RjWssAwacnaqLc72VGBRSSh3INGBjzut3wmWuBFG1EKm3ct6sNlt0rhT10DKsFKUEEFlOsxcza6O0nIsj2pnF6umCm9Vmi86Vot5KDAopJRAsk3QxH+Y6LiBoGeVKEFVRTVrLeauRtmIH56LSKEW0peRAJgBHEvS3WEnQH2N8nImqJ1mZWayemgg7l7R6KzEopJT5QNaY2Zlm9hEzG2BmnzOzNbVIXD3IQlGN98lw9Sbp1npZ+N1HoWBPdEnfMLP/lHQNebp5mtnFcScuat4TPb+0D+TmXLl8xIJoVdITfUn4dwGwMM/D1YlGqfBLStJ3w87FpWAAMbPfhn9vzveoXRJd3OqtiXDa+Ix8teGDbtZewQAi6a5ij1om0sWrUSr8SuUXnPjEeW6jmu3S//+lK1aE1UIwTPr/AlcBP+z0cHUiiQq/NP9Io8gx+N1wflnIjWUhjWlRrBK9J3ACcBbBXCC/A2ZkedRbr0RPjzRXckadtjR/1lqr1bmoZLbLdv7/6qrsSnQz22Jm95rZucARBEOqz5V0UYzpbGjeFyO/Wty1e44hPkmc20qKrWqVxnr6nRedUEpSH+BTBLmQIcBdwE1m9lpNUhexNOdAOg++BkE9RD21HW9tbc1bPDB58uSiP9Ra3xFGfbxq7obrTRbu7uNMY1Z/54VyIMWKsG4GDgJ+D8w0s8XxJjF+aQ4gjdYXo5wfadYDSBolFdSycG7jTGNWf+eV9AP5PLAv8C/AY5LWh48NktbHldBG5X0xOkqySKkRJq1KqqI4C+c2zjTW2++82znRYzmo1A+4laBYbDnwT2b2Zqdt9gRuAXYH2oDrzezH4bpW4CvA2nDz/xPOUFiU50DSo5w74CzctWZNPZ/TNBcZZvV3Xs2c6HGYBMwxs32AOeHrzjYD/2ZmHyeoxL9Q0gE56//bzJrDR+qmti23oixrfTGqrQhM6w+8njVKQ4E0N8PN2u+8O0kFkHFAe2/2m4HPdN7AzF43s6fC5xsIhlbZo2YprEIlgxNmafC1Wg++mIVijyyIqqOdq1yWfuelSKoI6y0z2yXn9ZtmtmuR7YcADwMHmdn6sAjrPGA9wVhd/9a5CCxn3/GEw88PHjx4xIoV8c/Em9VsailaW1s56aRfZOrzpblIIyn1VoRVaQs/V5qyW2FFcMAHCOovOrsMuLnUACJpR+APwBQzuz1cNgD4K8EowZcDA83si92lqVZ1IHPn9iDPAMaAGIGWUlMAAA9mSURBVD26Lfbjx0kSDz0ksvT56u1iGYV6Dqr+/45eoQAS29S0ZvbJIolZLWmgmb0uaSCQd34RSb2B/wGmtweP8L1X52xzA3B3dCmvXr3PRlbvn68R1GvwcLWVVB3IXcC54fNzgTs7b6Cgpu9GYImZ/VendQNzXp4KpKqPSlYqykqtCO9c+fqtb63gvfc6bpO2z9coFcauK68zq52k6kD6A7cBg4E/A581szckfRT4mZmdLOlogoEcnyNoxgthc11JvwSaCcpRlgPnm9nr3R23ls14V6+enur5yyvtEdtePJD2z5fLizScq07N60DSKM39QGqt0or+LF6Ms5hm59Ikbf1AXMIq7RGbxeKBLKbZuSzwHEiDquemxs65aHkOxHWQlYp+51x6eQBpUPXWI9Y5V3ux9QNx6TdgwNl1GTCy1ELM1Z9G+v55AHF1pXPz5PZxuoC6/RG79Gi0758XYbm6smzZZR36tgC0tf2NZcsuSyhFrpE02vfPA4irK6U0T46jN7r3cHdQfxNGdccDiCtZtXOA1EKh8bhyl8cxX0Ra5qDwQJasUr5/9cQDSAYlcSGv9RwglWr05slpCWTtsnDTEaVG+/55AMmYpC7kWSnbLdQ8edq0FyMfXNEHbCwuKzcdUWq05vHeEz1jkupBXk9znMQxNlaS422ldTIlH+2gftR8PhAXj6Qq6bI6B0i+Nvn1JndyqDQNHNloFcqNyIuwMiapSroslu0WKkKZOvUfIz+WD9jYVaNVKDciDyAZk9SFPItlu4XqbQ49dGHkx0pLvUeaAlkWbzpcebwOJIMaaaiEatRTvU1W+Xe1PngdSB2p1zGsopbVept64t/V+pZIEZakfpLul/Ri+HfXAtstl/ScpGckLSh3f9fYvAjFuXglVQcyCZhjZvsAc8LXhRxnZs2dsk/l7O8aVBbrbZzLkqQCyDjg5vD5zcBnary/axADBpxNS8tyRo9uo6VleeqDR1oq450rRVIBZICZvQ4Q/v1Ige0MmC1poaTxFezvXKakbSiSWstqAG20IVvaxRZAJD0gaXGex7gy3uYoMxsOjAUulHRMBekYL2mBpAVr164td3fnXARKDQxxBtC4glMjDtnSLrYAYmafNLOD8jzuBFZLGggQ/l1T4D1WhX/XALOAUeGqkvYP973ezEaa2cimpqboPqBzEWmEMbXSkLOKKw1ZGScuDkkVYd0FnBs+Pxe4s/MGknaQtFP7c+BEYHGp+zuXFa2trZjZ1iFI2p/XUwApJusBtJGHbEkqgFwBnCDpReCE8DWSPirpnnCbAcAjkhYB84Dfmdm9xfZ3zsWv1At7qYEhzgAadXDKV9fRyEO2eE9051Ikd2DEtKpkwMZS94lzMMhq37vzfOcQ9Cvaffdz+ctfbu6yvJ6ajBfqie5jYTmXR1KtalpbWxNv0ZPk8dM0lldnheo61q27p2H7G3kAca6TJFvVJN2ip9Dxr732tKqKggoFhs7B6qtf3SeiT1J6GkpVrK4ja/2NouJFWM51kuRESOUcO47irlKOH1UxU6EiobTevTfyBFlehOVciZJsVVPOseNollrLz5615q8+tlpXHkCc6yTJVjVJt+gp5fhR1VNkrfmrj63WlQcQ5zpJ8k6zu2PH3WeilM8e1bGSDpaVaNS6jkI8gLjUSqo1UJJ3mt0dO+5Oh7X87F4klH1eie5SKWsVrEmIs89ErfiMhdngMxK6TClWweoXmECa+0yUymcszDYvwnKplLUK1iSkvce6q38eQFwqZbGC1blG4wHEpZJXsDqXfh5AXCp5m3vn0s8r0V1qeQWrc+nmORDnnHMV8QDinHOuIh5AnHPOVcQDiHMuM5KebMt1lEgAkdRP0v2SXgz/7ppnm/0kPZPzWC/pa+G6Vkmv5aw7ufafwrny+MWvOklPtuW6SioHMgmYY2b7AHPC1x2Y2VIzazazZmAE8DdgVs4m/92+3szuqUmqnauQX/yql7X5QxpBUgFkHHBz+Pxm4DPdbH888LKZdZ0OzLkMSMPFL+s5IB/eJn2SCiADzOx1gPDvR7rZ/kxgRqdlEyU9K+mmfEVg7SSNl7RA0oK1a9dWl2rnKpT0xa8eckA+v
E36xBZAJD0gaXGex7gy32cb4BTg1zmLpwEfA5qB14EfFtrfzK43s5FmNrKpqamCT+Jc9ZK++KUhB1QtH94mfWLriW5mnyy0TtJqSQPN7HVJA4E1Rd5qLPCUma3Oee+tzyXdANwdRZqdi8vQoVPyzm9Sq4tf0jmgKLSPSuDzh6RHUkOZ3AWcC1wR/r2zyLZn0an4qj34hC9PBRbHkUjnopL0xa9Pn8Fh8VXX5Vniw9ukSyIzEkrqD9wGDAb+DHzWzN6Q9FHgZ2Z2crjd9sCrwFAzeztn/18SFF8ZsBw4PyegFOQzErpG5TM8umqkakZCM1tH0LKq8/JVwMk5r/8G9M+z3edjTaBzdSbpHJCrTz4ar3MNwot/XNR8KBPnnHMV8QDinHOuIh5AnHPOVcQDiHPOuYp4AHHOOVeRRPqBJEXSWqCWAzLuBvy1hscrl6evOp6+6nj6qlPL9O1lZl3GgmqoAFJrkhbk63yTFp6+6nj6quPpq04a0udFWM455yriAcQ551xFPIDE6/qkE9ANT191PH3V8fRVJ/H0eR2Ic865ingOxDnnXEU8gDjnnKuIB5AqSeon6X5JL4Z/u8zPLmk/Sc/kPNZL+lq4rlXSaznrTu56lHjTF263XNJzYRoWlLt/nOmTtKekhyQtkfS8pH/JWRfL+ZM0RtJSSS9JmpRnvSRdHa5/VtLwUvetUfrODtP1rKTHJB2Ssy7v/7rG6Rst6e2c/9u3S923Rum7NCdtiyVtkdQvXBfr+ZN0k6Q1kvJOlJf0d68DM/NHFQ/gP4FJ4fNJwA+62b4n8BeCjjkArcAlSaePYGKu3ar9fHGkDxgIDA+f7wS8ABwQ1/kL/0cvA0OBbYBF7cfL2eZk4PeAgCOAJ0vdt0bpOxLYNXw+tj19xf7XNU7faODuSvatRfo6bf9p4MEanr9jgOHA4gLrE/vudX54DqR644Cbw+c3A5/pZvvjgZfNrFY94stNX9T7V/3+Zva6mT0VPt8ALAH2iDgduUYBL5nZMjN7H5gZpjPXOOAWCzwB7CJpYIn7xp4+M3vMzN4MXz4BDIo4DVWlL6Z940pfl2m142RmDwNvFNkkye9eBx5AqjfAwul0w78f6Wb7M+n6ZZwYZkVvirqIqIz0GTBb0kJJ4yvYP+70ASBpCHAo8GTO4qjP3x4EUym3W0nXgFVom1L2rUX6cn2J4I61XaH/da3T1yJpkaTfSzqwzH1rkb72abXHAP+Tszju89edJL97HfiMhCWQ9ACwe55Vl5X5PtsApwDfzFk8Dbic4Et5OfBD4IsJpO8oM1sl6SPA/ZL+FN4JVS3C87cjwQ/5a2a2Plxc9fnLd6g8yzq3dy+0TSn7VqvkY0g6jiCAHJ2zOLb/dRnpe4qgGHdjWG91B7BPifvWIn3tPg08ama5OYK4z193kvzudeABpARm9slC6yStljTQzF4Ps5FrirzVWOApM1ud895bn0u6Abg7ifRZMB89ZrZG0iyC7PDDQDmfL7b0SepNEDymm9ntOe9d9fnLYyWwZ87rQcCqErfZpoR9a5E+JA0DfgaMNbN17cuL/K9rlr6cGwDM7B5J10rarZR9a5G+HF1KDGpw/rqT5HevAy/Cqt5dwLnh83OBO4ts26UsNbxotjsVyNvyogrdpk/SDpJ2an8OnJiTjnI+X1zpE3AjsMTM/qvTujjO33xgH0l7h7nGM8N0dk73OWGLmCOAt8MiuFL2jT19kgYDtwOfN7MXcpYX+1/XMn27h/9XJI0iuBatK2XfWqQvTFdf4FhyvpM1On/dSfK711GcNfSN8AD6A3OAF8O//cLlHwXuydlue4IfSN9O+/8SeA54NvxnD6x1+ghabSwKH88Dl3W3f43TdzRBVvxZ4JnwcXKc54+gpcsLBK1aLguXTQAmhM8FTA3XPweMLLZvDN+77tL3M+DNnPO1oLv/dY3TNzE8/iKCSv4j03T+wtfnATM77Rf7+SO4yXwd+IAgt/GlNH33ch8+lIlzzrmKeBGWc865ingAcc45VxEPIM455yriAcQ551xFPIA455yriAcQ1zAkmaRf5rzuJWmtpLvD16fEOYKppLmSRuZZPlLS1SW+R399OErsX9RxJOJtSnyP0ZKOLLBuf0mPS9ok6ZJS3s81Lu+J7hrJO8BBkrYzs3eBE4DX2lea2V2U2PEq7AQnM2urNlFmtgAoaVhwC3qUN4dpaAU2mtlVZR5yNLAReCzPujeAi4l+0ExXhzwH4hrN74FPhc87jAwg6TxJPwmfD5A0Kxzsb5GkIyUNUTAnybUEYzntKelKBfNFPCfpjJz3+ka4bJGkK3KO/1lJ8yS9IOkT4bajc3JBrZJ+KelBBXOkfKWUDyVphKQ/KBjg7772HvqSLpb0RwWDTc5UMBjlBOBfw1zLJ3Lfx8zWmNl8gk5szhXlORDXaGYC3w4v2MOAm4BP5NnuauAPZnaqpJ7AjsCuwH7AF8zsAkmnEeQGDgF2A+ZLejhc9hngcDP7m8KJiEK9zGyUggEEJwP5xgkbRjDPww7A05J+Z+H4S/koGCfsGmCcma0NA9kUgkElJwF7m9kmSbuY2VuSrqOynItzHXgAcQ3FzJ4N78LPAu4psunfA+eE+2wB3lYwVPwKC+ZggGCIlRnh+tWS/gAcRjB+0s/N7G/h/rkjubYPBLkQGFLg2HeGRWzvSnqIYLC+O4qkdT/gIIKRYSGYWOj1cN2zwHRJd3TzHs6VzQOIa0R3AVcR1AX0L3Pfd3Ke5xs+u315oTGCNoV/t1D499d53+7GGxLwvJm15Fn3KYIZ7k4BvqUP591wrmpeB+Ia0U3Af5jZc0W2mQN8FUBST0k759nmYeCMcH0TwYV6HjAb+KKCyYjoVIRVinGStpXUnyDIze9m+6VAk6SW8Hi9JR0oqQewp5k9BHwD2IWgKG4DwdTAzlXFA4hrOGa20sx+3M1m/wIcJ+k5guKmfHfuswiKiBYBDwLfMLO/mNm9BLmcBZKeAcptDjsP+B3BKLWXF6v/ALBg+tLTgR9IWkQw+u6RBEVZvwo/w9PAf5vZW8BvgVPzVaIrGGZ9JfB14P9KWlkgeDrno/E6lyZVNM11ruY8B+Kcc64ingNxzjlXEc+BOOecq4gHEOeccxXxAOKcc64iHkCcc85VxAOIc865ivx/G6Ywz4x5iHsAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "# Plot examples\n", "plot_data(X_train, y_train[:], pos_label=\"Accepted\", neg_label=\"Rejected\")\n", "\n", "# Set the y-axis label\n", "plt.ylabel('Microchip Test 2') \n", "# Set the x-axis label\n", "plt.xlabel('Microchip Test 1') \n", "plt.legend(loc=\"upper right\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Figure 3 shows that our dataset cannot be separated into positive and negative examples by a straight-line through the plot. Therefore, a straight forward application of logistic regression will not perform well on this dataset since logistic regression will only be able to find a linear decision boundary.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3.3 Feature mapping\n", "\n", "One way to fit the data better is to create more features from each data point. In the provided function `map_feature`, we will map the features into all polynomial terms of $x_1$ and $x_2$ up to the sixth power.\n", "\n", "$$\\mathrm{map\\_feature}(x) = \n", "\\left[\\begin{array}{c}\n", "x_1\\\\\n", "x_2\\\\\n", "x_1^2\\\\\n", "x_1 x_2\\\\\n", "x_2^2\\\\\n", "x_1^3\\\\\n", "\\vdots\\\\\n", "x_1 x_2^5\\\\\n", "x_2^6\\end{array}\\right]$$\n", "\n", "As a result of this mapping, our vector of two features (the scores on two QA tests) has been transformed into a 27-dimensional vector. \n", "\n", "- A logistic regression classifier trained on this higher-dimension feature vector will have a more complex decision boundary and will be nonlinear when drawn in our 2-dimensional plot. \n", "- We have provided the `map_feature` function for you in utils.py. " ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Original shape of data: (118, 2)\n", "Shape after feature mapping: (118, 27)\n" ] } ], "source": [ "print(\"Original shape of data:\", X_train.shape)\n", "\n", "mapped_X = map_feature(X_train[:, 0], X_train[:, 1])\n", "print(\"Shape after feature mapping:\", mapped_X.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also print the first elements of `X_train` and `mapped_X` to see the tranformation." ] }, { "cell_type": "code", "execution_count": 40, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "X_train[0]: [0.051267 0.69956 ]\n", "mapped X_train[0]: [5.12670000e-02 6.99560000e-01 2.62830529e-03 3.58643425e-02\n", " 4.89384194e-01 1.34745327e-04 1.83865725e-03 2.50892595e-02\n", " 3.42353606e-01 6.90798869e-06 9.42624411e-05 1.28625106e-03\n", " 1.75514423e-02 2.39496889e-01 3.54151856e-07 4.83255257e-06\n", " 6.59422333e-05 8.99809795e-04 1.22782870e-02 1.67542444e-01\n", " 1.81563032e-08 2.47750473e-07 3.38066048e-06 4.61305487e-05\n", " 6.29470940e-04 8.58939846e-03 1.17205992e-01]\n" ] } ], "source": [ "print(\"X_train[0]:\", X_train[0])\n", "print(\"mapped X_train[0]:\", mapped_X[0])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While the feature mapping allows us to build a more expressive classifier, it is also more susceptible to overfitting. 
In the next parts of the exercise, you will implement regularized logistic regression to fit the data and also see for yourself how regularization can help combat the overfitting problem.\n", "\n", "\n", "### 3.4 Cost function for regularized logistic regression\n", "\n", "In this part, you will implement the cost function for regularized logistic regression.\n", "\n", "Recall that for regularized logistic regression, the cost function is of the form\n", "$$J(\\mathbf{w},b) = \\frac{1}{m} \\sum_{i=0}^{m-1} \\left[ -y^{(i)} \\log\\left(f_{\\mathbf{w},b}\\left( \\mathbf{x}^{(i)} \\right) \\right) - \\left( 1 - y^{(i)}\\right) \\log \\left( 1 - f_{\\mathbf{w},b}\\left( \\mathbf{x}^{(i)} \\right) \\right) \\right] + \\frac{\\lambda}{2m} \\sum_{j=0}^{n-1} w_j^2$$\n", "\n", "Compare this to the cost function without regularization (which you implemented above), which is of the form \n", "\n", "$$ J(\\mathbf{w}.b) = \\frac{1}{m}\\sum_{i=0}^{m-1} \\left[ (-y^{(i)} \\log\\left(f_{\\mathbf{w},b}\\left( \\mathbf{x}^{(i)} \\right) \\right) - \\left( 1 - y^{(i)}\\right) \\log \\left( 1 - f_{\\mathbf{w},b}\\left( \\mathbf{x}^{(i)} \\right) \\right)\\right]$$\n", "\n", "The difference is the regularization term, which is $$\\frac{\\lambda}{2m} \\sum_{j=0}^{n-1} w_j^2$$ \n", "Note that the $b$ parameter is not regularized." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Exercise 5\n", "\n", "Please complete the `compute_cost_reg` function below to calculate the following term for each element in $w$ \n", "$$\\frac{\\lambda}{2m} \\sum_{j=0}^{n-1} w_j^2$$\n", "\n", "The starter code then adds this to the cost without regularization (which you computed above in `compute_cost`) to calculate the cost with regulatization.\n", "\n", "If you get stuck, you can check out the hints presented after the cell below to help you with the implementation." ] }, { "cell_type": "code", "execution_count": 44, "metadata": {}, "outputs": [], "source": [ "# UNQ_C5\n", "def compute_cost_reg(X, y, w, b, lambda_ = 1):\n", " \"\"\"\n", " Computes the cost over all examples\n", " Args:\n", " X : (ndarray Shape (m,n)) data, m examples by n features\n", " y : (ndarray Shape (m,)) target value \n", " w : (ndarray Shape (n,)) values of parameters of the model \n", " b : (scalar) value of bias parameter of the model\n", " lambda_ : (scalar, float) Controls amount of regularization\n", " Returns:\n", " total_cost : (scalar) cost \n", " \"\"\"\n", "\n", " m, n = X.shape\n", " \n", " # Calls the compute_cost function that you implemented above\n", " cost_without_reg = compute_cost(X, y, w, b) \n", " \n", " # You need to calculate this value\n", " reg_cost = 0.\n", " \n", " ### START CODE HERE ###\n", " reg_cost = sum([(w[j])**2 for j in range(n)]) * lambda_ / (2*m)\n", " \n", " \n", " ### END CODE HERE ### \n", " \n", " # Add the regularization cost to get the total cost\n", " total_cost = cost_without_reg + reg_cost\n", "\n", " return total_cost" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " Click for hints\n", " \n", " \n", "* Here's how you can structure the overall implementation for this function\n", " ```python \n", " def compute_cost_reg(X, y, w, b, lambda_ = 1):\n", " \n", " m, n = X.shape\n", " \n", " # Calls the compute_cost function that you implemented above\n", " cost_without_reg = compute_cost(X, y, w, b) \n", " \n", " # You need to calculate this value\n", " reg_cost = 0.\n", " \n", " ### START CODE HERE ###\n", " for j in range(n):\n", " reg_cost_j = # Your code here to calculate the cost from w[j]\n", " reg_cost = reg_cost + reg_cost_j\n", " reg_cost = (lambda_/(2 * m)) * reg_cost\n", " ### END CODE HERE ### \n", " \n", " # Add the regularization cost to get the total cost\n", " total_cost = cost_without_reg + reg_cost\n", "\n", " return total_cost\n", " ```\n", " \n", " If you're still stuck, you can check the hints presented below to figure out how to calculate `reg_cost_j` \n", " \n", "
\n", " Hint to calculate reg_cost_j\n", "     You can use calculate reg_cost_j as reg_cost_j = w[j]**2 \n", "
\n", " \n", "
\n", "\n", "
\n", "\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the cell below to check your implementation of the `compute_cost_reg` function." ] }, { "cell_type": "code", "execution_count": 45, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Regularized cost : 0.6618252552483948\n", "\u001b[92mAll tests passed!\n" ] } ], "source": [ "X_mapped = map_feature(X_train[:, 0], X_train[:, 1])\n", "np.random.seed(1)\n", "initial_w = np.random.rand(X_mapped.shape[1]) - 0.5\n", "initial_b = 0.5\n", "lambda_ = 0.5\n", "cost = compute_cost_reg(X_mapped, y_train, initial_w, initial_b, lambda_)\n", "\n", "print(\"Regularized cost :\", cost)\n", "\n", "# UNIT TEST \n", "compute_cost_reg_test(compute_cost_reg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", " \n", " \n", "
Regularized cost : 0.6618252552483948
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3.5 Gradient for regularized logistic regression\n", "\n", "In this section, you will implement the gradient for regularized logistic regression.\n", "\n", "\n", "The gradient of the regularized cost function has two components. The first, $\\frac{\\partial J(\\mathbf{w},b)}{\\partial b}$ is a scalar, the other is a vector with the same shape as the parameters $\\mathbf{w}$, where the $j^\\mathrm{th}$ element is defined as follows:\n", "\n", "$$\\frac{\\partial J(\\mathbf{w},b)}{\\partial b} = \\frac{1}{m} \\sum_{i=0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - y^{(i)}) $$\n", "\n", "$$\\frac{\\partial J(\\mathbf{w},b)}{\\partial w_j} = \\left( \\frac{1}{m} \\sum_{i=0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - y^{(i)}) x_j^{(i)} \\right) + \\frac{\\lambda}{m} w_j \\quad\\, \\mbox{for $j=0...(n-1)$}$$\n", "\n", "Compare this to the gradient of the cost function without regularization (which you implemented above), which is of the form \n", "$$\n", "\\frac{\\partial J(\\mathbf{w},b)}{\\partial b} = \\frac{1}{m} \\sum\\limits_{i = 0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - \\mathbf{y}^{(i)}) \\tag{2}\n", "$$\n", "$$\n", "\\frac{\\partial J(\\mathbf{w},b)}{\\partial w_j} = \\frac{1}{m} \\sum\\limits_{i = 0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - \\mathbf{y}^{(i)})x_{j}^{(i)} \\tag{3}\n", "$$\n", "\n", "\n", "As you can see,$\\frac{\\partial J(\\mathbf{w},b)}{\\partial b}$ is the same, the difference is the following term in $\\frac{\\partial J(\\mathbf{w},b)}{\\partial w}$, which is $$\\frac{\\lambda}{m} w_j \\quad\\, \\mbox{for $j=0...(n-1)$}$$ \n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### Exercise 6\n", "\n", "Please complete the `compute_gradient_reg` function below to modify the code below to calculate the following term\n", "\n", "$$\\frac{\\lambda}{m} w_j \\quad\\, \\mbox{for $j=0...(n-1)$}$$\n", "\n", "The starter code will add this term to the $\\frac{\\partial J(\\mathbf{w},b)}{\\partial w}$ returned from `compute_gradient` above to get the gradient for the regularized cost function.\n", "\n", "\n", "If you get stuck, you can check out the hints presented after the cell below to help you with the implementation." ] }, { "cell_type": "code", "execution_count": 50, "metadata": {}, "outputs": [], "source": [ "# UNQ_C6\n", "def compute_gradient_reg(X, y, w, b, lambda_ = 1): \n", " \"\"\"\n", " Computes the gradient for logistic regression with regularization\n", " \n", " Args:\n", " X : (ndarray Shape (m,n)) data, m examples by n features\n", " y : (ndarray Shape (m,)) target value \n", " w : (ndarray Shape (n,)) values of parameters of the model \n", " b : (scalar) value of bias parameter of the model\n", " lambda_ : (scalar,float) regularization constant\n", " Returns\n", " dj_db : (scalar) The gradient of the cost w.r.t. the parameter b. \n", " dj_dw : (ndarray Shape (n,)) The gradient of the cost w.r.t. the parameters w. \n", "\n", " \"\"\"\n", " m, n = X.shape\n", " \n", " dj_db, dj_dw = compute_gradient(X, y, w, b)\n", "\n", " ### START CODE HERE ### \n", " for j in range(n):\n", " dj_dw[j] += w[j] * lambda_ / m \n", " ### END CODE HERE ### \n", " \n", " return dj_db, dj_dw" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " Click for hints\n", " \n", " \n", "* Here's how you can structure the overall implementation for this function\n", " ```python \n", " def compute_gradient_reg(X, y, w, b, lambda_ = 1): \n", " m, n = X.shape\n", " \n", " dj_db, dj_dw = compute_gradient(X, y, w, b)\n", "\n", " ### START CODE HERE ### \n", " # Loop over the elements of w\n", " for j in range(n): \n", " \n", " dj_dw_j_reg = # Your code here to calculate the regularization term for dj_dw[j]\n", " \n", " # Add the regularization term to the correspoding element of dj_dw\n", " dj_dw[j] = dj_dw[j] + dj_dw_j_reg\n", " \n", " ### END CODE HERE ### \n", " \n", " return dj_db, dj_dw\n", " ```\n", " \n", " If you're still stuck, you can check the hints presented below to figure out how to calculate `dj_dw_j_reg` \n", " \n", "
\n", " Hint to calculate dj_dw_j_reg\n", "     You can use calculate dj_dw_j_reg as dj_dw_j_reg = (lambda_ / m) * w[j] \n", "
\n", " \n", "
\n", "\n", "
\n", "\n", " \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Run the cell below to check your implementation of the `compute_gradient_reg` function." ] }, { "cell_type": "code", "execution_count": 51, "metadata": { "deletable": false, "editable": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "dj_db: 0.07138288792343662\n", "First few elements of regularized dj_dw:\n", " [-0.010386028450548701, 0.011409852883280124, 0.0536273463274574, 0.003140278267313462]\n", "\u001b[92mAll tests passed!\n" ] } ], "source": [ "X_mapped = map_feature(X_train[:, 0], X_train[:, 1])\n", "np.random.seed(1) \n", "initial_w = np.random.rand(X_mapped.shape[1]) - 0.5 \n", "initial_b = 0.5\n", " \n", "lambda_ = 0.5\n", "dj_db, dj_dw = compute_gradient_reg(X_mapped, y_train, initial_w, initial_b, lambda_)\n", "\n", "print(f\"dj_db: {dj_db}\", )\n", "print(f\"First few elements of regularized dj_dw:\\n {dj_dw[:4].tolist()}\", )\n", "\n", "# UNIT TESTS \n", "compute_gradient_reg_test(compute_gradient_reg)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
dj_db: 0.07138288792343
First few elements of regularized dj_dw:
[-0.010386028450548, 0.011409852883280, 0.0536273463274, 0.003140278267313]
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3.6 Learning parameters using gradient descent\n", "\n", "Similar to the previous parts, you will use your gradient descent function implemented above to learn the optimal parameters $w$,$b$. \n", "- If you have completed the cost and gradient for regularized logistic regression correctly, you should be able to step through the next cell to learn the parameters $w$. \n", "- After training our parameters, we will use it to plot the decision boundary. \n", "\n", "**Note**\n", "\n", "The code block below takes quite a while to run, especially with a non-vectorized version. You can reduce the `iterations` to test your implementation and iterate faster. If you have time later, run for 100,000 iterations to see better results." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false }, "outputs": [], "source": [ "# Initialize fitting parameters\n", "np.random.seed(1)\n", "initial_w = np.random.rand(X_mapped.shape[1])-0.5\n", "initial_b = 1.\n", "\n", "# Set regularization parameter lambda_ (you can try varying this)\n", "lambda_ = 0.01 \n", "\n", "# Some gradient descent settings\n", "iterations = 10000\n", "alpha = 0.01\n", "\n", "w,b, J_history,_ = gradient_descent(X_mapped, y_train, initial_w, initial_b, \n", " compute_cost_reg, compute_gradient_reg, \n", " alpha, iterations, lambda_)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", " Expected Output: Cost < 0.5 (Click for details)\n", "\n", "\n", "```\n", "# Using the following settings\n", "#np.random.seed(1)\n", "#initial_w = np.random.rand(X_mapped.shape[1])-0.5\n", "#initial_b = 1.\n", "#lambda_ = 0.01; \n", "#iterations = 10000\n", "#alpha = 0.01\n", "Iteration 0: Cost 0.72 \n", "Iteration 1000: Cost 0.59 \n", "Iteration 2000: Cost 0.56 \n", "Iteration 3000: Cost 0.53 \n", "Iteration 4000: Cost 0.51 \n", "Iteration 5000: Cost 0.50 \n", "Iteration 6000: Cost 0.48 \n", "Iteration 7000: Cost 0.47 \n", "Iteration 8000: Cost 0.46 \n", "Iteration 9000: Cost 0.45 \n", "Iteration 9999: Cost 0.45 \n", " \n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3.7 Plotting the decision boundary\n", "To help you visualize the model learned by this classifier, we will use our `plot_decision_boundary` function which plots the (non-linear) decision boundary that separates the positive and negative examples. \n", "\n", "- In the function, we plotted the non-linear decision boundary by computing the classifier’s predictions on an evenly spaced grid and then drew a contour plot of where the predictions change from y = 0 to y = 1.\n", "\n", "- After learning the parameters $w$,$b$, the next step is to plot a decision boundary similar to Figure 4.\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false }, "outputs": [], "source": [ "plot_decision_boundary(w, b, X_mapped, y_train)\n", "# Set the y-axis label\n", "plt.ylabel('Microchip Test 2') \n", "# Set the x-axis label\n", "plt.xlabel('Microchip Test 1') \n", "plt.legend(loc=\"upper right\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "### 3.8 Evaluating regularized logistic regression model\n", "\n", "You will use the `predict` function that you implemented above to calculate the accuracy of the regularized logistic regression model on the training set" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "deletable": false, "editable": false }, "outputs": [], "source": [ "#Compute accuracy on the training set\n", "p = predict(X_mapped, w, b)\n", "\n", "print('Train Accuracy: %f'%(np.mean(p == y_train) * 100))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Expected Output**:\n", "\n", " \n", " \n", "
Train Accuracy: ~80%
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Congratulations on completing the final lab of this course! We hope to see you in Course 2 where you will use more advanced learning algorithms such as neural networks and decision trees. Keep learning!**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " Please click here if you want to experiment with any of the non-graded code.\n", "

Important Note: Please only do this when you've already passed the assignment to avoid problems with the autograder.\n", "

    \n", "
  1. On the notebook’s menu, click “View” > “Cell Toolbar” > “Edit Metadata”\n", "
  2. Hit the “Edit Metadata” button next to the code cell which you want to lock/unlock\n", "
  3. Set the attribute value for “editable” to:\n", "
     • “true” if you want to unlock it\n", "
     • “false” if you want to lock it\n", "
  4. On the notebook’s menu, click “View” > “Cell Toolbar” > “None”\n", "
\n", "

Here's a short demo of how to do the steps above: \n", "
\n", " \"unlock_cells.gif\"\n", "

" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }