Deep Learning From Scratch - Theory and Implementation

DanielSabinasz

Training criterion

Great, so now we are able to classify points using a linear classifier and compute the probability that the point belongs to a certain class, provided that we know the appropriate parameters for the weight matrix $W$ and bias $b$. The natural question that arises is how to come up with appropriate values for these. In the red/blue example, we just looked at the training points and guessed a line that nicely separated the training points. But generally we do not want to specify the separating line by hand. Rather, we just want to supply the training points to the computer and let it come up with a good separating line on its own. But how do we judge whether a separating line is good or bad?

The misclassification rate

Ideally, we want to find a line that makes as few errors as possible. For every point $x$ and class $c(x)$ drawn from the true but unknown data-generating distribution $p_{data}(x, c(x))$, we want to minimize the probability that our perceptron classifies it incorrectly, i.e. the probability of misclassification:

$$\underset{W, b}{\operatorname{argmin}} \; p(\hat{c}(x) \neq c(x) \mid x, c(x) \sim p_{data})$$

Generally, we do not know the data-generating distribution $p_{data}$, so it is impossible to compute the exact probability of misclassification. Instead, we are given a finite list of $N$ training points consisting of the values of $x$ with their corresponding classes. In the following, we represent the list of training points as a matrix $X \in \mathbb{R}^{N \times d}$ where each row corresponds to one training point and each column to one dimension of the input space. Moreover, we represent the true classes as a matrix $c \in \mathbb{R}^{N \times C}$ where $c_{i,j} = 1$ if the $i$-th training sample has class $j$. Similarly, we represent the predicted classes as a matrix $\hat{c} \in \mathbb{R}^{N \times C}$ where $\hat{c}_{i,j} = 1$ if the $i$-th training sample has predicted class $j$. Finally, we represent the output probabilities of our model as a matrix $p \in \mathbb{R}^{N \times C}$ where $p_{i,j}$ contains the probability that the $i$-th training sample belongs to the $j$-th class.
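As a purely illustrative sketch (the numbers below are made up), for $N = 3$ training points, $d = 2$ input dimensions and $C = 2$ classes, these matrices could look as follows:

```python
import numpy as np

# Illustrative values only: N = 3 training points in d = 2 dimensions
X = np.array([[ 1.0,  2.0],
              [ 3.0,  0.5],
              [-1.0, -2.0]])

# True classes, one-hot encoded: samples 1 and 2 belong to class 0, sample 3 to class 1
c = np.array([[1, 0],
              [1, 0],
              [0, 1]])

# Model output: p[i, j] = probability that the i-th sample belongs to class j
p = np.array([[0.9, 0.1],
              [0.6, 0.4],
              [0.2, 0.8]])
```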

We could use the training data to find a classifier that minimizes the misclassification rate on the training samples:

$$\underset{W, b}{\operatorname{argmin}} \; \frac{1}{N} \sum_{i=1}^{N} I(\hat{c}_i \neq c_i)$$
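For illustration, this empirical misclassification rate can be computed directly with numpy from the one-hot matrices $c$ and $\hat{c}$ defined above. The helper below is just for exposition and not part of our computational graph:

```python
import numpy as np

def misclassification_rate(c, c_hat):
    """Fraction of samples whose predicted class differs from the true class.

    c, c_hat: (N, C) one-hot matrices of true and predicted classes.
    """
    true_classes = np.argmax(c, axis=1)
    predicted_classes = np.argmax(c_hat, axis=1)
    return np.mean(true_classes != predicted_classes)
```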

However, it turns out that finding a linear classifier that minimizes the misclassification rate is an intractable problem, i.e. its computational complexity is exponential in the number of input dimensions, rendering it impractical. Moreover, even if we have found a classifier that minimizes the misclassification rate on the training samples, it might be possible to make the classifier more robust to unseen samples by pushing the separating line further away from the training points, even if this does not reduce the misclassification rate on the training samples.

Maximum likelihood estimation

An alternative is to use maximum likelihood estimation, where we try to find the parameters that maximize the probability of the training data. Assuming the training samples are drawn independently, and using the fact that the logarithm is monotonically increasing (so maximizing the log-likelihood yields the same parameters), we obtain:

$$\begin{aligned}
& \underset{W, b}{\operatorname{argmax}} \; p(\hat{c} = c) \\
=\; & \underset{W, b}{\operatorname{argmax}} \; \prod_{i=1}^{N} p(\hat{c}_i = c_i) \\
=\; & \underset{W, b}{\operatorname{argmax}} \; \prod_{i=1}^{N} \prod_{j=1}^{C} p_{i,j}^{I(c_i = j)} \\
=\; & \underset{W, b}{\operatorname{argmax}} \; \prod_{i=1}^{N} \prod_{j=1}^{C} p_{i,j}^{c_{i,j}} \\
=\; & \underset{W, b}{\operatorname{argmax}} \; \log \prod_{i=1}^{N} \prod_{j=1}^{C} p_{i,j}^{c_{i,j}} \\
=\; & \underset{W, b}{\operatorname{argmax}} \; \sum_{i=1}^{N} \sum_{j=1}^{C} c_{i,j} \log p_{i,j} \\
=\; & \underset{W, b}{\operatorname{argmin}} \; -\sum_{i=1}^{N} \sum_{j=1}^{C} c_{i,j} \log p_{i,j} \\
=\; & \underset{W, b}{\operatorname{argmin}} \; J
\end{aligned}$$

We refer to $J = -\sum_{i=1}^{N} \sum_{j=1}^{C} c_{i,j} \log p_{i,j}$ as the cross-entropy loss. We want to minimize $J$.
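Written directly in numpy, the loss for given one-hot labels $c$ and predicted probabilities $p$ would look as follows. This is just a plain-numpy sanity check of the formula before we express $J$ as graph operations below:

```python
import numpy as np

def cross_entropy(c, p):
    """Cross-entropy loss J = -sum_{i,j} c[i, j] * log(p[i, j]).

    c: (N, C) one-hot matrix of true classes.
    p: (N, C) matrix of predicted class probabilities.
    """
    return -np.sum(c * np.log(p))
```

Confident correct predictions contribute almost nothing to $J$, while confident wrong predictions are penalized heavily, since $\log p_{i,j} \to -\infty$ as $p_{i,j} \to 0$.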

We can regard $J$ as yet another operation in our computational graph that takes the input data $X$, the true classes $c$ and our predicted probabilities $p$ (which are the output of the $\sigma$ operation) as input and computes a real number designating the loss:

(Figure: the computational graph, extended by the cross-entropy loss operation $J$)

Building an operation that computes J

We can build up $J$ from various more primitive operations. Using the element-wise matrix multiplication $\odot$, we can rewrite $J$ as follows:

$$J = -\sum_{i=1}^{N} \sum_{j=1}^{C} (c \odot \log p)_{i,j}$$

Going from the inside out, we can see that we need to implement the following operations:

  • $\log$: The element-wise logarithm of a matrix or vector
  • $\odot$: The element-wise product of two matrices
  • $\sum_{j=1}^{C}$: Sum over the columns of a matrix
  • $\sum_{i=1}^{N}$: Sum over the rows of a matrix
  • $-$: Taking the negative

Let's implement these operations.

log

This computes the element-wise logarithm of a tensor.

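A minimal sketch of this operation, assuming the Operation base class from the earlier sections of this tutorial (its constructor registers the input nodes, and compute receives their values):

```python
import numpy as np

# Assumes the Operation base class defined in the earlier sections.
class log(Operation):
    """Computes the element-wise natural logarithm of x."""

    def __init__(self, x):
        # x: node whose output we take the logarithm of
        super().__init__([x])

    def compute(self, x_value):
        # x_value: numerical value of the input node
        return np.log(x_value)
```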

multiply / $\odot$

This computes the element-wise product of two tensors of the same shape.

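A sketch along the same lines, again assuming the Operation base class from the earlier sections:

```python
# Assumes the Operation base class defined in the earlier sections.
class multiply(Operation):
    """Computes the element-wise product x * y."""

    def __init__(self, x, y):
        # x, y: input nodes whose outputs have the same shape
        super().__init__([x, y])

    def compute(self, x_value, y_value):
        # Element-wise product of the two input values
        return x_value * y_value
```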

reduce_sum

We'll implement the summation over rows, columns, etc. in a single operation where we specify an axis. This way, we can use the same method for all types of summations. For example, axis = 0 sums over the rows, axis = 1 sums over the columns, etc. This is exactly what numpy.sum does.

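A sketch of this operation, assuming the Operation base class from the earlier sections; it simply delegates to numpy.sum with the chosen axis:

```python
import numpy as np

# Assumes the Operation base class defined in the earlier sections.
class reduce_sum(Operation):
    """Computes the sum of a tensor along the specified axis."""

    def __init__(self, A, axis=None):
        # A: input node; axis: axis to sum over (None sums all entries)
        super().__init__([A])
        self.axis = axis

    def compute(self, A_value):
        return np.sum(A_value, self.axis)
```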

negative

This computes the element-wise negative of a tensor.

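A sketch of this operation, once more assuming the Operation base class from the earlier sections:

```python
# Assumes the Operation base class defined in the earlier sections.
class negative(Operation):
    """Computes the element-wise negative of x."""

    def __init__(self, x):
        # x: node whose output we negate
        super().__init__([x])

    def compute(self, x_value):
        return -x_value
```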

Putting it all together

Using these operations, we can now compute $J = -\sum_{i=1}^{N} \sum_{j=1}^{C} (c \odot \log p)_{i,j}$ as follows:

J = negative(reduce_sum(reduce_sum(multiply(c, log(p)), axis=1)))

Example

Let's now compute the loss of our red/blue perceptron.

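A sketch of what this computation could look like, assuming the Graph, placeholder, Variable, add, matmul, softmax and Session classes built in the earlier sections, as well as the blue_points and red_points arrays from the red/blue example; the values chosen for $W$ and $b$ are illustrative parameters that separate the two classes:

```python
import numpy as np

# Assumes Graph, placeholder, Variable, add, matmul, softmax and Session
# from the earlier sections, plus blue_points / red_points from the
# red/blue example.

# Create a new graph
Graph().as_default()

X = placeholder()
c = placeholder()

# Illustrative parameters: one weight column per class
W = Variable([
    [1, -1],
    [1, -1]
])
b = Variable([0, 0])

# Predicted class probabilities of the perceptron
p = softmax(add(matmul(X, W), b))

# Cross-entropy loss
J = negative(reduce_sum(reduce_sum(multiply(c, log(p)), axis=1)))

# Evaluate J on the training points with their one-hot class labels
session = Session()
print(session.run(J, {
    X: np.concatenate((blue_points, red_points)),
    c: [[1, 0]] * len(blue_points) + [[0, 1]] * len(red_points)
}))
```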