Deep Learning From Scratch - Theory and Implementation

DanielSabinasz

Training criterion

Great, so now we are able to classify points using a linear classifier and compute the probability that the point belongs to a certain class, provided that we know the appropriate parameters for the weight matrix $W$ and bias $b$. The natural question that arises is how to come up with appropriate values for these. In the red/blue example, we just looked at the training points and guessed a line that nicely separated the training points. But generally we do not want to specify the separating line by hand. Rather, we just want to supply the training points to the computer and let it come up with a good separating line on its own. But how do we judge whether a separating line is good or bad?

The misclassification rate

Ideally, we want to find a line that makes as few errors as possible. For every point $x$ with class $c(x)$ drawn from the true but unknown data-generating distribution $p_{\text{data}}(x, c(x))$, we want to minimize the probability that our perceptron classifies it incorrectly - the probability of misclassification:

$$\underset{W, b}{\operatorname{arg\,min}} \; p(\hat{c}(x) \neq c(x) \mid x, c(x) \sim p_{\text{data}})$$

Generally, we do not know the data-generating distribution $p_{\text{data}}$, so it is impossible to compute the exact probability of misclassification. Instead, we are given a finite list of $N$ training points consisting of the values of $x$ with their corresponding classes. In the following, we represent the list of training points as a matrix $X \in \mathbb{R}^{N \times d}$ where each row corresponds to one training point and each column to one dimension of the input space. Moreover, we represent the true classes as a matrix $c \in \mathbb{R}^{N \times C}$ where $c_{i,j} = 1$ if the $i$-th training sample has class $j$. Similarly, we represent the predicted classes as a matrix $\hat{c} \in \mathbb{R}^{N \times C}$ where $\hat{c}_{i,j} = 1$ if the $i$-th training sample has predicted class $j$. Finally, we represent the output probabilities of our model as a matrix $p \in \mathbb{R}^{N \times C}$ where $p_{i,j}$ contains the probability that the $i$-th training sample belongs to the $j$-th class.
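As a concrete illustration of these matrices (a hypothetical toy setup, not from the original text), the one-hot class matrix $c$ for $N = 3$ training points and $C = 2$ classes could be built like this:

```python
import numpy as np

# Hypothetical toy data: N = 3 training points in d = 2 dimensions
X = np.array([[0.5, 1.0],
              [-1.2, 0.3],
              [2.0, -0.7]])

# True class index of each of the 3 points (C = 2 classes)
class_indices = np.array([0, 1, 0])

# One-hot class matrix c of shape (N, C): c[i, j] = 1 iff sample i has class j
c = np.zeros((3, 2))
c[np.arange(3), class_indices] = 1
```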

We could use the training data to find a classifier that minimizes the misclassification rate on the training samples:

$$\underset{W, b}{\operatorname{arg\,min}} \; \frac{1}{N} \sum_{i=1}^N I(\hat{c}_i \neq c_i)$$
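For illustration, the misclassification rate itself is easy to evaluate for given predictions. A short sketch with made-up one-hot matrices (not from the original text):

```python
import numpy as np

# Hypothetical true and predicted one-hot class matrices (N = 4, C = 2)
c     = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
c_hat = np.array([[1, 0], [1, 0], [1, 0], [0, 1]])

# I(c_hat_i != c_i): True where the predicted row differs from the true row
errors = np.any(c_hat != c, axis=1)

# Average over the N samples: here exactly one of four predictions is wrong
misclassification_rate = errors.mean()
```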

However, it turns out that finding a linear classifier that minimizes the misclassification rate is an intractable problem, i.e. its computational complexity is exponential in the number of input dimensions, rendering it impractical. Moreover, even if we have found a classifier that minimizes the misclassification rate on the training samples, it might be possible to make the classifier more robust to unseen samples by pushing the classes further apart, even if this does not reduce the misclassification rate on the training samples.

Maximum likelihood estimation

An alternative is to use maximum likelihood estimation, where we try to find the parameters that maximize the probability of the training data:

$$\underset{W, b}{\operatorname{arg\,max}} \; p(\hat{c} = c)$$
$$= \underset{W, b}{\operatorname{arg\,max}} \; \prod_{i=1}^N p(\hat{c}_i = c_i)$$
$$= \underset{W, b}{\operatorname{arg\,max}} \; \prod_{i=1}^N \prod_{j=1}^C p_{i,j}^{I(c_i = j)}$$
$$= \underset{W, b}{\operatorname{arg\,max}} \; \prod_{i=1}^N \prod_{j=1}^C p_{i,j}^{c_{i,j}}$$
$$= \underset{W, b}{\operatorname{arg\,max}} \; \log \prod_{i=1}^N \prod_{j=1}^C p_{i,j}^{c_{i,j}}$$
$$= \underset{W, b}{\operatorname{arg\,max}} \; \sum_{i=1}^N \sum_{j=1}^C c_{i,j} \log p_{i,j}$$
$$= \underset{W, b}{\operatorname{arg\,min}} \; -\sum_{i=1}^N \sum_{j=1}^C c_{i,j} \log p_{i,j}$$
$$= \underset{W, b}{\operatorname{arg\,min}} \; J$$

We refer to $J = -\sum_{i=1}^N \sum_{j=1}^C c_{i,j} \log p_{i,j}$ as the cross-entropy loss. We want to minimize $J$.
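To make the formula concrete, here is a small numerical sketch (with made-up toy values, not from the text) that evaluates $J$ with plain NumPy:

```python
import numpy as np

# Hypothetical predicted probabilities p (each row sums to 1)
p = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# One-hot true classes c: sample 0 is class 0, sample 1 is class 1
c = np.array([[1, 0],
              [0, 1]])

# Cross-entropy loss J = -sum_i sum_j c_ij * log(p_ij).
# Only the log-probabilities of the true classes contribute:
# J = -(log 0.9 + log 0.8)
J = -np.sum(c * np.log(p))
```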

We can regard $J$ as yet another operation in our computational graph that takes the input data $X$, the true classes $c$ and our predicted probabilities $p$ (which are the output of the $\sigma$ operation) as input and computes a real number designating the loss:

(Figure: the computational graph, extended with an operation $J$ that consumes $c$ and $p$ and outputs the loss.)

Building an operation that computes $J$

We can build up $J$ from various more primitive operations. Using the element-wise matrix multiplication $\odot$, we can rewrite $J$ as follows:

$$-\sum_{i=1}^N \sum_{j=1}^C (c \odot \log p)_{i,j}$$

Going from the inside out, we can see that we need to implement the following operations:

  • $\log$: The element-wise logarithm of a matrix or vector
  • $\odot$: The element-wise product of two matrices
  • $\sum_{j=1}^C$: Sum over the columns of a matrix
  • $\sum_{i=1}^N$: Sum over the rows of a matrix
  • $-$: Taking the negative

Let's implement these operations.

log

This computes the element-wise logarithm of a tensor.

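The interactive code widget is not reproduced here. Below is a sketch of what this operation might look like, assuming the `Operation` base class interface from the earlier sections of this tutorial; a minimal stand-in for that class is included so the snippet runs on its own:

```python
import numpy as np

class Operation:
    """Minimal stand-in for the Operation base class defined earlier
    in this series; assumed to record the op's input nodes."""
    def __init__(self, input_nodes=None):
        self.input_nodes = input_nodes or []

class log(Operation):
    """Computes the element-wise logarithm of x."""
    def __init__(self, x):
        super().__init__([x])

    def compute(self, x_value):
        return np.log(x_value)
```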

multiply / $\odot$

This computes the element-wise product of two tensors of the same shape.

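Again, the original interactive snippet is not shown here; a sketch under the same assumed `Operation` interface (minimal stand-in included so it runs standalone):

```python
import numpy as np

class Operation:
    """Minimal stand-in for the Operation base class from earlier sections."""
    def __init__(self, input_nodes=None):
        self.input_nodes = input_nodes or []

class multiply(Operation):
    """Computes the element-wise product x * y of two same-shaped tensors."""
    def __init__(self, x, y):
        super().__init__([x, y])

    def compute(self, x_value, y_value):
        return x_value * y_value
```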

reduce_sum

We'll implement the summation over rows, columns, etc. in a single operation where we specify an axis. This way, we can use the same method for all types of summations. For example, axis = 0 sums over the rows, axis = 1 sums over the columns, etc. This is exactly what numpy.sum does.

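A possible implementation, delegating to `numpy.sum` as the text suggests (same hedged assumptions about the `Operation` base class as above, with a minimal stand-in included):

```python
import numpy as np

class Operation:
    """Minimal stand-in for the Operation base class from earlier sections."""
    def __init__(self, input_nodes=None):
        self.input_nodes = input_nodes or []

class reduce_sum(Operation):
    """Sums the entries of A along the given axis
    (all entries if axis is None), just like numpy.sum."""
    def __init__(self, A, axis=None):
        super().__init__([A])
        self.axis = axis

    def compute(self, A_value):
        return np.sum(A_value, self.axis)
```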

negative

This computes the element-wise negative of a tensor.

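One more sketch in the same assumed style (minimal `Operation` stand-in included so the snippet is self-contained):

```python
import numpy as np

class Operation:
    """Minimal stand-in for the Operation base class from earlier sections."""
    def __init__(self, input_nodes=None):
        self.input_nodes = input_nodes or []

class negative(Operation):
    """Computes the element-wise negative -x."""
    def __init__(self, x):
        super().__init__([x])

    def compute(self, x_value):
        return -x_value
```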

Putting it all together

Using these operations, we can now compute $J = -\sum_{i=1}^N \sum_{j=1}^C (c \odot \log p)_{i,j}$ as follows:

J = negative(reduce_sum(reduce_sum(multiply(c, log(p)), axis=1)))

Example

Let's now compute the loss of our red/blue perceptron.

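The interactive example is not reproduced here. The following self-contained sketch, with made-up point clusters and hand-picked parameters mirroring the earlier red/blue setup (all values are illustrative assumptions, not the author's), computes the same cross-entropy loss with plain NumPy:

```python
import numpy as np

np.random.seed(0)

# Hypothetical red/blue training data: two Gaussian clusters of 50 points each
red_points  = np.random.randn(50, 2) - 2 * np.ones((50, 2))
blue_points = np.random.randn(50, 2) + 2 * np.ones((50, 2))
X = np.concatenate((blue_points, red_points))

# One-hot true classes c: first 50 rows blue (class 0), last 50 red (class 1)
c = np.concatenate((np.tile([1.0, 0.0], (50, 1)),
                    np.tile([0.0, 1.0], (50, 1))))

# Hand-picked linear classifier parameters (assumed, for illustration only)
W = np.array([[1.0, -1.0],
              [1.0, -1.0]])
b = np.array([0.0, 0.0])

# Softmax probabilities p for each point, then cross-entropy loss J
logits = X @ W + b
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
J = -np.sum(c * np.log(p))
```

Since the hand-picked line separates the two clusters well, the probabilities of the true classes are close to 1 and the resulting loss is small.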