Deep Learning From Scratch - Theory and Implementation

DanielSabinasz


It is now time to say goodbye to our cuddly TensorSlow library and go professional by switching to the real TensorFlow.

As we've learned already, TensorFlow conceptually works exactly the same as our implementation. So why not just stick to our own implementation? There are a couple of reasons:

  1. TensorFlow is the product of years of effort spent on efficient implementations of all the algorithms relevant to our purposes. Fortunately, there are experts at Google whose everyday job is to optimize these implementations. We do not need to know all of these details; we only have to know what the algorithms do conceptually (which we do now) and how to call them.

  2. TensorFlow allows us to train our neural networks on the GPU (graphics processing unit), resulting in an enormous speedup through massive parallelization.

  3. Google is now building Tensor Processing Units (TPUs), which are integrated circuits specifically built to run and train TensorFlow graphs, resulting in yet another enormous speedup.

  4. TensorFlow comes pre-equipped with a lot of neural network architectures that would be cumbersome to build on our own.

  5. TensorFlow comes with a high-level API called Keras that allows us to build neural network architectures far more easily than by defining the computational graph by hand, as we did up until now. We will learn more about Keras in a later lesson.

So let's get started. Installing TensorFlow is very easy.

pip install tensorflow

If we want GPU acceleration, we have to install the package tensorflow-gpu:

pip install tensorflow-gpu

In our code, we import it as follows:

import tensorflow as tf

Since the syntax we are used to from the previous sections mimics the TensorFlow syntax, we already know how to use TensorFlow. We only have to make the following changes:

  • Add tf. to the front of all our function calls and classes
  • Call session.run(tf.global_variables_initializer()) after building the graph
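These two changes can be sketched with a tiny toy graph. Note one assumption in the sketch below: it accesses the 1.x-style API through tf.compat.v1 so that it also runs on TensorFlow 2, where this graph API was relocated; on the TensorFlow 1.x versions this article uses, you would simply write tf.placeholder, tf.Session, and so on directly, exactly as in the listing further down.

```python
import tensorflow as tf

# On TensorFlow 2, the 1.x graph API used in this article lives under
# tf.compat.v1; on TensorFlow 1.x itself, these are plain tf.* calls.
tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Same pattern as our TensorSlow code, with the tf. prefix added:
a = tf1.placeholder(dtype=tf.float64)
b = tf1.placeholder(dtype=tf.float64)
w = tf1.Variable(2.0, dtype=tf.float64)
out = w * (a + b)

session = tf1.Session()
# The one genuinely new step: initialize all variables after building the graph.
session.run(tf1.global_variables_initializer())
result = session.run(out, feed_dict={a: 1.0, b: 2.0})
print(result)  # 2.0 * (1.0 + 2.0) = 6.0
```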

The rest is exactly the same. Let's recreate the multi-layer perceptron from the previous section using TensorFlow:

Multi-Layer Perceptron
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Create two clusters of red points centered at (0, 0) and (1, 1), respectively.
red_points = np.concatenate((
    0.2*np.random.randn(25, 2) + np.array([[0, 0]]*25),
    0.2*np.random.randn(25, 2) + np.array([[1, 1]]*25)
))

# Create two clusters of blue points centered at (0, 1) and (1, 0), respectively.
blue_points = np.concatenate((
    0.2*np.random.randn(25, 2) + np.array([[0, 1]]*25),
    0.2*np.random.randn(25, 2) + np.array([[1, 0]]*25)
))

# Create training input placeholder
X = tf.placeholder(dtype=tf.float64)

# Create placeholder for the training classes
c = tf.placeholder(dtype=tf.float64)

# Build a hidden layer
W_hidden = tf.Variable(np.random.randn(2, 2))
b_hidden = tf.Variable(np.random.randn(2))
p_hidden = tf.sigmoid(tf.add(tf.matmul(X, W_hidden), b_hidden))

# Build the output layer
W_output = tf.Variable(np.random.randn(2, 2))
b_output = tf.Variable(np.random.randn(2))
p_output = tf.nn.softmax(tf.add(tf.matmul(p_hidden, W_output), b_output))

# Build cross-entropy loss
J = tf.negative(tf.reduce_sum(tf.reduce_sum(tf.multiply(c, tf.log(p_output)), axis=1)))

# Build minimization op
minimization_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(J)

# Build placeholder inputs
feed_dict = {
    X: np.concatenate((blue_points, red_points)),
    c: [[1, 0]] * len(blue_points)
       + [[0, 1]] * len(red_points)
}

# Create session
session = tf.Session()

# Initialize variables
session.run(tf.global_variables_initializer())
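Setting TensorFlow aside for a moment, the forward pass and loss in the listing can be sanity-checked in plain NumPy. The sketch below uses made-up random weights and inputs (not the trained parameters, and the sigmoid/softmax helpers are our own), but it traces the same computation: a sigmoid hidden layer, a softmax output layer, and the cross-entropy J = -Σ_samples Σ_classes c · log(p).

```python
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

# Made-up inputs and parameters, just to trace the shapes and the math.
np.random.seed(0)
X = np.random.randn(4, 2)                                     # four points in the plane
c = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)   # one-hot class labels
W_hidden, b_hidden = np.random.randn(2, 2), np.random.randn(2)
W_output, b_output = np.random.randn(2, 2), np.random.randn(2)

# Same computation as the TensorFlow graph above:
p_hidden = sigmoid(X @ W_hidden + b_hidden)
p_output = softmax(p_hidden @ W_output + b_output)

# Cross-entropy: sum over classes per sample, then sum over samples, negated.
J = -np.sum(np.sum(c * np.log(p_output), axis=1))
print(p_output.shape, J)  # each row of p_output sums to 1; J is a positive scalar
```

Each row of p_output is a probability distribution over the two classes, and J penalizes low probability assigned to the true class — exactly what the tf.negative(tf.reduce_sum(...)) expression in the listing computes.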