Colour image classification (CIFAR-10) using a CNN

As I mentioned in a previous post, a convolutional neural network (CNN) can be used to classify colour images in much the same way as greyscale images. The way to achieve this is by utilizing the depth dimension of our input tensors and kernels. In this example I'll be using the CIFAR-10 dataset, which consists of 32x32 colour images belonging to 10 different classes. You can see a few examples of each class in the following image from the CIFAR-10 website. Although previously I've talked about the Lasagne and nolearn packages (here and here), extending those to colour images…
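
To make the depth idea concrete, here is a minimal NumPy sketch (my own illustration, not code from the post) of a single 5x5 kernel convolved over a 3-channel image; the array names and kernel size are illustrative choices.

```python
import numpy as np

# A CIFAR-10 image: 3 colour channels (depth) of 32x32 pixels.
image = np.random.rand(3, 32, 32)

# A single 5x5 kernel must match the input depth, so it has shape (3, 5, 5).
kernel = np.random.rand(3, 5, 5)

def valid_conv_single(image, kernel):
    """Naive 'valid' convolution of one multi-channel kernel over one image.

    The sum runs over the depth dimension as well as the spatial patch,
    so a colour input still collapses to a single 2D activation map per kernel.
    """
    depth, kh, kw = kernel.shape
    _, ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[:, y:y + kh, x:x + kw]   # shape (3, 5, 5)
            out[y, x] = np.sum(patch * kernel)     # sum over depth and space
    return out

activation_map = valid_conv_single(image, kernel)
print(activation_map.shape)  # (28, 28): one 2D map, regardless of input depth
```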


Visualizing Convolutional Neural Networks using nolearn

We previously talked about Convolutional Neural Networks (CNN) and how to use them to recognize handwritten digits using Lasagne. While we can manually extract kernel parameters to visualize weights and activation maps (as discussed in the previous post), the nolearn package offers an easy way to visualize different elements of CNNs. nolearn is a wrapper around Lasagne (which itself builds on Theano), and offers some nice visualization options, such as plotting occlusion maps, to help diagnose the performance of a CNN model. Additionally, nolearn offers a very high-level API that makes model training even simpler than with Lasagne. In…
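
As a rough sketch of what an occlusion map measures, the following NumPy function (my own illustration, not nolearn's API) slides a grey square over an image and records the drop in the predicted probability of the true class; `predict_proba` here is a hypothetical stand-in for the prediction function of whatever trained model you have.

```python
import numpy as np

def occlusion_map(predict_proba, image, target_class, square=7):
    """Slide a grey square over `image` (shape (1, H, W)) and record how much
    the predicted probability of `target_class` drops at each position.

    `predict_proba` is assumed to take a batch of shape (1, 1, H, W) and
    return an array of class probabilities per example.
    """
    _, h, w = image.shape
    heatmap = np.zeros((h, w))
    baseline = predict_proba(image[np.newaxis])[0, target_class]
    for y in range(h):
        for x in range(w):
            occluded = image.copy()
            y0, y1 = max(0, y - square // 2), min(h, y + square // 2 + 1)
            x0, x1 = max(0, x - square // 2), min(w, x + square // 2 + 1)
            occluded[:, y0:y1, x0:x1] = 0.5   # grey patch hides this region
            prob = predict_proba(occluded[np.newaxis])[0, target_class]
            heatmap[y, x] = baseline - prob   # large drop => important region
    return heatmap
```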


Handwritten digit recognition with a CNN using Lasagne

Following my overview of Convolutional Neural Networks (CNN) in a previous post, let's now build a CNN model to 1) classify images of handwritten digits, and 2) see what is learned by this type of model. Handwritten digit recognition is the 'Hello World' example of the CNN world. I'll be using the MNIST database of handwritten digits, which you can find here. The MNIST database contains greyscale images of size 28x28 pixels, each containing a handwritten number from 0-9 (inclusive). The goal: given a single image, how do we build a model that can accurately recognize the number that…
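
For orientation, a Lasagne layer stack for this kind of model might look roughly like the sketch below; the filter counts, layer sizes and dropout rates are placeholder values of mine, not necessarily the architecture used in the post.

```python
import lasagne
from lasagne.layers import (InputLayer, Conv2DLayer, MaxPool2DLayer,
                            DenseLayer, DropoutLayer)
from lasagne.nonlinearities import rectify, softmax

# Input: batches of single-channel 28x28 MNIST images.
net = InputLayer(shape=(None, 1, 28, 28))

# Convolution + pooling blocks learn local features (the kernels we visualize later).
net = Conv2DLayer(net, num_filters=32, filter_size=(5, 5), nonlinearity=rectify)
net = MaxPool2DLayer(net, pool_size=(2, 2))
net = Conv2DLayer(net, num_filters=32, filter_size=(5, 5), nonlinearity=rectify)
net = MaxPool2DLayer(net, pool_size=(2, 2))

# Dense layers combine the learned features; softmax gives the 10 class probabilities.
net = DenseLayer(DropoutLayer(net, p=0.5), num_units=256, nonlinearity=rectify)
net = DenseLayer(DropoutLayer(net, p=0.5), num_units=10, nonlinearity=softmax)
```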


Overview of Convolutional Neural Networks (CNN)

Regular feed-forward artificial neural networks (ANN), like the type featured below, allow us to learn higher-order non-linear features, which typically results in improved prediction accuracy over simpler models like logistic regression. However, artificial neural networks have a number of drawbacks that make them less ideal for certain types of problems. For example, imagine a case where we wanted to classify images of handwritten digits. An image is just a 2D array of pixel intensity values, so a small 28x28 pixel image has a total of 784 pixels. If we wanted to classify this using an ANN, we would flatten…
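
A quick NumPy sketch of that flattening step (the hidden-layer size of 100 is an arbitrary illustrative value, not a figure from the post):

```python
import numpy as np

image = np.random.rand(28, 28)      # a 28x28 greyscale image
flat = image.reshape(-1)            # flattened to a vector of 784 pixels
print(flat.shape)                   # (784,)

# A fully connected hidden layer of (say) 100 units needs a weight for
# every pixel-unit pair: 784 * 100 = 78,400 weights, before biases.
hidden_units = 100
W = np.random.randn(flat.shape[0], hidden_units)
print(W.size)                       # 78400
```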


XOR Logic Gate – Neural Networks (3/3)

(Part 3 of a series on logic gates) We have previously discussed OR logic gates and the importance of bias units in AND gates. Here, we will introduce the XOR gate and show why logistic regression can't model the non-linearity required for this particular problem. As always, the full code for these examples can be found in my GitHub repository here. An XOR gate outputs True if either of the inputs is True, but not both. It acts like a more specific version of the OR gate:

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
…
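
To illustrate why adding a hidden layer solves this where a single logistic unit cannot, here is a small NumPy sketch with hand-picked weights (my own illustration, not the code from the linked repository) that composes OR and NAND units into XOR:

```python
import numpy as np

def step(z):
    """Hard threshold used in place of a sigmoid to keep the arithmetic exact."""
    return (z >= 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: one unit wired as OR, one as NAND (hand-picked weights and biases).
W_hidden = np.array([[1.0, -1.0],
                     [1.0, -1.0]])
b_hidden = np.array([-0.5, 1.5])
hidden = step(X.dot(W_hidden) + b_hidden)

# Output unit is an AND of the two hidden units: XOR = AND(OR, NAND).
w_out = np.array([1.0, 1.0])
b_out = -1.5
print(step(hidden.dot(w_out) + b_out))  # [0 1 1 0]
```

No single linear decision boundary can reproduce that output column, which is why the hidden layer's intermediate features are needed.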
