
1       Artificial Neural Network

Artificial neural networks (ANNs) are inspired by biological neural networks and work by mimicking the same concept. The idea originated from the study of information processing in biological systems and its mathematical representation by McCulloch & Pitts (1943). The basic building unit of a biological neural network is the neuron, known as a perceptron in an artificial neural network. These units perform a very simple function, but when combined they can build very complex classification functions whose effectiveness can be increased through a high degree of parallelization (Rojas, 2013).

1.1      Perceptron

The idea of a hypothetical nervous system known as the perceptron was introduced by Rosenblatt (1958). A perceptron mimics the working of a biological neuron. A neuron has dendrites through which information flows into the cell body, where it is processed and then passed to the axon, which connects to the dendrites of the next neuron. Similarly, a perceptron has multiple inputs, a processing stage, and a single output. The output is obtained by taking the weighted sum of the inputs, adding a bias, and applying an activation function, as illustrated in figure 3.1.


Figure 3.1: Illustration of a biological and an artificial neuron [1]

 

A simple activation function gives a binary output. An artificial neural network consisting of a single neuron is too trivial for complex tasks. To address a wider set of problems, perceptrons can be combined to form multilayer perceptrons, or feed-forward networks.
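As a minimal sketch of the computation described above, a single perceptron can be written as a weighted sum of the inputs plus a bias, followed by a step activation that produces a binary output. The weights, bias, and the AND-gate example below are illustrative assumptions, not values taken from this work.

import numpy as np

def perceptron(x, w, b):
    # Weighted sum of the inputs plus a bias, passed through a
    # simple step activation to give a binary output.
    z = np.dot(w, x) + b
    return 1 if z > 0 else 0

# Illustrative weights and bias that make the perceptron act as an AND gate.
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, b))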

1.2      Types of Artificial Neural Network

There are different types of ANNs depending upon the number of layers, their functionality, and the flow of data. The major categories are explained below.

1.2.1    Single Layer Feed Forward Network

The simplest kind of artificial neural network is a single-layer feed-forward, or acyclic, network, which consists of an input layer of source nodes that projects onto an output layer. In such networks there are no connections from neurons in the output layer back to neurons in the input layer. Since no computation takes place in the input layer, it is not counted; these are therefore called single-layer networks, as shown in figure 3.2.

 

Figure 3.2: Representation of a single layer feed forward network (Mas & Flores, 2008)
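As a rough sketch under assumed dimensions and a sigmoid activation (neither is specified in the text), a single-layer feed-forward network is just one weight matrix that projects the source nodes of the input layer directly onto the output layer:

import numpy as np

def single_layer_forward(x, W, b):
    # The input layer projects directly onto the output layer
    # through a single weight matrix; there are no hidden layers.
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))  # sigmoid activation

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # 4 source nodes in the input layer
W = rng.normal(size=(2, 4))   # 2 output neurons, each connected to all 4 inputs
b = np.zeros(2)
print(single_layer_forward(x, W, b))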

 

1.2.2    Multilayer Feed Forward Network

A multilayer feed-forward network contains multiple neurons arranged in layers, with connections between neurons in adjacent layers. It basically consists of three kinds of layers: an input layer, one or more hidden layers, and an output layer. The input neurons in the first layer do not perform any calculations; the hidden layers perform the calculations. The inputs in these networks are labeled, so the multilayer perceptron (MLP) uses supervised learning through the backpropagation algorithm and the network knows the desired output for each given input. The data flows from the input nodes to the output nodes, passing through the hidden nodes, as shown in figure 3.3. The data flows only in the forward direction; there is no backward passing of data and there are no loops or cycles such as those found in recurrent neural networks (Goodfellow, Bengio, & Courville, 2016).

Figure 3.3: Multi-layer feed forward network [2]
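A minimal sketch of the forward pass through such a network is given below; the layer sizes, the ReLU activation in the hidden layer, and the random weights are assumptions for illustration, and training by backpropagation is omitted:

import numpy as np

def mlp_forward(x, layers):
    # Data flows from the input nodes through the hidden layer(s)
    # to the output nodes, with no loops or backward connections.
    h = x
    for W, b in layers[:-1]:
        h = np.maximum(0.0, W @ h + b)   # hidden layers (ReLU assumed)
    W_out, b_out = layers[-1]
    return W_out @ h + b_out             # output layer (linear here)

rng = np.random.default_rng(1)
layers = [
    (rng.normal(size=(8, 4)), np.zeros(8)),  # input (4) -> hidden (8)
    (rng.normal(size=(3, 8)), np.zeros(3)),  # hidden (8) -> output (3)
]
print(mlp_forward(rng.normal(size=4), layers))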

 

1.2.3    Recurrent Network

A recurrent network resembles a feed-forward network with an input layer, one or more hidden layers, and an output layer, but it contains at least one feedback loop, as shown in figure 3.4. Recurrent networks differ from feed-forward networks because of this feedback loop: they feed their own output back in as input from one moment to the next.

 

Figure 3.4: Recurrent neural network [3]
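As a rough sketch of this feedback loop (the tanh activation, the dimensions, and the random input sequence are illustrative assumptions), each step feeds the hidden state of the previous moment back in alongside the new input:

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # The previous hidden state is fed back as an extra input
    # at the current moment; this is the feedback loop.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(2)
W_x = rng.normal(size=(5, 3))   # input (3) -> hidden (5)
W_h = rng.normal(size=(5, 5))   # feedback connection: hidden -> hidden
b = np.zeros(5)

h = np.zeros(5)                 # initial hidden state
for t in range(4):              # process a short input sequence
    h = rnn_step(rng.normal(size=3), h, W_x, W_h, b)
print(h)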

Feed-forward networks are believed to achieve high performance on vision and speech problems (Bengio, 2009). In this research a convolutional neural network, which is a kind of feed-forward network, is used.