
Neural Network

Optical character recognition - The VB program

The artificial neuron (shown in Fig. 1) is a processing unit that has a set of inputs and a threshold activation function that determines the output of the neuron itself. The inputs of a neuron may originate from outside or from the outputs of other neurons. When two neurons are interconnected, the junction point between them is called a synapse (following the nomenclature used for the biological neuron).

Fig. 1 - Neuron example    Fig. 2 - Neural network example

Fig. 2 shows a type of neural network called feed-forward.

Here we briefly analyze the type of neuron used in our network: the perceptron, designed by Rosenblatt in 1962 in order to recognize characters.

Simple Perceptron

A = Σj Wj·Xj    (Eq 1.1)

y = f(A) = 1 if A ≥ q, and y = -1 if A < q    (Eq 1.2)

A is the activation and the Xj are the inputs; the Wj are the synaptic weights and q is the threshold. The weights and the threshold change during the training process of the neural network. The activation of a neuron is the linear combination of the inputs Xj with the weights Wj (Eq 1.1). This is a general principle, applied to virtually all artificial neurons, and it has biological foundations.


The inputs X1, X2, X3, ..., Xn are known as bipolar inputs, because they can take only 2 values: 1 and -1. Consider the vector X = (X1, X2, X3, ..., Xn). This vector is called the input pattern.


The function f(A) can be chosen in different ways. In our case we use the non-linear function above (Eq 1.2). The reason for choosing a function of this type is that:


  • If A = Σ Wi·Xi ≥ q then we have y = 1 (that is, excitatory)
  • If A = Σ Wi·Xi < q then we have y = -1 (that is, inhibitory).


Learning consists of giving the perceptron an input pattern together with the response that should be associated with it. The synaptic weights Wi and the threshold q are modified according to a comparison between the actual perceptron response (evaluated with Eq 1.1) and the desired one.

Let u be the desired response and y the actual response.



The synaptic weights are not corrected if the response y = u, while they are corrected if the two responses differ. The correction factor is due to a Canadian psychologist, Donald Hebb.

Hebb discovered that in biological circuits the connections are reinforced when two connected neurons are active simultaneously, and weakened otherwise.

Then consider the correction factor Dwi = u*xi. We have the following table:


 u    xi   Dwi
 1    -1   -1
-1     1   -1
-1    -1    1
 1     1    1


Look at that.

When the input xi agrees with the output u, Dwi is positive, and vice versa when they disagree. Now add these variations at each training step, as shown in Eq 1.3 below. What happens? During training, when xi and u have the same value, wi grows (the connection is reinforced), and when xi and u differ, wi decreases (the connection is weakened).


wi(t+1) = wi(t) + Dwi(t)    (Eq 1.3)


t is a generic instant and t+1 is the next one!


Typically, the factor Dwi = u*xi is multiplied by a constant h, the learning rate.

This multiplicative factor lets you vary the speed of training (but we won't go into these details).
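For the curious, the whole rule can be sketched in a few lines of Python (my own sketch, not part of the VB program; the names train_step and eta are invented):

```python
def train_step(w, x, u, eta=1):
    """One training step for a single perceptron.

    w   : list of synaptic weights
    x   : bipolar input pattern (each element is 1 or -1)
    u   : desired bipolar output (1 or -1)
    eta : learning rate (the constant h in the text)
    Returns the (possibly updated) weights.
    """
    a = sum(wi * xi for wi, xi in zip(w, x))  # activation (Eq 1.1)
    y = 1 if a > 0 else -1                    # output (Eq 1.2)
    if y != u:                                # correct only on a wrong answer
        w = [wi + eta * u * xi for wi, xi in zip(w, x)]  # Eq 1.3, Hebb rule
    return w
```

When xi and u agree the weight grows, and when they disagree it shrinks, exactly as in the table above.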


Our Neural Network


The first thing to do before starting to program is to decide what the input and the output of the network are, and how many perceptrons we need.

Our characters are drawn on a matrix of 6 rows by 5 columns. We associate each element of the matrix with a bipolar value: -1 if the point on the matrix is white, 1 if the point is black. In this way we get a numerical matrix, which we denote by M. Consider for example the letter "C": the point matrix (Fig. 4) and the numerical matrix (Fig. 3) are shown below.


Fig. 3 - Numerical matrix for the "C" character    Fig. 4 - Point matrix for the "C" character


Problem: the input of the neurons is a vector, while we have matrices. What can we do? Simple: build an isomorphism between the space of matrices and the space of vectors. These are big words to say simply that we establish a one-to-one correspondence between vectors and matrices. How?

The rows of the matrix are placed side by side, in sequence, to form a vector.

In our case the input vector for the letter "C" will be given by:


x = (1 1 1 1 1 -1 -1 -1 -1 1 -1 -1 -1 -1 1 -1 -1 -1 -1 1 -1 -1 -1 -1 1 1 1 1 1 1)
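As a quick sketch (in Python, with invented names), the row-by-row mapping looks like this, using the same numbers as the "C" example:

```python
# 6x5 numerical matrix for "C": 1 = black point, -1 = white point
C = [
    [ 1,  1,  1,  1,  1],
    [-1, -1, -1, -1,  1],
    [-1, -1, -1, -1,  1],
    [-1, -1, -1, -1,  1],
    [-1, -1, -1, -1,  1],
    [ 1,  1,  1,  1,  1],
]

def to_vector(matrix):
    """Place the rows side by side, in sequence, to form one vector."""
    return [value for row in matrix for value in row]

x = to_vector(C)  # 30 bipolar components
```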


OK, that covers the inputs; what about the outputs?

Well, we use one perceptron for each character to learn.


Consider a character and its vector x. The perceptron that must recognize this character will give the response y = 1 if its input is x (or a similar pattern) and will give the response y = -1 in all other cases. In our example, the perceptron trained to recognize the "C" will turn on only for the vector x seen above, and will stay off in all other cases.


Note: the perceptron has an interesting feature. After being trained on a set of similar patterns (similar characters), the perceptron learns to extract the characteristics of that pattern and turns on when it recognizes them. This means a perceptron is able to recognize a character it has never seen before, provided it is similar to those it was trained on.

A network trained to recognize three letters is built like this:


Putting the three perceptrons together we have:

where n is equal to the length of X: 5x6 = 30 elements.


Y1 will be equal to 1 only for character 1 (or a similar one)
Y2 will be equal to 1 only for character 2 (or a similar one)
Y3 will be equal to 1 only for character 3 (or a similar one)
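The recognition logic of such a network can be sketched in a few lines of Python (my own illustration, not the program's code; recognize and W are invented names):

```python
def recognize(W, x):
    """Return the index of the first neuron whose activation is
    positive, or -1 if no neuron turns on (character not recognized).

    W : one row of synaptic weights per trained character
    x : bipolar input vector (elements are 1 or -1)
    """
    for j, w in enumerate(W):
        a = sum(wi * xi for wi, xi in zip(w, x))  # activation (Eq 1.1)
        if a > 0:
            return j  # neuron j fires: character j recognized
    return -1
```

This mirrors what the VB procedure Riconosci does with List2 and the W matrix, as we will see.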





The Visual Basic Program


I will give only the main steps of the program (otherwise this article would never end).

If you are interested in some clarification, well... you have my mail!


The program doesn't have a lot of comments (because I wanted to finish it); in addition, the code is not exactly what computer scientists call "clean code". Let's say it is somewhat "homemade", but it works!


Character Storage


A character is drawn using an array of Shape controls (called CH) arranged to form a grid (as shown in Fig. 1). The elements are numbered sequentially from 0 to 29. Element CH(0) is at the top left of the grid. Moving from left to right along the first row, the index grows; the last element of the first row is CH(4). Then comes the first element of the second row, CH(5), and so on until the end of the second row, CH(9). This continues down to the bottom-right element, CH(29).
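This numbering is just row-major order. As a sketch (in Python, with invented helper names):

```python
COLS = 5  # the grid has 6 rows and 5 columns

def index(row, col):
    """Index of the CH element at (row, col), both starting from 0."""
    return row * COLS + col

def position(i):
    """Inverse mapping: (row, col) of the element CH(i)."""
    return divmod(i, COLS)
```

So CH(0) is at (0, 0), CH(4) closes the first row, CH(5) opens the second, and CH(29) is the bottom-right corner.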


As explained above, the characters are stored as row vectors of 30 elements.

So if we want to store 20 characters, we need a matrix of 20 rows by 30 elements each. Let us denote this matrix by M(20,30).

The procedure InMatM(n) stores the character currently on the grid into row n of matrix M.


Public Sub InMatM(n As Byte)
    For i = 0 To 29
        If CH(i).BackColor = vbBlack Then
            M(n, i) = 1
        Else
            M(n, i) = -1
        End If
    Next i
End Sub


In practice, InMatM(n) scans from CH(0) to CH(29). If the generic element CH(i) has a black background, the corresponding position on row n is set to 1; vice versa, if the background is white, it is set to -1.


The procedure outMatM(n) does the opposite operation: given a row of M, it draws the associated character on the grid. The character associated with row n of M is stored at position n of the List1 control (the ListBox labeled "N° Cicli").


The number of neurons is equal (as explained above) to the number of distinct characters we want the network to learn. These are stored in List2 (the ListBox labeled "Neuroni Associati").




Public Sub addestra()
    LBch.Caption = ""
    ta = 1 ' ta = learning rate (the constant h in the text)

    If List2.ListCount = 0 Then
        MsgBox "You must insert at least one character!"
        Exit Sub
    End If

    InizializzaW ' reset the synaptic weights matrix W

    ' train each neuron, one for each character
    For i = 0 To List2.ListCount - 1
        Apprendi (i)
    Next i

    Label7.Caption = "Neuron training completed"
    Frame1.Enabled = True
End Sub


W(20,31) is the matrix of synaptic weights; we will look at it below.

The addestra procedure trains all the neurons, one for each character in List2. The training of the neuron at position i is done by the Apprendi procedure.

Let's see it in detail.


Public Sub Apprendi(n As Integer) ' n is the neuron to train
    Dim X(31) As Integer, Ncicli As Byte, y As Integer, t As Integer, dw As Integer
    Dim chn As String, chx As String

    chn = List2.List(n): Ncicli = List1.ListCount - 1
    For ix = 0 To Ncicli
        ' define the desired output state
        chx = List1.List(ix)
        If chx = chn Then
            t = 1
        Else
            t = -1
        End If

        ' define the input activation vector
        ' X(0) = -1, X(1..30) is the vector associated with the character
        X(0) = -1
        For i = 0 To 29
            X(i + 1) = M(ix, i)
        Next i

        ' evaluate the activation
        somma = 0
        For i = 0 To 30
            somma = somma + W(n, i) * X(i)
        Next i
        If somma > 0 Then
            y = 1
        Else
            y = -1
        End If

        ' change the synaptic weights while y <> t
        While y <> t
            For i = 0 To 30
                dw = ta * t * X(i)
                W(n, i) = W(n, i) + dw
            Next i
            ' evaluate the activation again
            somma = 0
            For i = 0 To 30
                somma = somma + W(n, i) * X(i)
            Next i
            If somma > 0 Then
                y = 1
            Else
                y = -1
            End If
        Wend
    Next ix
End Sub


The instruction chn = List2.List(n) takes the character at position n of List2 (the neuron being trained).

The For loop then repeats its instructions for each element of List1 (that is, for every training pattern given to the network). Let's see the operations inside the loop.


'Define the output state

Here we evaluate the desired output t of neuron n for the character chn. In particular, t = 1 only if the pattern taken from List1 is chn, while t = -1 in all other cases.


'Define the input activation vector

This part associates the generic character chx = List1.List(ix) with an input activation vector. That vector is stored in row ix of M; this section of code copies that row into the vector X.

The value X(0) is constant and equal to -1. We will see below that this position is used to include the threshold q in the sum of Eq 1.1.


'Activation evaluation

To calculate the activation we must evaluate the sum in Eq 1.1. To do this, a loop accumulates the products W(n, i)*X(i) in the variable "somma". Note that the loop starts from 0! This means the initial term W(n, 0)*X(0) is also added. This term is the threshold: if in Eq 1.1 we set W(0) = q, then with X(0) = -1 we get W(0)*X(0) = -q, and the threshold is included in the summation.

The output y is evaluated according to Eq 1.2: if somma > 0 then y = 1 and, vice versa, y = -1 when somma ≤ 0.
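You can convince yourself of this trick with a tiny Python sketch (my illustration; both function names are invented):

```python
def activation_explicit(w, x, q):
    """Activation with the threshold handled explicitly: A - q."""
    return sum(wi * xi for wi, xi in zip(w, x)) - q

def activation_folded(w, x, q):
    """Same value, with the threshold folded in as W(0) = q, X(0) = -1."""
    return sum(wi * xi for wi, xi in zip([q] + list(w), [-1] + list(x)))
```

Both give the same number for every pattern, which is why the VB loops can simply run from 0 to 30.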


'Weights adjustment

If the output y of neuron n differs from the desired value t (the While loop tests this condition at the top), the weights are changed with the Hebb factor. The activation and the output y are then re-evaluated. If y now agrees with the desired t, we can go on and train the same neuron with the next pattern of List1; otherwise the While loop repeats.




Recognition is simple. The input vector X is built as follows:

X(0) = -1 (for the reason explained before!)

X(i) = 1 if the Shape is black and, vice versa, X(i) = -1 if the Shape is white.


Remember that each character to recognize corresponds to a neuron, and each neuron corresponds to a row of W (which holds the synaptic weights for that neuron).

The For j loop evaluates the activation for each character of List2.

The neuron for which somma > 0 is the one associated with the letter drawn on the grid.

If List2 is scanned completely and no neuron produces somma > 0, it means the character was not recognized. The procedure returns List2.ListIndex = -1 to indicate this fact.


Note: the recognition procedure stops as soon as it finds a neuron with somma > 0. This, however, does not guarantee that there isn't another neuron further on with somma > 0. In other words, the program does not detect possible conflicts.

I leave that task to you!

Public Sub Riconosci()
    Dim X(31) As Integer

    ' define the input activation vector
    ' X(0) = -1, X(1..30) is the vector associated with the character
    X(0) = -1
    For i = 1 To 30
        If CH(i - 1).BackColor = vbBlack Then
            X(i) = 1
        Else
            X(i) = -1
        End If
    Next i

    ' evaluate the activation of each neuron in turn
    List2.ListIndex = -1
    For j = 0 To List2.ListCount - 1
        List2.ListIndex = List2.ListIndex + 1
        somma = 0
        For i = 0 To 30
            somma = somma + W(j, i) * X(i)
        Next i
        If somma > 0 Then Exit Sub ' neuron j fires: character recognized
    Next j
    List2.ListIndex = -1 ' no neuron fired: character not recognized
End Sub

Go to Part 1