The Lamstar network is an artificial neural network architecture
(developed by Professor Daniel Graupe, UIC) based on
self-organizing modules and on the powerful concept of
link weights between neurons. It is capable of a
considerable amount of generalization and operates correctly
even with datasets that have missing attributes;
it is designed to store a large amount of data and yet be fast.
I need an implementation of this network for my thesis, which
is better described in another page of this site (maybe ;-).
What you see here is an embryo of the final network and
a 'toy' testing infrastructure around it.
The implementation of the Lamstar network given here is limited:
some of the theoretical possibilities of the network, such as
fast retrieval via weight channelling, were not implemented.
The example itself (recognizing whether a 2D point
is inside a circle or not) is trivial and in practice
does not exercise most of the interesting features provided
by the network; it was originally designed to check whether
the network was running correctly.
2500 points in the unit square centered at the origin are
pseudo-randomly generated. The random generator is deliberately
not initialized (a simple way to initialize it always in the same way),
so the same pseudo-random sequence is obtained on every run and
the results are not biased when different structural parameters are used.
The task of the network (once trained) is to recognize whether
a given 2D point is inside or outside the circle which has
its center in the origin and radius = 0.5.
The network is trained with the first 1250 points; the
second 1250 are used for the test.
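The experimental setup just described might be sketched as follows (a minimal sketch; the seeding and helper names are illustrative, not taken from the actual code):

```python
import random

# 2500 points in the unit square centered at the origin, first half
# for training, second half for testing.  A fixed seed stands in for
# the "never reinitialized generator" trick: every run sees the same
# pseudo-random sequence.
random.seed(0)  # fixed seed -> reproducible sequence

points = [(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
          for _ in range(2500)]

def inside_circle(p, radius=0.5):
    """True if point p lies strictly inside the circle centered at the origin."""
    x, y = p
    return x * x + y * y < radius * radius

train, test = points[:1250], points[1250:]
train_labels = [inside_circle(p) for p in train]
```

Because the sequence is identical across runs, any change in the error counts can be attributed to the structural parameters rather than to a different sample of points.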
A critical parameter, which determines how similar a new datum
must be to an already-known one for the neuron which represents it
to be reused, is changed four times, assuming the following
values: 0.05, 0.1, 0.2, 0.5. When this parameter is low
the network is more accurate, because almost every new datum
causes a new neuron to be created inside the network to represent
it. On the other hand, the network becomes bigger and
slower, and exploits its generalization capabilities less.
Increasing this parameter makes the network reuse the neurons
it already has more frequently, resulting in a faster, smaller
network which can be less accurate.
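The reuse-or-create rule described above might look like this (hypothetical names and a plain Euclidean distance; a sketch, not the actual implementation):

```python
import math

# A new input is mapped to an existing neuron only if it lies within
# `tolerance` of that neuron's stored pattern; otherwise a new neuron
# is created to represent it.  `tolerance` plays the role of the
# critical parameter discussed in the text (0.05 ... 0.5).

def find_or_create(neurons, x, tolerance):
    """Return the index of the winning neuron, creating one if needed."""
    best, best_dist = None, float("inf")
    for i, w in enumerate(neurons):
        d = math.dist(w, x)
        if d < best_dist:
            best, best_dist = i, d
    if best is not None and best_dist <= tolerance:
        return best              # similar enough: reuse the existing neuron
    neurons.append(list(x))      # too different: allocate a new neuron
    return len(neurons) - 1
```

With a low tolerance nearly every input allocates its own neuron (accurate but big and slow); with a high tolerance distant inputs collapse onto the same neuron (small and fast but coarser).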
The graphs show the number and distribution
of the mistakes for the four values of the parameter.
Further studies on this example, such as the number of neurons used
as a function of the parameter described above, are left
to the reader as an exercise.
- the green diamonds represent points inside the circle;
- the blue plus signs represent points outside the circle;
- the magenta crosses represent points outside the circle which
  were incorrectly identified as belonging to the circle;
- the red squares represent points inside the circle which
  were incorrectly identified as not belonging to the circle.