Self-organizing map
The network must be fed a large number of example vectors that represent, as closely as possible, the kinds of vectors expected during mapping.
The examples are usually presented to the network several times, as iterations. When a training example is fed to the network, its Euclidean distance to all weight vectors is computed.
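The distance computation above can be sketched as follows. This is a minimal illustration, not a reference implementation: the map size, input dimensionality, and random weight initialization are all assumptions for the example.

```python
import numpy as np

# Assumed sizes for illustration: a 10x10 map over 4-dimensional inputs.
rows, cols, dim = 10, 10, 4
rng = np.random.default_rng(0)
weights = rng.random((rows, cols, dim))  # one weight vector per map node

x = rng.random(dim)  # a single training example

# Euclidean distance from the example to every node's weight vector.
distances = np.linalg.norm(weights - x, axis=2)
print(distances.shape)  # one distance per node: (10, 10)
```

Each entry of `distances` is the Euclidean distance between the training example and one node's weight vector; the smallest entry identifies the winning node.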
[Figure: the remaining plots each overlay the resulting map with predicted values for one input dimension; red indicates a predicted 'yes' vote on that bill, blue a 'no' vote.] This makes SOMs useful for visualizing low-dimensional views of high-dimensional data, akin to multidimensional scaling.
The network was introduced by the Finnish professor Teuvo Kohonen in the 1980s, and it is therefore sometimes called a Kohonen map or Kohonen network.
[Figure: an illustration of the training of a self-organizing map.]
The procedure for placing a vector from data space onto the map is to find the node whose weight vector is closest (by the smallest distance metric) to the data-space vector. While it is common to regard this type of network structure as related to feedforward networks, where the nodes are visualized as being attached, this architecture is fundamentally different in arrangement and motivation. The neuron whose weight vector is most similar to the input is called the best matching unit (BMU). The weights of the BMU, and of the neurons close to it in the SOM lattice, are adjusted towards the input vector.
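The BMU search and neighborhood update described above can be sketched as follows. This is a minimal sketch under assumed hyperparameters: the Gaussian neighborhood function, the neighborhood radius `sigma`, and the learning rate `lr` are illustrative choices, not values from the text.

```python
import numpy as np

# Assumed sizes for illustration: a 10x10 map over 4-dimensional inputs.
rows, cols, dim = 10, 10, 4
rng = np.random.default_rng(1)
weights = rng.random((rows, cols, dim))
x = rng.random(dim)  # a single training example

# Find the best matching unit (BMU): the node whose weight vector
# is closest to the input in Euclidean distance.
distances = np.linalg.norm(weights - x, axis=2)
bmu = np.unravel_index(np.argmin(distances), distances.shape)

# Gaussian neighborhood on the lattice, centered on the BMU
# (assumed radius and learning rate for this sketch).
sigma, lr = 2.0, 0.5
grid_r, grid_c = np.indices((rows, cols))
lattice_dist2 = (grid_r - bmu[0]) ** 2 + (grid_c - bmu[1]) ** 2
h = np.exp(-lattice_dist2 / (2 * sigma ** 2))

# Move the BMU and its lattice neighbors towards the input vector;
# nodes farther from the BMU on the lattice move less.
weights += lr * h[:, :, None] * (x - weights)
```

Note that the neighborhood is measured on the map lattice, not in data space: nearby nodes on the grid are pulled along with the BMU, which is what gives the map its topology-preserving ordering.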