A neuron is the basic building block of a neural network. The network consists of many interconnected neurons, each working in parallel, with no central control. Neurons in artificial neural networks are often organized into layers, where the neurons in one layer are connected only to those of adjacent layers. Each neuron-to-neuron connection has an associated weight, and learning within the network is accomplished by updating these weights.
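To make this structure concrete, here is a minimal sketch of a single artificial neuron in Python; the class name, weights, and threshold value are illustrative assumptions, not taken from any particular system.

```python
class Neuron:
    """A single artificial neuron with one weight per incoming connection."""

    def __init__(self, weights, threshold):
        self.weights = weights      # connection strengths, adjusted during learning
        self.threshold = threshold  # the combined signal must exceed this to propagate

    def fire(self, inputs):
        # Combine the incoming signals as a weighted sum.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        # Propagate a signal only if the combined input exceeds the threshold.
        return 1.0 if total > self.threshold else 0.0


n = Neuron(weights=[0.5, -0.2, 0.8], threshold=0.3)
print(n.fire([1.0, 1.0, 1.0]))  # 1.0, since 0.5 - 0.2 + 0.8 = 1.1 > 0.3
```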
A neural network usually takes one or more inputs and produces one or more outputs, based on the strength of its internal connections and the way those connections transform the input signals. Each neuron receives a signal; if this signal exceeds a certain threshold, it is modified and propagated to connected neurons. The output layer, made up of neurons that do not propagate their signals to any further neurons, produces the output computed by the whole network.
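As a rough illustration of this propagation, the sketch below pushes an input vector through two fully connected layers using the same step-threshold idea as above; the layer sizes, weights, and threshold are arbitrary values chosen only for the example.

```python
def forward(layers, inputs, threshold=0.0):
    """Propagate signals through a list of layers.

    Each layer is a list of weight rows, one row per neuron in that layer.
    """
    signal = inputs
    for weights in layers:
        # Each neuron sums its weighted inputs and fires only if the sum
        # exceeds the threshold; its output becomes input to the next layer.
        signal = [
            1.0 if sum(w * x for w, x in zip(row, signal)) > threshold else 0.0
            for row in weights
        ]
    return signal  # activations of the output layer


hidden = [[0.6, -0.4], [0.3, 0.9]]   # 2 inputs -> 2 hidden neurons
output = [[1.0, -0.5]]               # 2 hidden neurons -> 1 output neuron
print(forward([hidden, output], [1.0, 0.5]))  # [1.0]
```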
The idea that learning in the brain may be based on a system of weighted connections was first introduced in 1943 by Warren McCulloch and Walter Pitts. In 1951, Marvin Minsky built a machine that may have been the first neural network learning system. In the 1960s, single-layer neural networks called perceptrons were studied extensively, but they were unable to perform a large class of computations. Study of neural networks was largely abandoned until the 1980s, when Parallel Distributed Processing, by Rumelhart and McClelland, revived research and helped bring the state of the art to what it is today.
While much of brain function remains a mystery, scientists have learned a great deal about it. Nerve cells form the building blocks of the brain and are connected to other nerve cells by synapses. Complex electrochemical processes propagate an action potential along a neuron. If the potential reaches a certain threshold, it is propagated across synapses to adjacent neurons. A synapse may increase, decrease, or maintain an action potential. Long-term changes in the strength of synaptic connections allow the brain to change the way it computes, which accounts for learning. Many believe that these networks of neurons are the main substrate of thought and consciousness.
Neural networks can be used to perform a wide variety of tasks. They have been used to study how the brain works, and they have been especially useful for modeling brain damage, such as that which might bring about speech disorders. Neural networks can approximate a wide class of mathematical functions to an arbitrary degree of accuracy. They have also been used in optical character recognition and in visual feature extraction for automobile driving algorithms. The wide applicability of neural networks once led aviation scientist John Denker to remark that "neural networks are the second best way of doing just about anything."