The basic component of a neural network is the neuron. Each neuron has an activation threshold and a set of weighted connections to other neurons. If the aggregate activation a neuron receives from the neurons connected to it exceeds its threshold, the neuron fires and relays its activation to the neurons it is connected to. The weights on these connections can be modified by training the network to perform a particular task; this modification is what constitutes learning.
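As an illustration, the following Python sketch implements a single threshold neuron of this kind; the weights and threshold are illustrative values, chosen here so that the neuron behaves like an AND gate.

    # A minimal sketch of a single threshold neuron (weights and threshold are illustrative).
    def neuron_fires(inputs, weights, threshold):
        """Return 1 if the weighted sum of the inputs exceeds the threshold, else 0."""
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation > threshold else 0

    # Example: with these weights the neuron fires only when both inputs are active.
    print(neuron_fires([1, 1], [0.6, 0.6], threshold=1.0))  # 1 -> fires
    print(neuron_fires([1, 0], [0.6, 0.6], threshold=1.0))  # 0 -> stays silent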
The artificial neuron is loosely modeled on the nerve cells of the brain, about which scientists have learned a great deal in recent years. The brain is composed of nerve cells, which are connected to one another by synapses. Complex electrochemical processes propagate an action potential along a neuron, which relays it to adjacent neurons when the potential exceeds that cell's threshold. Synapses can be inhibitory or excitatory, and brains learn by modifying how synapses affect the transmitted potential. Through these simple processes, brains are believed to produce thoughts and consciousness.
Artificial neural networks are often organized into layers, with each layer receiving input from one adjacent layer and sending its output to another. Layers are categorized as input layers, output layers, and hidden layers. The input layer is initialized to a given set of values, and the computations performed by the hidden layers update the values of the output layer, which constitute the output of the whole network.
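The Python sketch below shows one such layered pass for a hypothetical network with three inputs, four hidden neurons, and two outputs; the layer sizes, random weights, and sigmoid activation are illustrative assumptions rather than the design of any particular network.

    # A minimal sketch of a layered feed-forward pass (sizes and weights are illustrative).
    import math
    import random

    random.seed(0)

    def layer_forward(values, weights, biases):
        """Compute one layer's outputs: a sigmoid of each neuron's weighted input sum."""
        outputs = []
        for neuron_weights, bias in zip(weights, biases):
            total = sum(v * w for v, w in zip(values, neuron_weights)) + bias
            outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
        return outputs

    # Network shape: 3 inputs -> 4 hidden neurons -> 2 outputs.
    hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
    hidden_b = [0.0] * 4
    output_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
    output_b = [0.0] * 2

    inputs = [0.5, -0.2, 0.1]                            # input layer initialized to given values
    hidden = layer_forward(inputs, hidden_w, hidden_b)   # hidden layer
    outputs = layer_forward(hidden, output_w, output_b)  # output layer = output of the network
    print(outputs)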
Learning in neural networks can be supervised or unsupervised. It is accomplished by updating the weights between connected neurons. The most common method for training neural networks is backpropagation, which propagates the difference between the network's output and the desired output backward through the layers to determine how much each weight contributed to the error. To search for the optimal set of weights, various algorithms can be used. The most common is gradient descent, an optimization method that, at each step, adjusts the weights a small amount in the direction that most rapidly decreases the error.
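To make the procedure concrete, the sketch below trains a tiny network on the XOR problem using backpropagation and gradient descent, written out by hand in Python. The architecture (two inputs, three hidden neurons, one output), learning rate, and iteration count are illustrative choices, and whether this toy example converges depends on the random initialization.

    # A minimal backpropagation sketch: a 2-3-1 sigmoid network trained on XOR.
    import math
    import random

    random.seed(1)

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    H = 3       # number of hidden neurons (illustrative)
    LR = 0.5    # gradient descent step size (illustrative)
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table

    # Randomly initialized weights and biases.
    w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # input -> hidden
    b1 = [random.uniform(-1, 1) for _ in range(H)]
    w2 = [random.uniform(-1, 1) for _ in range(H)]                      # hidden -> output
    b2 = random.uniform(-1, 1)

    for _ in range(20000):
        for x, target in data:
            # Forward pass through the hidden and output layers.
            h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
            y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

            # Backward pass: propagate the output error back toward the input layer.
            d_out = (y - target) * y * (1 - y)
            d_hid = [d_out * w2[j] * h[j] * (1 - h[j]) for j in range(H)]

            # Gradient descent: nudge each weight against its error gradient.
            for j in range(H):
                for i in range(2):
                    w1[j][i] -= LR * d_hid[j] * x[i]
                b1[j] -= LR * d_hid[j]
                w2[j] -= LR * d_out * h[j]
            b2 -= LR * d_out

    # After training, the outputs should approximate the XOR targets.
    for x, target in data:
        h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
        y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        print(x, target, round(y, 3))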
Artificial neural networks have been used for a variety of tasks. They have been used as a form of weak artificial intelligence and as models for studying how the brain works: certain types of brain damage can be modeled by removing nodes and connections from an appropriately trained network. They can be used to estimate mathematical functions and to extract features from images for optical character recognition. One such network, the Autonomous Land Vehicle In a Neural Network (ALVINN), was used in Carnegie Mellon University's NAVLAB project to extract road features for steering an unmanned vehicle. Neural networks have also been used for voice recognition, game playing, and email spam filtering.