Wednesday, October 17, 2012

Artificial Neural Networks

Computers outperform humans at many tasks. Although humans must write the instructions, once the program is up and running, a computer can perform arithmetic or sort a list in a fraction of the time a person would need for the same job. The most advanced computers today are trillions of times faster than humans at certain tasks, and in 1997 IBM's supercomputer Deep Blue defeated Garry Kasparov, the reigning world chess champion, in a six-game match.
But even the fastest computers cannot outperform humans at every task. Computers excel at jobs that reduce to a large number of simple operations, but unlike humans, they are not yet generally capable of making new discoveries. The human brain is an astonishingly complex organ composed of billions of cells; one type of cell, the neuron, communicates with other neurons to form vast networks. The complexity, adaptability, and information-processing capacity of these neural networks give humans the intelligence to conduct experiments, test scientific theories, formulate general principles, learn new things, and write computer programs. A computer, by contrast, can only carry out its instructions. It can run complicated programs, but each program must consist of a sequence of simple instructions, and the processor does what it is told to do and nothing more. Deep Blue won its match by performing billions of simple calculations that evaluated the outcomes of potential moves.
Artificial intelligence (AI) is a branch of computer science that aims to create machines capable of showing some degree of intelligence. The ultimate goal of AI is a computer that can think like a person. One way to reach this goal would be to give computers a "brain" similar to a human brain. Many AI researchers pursuing this option have turned to artificial neural networks, which are not biological but are modeled on the operating principles of the brain.
Artificial neural networks were not possible until scientists had gained some idea of how the biological neural networks in the brain operate. Neurons are tiny cells enclosed in a membrane, with a cell body about 0.002-0.006 inches (0.05-0.15 millimeters) in diameter and a long, thin projection called an axon.
Detailed study of neurons began in 1873, when Italian researcher Camillo Golgi (1843-1926) developed a method of staining the cells so that they could be easily viewed under a microscope. Neurons, like most cells, are mostly transparent, and they are tightly packed together, making these small objects nearly impossible to see and study even under microscopic magnification. Golgi's method used a dye containing silver nitrate, which some (though not all) neurons take up. The dye stained these neurons and made them stand out against a background of unstained cells. (If the dye had stained all the cells, the result would have been a uniform field of color - as useless as the original, transparent condition, because researchers could not have studied individual cells.) Why some but not all neurons take up the dye is still not well understood, but the method gives scientists a good look at these important cells.
Using Golgi's technique, Spanish anatomist Santiago Ramón y Cajal (1852-1934) proposed that neurons process information by receiving inputs from other cells and sending outputs down the axon. Cajal's theories proved to be mostly correct. Neurons send and receive information from cell to cell by way of small junctions called synapses, named in 1897 by British physiologist Sir Charles Sherrington (1857-1952). Synapses usually form between the axon of the sending neuron - the presynaptic neuron - and a dendrite or the cell body of the receiving neuron - the postsynaptic neuron.
Information in the brain takes a different form than it does in a computer or in a human language such as English. Neurons maintain a small electrical potential of about -70 millivolts (a millivolt is a thousandth of a volt) - the interior of a neuron is about 70 millivolts more negative than the outside. This voltage is only about one-twentieth that of an ordinary flashlight battery and is not powerful by itself (though some animals, such as electric eels, can combine the small potentials produced by their cells to generate a powerful shock). More important is a neuron's ability to change its voltage briefly, producing a voltage spike that lasts a few milliseconds. This spike is known as an action potential.
Neurons transmit information in the form of sequences of action potentials. An action potential travels down an axon until it arrives at a special site called an axon terminal, which is usually located at a synapse. In most synapses, the spike causes the presynaptic neuron to release molecules known as neurotransmitters that cross the synaptic gap and attach to a receptor in the postsynaptic membrane. This activates the receptor, which sets certain biochemical reactions into motion and can slightly change the potential of the postsynaptic neuron. Neurons are continually receiving these synaptic inputs, usually from a thousand or more neurons, some of which slightly elevate the neuron's potential and some of which depress it. A neuron will generally initiate an action potential if its voltage exceeds a threshold, perhaps 10 or 15 millivolts higher (more positive) than its resting potential of -70 millivolts. In this way, neurons are constantly "processing" their inputs, some of which are excitatory, tending to cause the neuron to spike by pushing it closer to the threshold, and some of which are inhibitory, making it more difficult for a neuron to spike by dropping the potential farther away from the threshold. The result of this processing is the brain activity responsible for all the intelligent - and sometimes not so intelligent - things that people do.
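This summing-and-thresholding behavior is simple enough to sketch in a few lines of code. The Python fragment below is an illustration only - the resting potential and threshold follow the figures above, but the particular synaptic inputs are made-up values, not measurements:

    # Simplified model of a neuron summing its synaptic inputs.
    # Resting potential and threshold follow the text; the inputs
    # themselves are made-up values for illustration.
    RESTING_POTENTIAL_MV = -70.0
    THRESHOLD_MV = -55.0   # about 15 mV above rest

    def neuron_fires(synaptic_inputs_mv):
        # Excitatory inputs are positive, inhibitory inputs negative.
        potential = RESTING_POTENTIAL_MV + sum(synaptic_inputs_mv)
        return potential > THRESHOLD_MV

    print(neuron_fires([2.0] * 12 + [-2.0] * 2))  # True: potential reaches -50 mV
    print(neuron_fires([2.0] * 4 + [-2.0] * 6))   # False: potential drops to -74 mV

A real neuron integrates its inputs over time and across its dendrites, so this all-at-once sum is a caricature, but it captures the basic idea of excitation and inhibition competing against a threshold.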
Vision, for example, begins when special cells in the eye called photoreceptors absorb light. Other cells convert the light signals into trains of action potentials that represent the dark and bright areas making up the image. Dozens of neural networks, distributed over vast areas of the brain, process this visual information, extracting features such as the number and type of objects and their color and motion. At some point - scientists are not sure how or where - the person perceives and becomes consciously aware of this visual information.
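One simple way to picture this conversion is a rate code, in which brighter light produces more action potentials per second. The sketch below assumes a linear mapping from pixel brightness to firing rate with a 100-spikes-per-second ceiling; both are assumptions made for illustration, since real retinal coding is far more elaborate:

    # Toy rate code: map pixel brightness (0-255) to a firing rate.
    # The linear mapping and the 100 Hz ceiling are illustrative
    # assumptions, not retinal physiology.
    MAX_RATE_HZ = 100.0

    def firing_rate(brightness):
        return MAX_RATE_HZ * brightness / 255.0

    image_row = [0, 64, 128, 255]  # pixels from dark to bright
    print([round(firing_rate(b), 1) for b in image_row])
    # prints [0.0, 25.1, 50.2, 100.0]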
Information processing in the brain is much different from that in an ordinary computer. A computer generally operates on binary values, using digital logic circuits to transform data, and each processor works serially, one step at a time. In the brain, information processing occurs in parallel: millions of neurons work at the same time, summing their synaptic inputs and generating more or fewer action potentials. This activity is sometimes called parallel distributed processing, a term referring to simultaneous operations distributed over a broad area.
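The contrast is easy to see in code. A conventional program computes one weighted sum at a time in a loop, while a matrix operation - here written with the NumPy library - expresses all of them as a single step, which is the spirit, if not the biology, of parallel distributed processing. The sizes and random values below are placeholders for illustration:

    import numpy as np

    # 1,000 model neurons, each receiving 1,000 synaptic inputs.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(1000, 1000))  # one row of weights per neuron
    inputs = rng.normal(size=1000)

    # Serial style: one neuron's weighted sum at a time.
    potentials_serial = [sum(w * x for w, x in zip(row, inputs)) for row in weights]

    # Parallel style: every weighted sum in one matrix-vector product.
    potentials = weights @ inputs
    assert np.allclose(potentials_serial, potentials)  # same numbers, one step

    spikes = potentials > 0.0  # fire if the summed input exceeds a threshold of 0
    print(int(spikes.sum()), "of 1000 model neurons fired")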
The parallel nature of information processing in the brain is the reason it can work so quickly. Computers are much faster at arithmetic, but the brain's primary function is not to add or subtract numbers quickly. The brain evolved to analyze sensory inputs - vision, hearing, smell, taste, and touch - and extract vital information concerning food and predators. Neural networks in the brain can interpret an image more rapidly and accurately than any computer program, for example. Each neuron behaves like a little processor, contributing its portion of the overall computation. Supercomputers gain speed by using many processors working in parallel, but the brain has roughly 100 billion neurons, which gives it a computational capacity that greatly exceeds any computer's for the jobs it evolved to perform.
