The school of psychology called "behaviorism" dominated the earliest research into learning and motivation. In 1903, Russian psychologist Ivan Pavlov reported that he could train dogs to salivate at the sound of a bell and other cues normally unrelated to this otherwise instinctual behavior, an association later called "classical conditioning."
In 1913, John B. Watson proposed that, through reward and punishment, any person could be trained to perform any behavior. B.F. Skinner's radical behaviorism carried the movement to its greatest extreme, dismissing inner thought and feeling as explanations of behavior. In "Verbal Behavior," published in 1957, Skinner treated the mind as a simple input-output system, arguing that even the complexity of language was merely a byproduct of environmental feedback.
Cracks in behaviorism's grip appeared with Noam Chomsky's influential 1959 review of Skinner's book. Behaviorism had overextended itself, failing to account for free will and innate capacity. Since the late 1970s and 1980s, the cognitive model has dominated psychology: rather than treating people as simple input-output machines, it holds that a person's background and learned ways of thinking shape her behavior and her ability to learn.
During the cognitive revolution, advances in computer science enabled increasingly detailed models of the human mind. Network models -- webs of interconnected information -- resemble both the internal structure of the brain and the way computers process complex information. George A. Miller's WordNet, under development since 1985, is one example: it organizes English words into a network of semantic relations, suggesting how the mind itself may store and connect meaning.
Two important sub-theories emerged from the cognitive revolution. "Locus of control," a term coined by Julian B. Rotter in 1954, refers to an individual's beliefs about how much of her life she can control. The locus may be internal -- crediting personal responsibility and effort -- or external -- crediting luck and circumstance. Rotter observed that the more internal a person's locus of control, the more motivated she feels to take on challenges, because she assumes personal effort leads to the desired outcome.
"Delayed gratification" refers to the ability to prioritize later gain over current reward. In a 1972 study led by Walter Mischel, the Stanford Marshmallow Experiment, researchers offered children a small reward immediately or a larger reward if they waited. Mischel observed the children's ability to resist the desire for immediate gratification and discovered 30 percent of the children could wait the allotted time for the larger reward. A 1990 follow-up study, published in "Developmental Psychology," found that these children, deemed more competent by their parents during adolescence, achieved higher SAT scores. In both cases, the ability and willingness to act hinges on internal processes, either the perception of effectiveness or in self-distracting methods individuals use to draw their focus away from an immediate reward.