
__**Key Findings**__

= **Hermann Ebbinghaus: The Forgetting Curve** =

Hermann Ebbinghaus is considered by many to be one of the pioneers of modern experimental psychology. Ebbinghaus focused much of his work on memory and learning habits, and through his research he developed the forgetting curve. Ebbinghaus believed that learning was all about associations; he maintained that the mind is a network of associations among elements (Mook, 2004). If two events take place together, the one event becomes connected to the other in the mind. Once such an association is formed, it is available for retrieval, which is what we call memory.

Ebbinghaus set out to study how memories form and how quickly they decay. He wanted memories that were built up from zero strength (Mook, 2004). To do so, Ebbinghaus needed to create material that was completely meaningless; his goal was for the information not to be associated with anything until associations were formed in the course of the experiment (Mook, 2004). This led Ebbinghaus to create his lists of nonsense syllables, each made by placing a vowel between two consonants. He compiled a master list of about 2,300 nonsense syllables (Mazur, 2002). Because the syllables were nonsense, they carried no prior associations. Ebbinghaus would then take a list of his nonsense syllables and recite it repeatedly until he could recite it completely from memory without making a mistake, documenting the number of repetitions, or "trials," this took. After a delay, which varied from experiment to experiment, he would relearn the list and record how many trials it took to once again recite it without any mistakes. If relearning took only half as many trials, or a quarter as many, there was a 50 percent or a 75 percent savings, respectively. If it took him as many trials to memorize the list as it did initially, there was no savings at all.
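The savings score described above reduces to a simple proportion. Here is a minimal sketch in Python; the function name and the trial counts are our own illustration, not Ebbinghaus's actual data:

```python
# Savings score as described above: the proportion of the original
# learning effort that is saved when relearning the list.
# (Illustrative numbers only, not Ebbinghaus's data.)
def savings(original_trials: int, relearning_trials: int) -> float:
    """Percent of the original learning trials saved on relearning."""
    return 100 * (original_trials - relearning_trials) / original_trials

print(savings(40, 20))  # half as many trials -> 50.0 percent savings
print(savings(40, 10))  # a quarter as many trials -> 75.0 percent savings
print(savings(40, 40))  # same number of trials -> 0.0, no savings
```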

Ebbinghaus repeated this process many times, varying the delay between the initial memorization and the relearning (Mook, 2004). Through this experiment, Ebbinghaus was able to show how the strength of a memory decays with time after learning (Mook, 2004). From these results he plotted his famous forgetting curve, which showed that forgetting occurs fastest immediately after learning something, but that the rate slows as time passes (Mazur, 2002). His experiment and the forgetting curve are still taught in many psychology classes today.

= **Classical Conditioning** = Ivan Pavlov’s experiments on classical conditioning are probably some of the best known in psychology. His experiments and theories are still taught to this day and are referenced many times in popular culture. Although Pavlov was a physiologist rather than a psychologist, his work has become a staple of the subject. Pavlov first turned to conditioned reflexes while he was studying salivary secretion (Gray, 1980). He was interested in the amount of salivation that occurred in response to food under varied conditions (Mook, 2004). He noticed an interesting occurrence, though, when a dog began to salivate as soon as the assistant approached. Since he wanted to study salivation when there was food in the animal’s mouth, this was something of a problem. He soon realized it was less a problem than an interesting phenomenon: the dog was forming a new pathway in its brain. The assistant had become associated with food in the dog’s mind, and salivation now occurred in anticipation of food (Mook, 2004). This process would come to be known as classical conditioning.

Pavlov wanted to study such connections from the beginning, so he started his experiments with a “signal” that was in no way connected to food, such as the click of a metronome or the sound of a bell (Mook, 2004). After a short period of time, a machine would give the dog a certain amount of food; Pavlov did not want the presence of an assistant to contaminate the process (Mook, 2004). The basic process of classical conditioning begins with an unconditioned stimulus (US), and the response to that stimulus is the unconditioned response (UR). In Pavlov’s experiment the US was the food and the UR was salivation. To observe classical conditioning, you then add a conditioned stimulus (CS), which must be a stimulus that would not normally provoke the UR (Mook, 2004). In Pavlov’s experiment that was the bell, which did not evoke salivation until the dog had been conditioned. As the pairings continued, the dog eventually began to salivate to the CS alone; this learned response is the conditioned response (CR). Pavlov went on to study classical conditioning for years, and it is still studied in psychology today.
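The US/UR/CS/CR relationship can be captured in a toy model: the US (food) always evokes the response, while a neutral stimulus evokes it only after being paired with food. The `Dog` class and its method names below are our own illustration, not Pavlov's actual procedure:

```python
# Toy model of the classical-conditioning terms above.
# (Class and method names are illustrative, not Pavlov's setup.)
class Dog:
    def __init__(self):
        self.conditioned = set()  # stimuli that have become CSs

    def pair(self, stimulus: str) -> None:
        """Present a neutral stimulus together with food, making it a CS."""
        self.conditioned.add(stimulus)

    def salivates(self, stimulus: str) -> bool:
        # Food is the US and always evokes the UR; a CS evokes the CR.
        return stimulus == "food" or stimulus in self.conditioned

dog = Dog()
print(dog.salivates("bell"))  # False: the bell is still a neutral stimulus
dog.pair("bell")              # conditioning trial: bell paired with food
print(dog.salivates("bell"))  # True: the bell is now a CS evoking the CR
```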

**Conditioned Emotional Responses** John B. Watson is best known as the founder of the school of behaviorism. According to him, learning is the foundation of nearly all behavior, including emotional responses such as joy and fear. His findings on emotional conditioning were groundbreaking and very influential. Watson and his colleagues conducted an experiment in which they classically conditioned a baby known as "Little Albert" to learn a phobia. The experiment was very simple: the goal was to make the baby develop a fear of white, furry things. In the first part of the experiment, Watson introduced the baby to things he was not afraid of, neutral stimuli such as a white rat and a dog (Hock, 72). The next step consisted of scaring Little Albert with a loud sound (the unconditioned stimulus). Watson then presented the white rat paired with the loud sound; every time the pairing occurred, the baby became scared and cried. After six repetitions, Little Albert developed a conditioned response (fear) when the white rat was presented alone. His fear later generalized to other stimuli, such as a rabbit, a monkey, and other furry-looking items. This finding expanded classical conditioning enormously by showing that it is possible to condition emotional reactions through learning. John B. Watson's research changed psychology as a whole by laying the foundation for behaviorism.



media type="youtube" key="0FKZAYt77ZM" height="349" width="425" align="center"

=**Operant Conditioning**= media type="custom" key="9925721" width="170" height="170" align="right" Sometime after Pavlov’s classical conditioning, behaviorists started to move toward another aspect of learning. Operant conditioning owed a great deal to the psychologist Edward L. Thorndike, who formulated the Law of Effect (McLeod, 2007). “The Law of Effect states that a) Responses to a situation that are followed by satisfaction are strengthened; and b) Responses that are followed by discomfort are weakened” (Plucker, 2003, Ideas and Interest, para. 1). Thorndike originally called this kind of learning Instrumental Learning, or Trial-and-Error Learning, in which the experimenter arranges the situation so that reinforcement is provided only after certain actions are accomplished by the individual. Burrhus Frederic Skinner, better known as B.F. Skinner, renamed these operant behaviors because “operant behaviors are actions that ‘operate’ on the environment to produce some effect,” making it a better-suited name (Medin, Ross & Markman, 2005, p. 56).

**Reinforcement and Punishment**
The methods used in operant conditioning depend on the outcome that is wanted: to increase a behavior you reinforce it, and to decrease a behavior you punish it. The stimuli used to encourage or deter such outcomes are positive or negative incentives, which work by being either added or removed (Huitt & Hummel, 1997, General Principles, para. 1). Reinforcement is given when one wants to increase the behavior, and it comes in two types. **Positive reinforcement** consists of adding something the individual finds pleasing, which can be either verbal or material, and is viewed as a reward when the desired behavior is performed (Cherry, n.d., Components of Operant Conditioning, para. 1). For example: a young girl cleans her mother’s car, and her mother gives her five dollars for doing so. Cleaning the car is the desired behavior, and the five dollars is the positive reinforcement. The other type is **negative reinforcement**, which consists of removing an unpleasant factor when the desirable behavior is demonstrated (Cherry, n.d., Components of Operant Conditioning, para. 1). For example: a young girl is grounded for fighting with her younger brother. She cleans the kitchen and apologizes to her brother and mother, and in turn her mother lifts the grounding. Cleaning the kitchen and apologizing were the desired behavior, and the removal of the grounding is the negative reinforcement. Punishment is given when one wants to decrease the behavior, and it likewise comes in two types. **Positive punishment** is adding an unpleasant or unwanted event in order to diminish the frequency of the behavior (Cherry, n.d., Components of Operant Conditioning, para. 2).
For example: a young boy lets out a disgusting burp at a fancy restaurant; his mother, appalled and embarrassed, grounds him for a week. The burp is the unwanted behavior, and the grounding is the positive punishment. The other type is **negative punishment**, in which a pleasant stimulus is removed in order to deter a certain behavior; the removal takes place after the unwanted behavior happens (Cherry, n.d., Components of Operant Conditioning, para. 2). For example: a young boy does not want to be at the dentist, so he throws a temper tantrum. His mother does not like his behavior, so she takes away his game console. The tantrum is the unwanted behavior, and the removal of the game console is the negative punishment.
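The four operations above form a 2x2 grid: a stimulus is either added or removed, crossed with whether the goal is to increase or decrease the behavior. The function below is our own compact illustration of that terminology, not anything from the cited sources:

```python
# The 2x2 grid of operant-conditioning operations described above.
# (The function itself is an illustrative sketch.)
def operation(stimulus_added: bool, behavior_increases: bool) -> str:
    kind = "reinforcement" if behavior_increases else "punishment"
    sign = "positive" if stimulus_added else "negative"
    return f"{sign} {kind}"

print(operation(True, True))    # positive reinforcement: add five dollars
print(operation(False, True))   # negative reinforcement: remove the grounding
print(operation(True, False))   # positive punishment: add a week of grounding
print(operation(False, False))  # negative punishment: take away the console
```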

media type="youtube" key="guroaQRFsX4" width="434" height="360" align="center"

**The "Skinner Box"**
Behaviorist B.F. Skinner used these theories to build the “Skinner Box,” a box specifically designed to hold a small animal so that Skinner could test his ideas about positive and negative reinforcement on live subjects. To test positive reinforcement, Skinner placed a hungry rat into the box, which contained a lever that released a food pellet when pressed. While exploring the box, the rat pressed the lever and out came a pellet. The rat soon realized that every time it pressed the lever, it would obtain a pellet to eat; its behavior was positively reinforced by the food it received (McLeod, 2007, Reinforcement, para. 1). Skinner tested negative reinforcement in a similar manner, but with an electric shock. He would deliver a shock inside the box that caused the rat distress; as the rat scurried around, it would hit the lever, turning the shock off. The rat soon realized that hitting the lever would stop the shock, so its behavior was negatively reinforced by the removal of the shock (McLeod, 2007, Punishment, para. 1). The “Skinner Box” solidified Skinner’s theory of reinforcement and paved the way toward applying his theories to humans, while at the same time demonstrating that the ways humans and animals learn are not so different after all.

media type="youtube" key="I_ctJqjlrHA" width="425" height="350" align="center"

media type="custom" key="9925717" width="310" height="310" Skinner Box

**The Schedule of Reinforcement**
The frequency of reinforcement also plays a crucial role in the outcome of the behavior. Schedules of reinforcement are broken down into two types: continuous and intermittent. **Continuous reinforcement** means the sought-after behavior is reinforced each and every time it occurs; it should be used in the preliminary phases of learning to build a solid association between the action and the response. **Intermittent reinforcement** means the sought-after behavior is reinforced only occasionally. With intermittent reinforcement it takes longer for the desired behavior to prevail, yet once it becomes predominant it is much harder to extinguish (Sperrazza & Lorenzo, 2010, Schedules of Reinforcement, para. 1-3). Intermittent schedules are further categorized as interval or ratio: an interval schedule reinforces after a period of time, while a ratio schedule reinforces after a certain number of responses (Sperrazza & Lorenzo, 2010, Schedules of Reinforcement, para. 2). Under the interval category there is the **fixed-interval schedule**, in which the desired behavior is rewarded only after a set amount of time has passed. This schedule produces an elevated rate of responding toward the end of the interval, but a much slower rate directly after the reinforcement is given. There is also the **variable-interval schedule**, in which the desired response is rewarded after a random amount of time has elapsed. This schedule creates a consistent but slower rate of response (Cherry, n.d.).
Under the ratio category there is the **fixed-ratio schedule**, in which the desired behavior is reinforced only after a precise number of responses. Responding under this schedule is usually consistent and rapid, with a short pause soon after the reinforcement is given. There is also the **variable-ratio schedule**, in which the desired behavior is reinforced after a random number of responses; responding under this schedule is consistent and rapid as well (Cherry, n.d.). The schedule of reinforcement matters to the outcome of the wanted behavior because if the schedule is not applied consistently, reproducing the same desired outcome may be extremely difficult, perhaps impossible. Using schedules for punishment is not as clear-cut; however, a few guidelines can help someone trying to decrease a behavior. According to Sperrazza and Lorenzo (2010), “Punishment should be immediate, intense, unavoidable, and consistent.”
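The contrast between the two ratio schedules can be sketched in a few lines. The function names and the 1-in-n chance used to model the variable schedule are our own illustrative choices, not part of the cited sources:

```python
import random

# Sketch of the two ratio schedules described above (illustrative only).
def fixed_ratio(response_number: int, n: int) -> bool:
    """Fixed ratio: reinforce exactly every n-th response."""
    return response_number % n == 0

def variable_ratio(n: int, rng: random.Random) -> bool:
    """Variable ratio: each response has a 1-in-n chance of reinforcement,
    so reinforcers arrive after a random number of responses averaging n."""
    return rng.random() < 1 / n

# Over 100 responses on a fixed-ratio-5 schedule, exactly 20 reinforcers:
print(sum(fixed_ratio(r, 5) for r in range(1, 101)))
# On a variable-ratio-5 schedule, the count merely averages around 20:
rng = random.Random(0)
print(sum(variable_ratio(5, rng) for _ in range(100)))
```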

media type="custom" key="9925735" width="409" height="409" align="center"

=Observational Learning= As the title suggests, observational learning is simply "the learning of a new behavior through the observation of a model" (Ciccarelli & Meyer, 1996). A great contributor to this type of learning is Albert Bandura, whose Bobo doll experiment on aggression is well known and represents a classic study in psychology. Bandura and his colleagues used children as subjects for the Bobo doll experiment. In the first half of the experiment, they tested whether children would copy aggressive behavior modeled in their presence. Half of the children watched a model behaving aggressively toward the Bobo doll, while the other half watched a model paying no attention to the doll. When left alone in the room with the doll, the children who had witnessed the beating imitated the aggressive behavior; the children who had not been exposed to the bad behavior did nothing to the doll. The second part of the study tested the children's likelihood of replicating an aggressive behavior when the model faced some kind of repercussion, positive or negative. Half of the children watched a video in which a model behaved aggressively with a Bobo doll and received a prize afterwards, while the other half watched the model being punished for punching the doll. The children who observed the model being rewarded copied the behavior when left alone with the doll; the children who watched the model being punished refrained from mirroring the aggressive behavior. Bandura's findings offer great insight into how easily young children can be influenced into modeling negative behavior.



To visualize the experiment, please refer to the "most important people" page.