Schedules of Reinforcement
A reinforcement schedule is a rule that specifies the timing and frequency of reinforcers. The plan, pattern, or strategy for delivering reinforcement is known as the schedule of reinforcement. B.F. Skinner introduced reinforcement schedules in his work on operant conditioning.
Schedules of reinforcement are important determinants of behavior that help shape it toward the desired response. Reinforcement might be delivered after a certain number of responses have been made, or after some interval of time has passed. The schedule defines the rules that determine when and under what conditions a response will be reinforced.
Generally, the schedules of reinforcement are of two types:
- Continuous reinforcement schedule
- Intermittent (partial) reinforcement schedule
Continuous Reinforcement Schedule
In a continuous reinforcement schedule, the subject is reinforced after each desired response it provides. For example, a rat in the Skinner box receives a food pellet each time it presses the bar, or a salesperson receives a commission for each car sold.
Under a continuous reinforcement schedule, learning occurs rapidly. However, when the reinforcement stops, that is, when the food is discontinued, extinction also occurs rapidly. Food pellets reinforce a rat only while it is hungry; once the rat has eaten to satisfaction, the reinforcement is ineffective.
If a teacher praises a student at each step in solving an equation, the teacher is applying a continuous reinforcement schedule.
Partial Reinforcement Schedule
In real life, continuous reinforcement is rare. An action sometimes leads to reinforcement, but at other times it does not. Such schedules are known as partial or intermittent schedules of reinforcement because the behavior is reinforced only occasionally.
The term partial or intermittent reinforcement describes a non-continuous pattern of delivering reinforcement. Learning is slow under partial reinforcement at the beginning, which makes continuous reinforcement preferable until the behavior is mastered: continuous reinforcement makes the connection between the behavior and its consequence clear and predictable during the learning phase. Once the behavior is learned, however, partial reinforcement is the superior schedule for maintaining it.
Partial reinforcement produces greater persistence of behavior, creating greater resistance to extinction than continuous reinforcement does. Skinner found that a pigeon that had learned to peck a key to obtain food pecked 150,000 times after the experimenter gradually phased out the delivery of food. With partial reinforcement, hope springs eternal. Occasionally giving a child sweets to stop a tantrum places the tantrum on a partial reinforcement schedule, which is why such tantrums are so persistent.
Skinner and his collaborators compared four schedules of partial reinforcement, which differ along two dimensions: whether reinforcement depends on the number of responses or on the passage of time, and whether the requirement is fixed or unpredictable. There are two main types of partial schedules:
- The ratio schedules
- The interval schedules
The Ratio Schedules of Reinforcement
In ratio schedules, reinforcement depends on the number of responses emitted. There are two types of ratio schedules: fixed and variable.
Fixed Ratio Schedule: In a fixed ratio schedule, the subject is reinforced after a specified number of correct responses. For example, a pigeon is given grain after every 6 pecks at the key. Likewise, an employee may be paid for every 5 dresses made, or workers may be paid for every 10 scarves woven.
Workers weave the first nine scarves without any reinforcement; the payoff comes only when the tenth scarf is completed. A fixed ratio schedule typically produces a rapid rate of responding, with a brief pause following each reinforcement.
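The counting rule behind a fixed ratio schedule is simple enough to sketch in a few lines of Python (a minimal sketch; the function name `fixed_ratio` is illustrative, not from any standard library):

```python
def fixed_ratio(n, responses):
    """Mark which of the emitted responses earn reinforcement
    under an FR-n schedule: every n-th response is reinforced."""
    return [(i + 1) % n == 0 for i in range(responses)]

# Under FR-10, only the 10th, 20th, and 30th responses are reinforced,
# just as the weavers are paid only on completing every tenth scarf.
rewards = fixed_ratio(10, 30)
print(rewards.count(True))  # 3
```

The pause after each reinforcement that the text describes shows up here as the run of nine unreinforced responses the subject must work through before the next payoff.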
Variable Ratio Schedule: In this schedule, the number of responses required to receive reinforcement varies and cannot be predicted. Sometimes the reinforcement is delivered after 25 responses, sometimes after 38 responses, sometimes after 13 responses, and so on.
Thus a pigeon on a variable ratio schedule may be rewarded on its fourth, eighth, twelfth, and twentieth responses. A VR schedule is defined by the average number of correct responses required for a reward: here, 4 + 8 + 12 + 20 = 44, and 44 / 4 = 11 responses on average before reinforcement. This particular schedule would therefore be designated VR 11.
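The averaging that names a VR schedule can be checked directly. A minimal sketch in Python, using the response counts from the pigeon example (the random draw at the end is an illustrative assumption standing in for the schedule's unpredictability):

```python
import random

# Response requirements from the example: the pigeon was rewarded
# on its 4th, 8th, 12th, and 20th responses.
requirements = [4, 8, 12, 20]

# The schedule is named for the mean requirement.
mean = sum(requirements) / len(requirements)
print(mean)  # 11.0 -> designated VR 11

# The subject cannot predict which requirement applies next; a random
# draw captures that unpredictability.
next_requirement = random.choice(requirements)
```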
Interval Schedules of Reinforcement
The second type of partial schedule of reinforcement is the interval schedule, which involves the passage of time. In an interval schedule, responses are rewarded only after a certain interval of time has passed. There are two basic types of interval schedules of reinforcement: fixed interval and variable interval.
Fixed Interval Schedule: In a fixed interval schedule, the first desired response after a fixed period of time has elapsed is rewarded. For example, on a 10-minute fixed interval schedule, a rat gets one food pellet whether it presses the bar 101 times or only once during those 10 minutes. An animal on a fixed interval schedule of reinforcement will ultimately learn to stop responding except toward the end of each interval.
For example, workers whose boss comes to check on them at two o’clock are likely to relax for the rest of the day. Exams given at the end of the semester place students on a fixed interval schedule, so they study hardest just before the exam. Voters know well that politicians become active toward the end of their tenure.
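The fixed interval rule, under which only the first response after the interval elapses earns a reward, can be sketched as a toy simulation (the function name `fixed_interval` and the minute values are illustrative assumptions):

```python
def fixed_interval(interval, response_times):
    """Return the times (in minutes) at which responses are reinforced
    under an FI schedule: the first response after each interval
    elapses earns the reward, and the clock then restarts."""
    reinforced = []
    next_available = interval
    for t in sorted(response_times):
        if t >= next_available:
            reinforced.append(t)
            next_available = t + interval  # interval restarts after reward
    return reinforced

# However many times the rat presses within the 10-minute window,
# it earns at most one pellet per interval.
print(fixed_interval(10, [1, 3, 5, 9, 11, 12, 25]))  # [11, 25]
```

Note that the presses at minutes 1 through 9 earn nothing, which is exactly why subjects learn to stop responding until the end of each interval.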
Variable Interval Schedule: This schedule is also based on the passage of time, but the organism cannot predict how long the interval will be before a response brings a reward; the time interval changes after every reinforcement. For example, a rat may receive reinforcement for bar pressing only after intervals of five, six, twenty-one, and forty minutes. Since 5 + 6 + 21 + 40 = 72 and 72 / 4 = 18, the reinforcer occurs, on average, every 18 minutes, and the schedule is designated VI 18.
In the classroom, pop quizzes make similar use of variable interval schedules. Unannounced government inspections of working conditions in a plant are much more effective at getting management to maintain safety standards than inspections at fixed intervals. Fishing is another example of this schedule: no one can predict when a fish will bite, yet people patiently sit with poles in hand along the river, waiting for one.
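A VI schedule can also be sketched as a toy simulation in which reinforcement becomes available after an unpredictable delay and the first response afterward collects it (the function name, the exponentially distributed delays, and the once-a-minute responder are all illustrative assumptions, not from the text):

```python
import random

def variable_interval(mean_interval, response_times, rng):
    """Toy VI simulation: after each reward, the next reward becomes
    available after an unpredictable delay averaging `mean_interval`
    minutes; the first response after that moment is reinforced."""
    reinforced = []
    next_available = rng.expovariate(1 / mean_interval)
    for t in sorted(response_times):
        if t >= next_available:
            reinforced.append(t)
            next_available = t + rng.expovariate(1 / mean_interval)
    return reinforced

# A rat responding once a minute for three hours on a VI 18 schedule:
# rewards arrive irregularly, roughly every 18 minutes on average.
rng = random.Random(0)
rewards = variable_interval(18, range(1, 181), rng)
```

Because the delays are unpredictable, steady responding is the best strategy, which is why VI schedules produce the slow, even response rates seen in the fishing and pop-quiz examples.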