Questions and Answers of Methods Behavioral Research
13. What type of reinforcer serves to maintain behavior throughout the early links in a chain? What is the best way to establish responding on a chained schedule in animals?
12. What is a chained schedule? Diagram and label an example of a chained schedule.
11. What is an adjusting schedule? In what way does shaping involve the use of an adjusting schedule?
10. What is a conjunctive schedule? How does a conjunctive schedule differ from a chained schedule?
9. Name and define the two types of noncontingent schedules.
8. What are three types of response-rate schedules?
7. Name and define two types of duration schedules.
6. Define variable interval schedule. Describe the typical pattern of responding produced by this schedule.
5. Define fixed interval schedule. Describe the typical pattern of responding produced by this schedule.
4. Define variable ratio schedule. Describe the typical pattern of responding produced by this schedule.
3. Define fixed ratio schedule. Describe the typical pattern of responding produced by this schedule.
2. Distinguish between continuous and intermittent schedules of reinforcement.
1. What is a schedule of reinforcement?
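Questions 3 and 4 above contrast the response patterns produced by fixed and variable ratio requirements. As an illustrative aid (not part of the original question set), here is a minimal Python sketch of how FR and VR schedules determine when reinforcement is delivered; the function name and the `("FR", n)` / `("VR", n)` encoding are my own assumptions:

```python
import random

def reinforcers_earned(schedule, responses):
    """Count reinforcers earned over a run of responses.

    schedule: ("FR", n) -- fixed ratio: every n-th response is reinforced.
              ("VR", n) -- variable ratio: each response is reinforced with
                           probability 1/n, i.e. an unpredictable number of
                           responses averaging n per reinforcer.
    """
    kind, n = schedule
    earned = 0
    since_last = 0  # responses emitted since the last reinforcer
    for _ in range(responses):
        since_last += 1
        if (kind == "FR" and since_last == n) or \
           (kind == "VR" and random.random() < 1 / n):
            earned += 1
            since_last = 0
    return earned

# 100 responses on FR 5 always earn exactly 20 reinforcers ...
print(reinforcers_earned(("FR", 5), 100))  # -> 20
# ... while VR 5 earns about 20, varying unpredictably from run to run.
print(reinforcers_earned(("VR", 5), 100))
```

The predictability difference is what the questions probe: a fixed requirement makes the distance to the next reinforcer known (hence postreinforcement pauses), while a variable requirement does not.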
3. Given this state of affairs, how is the organism likely to distribute its activities?
2. Contingencies of reinforcement often (disrupt/enhance) the distribution of behavior such that it is (easy/impossible) to obtain the optimal amount of reinforcement.
QUICK QUIZ Q: 1. According to the behavioral approach, an organism that (is forced to / can freely) engage in alternative activities will distribute its behavior in such a way as to (optimize/balance)
4. Kaily typically watches television for 4 hours per day and reads comic books for 1 hour per day. You then set up a contingency whereby Kaily must watch 4.5 hours of television each day in order to
3. The response deprivation hypothesis differs from the Premack principle in that we need only know the baseline frequency of the (reinforced/reinforcing) behavior.
2. If a child normally watches 4 hours of television per night, we can make television watching a reinforcer if we restrict free access to the television to (more/less) than 4 hours per night.
QUICK QUIZ P: 1. According to the response deprivation hypothesis, a response can serve as a reinforcer if free access to the response is (provided/restricted) and its frequency then falls
6. What is Grandma’s rule, and how does it relate to the Premack principle?
5. If “Chew bubble gum → Play video games” is a diagram of a reinforcement procedure based on the Premack principle, then chewing bubble gum must be a (lower/higher) probability behavior than playing video games.
4. If you drink five soda pops each day and only one glass of orange juice, then the opportunity to drink _______ can likely be used as a reinforcer for drinking _______.
3. According to the Premack principle, if you crack your knuckles 3 times per hour and burp 20 times per hour, then the opportunity to _______ can probably be used as a reinforcer for _______.
2. The Premack principle states that a _______ behavior can be used as a reinforcer for a _______ behavior.
1. The Premack principle holds that reinforcers can often be viewed as _______ rather than stimuli. For example, rather than saying that the rat’s lever pressing was reinforced with food, we could say that
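Since the Premack principle reduces to comparing baseline response probabilities, the rule behind questions 2–4 can be written down directly. A minimal sketch; the function name is my own, and the baseline numbers come from the television-versus-comic-books example above (4 hours of TV, 1 hour of comics):

```python
def premack_reinforcer(behavior_a, behavior_b, baseline):
    """Under the Premack principle, the higher-probability (more
    frequently performed) behavior can serve as a reinforcer for the
    lower-probability behavior."""
    return behavior_a if baseline[behavior_a] > baseline[behavior_b] else behavior_b

# Baseline hours of free engagement per day.
baseline = {"watch_tv": 4.0, "read_comics": 1.0}

# TV watching is the higher-probability behavior, so access to it can
# be used to reinforce comic-book reading -- but not the reverse.
print(premack_reinforcer("watch_tv", "read_comics", baseline))  # -> watch_tv
```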
5. Research has shown that hungry rats will perform more effectively in a T-maze when the reinforcer for a correct response (right turn versus left turn) consists of several small pellets as opposed
4. The motivation that is derived from some property of the reinforcer is called _______ motivation.
3. A major problem with drive reduction theory is that _______.
2. According to this theory, a s_______ reinforcer is one that has been associated with a p_______ reinforcer.
1. According to drive reduction theory, an event is reinforcing if it is associated with a reduction in some type of p_______ drive.
3. One suggestion for enhancing our behavior in the early part of a long response chain is to make the completion of each link more s_______, thereby enhancing its value as a s_______ reinforcer.
2. An efficient way to train a complex chain, especially in animals, is through b_______ chaining, in which the (first/last) link of the chain is trained first. However, this type of procedure usually is
1. Responding tends to be weaker in the (earlier/later) links of a chain. This is an example of the g_______ g_______ effect in which the strength and/or efficiency of responding (increases/decreases) as the
2. Within a chain, completion of each of the early links ends in a(n) s_______ reinforcer, which also functions as the _______ for the next link of the chain.
QUICK QUIZ L: 1. A chained schedule consists of a sequence of two or more simple schedules, each of which has its own _______ and the last of which results in a t_______ r_______.
3. To the extent that a gymnast is trying to improve his performance, he is likely on a(n) _______ schedule of reinforcement; to the extent that his performance is judged according to both the form and
2. In a(n) _______ schedule, the response requirement changes as a function of the organism’s performance while responding for the previous reinforcer, while in a(n) _______ schedule, the requirements of two or
1. A complex schedule is one that consists of _______.
3. A child who is often hugged during the course of the day, regardless of what he is doing, is in humanistic terms receiving unconditional positive regard. In behavioral terms, he is receiving a
2. In many mixed martial arts matches, each fighter typically receives a guaranteed purse, regardless of the outcome. In the Ultimate Fighter series, the winner of the final match is awarded a major
QUICK QUIZ J: 1. During the time that a rat is responding for food on a VR 100 schedule, we begin delivering additional food on a VT 60-second schedule. As a result, the rate of response on the VR
3. As shown by the kinds of situations in which superstitious behaviors develop in humans, such behaviors seem most likely to develop on a(n) (VT/FT) schedule of reinforcement.
2. Herrnstein (1966) noted that superstitious behaviors can sometimes develop as a by-product of c_______ reinforcement for some other behavior.
QUICK QUIZ I: 1. When noncontingent reinforcement happens to follow a particular behavior, that behavior may (increase/decrease) in strength. Such behavior is referred to as s_______ behavior.
3. For farmers, rainfall is an example of a noncontingent reinforcer that is typically delivered on a _______ schedule (abbreviated _______).
2. Every morning at 7:00 a.m. a robin perches outside Marilyn’s bedroom window and begins singing. Given that Marilyn very much enjoys the robin’s song, this is an example of a 24-hour schedule
QUICK QUIZ H: 1. On a non_______ schedule of reinforcement, a response is not required to obtain a reinforcer. Such a schedule is also called a response i_______ schedule of reinforcement.
5. Frank discovers that his golf shots are much more accurate when he swings the club with a nice, even rhythm that is neither too fast nor too slow. This is an example of reinforcement of
4. On a video game, the faster you destroy all the targets, the more bonus points you obtain. This is an example of reinforcement of _______ behavior (abbreviated _______).
3. In practicing the slow-motion form of exercise known as tai chi, Tung noticed that the more slowly he moved, the more thoroughly his muscles relaxed. This is an example of d_______ reinforcement of
2. As Tessa sits quietly, her mother occasionally gives her a hug as a reward. This is an example of a _______ schedule.
1. On a (VD/VI) schedule, reinforcement is contingent upon responding continuously for a varying period of time; on an (FI/FD) schedule, reinforcement is contingent upon the first response after a
4. In general, _______ schedules produce postreinforcement pauses because obtaining one reinforcer means that the next reinforcer is necessarily quite (distant/close).
3. In general, (variable/fixed) schedules produce little or no postreinforcement pausing because such schedules often provide the possibility of relatively i_______ reinforcement, even if one has just
2. On _______ schedules, the reinforcer is largely time contingent, meaning that the rapidity with which responses are emitted has (little/considerable) effect on how quickly the reinforcer is obtained.
QUICK QUIZ F: 1. In general, (ratio/interval) schedules tend to produce a high rate of response. This is because the reinforcer in such schedules is entirely r_______ contingent, meaning that the rapidity
3. In general, variable interval schedules produce a (low/moderate/high) and (steady/fluctuating) rate of response with little or no _______.
2. You find that by frequently switching stations on your radio, you are able to hear your favorite song an average of once every 20 minutes. Your behavior of switching stations is thus being
1. On a variable interval schedule, reinforcement is contingent upon the _______ response following a _______, un_______ period of _______.
5. On a pure FI schedule, any response that occurs (during/following) the interval is irrelevant.
4. Responding on an FI schedule is often characterized by a sc_______ pattern of responding consisting of a p_______ p_______ followed by a gradually (increasing/decreasing) rate of behavior as the interval draws to a
3. In the example in question 2, I will probably engage in (few/frequent) glances at the start of the interval, followed by a gradually (increasing/decreasing) rate of glancing as time passes.
2. If I have just missed the bus when I get to the bus stop, I know that I have to wait 15 minutes for the next one to come along. Given that it is absolutely freezing out, I snuggle into my parka as
QUICK QUIZ D: 1. On a fixed interval schedule, reinforcement is contingent upon the _______ response following a _______, pr_______ period of _______.
4. As with an FR schedule, an extremely lean VR schedule can result in r_______ s_______.
3. An average of 1 in 10 people approached by a panhandler actually gives him money. His behavior of panhandling is on a _______ schedule of reinforcement.
2. A variable ratio schedule typically produces a (high/low) rate of behavior (with/without) a postreinforcement pause.
1. On a variable ratio schedule, reinforcement is contingent upon a _______, un_______ _______ of responses.
11. Graduate students often have to complete an enormous amount of work in the initial year of their program. For some students, the workload involved is far beyond anything they have previously
10. Over a period of a few months, Aaron changed from complying with each of his mother’s requests to complying with every other request, then with every third request, and so on. The mother’s
9. A very dense schedule of reinforcement can also be referred to as a very r_______ schedule.
8. An FR 12 schedule of reinforcement is (denser/leaner) than an FR 75 schedule.
7. The typical FR pattern is sometimes called a b_______-and-r_______ pattern, with a pause that is followed immediately by a (high/low) rate of response.
6. An FR 200 schedule of reinforcement will result in a (longer/shorter) pause than an FR 50 schedule.
5. A fixed ratio schedule tends to produce a (high/low) rate of response, along with a p_______ p_______.
4. An FR 1 schedule of reinforcement can also be called a _______ schedule.
3. A mother finds that she always has to make the same request three times before her child complies. The mother’s behavior of making requests is on a(n) _______ schedule of reinforcement.
2. A schedule in which 15 responses are required for each reinforcer is abbreviated _______.
1. On a(n) _______ schedule, reinforcement is contingent upon a fixed number of responses.
5. S_______ e_______ are the different effects on behavior produced by different response requirements. These are the stable patterns of behavior that emerge once the organism has had sufficient exposure to the
4. When the weather is very cold, you are sometimes unable to start your car. The behavior of starting your car in very cold weather is on a(n) _______ schedule of reinforcement.
3. Each time you flick the light switch, the light comes on. The behavior of flicking the light switch is on a(n) _______ schedule of reinforcement.
2. On a c_______ reinforcement schedule (abbreviated _______), each response is reinforced, whereas on an i_______ reinforcement schedule, only some responses are reinforced. The latter is also called a p_______ reinforcement schedule.
QUICK QUIZ A: 1. A s_______ of reinforcement is the r_______ requirement that must be met in order to obtain reinforcement.
14. Define shaping. What are two advantages of using a secondary reinforcer, such as a sound, as an aid to shaping?
13. Define natural and contrived reinforcers, and provide an example of each.
12. Under what three conditions does extrinsic reinforcement undermine intrinsic interest? Under what two conditions does extrinsic reinforcement enhance intrinsic interest?
11. Define intrinsic and extrinsic reinforcement, and provide an example of each.
10. What is a generalized reinforcer? What are two examples of such reinforcers?
9. Distinguish between primary and secondary reinforcers, and give an example of each.
8. How does immediacy affect the strength of a reinforcer? How does this often lead to difficulties for students in their academic studies?
7. What are similarities and differences between negative reinforcement and positive punishment?
6. Define positive punishment and diagram an example. Define negative punishment and diagram an example. Be sure to include the appropriate symbols for each component.
5. Define positive reinforcement and diagram an example. Define negative reinforcement and diagram an example. Be sure to include the appropriate symbols for each component.
4. What is a discriminative stimulus? Define the three-term contingency and diagram an example. Be sure to include the appropriate symbols for each component.
3. Define the terms reinforcer and punisher. How do those terms differ from the terms reinforcement and punishment?
2. Explain why operant behaviors are said to be emitted and why they are defined as a “class” of responses.
1. State Thorndike’s law of effect. What is operant conditioning (as defined by Skinner), and how does this definition differ from Thorndike’s law of effect?
3. The advantages of using the click as a reinforcer are that it can be delivered i_______. It can also prevent the animal from becoming s_______.
Showing questions 400–500 of 1115