Problem behaviors present a significant challenge for individuals with developmental disabilities and their caregivers. Interventions based on behavioral principles are effective in treating problem behaviors; however, some caregivers have difficulty adhering to treatment recommendations. Treatment adherence may be affected by the technical nature of behavioral terminology. Research suggests that caregivers better understand and are more comfortable with interventions described in conversational language; however, the effects of technical language on treatment implementation are unknown. In the current investigation, implementation of a behavioral treatment was monitored after caregivers were given either a technical or conversational description of the intervention. Implementation was more accurate when the treatment description was written in conversational language, suggesting that clinicians should write behavioral plans in conversational language.
Problem behaviors such as self-injurious behavior (SIB), aggression, and property destruction present a significant challenge for individuals with developmental disabilities and their caregivers. These challenges include limited access to education, impairment of social relationships, and risk of out-of-home placement (Horner, Carr, Strain, Todd, & Reed, 2002). Research findings have suggested that behavioral interventions based on basic behavioral principles (e.g., reinforcement, extinction) are frequently effective in treating problem behaviors with these individuals (Asmus et al., 2004; Hagopian, Fisher, Sullivan, Acquisto, & LeBlanc, 1998; Kahng, Iwata, & Lewin, 2002; Kurtz et al., 2003). Effective behavioral interventions often involve functional assessment aimed at generating hypotheses about environmental variables that maintain the problematic behavior and teaching socially appropriate behavior that may replace the problem behavior. For example, a functional assessment may suggest that SIB is maintained by attention provided by caregivers (e.g., parents, teachers, and staff) following the occurrence of problem behavior. One treatment is to withhold attention following problem behavior (extinction) while teaching and reinforcing appropriate requests for attention.
Despite the availability of effective behavioral technology, some caregivers have difficulty adhering to treatment recommendations. Thus, treatment efficacy may not be sufficient to produce lasting treatment success if caregivers are unable or unwilling to follow through (Allen & Warzak, 2000). Numerous factors may affect caregivers' adherence to behavioral interventions. These include, but are not limited to, complexity of the intervention, the method of instruction, the training environment, lack of satisfaction with the chosen intervention, lack of social acceptability of the intervention, the speed of behavior change (i.e., behavior change is too slow), and the caregiver's lack of relevant skills to implement and follow the behavioral plan (Allen & Warzak, 2000).
Caregivers' lack of skills to implement and follow treatment plans is a particularly important consideration, given that the technical nature of behavioral terminology may not correspond to conventional language. In other words, caregivers may frequently have difficulty understanding and acting on instructions written in highly technical behavioral terminology (Axelrod, 1992; Bailey, 1991). Furthermore, research has suggested that caregivers prefer a behavioral plan when it is written in nontechnical language. For example, Kazdin and Cole (1981) examined this effect in two experiments conducted with undergraduate education and psychology majors. In the first experiment, education majors were asked to read various vignettes describing a classroom procedure. The procedure in all vignettes remained constant; however, the vignettes were labeled as behavior modification, humanistic education, or a new teaching method (i.e., neutral) and were worded in behavioral, humanistic, or neutral language. A full factorial design was not used; however, each label was presented in both its characteristic language (e.g., behavior modification with behavioral language) and in neutral language. Participants were then asked to rate the procedure in terms of various teacher–classroom characteristics (i.e., warmth, effectiveness, flexibility, learning, discipline, emotional growth) and their interest in learning more about the method. In this experiment, vignettes written in behavioral language were associated with lower ratings of teacher–classroom characteristics and less interest in learning more about the method, whereas the label behavior modification had no discernable effect. The behavioral descriptions, however, contained both high levels of scientific jargon (e.g., terms such as reinforcement) and an emphasis on controlling child behavior through the manipulation of consequences. The second experiment was conducted to more closely examine the effects of scientific jargon. 
In the second experiment, psychology students were asked to read similar vignettes manipulated to contain differing amounts of scientific jargon. Surprisingly, the procedures were rated more favorably when written with more scientific jargon.
Subsequent studies, however, have failed to replicate the latter finding, suggesting that favorable ratings of procedural descriptions containing scientific jargon may be particular to the population surveyed (i.e., psychology students). For example, Rolider, Axelrod, and Van Houten (1998) asked behavior analysts and individuals from the general public to read vignettes explaining behavioral treatments in technical language, conversational language, or conversational language with a statement of the intended outcome. The participants were then asked to rate the procedures along four dimensions, specifically, (a) their perceived understanding of the procedure, (b) their comfort with the procedure, (c) the degree to which they saw the procedure as caring, and (d) the level of participation they believed the procedure gave the participant. Unlike the psychology students in Kazdin and Cole's (1981) study, individuals from the general public neither understood nor were comfortable with behavior plans written in technical language. These findings were later replicated using a larger sample (84 behavior analysts and 74 individuals from the general public; Rolider & Axelrod, 2005).
In summary, research has suggested that caregivers are more likely to understand and are more comfortable with behavioral interventions that are described in conversational, nontechnical language. One implication of these findings is that using nontechnical language in behavioral intervention plans may ultimately improve treatment adherence. Allen and Warzak (2000) suggested that a solution to poor adherence “may be to ‘repackage’ the language of behavioral technology to be less discriminable from the language of contemporary culture” (p. 380). That is, behavior plans written in nontechnical language may result in better adherence to the behavioral intervention plans.
Though instructions are not advocated as a stand-alone training program, they are often a component of comprehensive training programs (e.g., Ducharme & Feldman, 1992; Kneringer & Page, 1999; Parsons, Rollyson, & Reid, 2004; Reid et al., 2003; Sarokoff & Sturmey, 2004). These training programs typically include additional components such as modeling (e.g., Ducharme & Feldman, 1992; Reid et al., 2003; Sarokoff & Sturmey, 2004), rehearsal (Ducharme & Feldman, 1992; Sarokoff & Sturmey, 2004), and/or feedback (e.g., Ducharme & Feldman, 1992; Kneringer & Page, 1999; Parsons et al., 2004; Sarokoff & Sturmey, 2004). Studies isolating and examining the effectiveness of individual treatment components (e.g., feedback; Roscoe, Fisher, Glover, & Volkert, 2006) strengthen the empirical foundation of the aforementioned training packages. One goal of this study was to strengthen the foundations of the aforementioned comprehensive training programs by similarly isolating and evaluating the effects of instruction type. Specifically, the purpose of the current investigation was to isolate and further examine how technical and nontechnical language affects treatment preference and, more important, treatment integrity. In Experiment 1, which was similar to Rolider et al.'s (1998) study, we asked both behavior analysts and members of the general public (novice, direct care staff in an inpatient unit) to rate descriptions of two behavioral procedures written in both technical and nontechnical (i.e., conversational) language. Our hypothesis was that the nontechnical description would be rated equally by both groups but the technical description would tend to be rated less favorably by members of the general public compared with the behavior analysts. In Experiment 2, we evaluated the effects of technical and conversational language on treatment adherence. We measured novice, direct care staff's implementation of behavioral interventions during simulated treatment sessions. 
We hypothesized that treatment adherence would be significantly greater when nontechnical, conversational language was used than when the interventions were described in technical language.
This experiment served as a replication of Rolider and Axelrod (2005). Specifically, we evaluated the effects of technical and nontechnical language on treatment preference as rated by experienced versus inexperienced individuals.
Participants and setting
Participants were divided into two groups of 20 (N = 40): experienced versus inexperienced. The experienced group consisted of individuals with at least 1 year of graduate study in applied behavior analysis. They were recruited from among supervisors, clinical staff, postdoctoral fellows, and trainees who worked in an inpatient unit for individuals with developmental disabilities who engaged in severe problem behavior, which was the setting for this study. The inexperienced group consisted of direct care staff working on the same inpatient unit who had no formal training in applied behavior analysis. They were new on the job (i.e., their participation in this study occurred on their first day at work, prior to any training or interaction with the patients) and had no prior experience implementing behavioral interventions.
Two vignettes were created describing a relatively common behavioral intervention, functional communication training (FCT; Carr & Durand, 1985) combined with extinction. One vignette was written using technical, behavioral language and read as follows: “All appropriate responses (defined as handing the therapist the picture card) are reinforced with the delivery of a small edible item on a FR1 schedule. SIB (defined as hand biting or head hitting) is on extinction.” The other vignette was written using nontechnical, conversational language and read as follows: “When the child hands you the card with a picture of a snack on it, give the child a little piece of the snack. If the child bites his hand or hits himself on the head, simply ignore it.” These vignettes were designed such that they would be functionally equivalent if followed correctly (i.e., the same staff behavior would result from correct implementation), although the wording differed. Each vignette was followed by a four-item questionnaire similar to the one used by Rolider and colleagues (Rolider & Axelrod, 2005; Rolider et al., 1998). The questions are listed in Figure 1. The questions were designed to gauge to what extent the participants felt they understood the procedure (Question 1), how comfortable they felt with the procedure and how acceptable they felt it was (Question 2), how caring and compassionate they felt the procedure was (Question 3), and the extent to which they felt the procedure allowed participation by the client (Question 4). The participants were asked to evaluate each of these aspects via a 5-point, Likert-type scale.
Procedure and experimental design
Each participant was randomly assigned to one of two conditions: technical or nontechnical language. This created four groups that consisted of 10 participants each: experienced/technical, experienced/nontechnical, inexperienced/technical, and inexperienced/nontechnical. Each participant was then given a description of the treatment. To approximate the experience of direct care staff, information regarding assessment and treatment development was not provided. After reading the description, the participant was asked to complete the questionnaire located at the bottom of the page. If the participant asked the experimenter questions about the description, the experimenter instructed the participant to answer the questions to the best of his/her ability. The experimenter remained with the participant until the questionnaire was completed. The participants in the experienced group completed the survey within a single day, whereas the participants in the inexperienced group responded to the survey as they became available (i.e., when they arrived for their first day at work). Typically, 2–3 new staff members arrived on a single day every other week, and their participation occurred 1 hr before their orientation and training started. All questionnaires were administered under the supervision of the first author.
The results of Experiment 1 are shown in Figure 2. For the first question (degree of perceived understanding) the mean score for the inexperienced/technical group (M = 2.70, SD = 1.25) was considerably lower than for each of the other groups (inexperienced/nontechnical, M = 4.30, SD = 0.95; experienced/technical, M = 4.60, SD = 0.70; experienced/nontechnical, M = 4.50, SD = 0.71). A two-way analysis of variance (ANOVA) revealed significant main effects for group (inexperienced vs. experienced), F(1, 36) = 12.76, p = .001, and for condition (technical vs. conversational language), F(1, 36) = 6.51, p = .015. The group/condition interaction was also significant, F(1, 36) = 8.36, p = .006. As expected, an examination of the means suggested that technical language led to decreased perceived understanding only with the inexperienced group.
For the second question, which gauged comfort with the procedures, the lowest mean was again seen for the inexperienced/technical group (M = 3.30, SD = 1.16). The highest mean rating occurred in the experienced/nontechnical group (M = 4.10, SD = 1.10), and the other two group means fell in between (inexperienced/nontechnical (M = 3.80, SD = 1.14; experienced/technical, M = 3.70, SD = 1.34). However, the group differences were small, and a two-way ANOVA did not reveal statistical significance of the main effects of group, F(1, 36) = 0.87, p = .357; condition, F(1, 36) = 1.44, p = .238; or group/condition interaction, F(1, 36) = 0.02, p = .895.
Minimal differences in ratings were seen for the third question, which evaluated perceived compassion inherent in the treatment. The mean ratings (with standard deviations in parentheses) were 3.70 (0.82), 3.80 (1.03), 3.80 (1.14), and 3.50 (1.18) for the inexperienced/technical, inexperienced/nontechnical, experienced/technical, and experienced/nontechnical groups, respectively. A two-way ANOVA revealed nonsignificant main effects for group, F(1, 36) = 0.90, p = .765; condition, F(1, 36) = 0.90, p = .765; and group/condition interaction, F(1, 36) = 0.36, p = .551.
For the fourth question, which targeted level of participation, slightly higher ratings were seen with the experienced group (experienced/technical, M = 4.00, SD = 0.82; experienced/nontechnical, M = 4.00, SD = 0.82) compared with the inexperienced group (inexperienced/technical, M = 3.50, SD = 1.08; inexperienced/nontechnical, M = 3.60, SD = 0.84). However, statistically significant main effects were not found for group, F(1, 36) = 2.52, p = .121; condition, F(1, 36) = 0.03, p = .861; or group/condition interaction, F(1, 36) = 0.03, p = .861.
The most notable result of Experiment 1 was the difference in perceived understanding of the interventions as indicated by responses to Item 1 of the questionnaire; mean ratings indicated significantly less perceived understanding by the inexperienced group. This finding is consistent with the results of Rolider et al. (1998) and was expected given the difference in experience with behavioral interventions between the two groups. However, significant differences between the experienced and inexperienced groups were not found with regard to the other three questionnaire items, which is inconsistent with the findings of Rolider et al. Differences between the groups in the current study thus seemed to be limited to differential levels of perceived understanding rather than an overall dislike of the procedures. Differences between our results and those of Rolider et al. may have been a result of the differing samples used. The inexperienced group in the Rolider et al. studies represented a cross-section of individuals from the general public (i.e., administrators, health care workers, business persons, and maintenance workers), individuals presumably naïve to persons with developmental disabilities. In our study, the inexperienced group consisted of individuals hired to work in a direct care role in a major hospital. We selected this group because of its relevance to the treatment integrity analysis (undertaken in Experiment 2); however, the hospital's job applicant screening process may have minimized the differentiation achieved on the survey measures. In light of this, one would not expect large differences in the scores. Hence, the obtained differences may be interpreted as a fairly robust demonstration of this phenomenon.
Although it is important to examine staff's perception of behavioral treatment, ultimately, it is most important to understand how technical language can affect treatment implementation. Therefore, the purpose of Experiment 2 was to examine the effects of technical and nontechnical language on treatment integrity.
Participants and setting
The 20 individuals from the inexperienced group who participated in Experiment 1 also participated in Experiment 2. Because the purpose of Experiment 2 was to evaluate the effects of technical versus nontechnical language on treatment integrity with staff who had no experience with behavioral interventions, the participants from the experienced group did not take part in Experiment 2. All sessions took place in a padded session room located on the inpatient unit.
Tasks and materials
During the simulated treatment sessions, the participants had access to the written description of the treatment (FCT plus extinction) they had rated in Experiment 1. A single leisure item (e.g., a soccer ball) was also present, as was a table containing a communication card (i.e., a 5 × 5-cm card depicting an edible item) and a bowl containing edible items (i.e., Cheerios).
Data collection and interobserver agreement
All data were collected using a real-time data collection program on laptop computers. All observers were trained and experienced in the use of the data collection program, which was routinely used to collect data on client behavior on the unit. The observers collected data by observing the sessions through a one-way mirror and pushing designated keys every time each of the following events occurred: (a) an opportunity to implement FCT—this occurred when the confederate (who played the part of the client in the simulated sessions) attempted to hand the participant the communication card; (b) correct implementation of FCT—this occurred when the participant handed the confederate a piece of the edible item within 5 s of receiving the communication card; (c) opportunity to implement extinction—this occurred when the confederate displayed problem behavior (e.g., hitting own head, biting own hand); and (d) incorrect implementation of extinction—this occurred when the participant provided food or attention within 5 s of the confederate displaying a problem behavior. These data allowed for calculation of percentage correct implementation of FCT (number of correct FCT implementations divided by number of opportunities for FCT multiplied by 100) and percentage correct implementation of extinction (number of correct implementations of extinction, defined as every opportunity for extinction not identified as incorrect, divided by total opportunities for extinction, multiplied by 100).
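The scoring described above reduces to simple arithmetic. The following sketch (with hypothetical helper names, not taken from the study's actual data collection program) illustrates how the two percentages are derived from the recorded event counts; the example counts are illustrative only:

```python
def percent_correct_fct(correct, opportunities):
    # Correct FCT implementations divided by FCT opportunities, times 100.
    return 100.0 * correct / opportunities

def percent_correct_extinction(incorrect, opportunities):
    # Extinction is scored by exception: any opportunity not marked
    # incorrect (i.e., no food or attention within 5 s) counts as correct.
    return 100.0 * (opportunities - incorrect) / opportunities

# Example: 18 of 20 FCT opportunities handled correctly, and 4 of 20
# extinction opportunities scored as incorrect.
print(percent_correct_fct(18, 20))        # 90.0
print(percent_correct_extinction(4, 20))  # 80.0
```

Note that because extinction is scored by exception, any opportunity the observer fails to flag as incorrect is counted as correct; this is a property of the measurement system, not of the participant's behavior.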
A second trained observer collected data simultaneously with, but independently from, the primary observer during 35% of all sessions—20% of all conversational language sessions and 50% of all technical language sessions. Interobserver agreement was calculated by partitioning the session into 10-s intervals and dividing the number of intervals in which both observers scored the same number of a given response by the total number of intervals and multiplying the resultant score by 100 (i.e., exact agreement). This yielded interobserver agreement scores of 93.7% and 94.5% for opportunities for FCT and for extinction, respectively, and 95.0% for correct implementation of extinction and 98.8% for incorrect implementation of extinction.
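The exact-agreement computation described above can be sketched as follows, assuming each observer's record is reduced to a count of the target response per 10-s interval (the interval counts below are illustrative, not the study's data):

```python
def exact_agreement(primary, secondary):
    # primary/secondary: per-interval response counts, one entry per
    # 10-s interval of the session, one list per observer.
    if len(primary) != len(secondary):
        raise ValueError("Both observers must score the same intervals")
    # An interval counts as an agreement only if both observers
    # recorded exactly the same number of responses in it.
    matches = sum(1 for a, b in zip(primary, secondary) if a == b)
    return 100.0 * matches / len(primary)

# Observers agree on 3 of 4 intervals -> 75.0% exact agreement.
print(exact_agreement([1, 0, 2, 0], [1, 0, 1, 0]))  # 75.0
```

Exact agreement is a conservative index: an interval in which one observer scores 2 responses and the other scores 1 counts as a full disagreement, even though the observers partially agree.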
The sessions were conducted immediately following the completion of the questionnaire (as described in Experiment 1), and the participants retained the descriptions of the treatment (i.e., FCT plus extinction) that had been used in that part of the investigation. The random assignment to technical versus nontechnical language groups from Experiment 1, thus, carried over to Experiment 2 (i.e., participants in the technical language group in Experiment 1 remained in the technical language group in Experiment 2 and the participants assigned to the conversational language group in Experiment 1 remained in the conversational language group for Experiment 2). The participants individually entered the session room and participated in a single 40-trial session with a confederate who played the role of a client. The confederate was in all cases one of the authors or an experienced staff member on the unit who was closely familiar with the experimental procedures. The confederate followed a predetermined script consisting of a sequence of 40 behaviors spaced 10 s apart. The sequence consisted of 20 relevant behaviors (i.e., hitting self on head, handing the participant the picture card, and biting own hand) and 20 irrelevant behaviors (i.e., playing with toy, running across the room, and looking in the session-room mirror). Prior to the session, the participants were instructed to respond to the confederate's behavior in accordance with the procedural descriptions that they had in their hand. If the participant asked questions about the procedures, they were instructed that their questions could be answered after the session and asked to respond to the confederate's behavior to the best of their ability. The participants were debriefed regarding the purpose of the study immediately after the session.
The mean overall correct implementation for the technical and nontechnical groups is shown in Figure 3. Overall percentage correct treatment implementation was calculated by adding the percentage correct FCT and extinction implementation for each participant and dividing by two. The nontechnical group implemented the treatment with more integrity (M = 73.13%, SD = 22.77%) than the technical group (M = 36.79%, SD = 23.17%). An independent-sample t test revealed that the difference in means was statistically significant, t(18) = 3.54, p = .002.
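The overall score and the reported t test follow standard formulas. A minimal sketch, assuming a pooled-variance independent-samples t statistic (consistent with the reported df of 18 for two groups of 10); the function names are illustrative:

```python
from statistics import mean, variance

def overall_integrity(pct_fct, pct_ext):
    # Overall percentage correct: mean of a participant's FCT and
    # extinction component scores.
    return (pct_fct + pct_ext) / 2.0

def pooled_t(group_a, group_b):
    # Independent-samples t statistic with pooled variance;
    # degrees of freedom = len(group_a) + len(group_b) - 2.
    na, nb = len(group_a), len(group_b)
    sp2 = ((na - 1) * variance(group_a) +
           (nb - 1) * variance(group_b)) / (na + nb - 2)
    se = (sp2 * (1.0 / na + 1.0 / nb)) ** 0.5
    return (mean(group_a) - mean(group_b)) / se
```

For example, a participant scoring 80% on FCT and 60% on extinction receives an overall score of 70%; the t statistic is then computed over the ten overall scores in each language group.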
Figure 4 displays the mean correct implementation for each group (technical vs. nontechnical), separated by treatment components (FCT and extinction). For FCT, the nontechnical group (M = 91.25%, SD = 27.67%) implemented the treatment with more integrity than the technical group (M = 44.77%, SD = 34.86%). An independent-sample t test showed this difference to be statistically significant, t(18) = 3.30, p = .004. Similarly, with extinction, the nontechnical group (M = 65.00%, SD = 44.28%) implemented the treatment with more integrity than the technical group (M = 29.13%, SD = 35.04%). This difference approximated statistical significance, t(18) = 2.01, p = .060. Thus, nontechnical language resulted in higher treatment integrity across both treatment components compared with technical language, but the difference was more robust with FCT.
The results of Experiment 2 are notable on two levels. First, a large difference in mean treatment integrity scores was observed between the technical and nontechnical groups for both the FCT and extinction components. This suggests the potential generality of the findings across various treatment components. These differences, however, were only statistically significant for the FCT component of the treatment. This is most likely a product of the increased variability of scores in this distribution and is reflected in standard deviation scores (i.e., 44.28 and 35.04 for the nontechnical and technical language groups, respectively) that are elevated relative to those for the FCT component (i.e., 27.67 and 34.86 for the nontechnical and technical groups, respectively). Second, in addition to the increased variability in the extinction component, treatment integrity scores were overall higher for the FCT component than for the extinction component. This may be because the FCT component requires therapist action (i.e., the therapist should deliver attention or tangible items following specified responses), whereas the extinction component requires therapist inaction (i.e., the therapist should refrain from delivering attention or tangible items following specified responses). Future research should examine this possibility.
Previous research (Vollmer, Roane, Ringdahl, & Marcus, 1999) on treatment integrity failures has shown a positive relation between the degree to which a treatment is accurately implemented and appropriate behavior. Therefore, any method to improve treatment integrity is likely to result in an improvement in the lives of individuals with developmental disabilities and their caregivers. The findings of the current study may aid clinicians in their efforts to assure that necessary treatments are implemented with sufficient treatment integrity. A small body of research has examined the effects of behavioral terminology (e.g., Kazdin & Cole, 1981; Rolider & Axelrod, 2005; Rolider et al., 1998) on treatment preference. The current study extends this line of research by demonstrating that technical behavioral terminology has negative effects on measures of treatment integrity.
In addition, instructions are a part of a number of comprehensive staff training programs (e.g., Ducharme & Feldman, 1992; Kneringer & Page, 1999; Parsons, Rollyson, & Reid, 2004; Reid et al., 2003; Sarokoff & Sturmey, 2004). Such programs often include additional components such as modeling, rehearsal, or feedback. The current study adds empirical support to these approaches by isolating and examining the effects of the language style used in instructions on measures of treatment integrity. Hence, the current study enhances the empirical foundation of such comprehensive approaches. Our contention is that conversational language should be used as a component of comprehensive staff training programs, not as a standalone technique.
The results of the current study are important for several reasons. The results of Experiment 1 add to the body of literature on the effects of language style on surveys aimed at evaluating treatment preference (Kazdin & Cole, 1981; Rolider & Axelrod, 2005; Rolider et al., 1998). The finding that technical language has detrimental effects on measures of perceived understanding is consistent with previous research (Rolider & Axelrod, 2005; Rolider et al., 1998); however, the population surveyed in the current study adds to its significance. Our participants were individuals who had been hired to work directly with the relevant consumers (i.e., persons with developmental disabilities who engaged in serious problem behavior). Lack of understanding of treatment procedures, thus, would be particularly likely to have negative consequences. Because behavioral interventions are often designed by experienced clinicians, direct care staff often have little or no involvement in treatment selection (Carr et al., 2002). In addition, direct care staff often report that they have insufficient training in treatment implementation and often have little formal education (Ford & Honnor, 2000). The current results suggest that those who design and write behavioral treatment plans should use language that is appropriate for the targeted audience; thus, treatment protocols written for direct care staff should not contain overly technical language.
The results of Experiment 2 bring this recommendation further into focus. Whereas Experiment 1 touched on staff perceptions of treatment procedures, Experiment 2 directly tested the effects of language style on treatment integrity. It should be noted that although the effects seen in Experiment 1 were not as large as previous findings (Rolider & Axelrod, 2005; Rolider et al., 1998), the group differences in Experiment 2 were robust. This suggests that overt behavior may be more affected by the language used to describe treatment procedures than written or verbal reports are. Staff may not always realize that they have misunderstood the treatment. Conversely, with the clinician playing an authoritative role, staff may not feel comfortable reporting that they do not understand the treatment. Regardless of the cause, the clinician is left in a position in which subjective impressions of a caregiver's understanding of a procedure are, at best, unreliable. Future research should determine whether this lack of consistency between the self-report data and the behavioral data is a result of staff erroneously believing they understand the treatment or unwillingness to report that they do not understand.
Conclusions from the results of Experiment 2 should be drawn with caution. These data suggest that there may be a relation between the degree to which a participant reports understanding of the treatment and treatment integrity. This, however, does not suggest causality. Future research should more closely examine this possibility by directly manipulating the participants' level of perceived understanding through education.
Another limitation of this study was the sample size. For a between-subjects design, the number of participants (i.e., 40 for Experiment 1 and 20 for Experiment 2) was relatively small. This was a product of (a) the limited number of available experts for Experiment 1 and (b) time constraints (i.e., staff turnover was not high enough to feasibly recruit more than 20 participants for Experiment 2 in an acceptable period of time). This relatively restricted sample size may have limited the statistical power of the analysis; however, it is probable that the findings would have remained significant with a larger sample, given that our small sample demonstrated statistical significance. Future research should attempt to replicate these findings with a larger sample.
Last, Experiment 2 was conducted during a simulated treatment session rather than during actual situations. Although in situ evaluation would have increased ecological validity, we were concerned that subjecting our clinical population (i.e., individuals with severe behavior disorders and developmental disabilities) to the degraded level of treatment integrity would be detrimental to their progress (Vollmer et al., 1999). Thus, we elected to conduct the study using a simulation approach. Although this decision may have handicapped the ecological validity of the study, ethical guidelines necessitated that we consider the well-being of the patients (Behavior Analyst Certification Board, 2004) and avoid doing harm (American Psychological Association, 2002). Similar to previous studies using simulated training environments to teach behavioral procedures (e.g., Iwata et al., 2000; Reid et al., 2003; Roscoe et al., 2006; Wallace, Doney, & Mintz-Rusudek, 2004), this study represents an important preliminary step toward understanding the effects of technical language on measures of treatment integrity. Future research should examine the effects of conversational language on measures of treatment integrity in more naturalistic settings and as a part of a comprehensive training program.
As previously noted, our sample was very specific to the question we were asking. Our main concern was to examine this phenomenon in the direct care population with which we work. While it is probable that the results of this study would generalize to other populations of caregivers, the generality of these findings should be tested with alternative caregiver populations.
Authors: David P. Jarmolowicz, MA, Clinical Specialist, Kennedy Krieger Institute and the University of Maryland Baltimore County, Behavioral Psychology, Baltimore, MD 21205. SungWoo Kahng, PhD, BCBA (email@example.com), Assistant Professor, Kennedy Krieger Institute and the Johns Hopkins University School of Medicine, Behavioral Psychology, Baltimore, MD 21205. Einar T. Ingvarsson, PhD, Assistant Professor, Youngstown State University, Department of Psychology, Youngstown, OH 44555. Richard Goysovich, Program Specialist, Kennedy Krieger Institute, Behavioral Psychology, Baltimore, MD 21205. Rebecca Heggemeyer, Senior Program Specialist, Kennedy Krieger Institute, Behavioral Psychology, Baltimore, MD 21205. Meagan K. Gregory, MA, Research Assistant, Kennedy Krieger Institute, Behavioral Psychology, Baltimore, MD 21205.