The purpose of this study was to evaluate the effects of a computer-based video instruction (CBVI) program to teach life skills. Three middle school–aged students with intellectual disabilities were taught how to make a sandwich, use a microwave, and set the table with a CBVI software package. A multiple probe across behaviors design was used to evaluate for a functional relation between the software and skill acquisition. All students increased the percentage of steps completed in the correct order after receiving CBVI. During maintenance probes, the performance of all students deteriorated; after a single review session with CBVI, all students regained previous levels of performance, tentatively indicating a role of CBVI as a tool for reviewing previously mastered material. Results are discussed in terms of the use of CBVI for providing students sufficient learning trials on tasks that require the use of consumable products (e.g., food).
Increasing independence and autonomy of students with developmental disabilities to allow them to fully participate in all aspects of their communities is one of the driving forces behind the education and curriculum for this population. Brown et al. (1979) provided a litmus test to help educators select learning objectives for students: If a student was unable to perform a task, would someone else have to perform it for them? This concept of what constitutes a functional skill continues to shape much of the curriculum and instruction for students with developmental disabilities. Often, to acquire these skills, students with intellectual disabilities need frequent, meaningful repetition or behavioral rehearsal of the skill. This is evidenced in part by results of numerous studies of interventions for functional skills, which have shown that students with intellectual disability might require 20 learning sessions or more to master what a typically developing child might be expected to learn. For many life skills, like getting dressed, making the bed, vacuuming the floor, or washing dishes, there are few if any consumable materials needed for the student to practice the skill. Practicing other life skills, however, such as food preparation, requires the use of consumable products. This can result in either high instructional costs or reduced opportunities for students to practice important skills. In these cases, teachers must plan carefully to maximize instructional time and not waste materials.
Considering this challenge, teachers could incorporate computer-based instruction (CBI) as an educational support for students who require many opportunities to practice and improve their daily living skills. CBI can provide repetition and individualized presentation and feedback, as well as reduce the need for using consumables during practice. Because students can work on CBI independently, it has the added benefit of potentially freeing teachers and other staff to provide live instruction to other students. This could allow greater flexibility and more efficient use of staff time. There is a growing body of literature demonstrating how CBI and computer-based video instruction (CBVI) can enhance learning of life skills. The bulk of this research is on community-based skills that are simulated on the computer. For example, Mechling, Gast, and Barthold (2003) taught ATM usage by integrating a constant-time-delay–prompting strategy into a CBVI program. Also using CBVI, Ayres, Langone, Boon, and Norman (2006) augmented classroom simulations with technology to teach a group of middle school students with mild to moderate intellectual disabilities to use the dollar-more purchasing strategy. Several studies have used CBVI to teach students to locate items in a grocery store (Hutcherson, Langone, Ayres, & Clees, 2004; Mechling, 2004; Mechling, Gast, & Langone, 2002; Wissick, Lloyd, & Kinzie, 1992). In related community settings, Mechling, Pridgen, and Cronin (2005) incorporated CBVI into an instructional program to teach social responses needed to make purchases in a fast food restaurant. Mitchell, Parsons, and Leonard (2007) immersed students with autism in a virtual reality environment (a complete blend of computer and video technology) to teach social and functional skills needed to navigate café settings. Davies, Stock, and Wehmeyer (2003) incorporated CBVI into a program to teach ATM usage.
The common element these studies shared was the delivery of video, mediated by student interaction with the computer, to teach a functional, community-based life skill. The level of interaction or the degree of simulation of the natural environment varied across these studies. For example, in Mechling et al.'s (2005) simulation, participants interacted with the computer (selecting images on the screen) as well as verbally and physically with a teacher. Therefore, in their simulation, students were engaging in some response topographies that would be identical to, or at least very similar to, those in the natural environment. This contrasts with Hutcherson et al. (2004) as well as with Ayres and Langone (2002), in which the only instructional interactions provided for students involved CBVI (i.e., the teacher did not participate in the instruction after the student began the computer program). In addition, because these were pure simulations of community events, students were not engaging in response topographies during simulation that were similar to the goal response. This is significant because it represents a response generalization pattern, whereby students acquire one response topography on the computer (e.g., clicking icons) and generalize this to something else in the community (e.g., putting a cereal box in a shopping cart).
In the current investigation, we approached instruction in a similar format. Student responses during CBVI were not topographically similar to what they would have done in the natural environment. This was a systematic replication of Ayres, Maguire, and McClimon (2009), in which 3 elementary-aged students with autism learned to make a sandwich, use a microwave, and set the table using a computer-based simulation enhanced with video models. The researchers documented improvements in participants' responding on the computer and in vivo, but several confounding factors threatened the internal validity of the study. The study used a multiple probe across behaviors design to evaluate experimental control, but only 1 participant acquired and generalized all three behaviors within that design. A second participant acquired and generalized two of the behaviors but did not have time in the school year to acquire the final behavior. The third student acquired and generalized all three behaviors but accessed all instructional materials in the software after baseline (thus receiving instruction on all three skills simultaneously); therefore, the researchers could not use the multiple probe across behaviors design to evaluate a functional relation between the software and skill acquisition for this student and were left with a series of AB designs.
Although Ayres et al. (2009) probed for generalization from the computer to the natural environment, they only did so in a pretest–posttest fashion. After students responded correctly to 90% or more of task-analysis steps on the computer, they were probed in vivo. This did not permit continuous monitoring of generalization. In the current investigation, we addressed this weakness by probing for generalization throughout the study, after students began intervention. Another difference between this study and the one from Ayres et al. was the setting: All students in the current investigation took part in the study in the school rather than in the school and home. The participants in the current study also differed from Ayres et al.'s original sample in age and disability; the current participants were older and did not have an autism diagnosis. The fundamental research question was identical, though: Can students acquire and generalize a specific sequence for completing a functional skill taught via CBVI?
Three 15-year-old middle school students participated in this study. All of the students received special education services in a special education classroom and participated in some general education classes (e.g., health, art, computer technology). Students were recruited for this study based on (a) individualized education program (IEP) objectives related to self-help and food preparation, (b) experience using a computer with a mouse, (c) student interest and consent, (d) parental interest in having their child increase his/her independence related to food preparation, and (e) parental consent.
Donnie was a soft-spoken young man who was eager to engage in conversation despite teacher reports that he had difficulty interacting with same-aged peers. Donnie was very task oriented and exhibited a strong, deliberate work ethic when asked to complete preferred or nonpreferred work. His IEP objectives focused on functional academics and community-related skills. He could solve simple arithmetic problems with the aid of a calculator, count coin combinations up to $0.75, and use a written shopping list to locate items in a store. Donnie frequently used the computer to play educational software games in his free time. He earned a standard score of 51 on the Differential Abilities Scales (Elliot, 1990) and a composite of 56 on the Vineland Adaptive Behavior Scales (Sparrow, Balla, & Cicchetti, 1984).
Bret had relatively strong verbal skills, and the teacher reported that he exhibited a keen sense of humor. His disability was the result of a traumatic brain injury. Bret was most interested in hands-on tasks, particularly anything that he could assemble. He had a particular weakness in functional academics and displayed severe attention problems. Like Donnie, his IEP goals focused primarily on functional academic and community-referenced skills (e.g., counting money). He also used the computer frequently for both academic tasks and recreational activities. On the Wechsler Intelligence Scales for Children–IV (WISC-IV; Wechsler, 2003), his full scale score was 54, and on the Adaptive Behavior Assessment System–II (ABAS-II; Harrison & Oakland, 2003), he scored a composite of 51.
Alana was the only female participant in the study. She was a very polite young woman who constantly showed concern for the well-being of her classmates. Alana was diagnosed with Down syndrome, and the classroom teacher reported that she struggled with functional academic skills and sometimes had difficulty following directions. Like the young men, Alana's IEP goals focused on a range of functional and academic skills. She could use a calculator to solve simple math problems, she was working on functional sight vocabulary, and she had goals related to a range of community skills. She was the only participant whose parents reported that she assisted with meal preparation. Her full scale score on the WISC-IV was 40, and her ABAS-II composite was 68.
All sessions occurred in the students' resource classroom, which measured approximately 12 m × 12 m and included (a) student desks, (b) a computer area with three Microsoft Windows computers, (c) a kitchen area with a table, refrigerator, and microwave, and (d) a kidney-shaped table at the front of the room. Students wore headphones and used one of the three classroom computers during all computer-based sessions. Students participated in all in vivo probes at the table that was within 1 m of the microwave. This environment did not include a true kitchen in the sense of what most individuals have in their homes; rather, the teacher had set up a corner of the classroom to simulate a kitchen to the greatest extent possible.
Materials and Computer Software
In vivo materials
During in vivo probes, students were required to make a sandwich, microwave soup, and set a table. For making a sandwich, students were provided with a plate, bread, meat, and mustard. Sandwich ingredients were placed in a basket next to the plate. A preopened, single-serving mustard package was used to control the quantity of mustard applied to the sandwich. For making soup, students were provided a single-serving–sized container of Campbell's microwaveable chicken noodle soup. The soup was packaged in a Styrofoam and plastic container with a metal pull-top lid and a plastic ventilated lid. The pull-top metal lid was removed in advance because it contained sharp edges. Students used a GE Sensor Microwave that had 27 buttons, including the lever to open the door. For setting the table, students were provided a placemat, plate, fork, knife, spoon, napkin, and glass (for one table setting). The dishes were set in a single layer (i.e., not stacked) to the side of the placemat in a random fashion. The in vivo setting mirrored what the students saw on the computer. Ideally, the software would have mirrored the natural environment instead; because that was not possible in this case, we maximized the similarities between the natural environment and the software to test for generalization.
The “I Can! Daily Living and Community Skills” (Sandbox Learning Company, n.d.) software program was used in this study; it was the same program used by Ayres et al. (2009). This computer program specifically targets the learning of functional skills. First, the program presented two video models performing each skill by depicting a first-person perspective of the action (Ayres & Langone, 2007), whereby the user saw the action take place as if they were the person engaged in the interaction. Each skill had six instructional videos that sampled a range of exemplars pertinent to the skill (e.g., different color plates, different types of bread). After watching the video models, the second portion of the program allowed the students an opportunity for a behavioral rehearsal of the skill. Students operated the program using a mouse or track ball. In addition, the program incorporated a system of least intrusive computer prompts to facilitate correct student responding. Specific computer-prompting procedures are described in the Procedures section.
Data Collection and Response Definitions
Event-recording procedures were used to record the number of correct and incorrect student responses for making a sandwich, cooking soup, and setting a table (see Table 1). Both the form and the function in which the task was performed were recorded; however, only the form in which the student performed the task was used to evaluate the effects of the computer program. For the form of the task, student performance was assessed topographically and sequentially using the task analysis. A correct response was defined as initiating a specific step of the task analysis within 5 s and completing the step within 10 s. Three types of incorrect responses were possible: latency errors (L), duration errors (D), and sequence errors (S). If the student did not initiate the performance of a specific step of the task analysis within 5 s, it was recorded as a latency error. If the student took more than 10 s to complete a specific step, it was recorded as a duration error. If the student completed a step of the task analysis out of order, it was recorded as a sequence error. The number of correct student responses was then divided by the total number of task-analyzed steps to calculate the percentage of correct responses for each task. Sequence was believed to be important here because in many work settings (e.g., a restaurant), one has to learn to follow a specific sequence to complete a finished product. For example, a restaurant will have a specific sequence of steps for assembling a sandwich that corresponds with its desired presentation. Evaluation of performance of steps in sequence was important from an experimental standpoint to evaluate the transfer of stimulus control within the chained task.
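As a concrete illustration, the scoring rules above can be sketched in a few lines of code. This is a hypothetical sketch, not part of the study's procedures; the function names and data structures are ours, but the thresholds follow the text (initiate within 5 s, complete within 10 s, in the task-analysis order).

```python
# Hypothetical sketch of the event-recording rules described above.
# Thresholds follow the text: initiate within 5 s, complete within 10 s, in order.

def score_step(latency_s, duration_s, in_sequence):
    """Return "correct" or an error code: L (latency), D (duration), S (sequence)."""
    if latency_s > 5:
        return "L"          # did not initiate within 5 s
    if duration_s > 10:
        return "D"          # took more than 10 s to complete
    if not in_sequence:
        return "S"          # performed out of the task-analysis order
    return "correct"

def percent_correct(step_scores):
    """Correct steps divided by total task-analyzed steps, as a percentage."""
    correct = sum(1 for s in step_scores if s == "correct")
    return 100 * correct / len(step_scores)
```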
Although we believed that sequence was important, we hypothesized that the students were capable of performing each of the tasks partially or completely in a “functional” manner. That is, to some extent, students could prepare an edible sandwich or soup, as well as set a table. Therefore, at the conclusion of each in vivo probe, each step of the task analysis was recorded as correct or incorrect regardless of latency, duration, and sequence errors. If the student's task outcome was edible, usable, or set up accurately, all steps of the task analysis were recorded as correct. If steps were not completed (e.g., not putting meat on the sandwich) or were completed incorrectly (e.g., entering the wrong cooking time), those steps were scored as incorrect. Collecting data on the student's functional task correctness allowed students the opportunity to demonstrate, as much as possible, what they knew regarding each task. The number of correct student responses was then divided by the total number of task-analyzed steps to calculate the percentage of functionally correct responses for each task.
Although data were collected and graphed regarding the functional correctness of each task, they were not used to determine the effectiveness of CBVI because the interest was in stimulus control within a chained task. This may raise some concern regarding the social validity of the response definitions because, after all, when making a sandwich at home, as long as one puts the bread on the plate first, the order of the steps that follow is of minor consequence. However, there are situations when performing a task in a specified topographical and sequential manner is more important than its function. For example, if an individual works at a restaurant, an employer will expect the food to be prepared and presented in a specified manner, not merely to be edible. Likewise, an individual may have to set a table according to specific guidelines depending on the type of restaurant. In addition, the primary purpose of this study was to examine the generalized effects of the computer video simulation to in vivo probes. Chained tasks establish a discriminative stimulus for subsequent steps as each step is completed; therefore, transfer of stimulus control would be incomplete if students performed one way on the computer and then engaged in a different behavior in vivo.
Student responding during computer-based behavioral rehearsal also was recorded. The computer program automatically recorded student responses based on the student's mouse movements and clicks. The computer scored a student response as independent if the student initiated and completed the specified step of the task analysis within 10 s. The percentage of independent student responses also was graphed.
If the student did not initiate or complete a specified step within 10 s, or performed it incorrectly, the level of the computer-prompt hierarchy (i.e., verbal, model, stimulus, or full) required for the student to perform the step correctly was recorded. Specific levels of computer assistance are described in the Procedures section.
During baseline, students were assessed individually during all in vivo probes. The number of steps completed correctly according to the form of the task and the function of the task were recorded. The researcher provided each student with all materials necessary to successfully complete each task, and the teacher asked the student to either “make a sandwich, make a bowl of soup, or set the table.” No additional prompts or assistance were provided.
During CBVI sessions, the teacher described the computer program, what the participants would be seeing, and what they would be doing. All CBVI sessions were conducted in a one-to-one instructional format in an area free from distractions and other students. Students were asked to put on their headphones and follow the directions on the computer. First, the student watched two brief, narrated videos of a person completing the targeted task. After viewing the videos, the student engaged in a computer-simulated activity of the task. Students were required to use a computer mouse to progress through a task analysis to successfully complete the targeted task shown in the videos. The simulated activity began with the computer providing a task direction (i.e., “make a sandwich,” “make soup,” or “set a table”) while displaying all necessary materials on the computer screen. Students clicked on specific items and moved them to the appropriate places to complete a step of the task. If the student completed the step correctly within 10 s, it was recorded as an independent response. However, if the student responded incorrectly or did not respond, the computer presented a series of hierarchical prompts, with a 10-s interval between prompts, to facilitate independent responding. Computer prompts included (a) verbal: providing an oral description of the step to be performed (e.g., “Put the bread on the plate.”); (b) model: showing a video clip of the step being performed by a model; (c) stimulus: highlighting the specific item or area that the student needed to click to perform the task; and (d) full: the computer performed the step of the task. Students engaged in the computer simulation program once daily, 3 days a week. Each session ranged from 2 to 4 min, depending on the number of prompts a student required for task completion.
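The least-to-most prompt sequence described above can be sketched as a simple loop. This is a hypothetical illustration of the logic only; the callback-based structure and names are our assumptions, not the software's actual implementation.

```python
# Hypothetical sketch of the computer's least-to-most prompt hierarchy.
# `wait_for_response` stands in for a 10-s window in which the student
# may complete the step with a mouse click; it returns True on success.

PROMPT_LEVELS = ["verbal", "model", "stimulus"]

def run_step(wait_for_response):
    """Return the level of support recorded for one task-analysis step."""
    if wait_for_response():           # initial 10-s independent window
        return "independent"
    for level in PROMPT_LEVELS:       # escalate: verbal -> model -> stimulus
        # (deliver the prompt at this level, then allow another 10 s)
        if wait_for_response():
            return level
    return "full"                     # computer performs the step itself
```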
In vivo probes
Immediately following CBVI sessions, the student's task performance was assessed in vivo with the actual materials. Students participated in in vivo probes either at the table or in the kitchen area in a one-to-one instructional format. Students then were prompted to start the task (e.g., “Make a sandwich”). As in the baseline phase, no additional prompts or assistance were provided. If an incorrect response was observed and it did not inhibit completion of other steps, the student was allowed to continue performing the task. For example, if a student was setting the table and put the napkin on top of the plate rather than to the right of it, the student was allowed to continue the task because this would not inhibit placing the silverware in the correct location. However, if an incorrect response inhibited completion of the task, the researcher completed the step while shielding the student from observing the completion of the step. For example, if the student was making soup and did not press the “time cook” button before entering the time “1:30,” the researcher interrupted the student's response, asked the student to turn around to shield their view of the microwave, and pressed the “time cook” button. Because the remaining steps of the task analysis were contingent on pressing the “time cook” button, this step was completed by the researcher. In vivo probes continued until the student performed 100% of the task-analyzed steps correctly for three consecutive sessions. After reaching criterion, the next task was introduced. In vivo probes ranged from 30 s to 2 min, depending on the pace at which the student responded. Students who were not engaged in sessions or probes worked on other activities in other areas of the room to reduce the likelihood that they would observe a student participating in in vivo probes.
Maintenance probes were collected 1 day, 2 days, 6 weeks, and 12 weeks after the student met acquisition criterion. All maintenance probes occurred in vivo, identical to the daily probes taken during intervention. Similar to the baseline phase, the student was provided with all necessary materials to complete the task and the researcher asked the student to start the task (e.g., “Make soup”). No additional prompts or assistance were provided. However, for the 12-week probe, the student was asked to perform only one of the tasks for logistical and cost reasons (e.g., setting the table). In addition, if the student did not complete the task with 100% accuracy, the student participated in another CBVI session of setting the table. Immediately following this additional training session, the student was reassessed during an in vivo probe.
A multiple-probe design across behaviors (Gast & Ledford, in press), replicated across students, was used to evaluate the effects of CBVI on the acquisition and maintenance of functional skills. The staggered introduction of the intervention within a multiple-probe design allowed demonstration of intervention effects not only within each data series but also across data series at the staggered times of intervention. As the student reached criterion (100% of task-analysis steps correct across three consecutive in vivo probes) and moved to the maintenance phase, the use of CBVI began with the next task, and so forth.
Interobserver agreement (IOA) and procedural reliability data were collected in at least 20% of sessions in each condition, for each behavior and each student. For Donnie, data were collected during 28.5% of his sessions; for Bret, 26.8%; and for Alana, 28.5%. Data were scored by one of two independent graduate research assistants, both of whom had experience collecting behavioral data and had been trained by the first author (K.A.) to score both dependent measures. These observers also scored procedural reliability for in vivo sessions. IOA on functional accuracy was 100% across students and skills; because raters were scoring a permanent product, this high level was expected. IOA on sequential accuracy across all behaviors was also high. For Donnie, the mean was 92.7% across behaviors (range = 90%–100%), with disagreements occurring only on the microwaving skill. Similarly, Bret's IOA was high across behaviors, with a mean of 95.8% (range = 94.1%–100%); disagreements occurred on sandwich making during intervention and on setting the table in baseline. IOA on Alana's performance across skills and conditions averaged 96.9% (range = 95.5%–100%); microwaving was the only skill on which observers disagreed about sequence errors.
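Point-by-point IOA of the kind reported here divides the number of steps on which the two observers agree by the total number of steps scored. A minimal sketch of that arithmetic follows; it is hypothetical, as the study does not describe its computation in code.

```python
# Hypothetical sketch of point-by-point interobserver agreement (IOA):
# agreements / total steps * 100, comparing two observers' step-by-step records.

def point_by_point_ioa(observer_a, observer_b):
    """Percentage of task-analysis steps on which both observers agree."""
    if len(observer_a) != len(observer_b):
        raise ValueError("Observers must score the same number of steps")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100 * agreements / len(observer_a)
```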
Procedural reliability was 99.1% measured across sessions, with errors occurring on preparation and presentation of all materials (the researcher did not place all silverware on the table prior to beginning sessions on two occasions and did not immediately correct errors on two occasions that would have interfered with completion of later steps in the task analysis). Broken down by skill, this was 99% procedural reliability on setting the table, 99.3% on making a sandwich (error correction), and 100% on microwaving. Prior to beginning CBVI, the software program had been extensively tested for consistency and adherence to instructional protocol by several users not affiliated directly with this research study. No variances from protocols were reported. Furthermore, no malfunctions of the software or computer were noted during any instructional session.
Overall, CBVI facilitated the acquisition of making a sandwich, making soup, and setting a table for all students. Students also maintained all tasks during 1- and 2-day maintenance probes. However, at 6- and 12-week probes, task maintenance decreased. An additional training session was provided for each student on a single skill (setting the table), and in each case, the students reacquired the skill.
Figures 1–3 show student performance for each of the three tasks. Open squares represent the percentage of steps completed correctly following the prescribed sequence of the task analysis: the primary dependent variable of interest. Closed diamonds represent the percentage of steps that students completed independently and that were functionally correct. The bar graph illustrates the percentage of steps correct during the behavioral rehearsal embedded in CBVI.
Figure 1 shows Donnie's performance on each of the three tasks. Donnie completed most or all steps for making a sandwich and setting the table in a functional manner during baseline. However, he did not perform these steps in a topographically and sequentially accurate manner according to the targeted task analysis. On the primary dependent measure, Donnie's performance on all three tasks was either descending or stable prior to intervention. His baseline mean performance for sequentially setting the table, making soup, and making a sandwich was 47.3%, 40%, and 42.86%, respectively. After intervention began, Donnie immediately acquired setting the table in the correct task-analysis sequence during computer-based and in vivo probes. Donnie required nine instructional sessions to reach initial mastery on all three tasks. He also maintained 100% steps correct during 1- and 2-day maintenance probes. However, Donnie's performance decreased to a mean of 45.44% across the three skills when probed 6 weeks later. His performance on setting the table remained low, at 50% sequentially correct, when probed at 12 weeks. However, after completing an additional computer-based session, Donnie's performance returned to 100% steps correct.
Bret's performance is shown in Figure 2. Like Donnie, Bret showed functional independence with making a sandwich and setting the table but showed low levels of accuracy in sequentially following the steps of each task and relatively low levels of functional and sequential accuracy for microwaving soup. During baseline, his mean percentage sequentially correct for making soup was 40.00%; for making a sandwich, 20.73%; and for setting the table, 34.52%. After introduction of intervention for microwaving soup, Bret mastered the skill in eight sessions and maintained the performance at the 1- and 2-day follow-ups. He mastered the sequence for sandwich making and setting the table more quickly, with immediate increases in CBVI and in vivo accuracy, and maintained these skills at the initial follow-up sessions as well. His 6-week maintenance of sequentially correct steps decreased: He completed 70% of the steps for microwaving soup, 54.55% of the steps for making a sandwich, and 50% of the steps for setting the table in the correct sequence. When retested at the 12-week follow-up for setting the table, he correctly completed 58.33% of the steps; then, after one computer session, he returned to 100% sequential accuracy.
Alana's performance, shown in Figure 3, was similar to that of her classmates in terms of baseline responding: relatively high functional accuracy and lower levels of sequential accuracy across skills. In baseline, the means for her sequential accuracy for making soup, making a sandwich, and setting the table were 62.50%, 20.83%, and 44.36%, respectively. After introduction of intervention, she mastered making soup in six sessions and maintained 100% sequentially correct at the 1- and 2-day follow-ups. Similarly, she mastered setting the table in four sessions of CBVI and making a sandwich in three sessions; she maintained 100% sequentially correct at 1- and 2-day follow-ups for both skills. Like her classmates, her sequential accuracy deteriorated at the 6-week follow-up: 50% correct for making soup, 50% correct for setting the table, and 63.64% correct for making a sandwich. At the 12-week follow-up, she still performed at low levels for setting the table (50% sequential accuracy), and after one computer-based instruction session, her performance returned to 100% sequentially correct.
All participants mastered the performance of the three skills after the use of CBVI. These data add to findings that show the potential for using CBVI to teach functional skills in simulation (e.g., Ayres et al., 2009; Hutcherson, Langone, Ayres, & Clees, 2004; Tam, Man, Chan, Sze, & Wong, 2005). In addition, the data suggest the potential for using CBVI as a review strategy for previously mastered skills when the performance of those skills has deteriorated. The data indicated that CBVI may be a viable classroom instructional strategy when teachers do not have sufficient resources for practicing functional skills (e.g., insufficient budget to purchase food for practicing meals) or when teachers need to allocate instructional time for other purposes and are not available to provide direct instruction for every aspect of the curriculum that students need. If students are able to master skills in a simulation that does not tax teacher time or resources, CBVI may allow teachers to dedicate greater time to other activities and classroom funds to other needs.
Because the software presented video models followed by an interactive component in which students received feedback and prompting from the computer on their responding, it was not possible to identify which parts of the intervention were most important for skill acquisition. However, this study's results differ from most of the video modeling literature because the videos depicted skills being performed in vivo and the student response was not the typical imitative response seen in most video modeling studies, in which a student views a model and is then asked to do what he or she saw (see Ayres & Langone, 2005, for examples).
As with any single-subject design, the small pool of participants limits the external validity of the investigation. Building external validity requires systematic replication, and this study is the second to specifically evaluate aspects of the “I Can! Daily Living and Community Skills” (Sandbox Learning Company, n.d.) software. Aside from the small number of inter- and intrasubject replications in the current investigation, certain characteristics of the participants are important to highlight. In this study, as in Ayres et al. (2009), all participants had experience using computers for various educational tasks and leisure activities. Future investigations of using software to teach functional skills could include novice computer users. Students who are more comfortable with traditional instruction and less skilled on a computer might not perform as well as these participants did. Students' different learning strengths and preferences could influence the effectiveness of any instructional program; if a student has limited computer skills, CBVI might not be a good instructional choice for that individual.
Function and sequence
Several issues concerning the design of this study warrant cautious interpretation. First, the study was designed to evaluate the degree of stimulus control that might be achieved in a computer environment and then generalized to a natural setting. With the instruction of a chained task, stimulus control is typically thought of as occurring when the last step of a chain (or the stimulus conditions achieved by the last step) becomes a conditioned reinforcer for the previous step in the chain. Then, the second-to-last step becomes a conditioned reinforcer for the third-to-last step and an establishing operation or discriminative stimulus for the last step. This conceptualization highlights the importance of chained tasks, at least initially, being taught in a strict sequence even if that is not the only sequence in which the task can be done. In this study, all students could perform some or all of the tasks in baseline in a functionally correct manner. For example, they would make a sandwich and put all of the ingredients between the pieces of bread. This may be accurate and acceptable for making a sandwich at home, but not if one were making a sandwich at a restaurant, where there is usually a specific order. Regardless, the sequence is important in a practical sense as well as in a scientific sense for evaluating transfer of stimulus control. Because the purpose of the study was to investigate the instruction of a chained task, accurate responding in sequence is critical to the evaluation of stimulus control. The following question must be asked when examining the data: If the students had been shown or told the correct sequence prior to baseline, would they have performed the skill correctly? It is possible that if they had known what the criteria were, they would have responded accordingly. However, they learned the sequence only after being shown it on the computer and practicing it.
We designed this study to remediate a weakness of a previous study that did not concurrently monitor CBVI performance and generalization to in vivo settings. As mentioned previously, one of the primary ways this study differs from Ayres et al. (2009) is that students' in vivo responses were measured on a daily basis while they were engaged in intervention. Therefore, after a student began instruction on a skill, he or she would work through the computer simulation and then immediately take an in vivo probe. CBVI sessions were planned so that students could move quickly from the computer to the testing environment.
In many cases, CBVI might be selected precisely because concurrent in vivo monitoring is not feasible. From an experimental standpoint, measuring performance in both environments simultaneously is important because it permits immediate analysis of generalization. The drawback is that students may learn the skill simply by practicing it in vivo during probes. In addition, with the design of this study, students completed in vivo probes immediately after CBVI. This could have artificially inflated their in vivo performance and may be inseparable from the intervention itself. In other words, the intervention may be best described as CBVI plus in vivo practice.
Another limitation closely related to testing is the environment in which the testing took place. The setting used for in vivo probes is not representative of the wide range of real-life settings in which one would hope students would be able to perform the skills. The in vivo setting did not take into account the set-up and environmental features typically associated with food-preparation environments (e.g., sinks, countertops). The absence of these features limits what one can conclude about the generality of the results. Daily probes in a home or restaurant environment where food would typically be prepared were not logistically possible; however, probing, at least in a pre–post fashion, in a typical environment (e.g., at the students' homes) would have strengthened this study. Therefore, interpretation of generalization in this study must be restricted to generalization from a computer-based task to a “live” task.
Target skills were selected in advance of the study because of the time required to program the software. Students were then recruited for whom these skills were either a part of their IEP or identified by their parents and caregivers as important life skills to learn. Instruction of these skills progressed as if the skills were indeed functional for the participants. However, after the deterioration in performance measured at the 6-week maintenance probe, parents and teachers were asked whether the students had used any of the three skills at school or at home. None of the students had used the skills in the previous 6 weeks. This raises the question of whether these skills were truly functional for these students. Although the skills taught sound useful, they lack function if students do not use them. From an instructional standpoint, we waste time if we teach skills that students will not actually use, regardless of our perception of their importance.
In terms of research and practice, these findings extend the growing literature base concerning the use of CBVI as a tool for teaching life skills in simulation. A unique contribution of this study was the demonstration of CBVI as a review or “booster shot” to assist students with the reacquisition of skills that had not been used for a long period of time. This conclusion must be taken with caution, however, because it is based only on a single demonstration of skill recovery by each student. The implication for practice may be that if students are taught a wide range of life skills, some of which might be immediately useful and some of which may be used later in a student's life, CBVI may be able to facilitate retraining without additional support from a teacher or, in the case of an employment setting, a supervisor. In practice, though, to arrive at a point where CBVI can be used by teachers, job coaches, and parents, the software must become more widely available. As with anything in a market system, demand or demonstrated need (and, therefore, potential buyers) will have to be evidenced before software companies invest the time and money to develop quality products. With more research pointing to the potential of this type of instructional tool, perhaps consumers will push for further development.
The development of software tools will require additional research. The results of this study add to the literature base showing the emerging potential of CBVI. Beyond the design and creation of individual products, researchers must advance their means of evaluating efficacy and generalization. As software interfaces become easier to navigate, testing students in simulation will provide information about skill acquisition in a single dimension. Evaluating the generalization of that skill concurrently is needed but creates potential facilitative testing effects. Regardless, researchers need to continue these initial stages of investigation to identify the range and types of behaviors for which CBVI might be useful. From there, work to find the boundaries of appropriate and effective use will help inform practitioners how best to use existing technologies, as well as potentially motivate software designers to create products that directly meet consumer needs.
Editor-in-Charge: Steven J. Taylor