ABSTRACT

At the beginning of a Java computer programming course, nine students in an undergraduate class and nine students in a graduate class completed a web-based programmed instruction tutoring system that taught a simple computer program. All students exited the tutor with an identical level of skill, at least as determined by the tutor's required terminal performance, which involved writing the program and passing multiple-choice tests on the program's elements. Before entering and after exiting the tutor, students completed a test of rule-based performance that required applying general programming principles to solve novel problems. In both classes, the number of correct rule answers observed before entering the tutor did not predict the number of learn units that students subsequently used to complete the tutor. However, learn unit frequency was inversely related to post-tutor rule-test performance: as the number of learn units used in the tutor increased over students, the number of correct answers on the post-tutor rule test decreased. Because time to complete the tutor was unrelated to learn unit frequency, these data suggest that high-achieving students may have generated autoclitic learn units while using the tutor. Interteaching, as an occasion for generating and sharing interlocking learn units, may be an effective complement to programmed instruction in promoting optimal learning in all students. (Contains 2 tables and 4 figures.)
Programmed Instruction for Teaching Java: Consideration of Learn Unit Frequency and Rule-Test Performance
<cn> <bold>By: Henry H. Emurian</bold> </cn>

Programmed instruction (PI) is an effective tool for teaching students in information systems to write and to understand a Java computer program as a first technical exercise in a computer programming course<anchor name="b-fn1"></anchor><sups>i</sups>. A web-based PI tutoring system to accomplish that objective was presented in Emurian, Hu, Wang, and Durham (2000), and behavior principles supporting the design and implementation of the system were described by Emurian, Wang, and Durham (2003) and Emurian and Durham (2003). Since the initial prototype tutoring system was developed, the implementation and assessments have evolved over several updates, ranging from analyses of learning performance (Emurian, 2004) to considerations of the extent to which skill gained by completing the tutor fostered generalizable or rule-governed performances (Emurian, 2005, 2006a). More recently, interteaching (Boyce & Hineline, 2002; Saville, Zinn, Neef, Norman, & Ferreri, 2006) was shown to support and confirm the students’ knowledge acquired initially from using the programmed instruction tutor (Emurian, 2006b, in press).
An obviously important consideration in the design of a programmed instruction tutoring system is the extent to which the knowledge or skill gained is generalizable to solve problems not explicitly taught or encountered with the tutor itself. Behavior analysts working in the areas of programmed instruction (
A tacit assumption in the design of programmed instruction is that all learners who exit the system do so having achieved equivalent skill. A framework for interpreting individual differences in the learning process is given by the
The importance of this framework lies in its assumed predictability of long-term outcomes. Although our previous work showed increases over pre-tutor baselines in the number of correct rule-test answers following completion of a PI tutoring system (Emurian, 2005, 2006a, 2006b), no attempt was made to relate such tests of rule-governed performance to the frequency of learn units that students encountered before successfully completing the tutor. If generalizable knowledge is equivalent and if individual differences have been overcome by repetition of learn units until mastery, there should be no differences in rule-governed performances among students upon completion of the tutoring system, and the frequency of learn units should be unrelated to subsequent performance. This paper, then, presents an evaluation of learn unit frequency in relationship to students’ rule-test performance observed (1) prior to using the programmed instruction tutoring system (baseline) and (2) after successfully completing the tutoring system.
METHOD
<h31 id="bar-8-1-70-d50e137">Subjects</h31>
Subjects were students in two classes of a course entitled
For the Fall 2005 undergraduate class, nine of the 14 students produced usable records of tutor performance. There were six male and three female students (median age = 23 yrs, range = 19 to 28 yrs; median number of programming courses taken = 3, range = 2 to 5 courses). For the Summer 2006 graduate class, nine of the 13 students produced usable records of tutor performance. There were six male and three female students (median age = 26 yrs, range = 23 to 33 yrs; median number of programming courses taken = 3, range = 1 to 15 courses). Data were automatically recorded on a server at the transition points in the tutor, and occasional technical problems with the server would prevent all records from being recorded for a particular student. That accounts for the number of data records being fewer than the number of students in a class.
<h31 id="bar-8-1-70-d50e145">Material</h31>

At the first meeting of both classes, students completed a pre-tutor questionnaire (baseline) that included rule-based questions. Appendix A presents the 14 rule questions administered to the Fall 2005 students, and Appendix B presents the 12 rule questions administered to the Summer 2006 students. The questionnaire changed over classes in relationship to updates to the Java tutor. Other questionnaires were also administered, including demographic items and software self-efficacy (see Emurian, 2005). Following completion of the Java tutor, students in both classes repeated the rule-based questions in a post-tutor assessment.
The programmed instruction tutoring system taught a simple Java applet that would display a text string, as a JLabel object, within a browser on the World Wide Web<anchor name="b-fn2"></anchor><sups>ii</sups>.
Table 1 presents the Java code for the tutor version presented to the Fall 2005 students. The program was arbitrarily organized into ten lines of code and 34 items of code. Table 2 presents the Java code for the tutor version presented to the Summer 2006 students; that program was arbitrarily organized into 11 lines of code and 37 items of code. In each table, every item occupies a cell, and PI learn units were based upon cells and lines. The symbols in the tables may be cryptic on initial encounter, but the objective of the tutor was to teach students to understand and use the symbols in the tables, even if a student had never before written a Java computer program.
<anchor name="tbl1"></anchor>

<anchor name="tbl2"></anchor>
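The exact programs appear in Tables 1 and 2. As an orientation for readers unfamiliar with the notation, the following is a minimal sketch of the kind of program the tutor teaches: an applet that displays one text string as a JLabel. The class name and label text here are illustrative placeholders, not the tutor's own code.

```java
import javax.swing.JApplet;
import javax.swing.JLabel;

// Illustrative sketch only: a minimal applet of the kind taught by the tutor.
// The class name and label text are placeholders, not the tutor's actual code.
public class HelloApplet extends JApplet {

    // Builds the label that the applet displays; factored out so the
    // text can be inspected without a browser or applet viewer.
    static JLabel makeLabel() {
        return new JLabel("My first Java program");
    }

    // Called by the browser's Java plug-in when the applet loads.
    @Override
    public void init() {
        getContentPane().add(makeLabel());
    }
}
```

Applets of this kind were standard when the tutor was written; the applet API is deprecated in current Java releases, where the same JLabel would instead be placed in a stand-alone JFrame.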
The tutor and the open source code are freely available on the web, as referenced above. Running the tutor is the best way to understand its components and operation. In brief, the web-based Java tutor consists of the following eight stages: (1) introduction and example of the program running in a browser (learn units = 1), (2) learning to copy an item of code (learn units = 34 or 37), (3) learning to discriminate an item of code in a list (learn units = 34 or 37), (4) learning the semantics of an item of code (learn units = 34 or 37) and learning the syntax by typing the item from recall (learn units = 34 or 37), (5) learning to copy a line of code (learn units = 10 or 11), (6) learning to discriminate a line of code in a list (learn units = 10 or 11), (7) learning the semantics of a line of code (learn units = 10 or 11) and learning the syntax by typing the line from recall (learn units = 10 or 11), and (8) writing the entire program from recall (learn units = 1). Thus, for the Fall 2005 class, the minimum number of learn units to complete the tutor was 178; for the Summer 2006 class, the minimum number was 194. The multiple-choice tests for items and lines of code, which are embedded in the tutor, had five answer choices. An incorrect answer on an item test produced a 5-sec delay or “time-out” in the tutor's interaction with the learner. A correct answer on an item test produced a confirmation window stating a general rule associated with the correct answer or an elaboration of the explanation of the meaning of the item. For both tutor versions, the lines stages had no delay interval or confirmation window. Experience suggested that most students in our courses could complete the tutor within two to three hrs. The tutor transitioned automatically between stages, and students were able to take breaks between and within stages. The instructions, however, encouraged students to complete each stage before taking a break.
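The stage totals above reduce to simple arithmetic: one introductory unit, four passes over every item of code (copy, discriminate, semantics, syntax), four passes over every line of code, and one final program unit. A quick check of the minimums reported in the text (the method name is ours):

```java
public class LearnUnits {

    // Minimum learn units to complete the tutor:
    // 1 (introduction) + 4 stages x items + 4 stages x lines + 1 (final program).
    static int minimum(int items, int lines) {
        return 1 + 4 * items + 4 * lines + 1;
    }

    public static void main(String[] args) {
        System.out.println(minimum(34, 10)); // Fall 2005 version: 178
        System.out.println(minimum(37, 11)); // Summer 2006 version: 194
    }
}
```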
At the completion of each tutor stage, a record of performance, which included stage duration and performance errors, was transmitted to a database on the server. The only identifier was the student's email username, similar to what appears in any email transmission on the Internet. Students were informed of this record's transmission, and the instructional protocol was exempt from informed consent. Network interruptions sometimes prevented a record from being written, and this paper includes only those students who had records for all eight stages in the tutor.
<h31 id="bar-8-1-70-d50e177">Procedure</h31>

For the Fall 2005 class, the first meeting provided orientation to the course. The pre-tutor questionnaire was administered and collected. As homework, students were instructed to complete the Java tutor before the next class meeting. At the completion of the tutor, students downloaded the post-tutor questionnaire from the course Blackboard site, completed it electronically, and returned it to the instructor by email attachment. For the Summer 2006 class, the procedure was similar, but the expectation was that all students could complete the tutor within the three-hr class period. Thus, the post-tutor questionnaire was administered and collected as students completed the tutor in class. Only one student was unable to complete the tutor during class, and that student was not included in the data presented here because the records were not obtained.
RESULTS
Figure 1 presents scatterplots of total learn units and total correct rule-test answers for the pre-tutor baseline and post-tutor assessments for each of the students in the Fall 2005 class. Regression lines and Pearson correlation parameters are presented on the figure. The figure shows graphically that correct rule answers increased from pre-tutor baseline to post-tutor assessment for all nine students. The figure also shows that pre-tutor correct rule answers were not demonstrably related to subsequent learn unit frequency, and the test of the relationship failed to provide evidence of an orderly relationship. In contrast, the figure shows graphically that total learn units encountered during the tutor were inversely related to subsequent post-tutor correct rule answers, and the test of the relationship was significant. Students who required comparatively fewer learn units during the tutor achieved a higher number of post-tutor correct rule answers in comparison to students who required comparatively more learn units to complete the tutor.
<anchor name="fig1"></anchor>
Figure 2 presents the corresponding scatterplots of total learn units and total correct rule-test answers for the pre-tutor baseline and post-tutor assessments for each of the students in the Summer 2006 class, with regression lines and Pearson correlation parameters. Correct rule answers increased from pre-tutor baseline to post-tutor assessment for eight of the nine students; the exception was a student who answered 11 rule questions correctly on both occasions. As in the Fall 2005 class, pre-tutor correct rule answers were not demonstrably related to subsequent learn unit frequency, whereas total learn units encountered during the tutor were significantly and inversely related to post-tutor correct rule answers: students who required comparatively fewer learn units achieved a higher number of post-tutor correct rule answers than students who required comparatively more learn units to complete the tutor.
<anchor name="fig2"></anchor>
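The scatterplot analyses in Figures 1 and 2 rest on the Pearson product-moment correlation. For readers who wish to run the same analysis on their own tutor logs, a minimal implementation follows; the data in main are invented placeholders, not the study's data.

```java
public class Pearson {

    // Pearson product-moment correlation coefficient r for paired samples x and y.
    static double r(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        // Numerator and denominator of r, each scaled by n.
        double cov = n * sxy - sx * sy;
        double vx = n * sxx - sx * sx;
        double vy = n * syy - sy * sy;
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        // Placeholder data: total learn units vs. post-tutor correct rule answers.
        double[] learnUnits = {180, 195, 210, 240, 280};
        double[] correct = {14, 13, 12, 10, 8};
        System.out.println(r(learnUnits, correct)); // negative r: inverse relationship
    }
}
```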
Figure 3 presents scatterplots of total learn units to complete the tutor and total time to complete the tutor for each of the students in the Fall 2005 class. The shortest time to complete the tutor was 60.5 min, and the longest time was 543.2 min. A Pearson correlation did not support a relationship between time and total learn units.
<anchor name="fig3"></anchor>
Figure 4 presents scatterplots of total learn units to complete the tutor and total time to complete the tutor for each of the students in the Summer 2006 class. The shortest time to complete the tutor was 60.0 min, and the longest time was 143.1 min. A Pearson correlation did not support a relationship between time and total learn units.
<anchor name="fig4"></anchor>
DISCUSSION
The relationship between learn unit frequency and rule-test performance was orderly for students over two different classes. As the number of learn units to complete the tutor increased over students, the number of correct answers produced on the rule test decreased. However, the data did not support a similar relationship between baseline rule-test performance and subsequent learn unit frequency or between time in the tutor and learn unit frequency to complete the tutor. These effects were observed for undergraduate and graduate students, for two versions of the Java program, for two versions of the rule test, and for two ways to complete the tutor: (1) homework, which favored distributed learning, and (2) class work, which supported massed learning.
The similarity in outcomes shows the reliability of the behavioral processes under conditions of systematic replication (Sidman, 1960), thereby demonstrating the generality of the findings.
All students in the Fall 2005 class and eight of the nine students in the Summer 2006 class showed improvement in rule-test performance between the pre-tutor baseline and the post-tutor assessment. However, the knowledge transfer from the tutor to the rule test (
The observation that total correct rule answers on the post-tutor assessment were generally lower for students who used a relatively greater number of learn units suggests that equivalent “knowledge” was not achieved by all students at the completion of the tutor. When multiple-choice tests are used to assess learning, as was the case in the present tutor, the opportunity for repetition may not always occasion the “studying behavior” essential to apply a principle to the selection of the correct alternative on a test. Although the term “learn unit” was applied to the tutor components, the failure of transfer to occur for all students indicates that the components did not always satisfy the definition of a learn unit for all students, since the interaction with the component events apparently did not change all students’ behavior in the direction intended. As stated by Greer and McDonough (1999), “The learn unit is present when student learning occurs in teaching interactions and is absent when student learning does not occur in teaching interactions” (p. 6). Although 17 of the 18 students in this study showed improvement in rule-test performance over pre-tutor baseline, this outcome does not support the conclusion that those same 17 students all encountered identical learn units. It was possible, for example, for a student to repeat a tutoring system component, selecting a different answer on the multiple-choice question each time, until the correct answer was selected and a transition occurred.
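The guessing possibility raised above can be made concrete. With five answer choices, a student who simply tries a different answer on each repetition (guessing without replacement, an assumption adopted here for illustration) reaches the correct choice in three attempts on average, without any contact with the rule being tested:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class GuessingModel {

    // Expected attempts to find the correct choice among n options when
    // guessing without replacement: the correct option is equally likely to
    // occupy any position 1..n, so the mean position is (n + 1) / 2.
    static double expectedAttempts(int choices) {
        return (choices + 1) / 2.0;
    }

    // Monte Carlo check: shuffle the options and count attempts until the
    // (arbitrarily designated) correct one is reached.
    static double simulate(int choices, int trials, long seed) {
        Random rng = new Random(seed);
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < choices; i++) {
            order.add(i);
        }
        long totalAttempts = 0;
        for (int t = 0; t < trials; t++) {
            Collections.shuffle(order, rng);
            totalAttempts += order.indexOf(0) + 1; // option 0 plays the correct answer
        }
        return (double) totalAttempts / trials;
    }

    public static void main(String[] args) {
        System.out.println(expectedAttempts(5));       // 3.0
        System.out.println(simulate(5, 100_000, 42L)); // close to 3.0
    }
}
```

Such unguided repetition inflates a student's learn unit count without the behavior change that defines a learn unit, which is consistent with the inverse relationship reported above.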
An automated instructional system is challenged to provide tests of learning complex rules, especially when the design of learn units requires correct performance for progress in the system. In the present tutor, for example, the display of textual descriptions and explanations of the semantics of Java code is followed by a multiple-choice test. A correct or corrective choice operationalizes the learn unit. The tactic is to present learn units until evidence of hierarchical relational networks (Hayes, Fox, Gifford, Wilson, Barnes-Holmes, & Healy, 2001) emerges and can be documented. Although a student may select the correct choice on a multiple-choice test, the challenge is to make certain that the choice is under the functional control of the rule and is evidence of the existence of equivalence relations. How a reader may come to such verbal expertise is addressed by behavior analysts in terms of milestones such as verbal mediation for problem solving (Greer & Keohane, 2006). Once such a milestone has been reached, presumably a reader would benefit from exposure to text that has been designed to facilitate the acquisition of rule-governed performances. One interpretation of these data, then, is that the textual information contained within the tutor frames did not adequately support the solution of rule-based problems for all students.
In that latter regard, the textual frames in the Java tutor follow many of the guidelines suggested by Mayer (2002) to promote meaningful learning: advance organizers; signaling; adjunct questions; immediate feedback for performance accuracy; tested understanding of facts, concepts, and rules; and sequential structure building as a terminal performance to organize the learning process within a single conceptual objective – the student's production and understanding of a Java applet. Moreover, that students may acquire skill to generate their own learn units is the rationale for much of the work in self-regulated learning (Kauffman, 2004) and reflection (Masui & DeCorte, 2005). The conditions for the manifestation of such repertoires, however, include a history of reinforcement necessary to sustain responding in relationship to the textual material under study, otherwise defined as a student's “interest” in the subject matter (Dornisch & Sperling, 2006). How learners might integrate motivational factors during learning with transfer performance has recently been addressed in the educational psychology literature (Pugh & Bergin, 2006).
The fact that insufficient evidence existed to support a relationship between learn units required to complete the tutor and time in the tutor suggests that the students’ behavioral interactions with the textual material differed, a difference that was manifested in the post-tutor rule-test performances. Those differences plausibly relate to generative learn units in excess of the ones present in the programmed instruction tutoring system. How to foster a student's autoclitic learn units, occasioned by reading and by the questioner (speaker) and the answerer (listener) being the solitary “thinker” (Catania, 1998; Skinner, 1957), is a challenge for behavior analysis. If teachers can be taught to modify their behavior to produce learn units that will change the behavior of their students (
Learning is a process that includes the actions of study and practice (Swezey & Llaneras, 1997), sometimes for years (Ericsson & Lehmann, 1996), and the assessment of effectiveness as a change in the learner (Skinner, 1953, 1954), a change that might be observed and documented by others or even by the learner as a self-evaluating authority (Eilam & Aharon, 2003; Pintrich, 2003; Zimmerman, 1994). Behavior analysts have proposed and evaluated instructional techniques to overcome individual differences
FOOTNOTES
<anchor name="fn1"></anchor>
<sups>i</sups> The programmed instruction tutoring systems are freely available on the web, together with instructional course material, interteaching reports, and free open source software for the tutor. Each stage of the tutor is separately accessible, and the best way to understand the tutor's design is to run it in a browser. To run the tutor, your browser must be enabled with Java JRE 5, which may be downloaded from Sun Microsystems, Inc. How to do that is explained within the online material that supports the Java tutor, as given here:
<sups>ii</sups> The link to run the program is given in the tutor instructions, and it is given here:
REFERENCES
<anchor name="c1"></anchor>Azevedo, R., & Cromley, J. G. (2004). Does training on self-regulated learning facilitate students’ learning with hypermedia?
Barnett, S. M., & Ceci, S. J. (2002). When and where do we apply what we learn? A taxonomy for far transfer.
Boyce, T. E., & Hineline, P. N. (2002). Interteaching: A strategy for enhancing the user-friendliness of behavioral arrangements in the college classroom.
Buzhardt, J., & Semb, G. B. (2002). Item-by-item versus end-of-test feedback in a computer-based PSI course.
Catania, A. C. (1998). The taxonomy of verbal behavior. In K. A. Lattal & M. Perone (Eds.),
Chiesa, M., & Robertson, A. (2000). Precision teaching and fluency training: Making math easier for pupils and teachers.
Dornisch, M. M., & Sperling, R. A. (2006). Facilitating learning from technology-enhanced text: Effects of prompted elaborative interrogation.
Eilam, B., & Aharon, I. (2003). Student’s planning in the process of self-regulated learning.
Emurian, H. H. (2001). The consequences of e-Learning. (Editorial),
Emurian, H. H. (2004). A programmed instruction tutoring system for Java: Consideration of learning performance and software self-efficacy.
Emurian, H. H. (2005). Web-based programmed instruction: Evidence of rule-governed learning.
Emurian, H. H. (2006a). A web-based tutor for Java™: Evidence of meaningful learning.
Emurian, H. H. (2006b). Assessing the effectiveness of programmed instruction and collaborative peer tutoring in teaching Java™.
Emurian, H. H., Holden, H. K., & Abarbanel, R. A. (in press). Managing programmed instruction and collaborative peer tutoring in the classroom: Applications in teaching Java™.
Emurian, H. H., & Durham, A. G. (2003). Computer-based tutoring systems: A behavioral approach. In J. A. Jacko & A. Sears (Eds.),
Emurian, H. H., Wang, J., & Durham, A. G. (2003). Analysis of learner performance on a tutoring system for Java. In T. McGill (Ed.),
Emurian, H. H., Hu, X., Wang, J., & Durham, A. G. (2000). Learning Java: A programmed instruction approach using applets.
Epting, L. K., & Critchfield, T. S. (2006). Self-editing: On the relation between behavioral and psycholinguistic approaches.
Ericsson, K. A., & Lehmann, A. C. (1996). Expert and exceptional performance: Evidence of maximal adaptation to task constraints.
Ferster, C. B., & Perrott, M. C. (1968).
Greer, R. D. (2002).
Greer, R. D., & Keohane, D.-D. (2006). The evolution of verbal behavior in children.
Greer, R. D., & McDonough, S. H. (1999). Is the learn unit a fundamental measure of pedagogy?
Halpern, D. F., & Hakel, M. F. (2003). Applying the science of learning to the university and beyond: Teaching for long-term retention and transfer.
Hayes, S. C. (1989).
Hayes, S. C., Fox, E., Gifford, E. V., Wilson, K. G., Barnes-Holmes, D., & Healy, O. (2001). Derived relational responding as learned behavior. In S. C. Hayes, D. Barnes-Holmes, & B. Roche (Eds.),
Kauffman, D. F. (2004). Self-regulated learning in web-based environments: Instructional tools designed to facilitate cognitive strategy use, metacognitive processing, and motivational beliefs.
Keller, F. S. (1968). Good-bye, teacher...
Keohane, D.-D., & Greer, R. D. (2005). Teachers’ use of a verbally governed algorithm and student learning.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching.
Kudadjie-Gyamfi, E., & Rachlin, H. (2002). Rule-governed versus contingency-governed behavior in a self-control task: Effects of changes in contingencies.
Masui, C., & DeCorte, E. (2005). Learning to reflect and to attribute constructively as basic components of self-regulated learning.
Mathan, S. A., & Koedinger, K. R. (2005). Fostering the intelligent novice: Learning from errors with metacognitive tutoring.
Mayer, R. E. (2002). The Promise of Educational Psychology. Volume
Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in learning and teaching contexts.
Pugh, K. J., & Bergin, D. A. (2006). Motivational influences on transfer.
Ross, D. E., Singer-Dudek, J., & Greer, R. D. (2005). The teacher performance rate and accuracy scale (TPRA): Training as evaluation.
Saville, B. K., Zinn, T. E., Neef, N. A., Norman, R. V., & Ferreri, S. J. (2006). A comparison of interteaching and lecture in the college classroom.
Sidman, M. (1960).
Skinner, B. F. (1953).
Skinner, B. F. (1954). The science of learning and the art of teaching.
Skinner, B. F. (1957).
Swezey, R. W., & Llaneras, R. E. (1997). Models in training and instruction. In G. Salvendy (Ed.),
Tudor, R. M., & Bostow, D. E. (1991). Computer-programmed instruction: The relation of required interaction to practical application.
Watkins, C. L., & Slocum, T. A. (2004). The components of direct instruction.
Zimmerman, B. J. (1994). Dimensions of academic self-regulation: A conceptual framework for education. In D. H. Schunk & B. J. Zimmerman (Eds.),