InfoBrief
NSF 03-301 | November 2002

School Mathematics and Science Programs Benefit From Instructional Technology

by James A. Kulik [1]

Instructional developers have been working for four decades to improve mathematics and science education with computer technology, and according to a review of controlled evaluations of instructional technology in elementary and secondary schools, they have made significant contributions to student achievement during this time. The review found that most evaluation studies reported significant positive effects of instructional technology on mathematics and science learning, but not all technological approaches appeared to be equally effective.

The forthcoming review, Effects of Using Instructional Technology in Elementary and Secondary Schools: What Controlled Evaluation Studies Say, discusses findings on mathematics and science from 36 controlled evaluations published since 1990, as well as findings from earlier reviews of controlled evaluations and less formal studies. The review did not cover theoretical works, case studies, policy or cost analyses, or other studies that investigated learning processes or social dimensions of technology without measuring learning outcomes.

The 36 evaluation studies examined four types of computer applications in mathematics and science: (a) integrated learning systems in mathematics; (b) computer tutorials in science; (c) computer simulations in science; and (d) microcomputer-based laboratories. The findings for each are discussed below.

A Note on Method

In the discussion, effect-size measures are used to summarize findings. An effect size specifies the number of standard deviation units separating the outcome scores of an experimental and a control group. Effect sizes are positive when the experimental group outperforms the control group and negative when the control group comes out on top. Slavin, an expert in educational evaluation, considers effect sizes above 0.25 large enough to be educationally significant.[2] Cohen, a pioneer in the use of effect sizes in the social sciences, classifies effect sizes of around 0.2 as small, 0.5 as moderate, and 0.8 as large.[3] More information about effect size can be found in Effects of Using Instructional Technology in Elementary and Secondary Schools.
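To make these statistics concrete, the short Python sketch below computes an effect size from two sets of scores and converts it to the percentile equivalents quoted throughout this InfoBrief. It is only an illustration: the function names are ours, and the percentile conversion assumes normally distributed scores.

```python
import statistics
from math import erf, sqrt

def effect_size(experimental, control):
    """Standardized mean difference: the gap between group means
    expressed in pooled standard deviation units."""
    n_e, n_c = len(experimental), len(control)
    mean_e, mean_c = statistics.mean(experimental), statistics.mean(control)
    var_e, var_c = statistics.variance(experimental), statistics.variance(control)
    pooled_sd = sqrt(((n_e - 1) * var_e + (n_c - 1) * var_c) / (n_e + n_c - 2))
    return (mean_e - mean_c) / pooled_sd

def percentile_equivalent(d):
    """Percentile of the control distribution reached by the average
    experimental student, assuming normally distributed scores."""
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))

# An effect size of 0.38 moves the average student from the 50th to
# roughly the 65th percentile, matching the ILS median reported below.
print(round(percentile_equivalent(0.38)))  # 65
```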

Integrated Learning Systems in Mathematics

The term integrated learning system (ILS) refers to software programs that provide tutorial instruction at several grade levels and keep extensive records of student progress on networked computer systems. ILSs commonly focus on instruction in the basic skill areas of reading and mathematics. The Computer Curriculum Corporation and Compass (formerly Jostens Learning Corporation) are among the best known commercial sources for these systems.

Patrick Suppes pioneered the development of this type of computer program during the early 1960s at Stanford University. The programs developed by Suppes and his colleagues presented drill-and-practice and tutorial lessons, required students to respond frequently during the lessons, provided feedback to students on their responses, and kept detailed records of student performance. In the late 1960s Suppes helped establish the Computer Curriculum Corporation to market this type of software to schools, and other instructional developers later followed Suppes' lead and began selling software built on the same instructional model. During the late 1980s and early 1990s educational experts began referring to these instructional programs as integrated learning systems.[4]
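As an illustration of the record keeping that distinguishes ILSs, the sketch below shows one way a per-student progress record might be organized. The field names and the mastery rule are illustrative assumptions, not features of any vendor's system.

```python
from dataclasses import dataclass, field

@dataclass
class LessonResult:
    lesson_id: str   # e.g., "math-grade4-fractions-03" (hypothetical scheme)
    responses: int   # items the student attempted in the lesson
    correct: int     # items answered correctly
    minutes: float   # time spent on the lesson

@dataclass
class StudentRecord:
    student_id: str
    grade_level: int
    history: list = field(default_factory=list)  # list of LessonResult

    def mastery(self, topic_prefix):
        """Fraction correct across all lessons on a topic, which an ILS
        might consult when deciding whether to advance the student."""
        done = [r for r in self.history if r.lesson_id.startswith(topic_prefix)]
        total = sum(r.responses for r in done)
        return sum(r.correct for r in done) / total if total else 0.0
```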

Reviewed in Effects of Using Instructional Technology are 16 reports published since 1990 on controlled evaluations of ILS effects in mathematics. The studies, which examined ILS programs from seven different vendors, were carried out in elementary and middle-school grades in the United States and abroad. Sample sizes in the studies ranged from 52 students in the smallest study to more than 1000 students in the largest. Duration of ILS instruction ranged from 71 days to five years. In seven of the studies, students received ILS instruction in mathematics only; in the remaining nine studies, students received ILS instruction in both mathematics and reading.

Each of the 16 studies found that mathematics test scores were at least slightly higher in the group taught with an ILS. In nine of the studies the test-score superiority of the ILS group was large enough to be considered both statistically significant and educationally meaningful. The median ILS effect in the 16 studies was to increase mathematics test scores by 0.38 standard deviations, or from the 50th to the 65th percentile.

Becker's 1992 review of studies of ILS effectiveness reported similar results.[5] Becker's report reviewed results from 32 early studies of ILS effectiveness in basic skills instruction. Eleven of the studies presented mathematics results separately from other findings. The median effect on mathematics achievement in these 11 studies was an increase in test scores of 0.40 standard deviations. An effect size of 0.40 is equivalent to an increase in test scores from the 50th to the 66th percentile. This median is virtually identical to the median ILS effect on mathematics tests in recent evaluations.

Research conducted during the 1990s suggests that in typical implementations students spend only 15-30% of the recommended amount of time on ILS instruction and that ILS instruction is usually treated as a curricular add-on, like band or art, rather than an intrinsic part of the curriculum.[6] Evaluations of ILSs typically focus on such incomplete implementations rather than on ideal ones. Evaluation results might have been even better if evaluators had focused on model implementations rather than on typical ones.

In addition, Effects of Using Instructional Technology reported that ILS effects on mathematics tests were higher in the seven studies in which the ILSs were used exclusively for mathematics instruction and lower in the nine studies in which the ILSs were used for both mathematics and reading instruction. It seems possible that students received too little ILS instruction in mathematics when ILS instruction was split between reading and mathematics.

Overall, most evaluation studies from the 1960s through the 1990s suggest that students benefit from ILS instruction in mathematics. In the typical evaluation study of the 1980s, ILS instruction raised mathematics test scores by about 0.4 standard deviations. More important, in the typical evaluation study from the 1990s, ILS instruction raised mathematics test scores by about the same amount.

Computer Tutorials in Science

Teachers have been using computer tutorials in natural and social science courses since the early 1970s. Unlike the broad, multigrade tutorial programs used in ILS instruction, science tutorials usually focus on specific topics. The programs present instructional material to a learner, require the learner to respond, evaluate the learner's response, and then, on the basis of the evaluation, determine what to present next. Tutoring programs are so named because they are meant to do the same things that individual tutors do.
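The sketch below illustrates this present-respond-evaluate-branch cycle in miniature. The two-item lesson and the hint-then-retry branching rule are illustrative assumptions, not features of any reviewed tutorial.

```python
# Minimal sketch of the tutorial cycle described above: present material,
# collect a response, evaluate it, and let the evaluation choose the next
# step. The lesson content and branching rule are illustrative assumptions.
LESSON = [
    {"prompt": "What is H2O commonly called?", "answer": "water",
     "hint": "It covers most of the Earth's surface."},
    {"prompt": "What gas do plants absorb from the air?",
     "answer": "carbon dioxide", "hint": "Animals exhale it."},
]

def run_tutorial(lesson):
    for item in lesson:
        while True:
            response = input(item["prompt"] + " ").strip().lower()
            if response == item["answer"]:             # evaluate the response
                print("Correct.")
                break                                  # advance to the next item
            print("Not quite. Hint: " + item["hint"])  # remediate, then re-ask

if __name__ == "__main__":
    run_tutorial(LESSON)
```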

Reviewed in Effects of Using Instructional Technology are six reports published since 1990 on controlled evaluations of computer tutorials in science. The studies were carried out in this country and abroad in courses in chemistry, biology, meteorology, and the social sciences. Four of the studies used researcher-developed software, and two used commercially produced software. Most of the studies were short, with treatment durations ranging from ten days to three months.

In all but one of the six cases, the effect of computer tutoring was large enough to be considered both statistically significant and educationally meaningful. In the remaining study, the boost from computer tutoring was near zero. In the median case, the effect of computer tutorials was to raise student achievement scores by 0.59 standard deviations, or from the 50th to the 72nd percentile. Tutorial effects on student attitudes toward instruction and subject matter were also strong and positive. In all cases, computer tutoring produced significant positive effects on these attitudes. In the median study, the effect of computer tutorials was to raise attitude scores by 1.10 standard deviations.

Evaluation studies carried out during the 1970s and 1980s also found that computer tutoring had positive effects on student learning. A major meta-analytic review of such studies, for example, reported that the average effect of computer tutorials was to raise student test scores by 0.36 standard deviations.[7] This is equivalent to a boost in test scores from the 50th to the 64th percentile. The review covered many evaluations of computer tutorials in mathematics and reading but very few evaluations of computer tutorials in science. Too few studies were available in science education, in fact, to warrant separate conclusions about the effectiveness of computer tutorials in natural and social sciences.

Overall, evaluations of computer tutorials in the natural and social sciences have produced very favorable results. Effects on test scores in most studies were large enough to be considered educationally meaningful, and tutoring effects on student attitudes were even more notable. Computer tutorials had a good record in evaluation studies of the 1970s and 1980s, and this record has grown stronger in recent years.

Computer Simulations in Science

Computer simulations provide science students with theoretical or simplified models of real-world phenomena—for example, a frictionless world where the laws of Newtonian physics are more apparent—and they invite students to change features of the models so that they can observe the results. Science teachers use simulations in a variety of ways. They can use them to prepare students for future learning, or they can use them to supplement or replace other expositions on a topic. For example, a teacher might use a simulated frog dissection as a preparation for an actual dissection or as a substitute for the dissection. Science teachers can also use simulations to help students integrate facts, concepts, and principles that they learned separately. For example, students might play the role of world leaders or citizens in other countries in a simulation designed to help them apply their learning to realistic problems.
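As a toy example of this "change a feature of the model and observe the result" pattern, the sketch below steps a projectile through a frictionless Newtonian world. The parameters and the simple Euler time-stepping are illustrative assumptions, not a reconstruction of any reviewed simulation.

```python
# Toy science simulation: a projectile in a frictionless Newtonian world.
# A student "experiments" by changing launch speed, angle, or gravity
# and observing how the trajectory responds.
from math import cos, sin, radians

def simulate_projectile(speed=20.0, angle_deg=45.0, gravity=9.8, dt=0.01):
    vx = speed * cos(radians(angle_deg))
    vy = speed * sin(radians(angle_deg))
    x = y = t = 0.0
    while y >= 0.0:            # step the model until the projectile lands
        x += vx * dt
        vy -= gravity * dt     # only gravity acts: no friction or drag
        y += vy * dt
        t += dt
    return x, t                # horizontal range and flight time

# Changing one feature of the model and observing the result:
for g in (9.8, 1.6):           # Earth gravity vs. Moon gravity
    rng, t = simulate_projectile(gravity=g)
    print(f"g={g}: range={rng:.1f} m, flight time={t:.1f} s")
```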

Many science educators consider simulation programs to be a real advance over tutorial programs because simulation programs seem to focus on higher-level instructional objectives. Early evaluation studies, however, provided little evidence of improved learning with simulations. For example, a comprehensive review of studies of computer-based instruction analyzed results from six simulation studies carried out during the 1970s and 1980s.[8] None of the studies found significant positive effects from instructional simulations. The median effect size in the six studies was –0.06. This means that students learning with and without simulations scored at nearly identical levels on the relevant tests of science learning.

Effects of Using Instructional Technology reviewed six reports published since 1990 on controlled evaluations of computer simulations in science teaching. The studies were carried out in high school courses in this country and abroad. The studies were short in duration; most examined a single simulation presented in one class period. The simulations were in biology, chemistry, earth science, and physics. Four of the studies found positive effects on student learning from the use of the simulations, but two studies found negative effects. The median effect of the computer simulations was to raise student achievement scores by 0.32 standard deviations, or from the 50th to the 63rd percentile.

Overall, the results of these studies suggest that computer simulations can sometimes be used to improve the effectiveness of science teaching, but the success of computer simulations is not guaranteed. The median effect size in the six studies was large enough to be considered educationally meaningful, but simulation effects were variable and sometimes negative. Teachers may therefore need to use some care in deciding when to use simulations, which simulations to use, and how to use them.

Microcomputer-Based Laboratories

Microcomputer-based laboratories (MBLs) use electronic sensors to collect data on physical systems, immediately convert the analog data into digital form, and concurrently transform the digital data into a graphical display.[9] As a result, learners in MBLs are able to witness a phenomenon in the laboratory while concurrently viewing the development of a graph describing the phenomenon. MBL instruction has long been a showpiece in discussions of computer applications in science teaching.
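The sketch below mimics that sensor-to-graph pipeline in a few lines. The probe-reading function is a hypothetical stand-in, since real MBL packages depend on specific probe hardware and plotting software.

```python
# Illustrative sketch of the MBL pipeline: sample an analog sensor,
# digitize the reading, and update a live graph as the data arrive.
# read_temperature_probe() is a hypothetical stand-in for probe hardware.
import random
import time

def read_temperature_probe():
    # Stand-in for an analog-to-digital converter attached to a probe.
    return 20.0 + random.uniform(-0.5, 0.5)

def run_lab(duration_s=10, sample_hz=2):
    times, values = [], []
    for i in range(duration_s * sample_hz):
        t = i / sample_hz
        v = read_temperature_probe()   # collect and digitize a sample
        times.append(t)
        values.append(v)
        # Update a crude live "graph" concurrently with the event:
        bar = "#" * int((v - 19.0) * 20)
        print(f"{t:5.1f}s {v:6.2f}C {bar}")
        time.sleep(1 / sample_hz)
    return times, values

if __name__ == "__main__":
    run_lab()
```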

During the late 1970s and early 1980s, researchers at the Technology Education Research Center in Cambridge, Massachusetts, laid the groundwork for today's MBL programs by developing the probes and analog-to-digital circuits that make MBLs possible. Today, MBL software is available to measure and present data on such variables as temperature, heat, light, pH, force, pressure, and motion.

Developers of MBL instruction expected MBLs to increase and deepen student learning in science.[10] MBLs were expected to increase student learning for several reasons:

  • MBLs represent data in a number of ways for students.

  • They graph data representing physical events concurrently with the events, thus helping learners to link the two representations mentally.

  • They give students a genuine scientific experience.

  • They eliminate the drudgery of graph production so that students can concentrate instead on the interpretation of graphs.

Reviewers who examined the early evaluation literature, however, found few studies that showed learning advantages for MBL instruction.[11] Mixed evaluation results were a common finding.

Effects of Using Instructional Technology also found no consistent MBL contribution to student learning. The report reviewed eight studies carried out in junior and senior high schools. The studies provided from one to four class periods of MBL instruction for an experimental group and an equivalent amount of conventional laboratory instruction for a control group. The laboratories covered topics in biology, chemistry, graphing, and physical sciences.

Seven of the eight studies found either small negative or small positive effects of MBL instruction on student learning. The remaining study found a very strong effect of MBL instruction on student learning, but the study had a design flaw that might account for the anomalous result. The median of the eight effect sizes was 0.01, a trivial effect. This means that students who learned in MBLs performed no better on tests than did students who learned in conventional laboratories.

Conclusion

For more than three decades, evaluators have been documenting the positive effects of ILSs in mathematics instruction. Evaluation studies of the 1970s and 1980s usually found that students learned more in math classes that included ILS instruction, and evaluation studies of the last decade found similar results. ILS effects on mathematics test scores in most studies were not only statistically significant, but they were large enough to be considered educationally meaningful. Research also suggests that students spend too little time on ILSs in typical implementations relative to the recommended amount. It is possible that ILS contributions would be even greater with fuller implementation of ILSs.

Recent evaluation studies also suggest that computer tutorials can produce very favorable results in natural and social science instruction. Effects of tutorials on test scores in most studies were large enough to be considered educationally meaningful and were also unusually large for field studies in education. Tutoring effects on student attitudes toward instruction and science were also large. Evaluation studies suggest that student attitudes go up dramatically when students receive some of their instruction from computer tutorials.

Science educators often think of simulation programs and microcomputer-based laboratories as advances over tutorial programs. That is because simulation programs and MBLs are designed to help students achieve higher-order instructional objectives, whereas tutorial programs may seem to focus on more mundane objectives. Evaluation results from simulations and MBLs, however, were weaker and less consistent than the results from tutorial programs. Although simulation programs sometimes improved the effectiveness of science teaching, some studies conducted during the 1980s and 1990s found negative effects from simulations. Effects from MBLs were usually small, and they were negative as often as positive.

This report was funded in part by the Digital Society and Technologies Program in the Division of Information and Intelligent Systems, NSF.

For more information, contact:

Eileen L. Collins
Division of Science Resources Statistics
National Science Foundation
4201 Wilson Boulevard, Suite 965
Arlington, VA 22230
703-292-7768
ecollins@nsf.gov

Footnotes

[1] James A. Kulik of the University of Michigan prepared this InfoBrief as a consultant to the Science and Technology Policy Program of SRI International under contract to the National Science Foundation.

[2] Slavin, Robert E. 1990. "IBM's Writing to Read: Is It Right for Reading?" Phi Delta Kappan 72(3):214-216.

[3] Cohen, Jacob. 1977. Statistical Power Analysis for the Behavioral Sciences (Revised Edition). New York: Academic Press.

[4] Wilson, Judy. 1990. "Integrated Learning Systems: A Primer." Classroom Computer Learning 10(5):22-23,27-30,34,36.

[5] Becker, Henry J. 1992. "Computer-Based Integrated Learning Systems in the Elementary and Middle Grades: A Critical Review and Synthesis of Evaluation Reports." Journal of Educational Computing Research 8(1):1-41.

[6] Van Dusen, Lani M. and Worthen, Blaine R. 1995. "Can Integrated Instructional Technology Transform the Classroom?" Educational Leadership 53(2):28-33.

[7] Kulik, James A. 1994. "Meta-Analytic Studies of Findings on Computer-Based Instruction." In Baker, Eva L. and O'Neil, Harold F. Jr., eds., Technology Assessment in Education and Training, 9-33. Hillsdale, NJ: Lawrence Erlbaum Associates.

[8] Kulik, James A. 1994. "Meta-Analytic Studies of Findings on Computer-Based Instruction." In Baker, Eva L. and O'Neil, Harold F. Jr., eds., Technology Assessment in Education and Training, 9-33. Hillsdale, NJ: Lawrence Erlbaum Associates.

[9] Nakhleh, Mary B. 1994. "A Review of Microcomputer-Based Labs: How Have They Affected Science Learning?" Journal of Computers in Mathematics and Science Teaching 13(4):368-81.

[10] Mokros, Janice R. and Tinker, Robert F. 1987. "The Impact of Microcomputer-Based Labs on Children's Ability to Interpret Graphs." Journal of Research in Science Teaching 24(4):369-83.

[11] Weller, Herman G. 1996. "Assessing the Impact of Computer-Based Learning in Science." Journal of Research on Computing in Education 28(4):461-85.

Links to additional reports in the Implications of Information Technologies series
are available on the IT Overview page.

