Evaluation of Educational Software.

Program evaluation is a recent phenomenon and, like software itself, can rightly be described as undergoing an evolution. Serious evaluation of educational software is only a few years old and confined almost exclusively to microcomputers. An obvious exception is CONDUIT (P.O. Box C, The University of Iowa, Oakdale, IA 52319). For over 15 years it solicited evaluations of maxicomputer software submitted to it. The constructive comments of reviewers were incorporated into each program before it was formally offered to educators. CONDUIT continues to do this with its programs for microcomputers, as do many reputable organizations.

Several reasons account for the current high interest in evaluations of programs available on microcomputers: many programs exist, but some are not very good pedagogically; the programs are relatively expensive and budgets are limited; the number of educators wanting help in choosing the programs best for their situation is large and increasing; the federal government is supporting ambitious evaluation projects; and, regrettably, program piracy among educators is widespread enough that vendors cannot risk sending programs on a trial basis.

Educators must evaluate a program not in some general context but with the specifics of their particular courses and students in mind. What is a successful program in one person's General Biology course may impede education in another's. The guiding principle is again the total educational computing systems approach (Crovello 1982a, 1983). When selecting software you must consider the hardware and the people that are part of your particular class. The reason for considering hardware is obvious: certain programs run only on certain types of computers, or even only on certain configurations of certain types. People must be considered because they will react differently to different programs.
The same program may be too easy for some students and too frustrating for others; it may give wonderful coverage of an aspect of a topic, but an aspect that you do not cover; it may take too long for each student to use; etc. Crovello (1982b) considered aspects of the who, when, where, and why of software evaluation. Among other points, he emphasized that evaluation of a program is not made just once, and that your students should be involved at several stages of the evaluation process. Not only should students help, but also faculty colleagues. Rose and Klenow (1983) summarized the DISC model for software evaluation and support material design. It is characterized by teacher training for evaluation and by actual evaluation, both at the school district level. The two most important principles of software evaluation are: 1) ask whether the software really fits your course, not whether your course can be changed (with harmful results) to fit the software; and 2) do not evaluate alone; involve your students and faculty colleagues.