October 2010 Index
Home Page

Data! Data! Data!

Donald Perrin


In a 1992 seminar led by W. Edwards Deming, I questioned the value of Teacher of the Year awards. The concept so annoyed Dr. Deming that he rose from his wheelchair and declared such awards mere popularity contests. “You won’t know how effective the teaching was for another five years.” Good decisions require focused questions and lots of relevant data – “Data! Data! Data!”

This raises questions about evaluation beyond those raised in earlier editorials and journal articles that dealt with objective vs. subjective measures, performance vs. knowledge, rubrics vs. standardized tests, and criterion-based measures vs. grading on a curve. The products of education are realized in the future. Only then will we truly know if learning experiences were relevant, effective, and productive in the development of the learner for his or her future life and career.

I projected these same questions into real-world decisions in business, government, pure and applied research, cost-effective use of resources, and accountability. In the sixties and seventies I read and evaluated federal education programs for learners with disabilities. The reviewers were well versed in evaluation techniques such as Stufflebeam’s CIPP (Context, Input, Process, Product) evaluation model, which provides data from actual experience rather than from costly and time-consuming statistical studies. In those days, we argued theory-into-practice issues with industry and academic leaders such as Robert Diamond and Egon Guba as we served on advisory committees for new and existing educational services and programs.

On one occasion we were evaluating project proposals for funding up to $100,000. A proposal from Ray Kurzweil, a person unknown to our team members, requested $250,000 to complete development and initiate distribution of reading machines for blind students in US universities and colleges. Based on the documentation provided, we recommended full funding with the hope that this significant technology might be available in one to two years. It was approved. Within three or four months there was a steady trickle of Kurzweil Reading devices, one for each university or college library. It was an excellent machine and a huge success, making the entire library accessible to blind students.

Looking back, the evaluation techniques we used were shoddy by today’s standards. We did not know that Ray Kurzweil would become the leader of a new industry. He invented the flatbed scanner, the optical character reader, text-to-speech software, and the first really practical reading machine. He would continue to innovate throughout his career with speech recognition, music systems that synthesize orchestral performances from sheet music, and devices that use artificial intelligence to accelerate performance. Ray Kurzweil continues to be internationally acclaimed for his inventions. We did not – and could not – anticipate the quality and success of the Kurzweil Reader.

Some of the other projects we funded were not so successful. People with great ideas do not always have the ability to implement them. From the evaluator’s point of view, decisions about the future based on new and not-yet-proven technologies present considerable risk. Today we have better tools for research, decision making, and risk management. The question is: do these adequately support the genius of a successful entrepreneur, or can they become tools for exclusion? Can public education find and adopt technologies that will stimulate learning, and develop curricula that will prove relevant, effective, and productive in the future? How will this transformation come about?
