To: The editor, and my colleagues
I pose a challenge to all of my colleagues. As coordinator of one of the last MET programs to convert to the new TAC-ABET criteria, I have led the creation of a review process that, like many others, includes many assessment metrics for our education process. The problem is that few of these metrics address ‘validity’ or ‘reliability’. Thus the challenge: as a community of professional engineers and educators, I posit that we need to embrace and utilize these concepts.
As a professional engineer, I use ASTM standards for testing materials. All of these standards (engineering assessment metrics) begin with definitions and a discussion of validity. Why don’t all education
process metrics begin this way? Are we duplicitous?
As a professional educator (with a secondary education degree and past licenses in three states), I am cognizant that many education processes are quite variable. Teaching and learning styles can be disparate, and much education occurs asynchronously. This can make an orderly evolution of ‘outcomes’ development difficult both to achieve and to assess. Yet correlation values (statistical data) relating student performance to outcomes (or reference indicators) are uncommon, and there is little discussion of how relevant such statistical process control (SPC) might be.
Other statistics may be equally important. Many ASTM standards go on to address ‘reliability’ and use statistics that account for aspects such as sample size. This is applicable to small class sizes, for instance. Our institution recently set a lower class-size limit for administering the ubiquitous ‘student evaluation of instruction’, a policy that directly reflects this issue.
In general, I have found my local queries into this matter unwelcome. Creating and maintaining a good assessment instrument takes time and money. I have had little support from my institution to address assessment quality issues, and few examples from my peers. Perhaps a larger forum is needed.
I propose that we view any ‘non-valid’ metrics as ‘indicators’, and avoid using them as primary justification for program modifications. Our discipline has embraced the assessment instruments maintained by NCEES as correlating with the minimum engineering knowledge required for safe practice. Do any other ‘valid’ assessment instruments exist?
When exercising my professional engineering privileges, I am bound by a ‘Code of Ethics’ that requires me to use best practices. How honest are you? How honest can we be?
Craig Johnson, Ph.D., P.E., WY Alpha ’83