To: The editor, and my colleagues
I pose a challenge to all of my colleagues. As coordinator of one of the last MET programs to convert to the new TAC-ABET criteria, I've
led the creation of a review process that, like many others, includes many assessment metrics for our educational process. The problem is that few of these metrics address 'validity' or 'reliability'. Thus
the challenge: as a community of professional engineers and educators, I posit the need to embrace and utilize these concepts.
As a professional engineer, I use ASTM standards for testing materials. All of these standards (engineering assessment metrics) begin with definitions and a discussion of validity. Why don’t all education
process metrics begin this way? Are we duplicitous?
As a professional educator (e.g., a secondary education degree and past licenses in three states), I am cognizant that many educational processes are quite variable. Teaching and learning styles can be quite disparate, and much education can occur asynchronously. This can make an orderly evolution of 'outcomes' development both difficult and hard to assess. Yet correlation values (statistical data) relating student
performance to outcomes (or reference indicators) are uncommon, and there is little discussion of how relevant statistical tools such as SPC might be.
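To make the point concrete, the correlation statistic I have in mind can be computed in a few lines of Python; the scores below are invented placeholders for illustration, not real assessment data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: final-exam scores vs. faculty-scored outcome rubric (1-4)
exam_scores    = [62, 71, 78, 85, 90, 94]
outcome_rubric = [2, 2, 3, 3, 4, 4]

r = pearson_r(exam_scores, outcome_rubric)
print(f"Pearson r = {r:.2f}")
```

A program that reported even this one number alongside each metric would be a step toward the validity discussion that ASTM standards take for granted.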
Other statistics may be equally important. Many ASTM standards go on to address 'reliability' and use statistics to account for aspects such as sample size. This is applicable to small class sizes, for instance. Our institution recently set a lower class-size limit for administering the ubiquitous 'student evaluation of instruction', a decision that directly reflects this issue.
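A short sketch shows why sample size matters for reliability: the standard error of a mean rating shrinks only with the square root of n, so small classes yield wide margins of error. The 1-5 scale and the standard deviation below are assumed for illustration:

```python
import math

def standard_error(sample_sd, n):
    """Standard error of the mean for a sample of size n."""
    return sample_sd / math.sqrt(n)

# Assume student-evaluation ratings on a 1-5 scale with sd of about 1.0
sd = 1.0
for n in (5, 15, 50):
    se = standard_error(sd, n)
    # 95% margin of error under a normal approximation: +/- 1.96 * SE
    print(f"n = {n:2d}: SE = {se:.2f}, 95% margin of error = +/- {1.96 * se:.2f}")
```

With five respondents, the margin of error spans nearly a full point on a five-point scale, which is exactly why a lower class-size limit is defensible.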
In general, I have found my local queries into this matter unwelcome. Creating and maintaining a good assessment instrument takes time and money. I have had little support from my institution to address assessment quality issues, and few examples from my peers. Perhaps a larger forum is needed to address this matter.
I propose that we view any 'non-valid' metrics as
'indicators', and avoid using them as primary justification for program modifications. Our discipline has embraced the assessment instruments maintained by NCEES as correlating with the minimum engineering knowledge required for safe practice. Do any other 'valid' assessment instruments exist?
When using my professional engineering privileges, I am bound by a ‘Code of Ethics’ that requires me to use best practices. How honest are you? How honest can we be?
Craig Johnson, Ph.D, P.E., WY Alpha ’83
Thanks for your post, which gave me an opportunity to organize some of these thoughts.
(To readers: this has aspects of a rant because it is an honest presentation of my own frustrations. But don't let the tone cloud the observations and issues that I raise.)
So In My Humble Opinion:
There are no metrics for most things because, before the time and money, it takes thinking. And the older I get, the more I realize there is very little thinking in the world. Even when there is a good idea, there is almost never the drive to change personal habits. Most people do not realize that when you build anything, every part is required. Nor do they realize that doing any small thing on time advances a project incrementally, and that in the process of doing, you'll think of something new and your next steps will be smarter.
(There are, of course, spectacular exceptions to this general tendency: the Constitution of the United States with the first 15 amendments, Apple Computer after Steve Jobs's return, Netflix, Google, Wikipedia, Southwest Airlines, NASA in the '60s ...)
I'm a system architect for web servers. I spec and build the systems that support commercial web sites. I work with many software engineers who reinvent everything each time. So we have five address-change modules that conflict with each other. We have engineers breaking SFTP inter-computer transfers on a regular basis. I've been told it's impossible to write a specification for a complex implementation.
I write documentation, standards, and how-to's; I've standardized specification documents and workflows, and I now make instructional videos to describe how to use the documentation.
And I've had a great insight recently: no matter how easy a process is to follow, no matter how easy it is to learn, people won't bother to learn any new process unless specifically threatened by their boss. And even then they will extort the project with the attitude "If you ask me to do something new, I can't make the budget and the deadline," which wilts even the boss's resolve.
I've always admired Professional Engineers. In my immaturity, in the early '70s, I never realized how valuable the effort to obtain the PE certification would be. PEs must sign off that a project was completed safely and poses no risk to human life. Your comfort with that real accountability is the cause of your consternation when you find "your local queries into the matter are unwelcome."
Most people aren't duplicitous, which would require that they understand two sides of an issue; they don't understand even one. And they also don't understand Teilhard de Chardin's observation, "Not to decide is to decide."
So, what to do? You need to build your own things.
• Publish metrics the way they ought to be on your own website.
• Create docs and videos demonstrating a step-by-step application of Statistical Process Control to education.
• Devise an incremental approach that can reach small completions without much money and that you can advance in the time you can schedule in your life.
• Gleefully embrace volunteers who want to help and who complete tasks on time. (Internet and video-conference availability makes it trivially simple to collaborate worldwide these days. It seems to me these forums are underutilized, to the approximation of not at all.)
• Contact the Tau Beta Pi MindSET organizers to incorporate SPC into their educational effort.
• It's nearly impossible to change existing organizations, so find ways to create educational programs that you control, where you can apply your metrics.
• Apply Statistical Process Control to your own program to honestly and ethically measure your own outcomes.
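If it helps anyone start on the SPC suggestion above, here is a minimal sketch of an individuals (X) control chart over term-by-term mean outcome scores. The data, the 1-4 rubric scale, and the 3-sigma rule are illustrative assumptions on my part, not a validated assessment method:

```python
def individuals_chart(data):
    """Center line and 3-sigma limits for an individuals (X) chart.
    Sigma is estimated from the average moving range (MRbar / d2, d2 = 1.128),
    the conventional estimator for individuals charts."""
    n = len(data)
    mean = sum(data) / n
    mrbar = sum(abs(b - a) for a, b in zip(data, data[1:])) / (n - 1)
    sigma = mrbar / 1.128
    return mean, mean - 3 * sigma, mean + 3 * sigma

# Hypothetical mean outcome scores for eight successive terms (1-4 rubric)
term_means = [3.1, 3.0, 3.2, 2.9, 3.1, 3.0, 3.3, 1.8]

center, lcl, ucl = individuals_chart(term_means)
out_of_control = [(i + 1, x) for i, x in enumerate(term_means)
                  if not lcl <= x <= ucl]
print(f"center = {center:.2f}, limits = [{lcl:.2f}, {ucl:.2f}]")
print("terms outside the limits:", out_of_control)
```

A point falling outside the limits (term 8 in this made-up series) is the signal that would prompt investigation before any program modification, which is precisely the 'indicator, not primary reason' discipline you propose.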
Ed Kulis, M.S., M.Eng. NY Gamma ’75