In IT, the problems are solvable. With systematic troubleshooting steps, most problems can be identified and resolved in minutes or a few hours. (Yes, I have had problems that extended much longer—all IT professionals have—but those tend to be rare.) IT professionals know the systems they manage, and they have resources (online communities, colleagues, vendors, and even user manuals) to help them resolve unfamiliar problems. When their problems are solved, all users can recognize the green LEDs that signal functioning connections, operational computers, and—in schools—smiling teachers.
I once served on a committee hiring a professional who was primarily going to serve as network administrator. We were in the second interview, so there were fewer questions and more discussion, and he asked, “What can you tell me about the environment?” The superintendent (who admitted little knowledge of technology) began describing efforts they had made to improve the working climate in the schools. The candidate looked puzzled as he listened for several minutes. When she paused to take a breath, I interrupted, “I think he was asking about the network environment.” He said, “Yes, I’m glad to have heard that, but I was asking about the servers and stuff.” Sometimes IT professionals and educators are not speaking the same language, even when they use the same words.
So, I was helping a faculty member prepare their exam for import into the LMS. I noticed and pointed out, "You only give 'all of the above' as a choice when it is the correct answer." They dismissed my suggestion that we change it.
This is a faculty member who vehemently insists tests are the best way to measure learning.
@Umbr4R3d @ChristophB @garyackerman I guess there is something wrong with the testing regime as well. Students/pupils should be allowed to demonstrate their skills and competencies in ways that make this whole scenario go away.
The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.
The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.
It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.
“Every kid starts out as a natural-born scientist, and then we beat it out of them. A few trickle through the system with their wonder and enthusiasm for science intact.”
— Carl Sagan
@margreta @garyackerman I wonder if the situation I characterise here: https://davelane.nz/explainer-digitech-risks-school-boards also applies where you are... I suspect it's fairly universal, and it's a very big elephant in the room.
A union has voted “work to rule,” so folks have been declining invitations to committee meetings. While I’m no longer in the union, I fully support the decision.
I do have to convince colleagues that suggesting, “I’ll get your input in an individual meeting,” is really not an appropriate response to those who skip extra committee work as part of collective action.
Worth a read! #Edtech has changed. I’m wondering what might be abandoned.
Director of Teaching and Learning Innovation at a community college in New England
Retired K-12 science/math/technology teacher and technology integration specialist/coordinator