The first killer application I came across was in the mid-1980s, when my Dad brought home his Kaypro computer. Working in the sign business, he bought it for its word processing and spreadsheet capability. The word processor was WordStar, and it could print boldface. Using its mark-up language, you turned the boldface feature on with a <b> and off with a closing </b>, just as HTML works today:
The following will be in boldface: <b>Killer App</b>
The opposite of a killer app might be a dud app, and for that Kaypro it might have been ELIZA. As an early chatbot, it was intriguing for computer scientists, but limited. Programmed to mimic a Rogerian therapist by reflecting input back, it would respond to “I feel down today” with “Why do you feel down today?”
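For the curious, ELIZA’s basic trick is easy to sketch. Here is a minimal reflection rule in Python, a hypothetical toy rather than the real program, which used a much richer script of keyword-ranked patterns:

    import re

    # Swap first-person words for second-person so the statement
    # can be echoed back at the user.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(text):
        words = text.lower().rstrip(".!?").split()
        return " ".join(REFLECTIONS.get(w, w) for w in words)

    def respond(statement):
        match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
        if match:
            return f"Why do you feel {reflect(match.group(1))}?"
        return "Tell me more about that."

    print(respond("I feel down today"))  # Why do you feel down today?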
In the Generative Artificial Intelligence field today, the “dud apps” are the broad swath of tools that claim to detect the patterns produced by generative AI engines. These tools purport to give some percentage of confidence that a given text was either human-crafted or machine-generated. I have looked deeply into the topic, talked to experts immersed in the field, and concluded that educators should not rely on such technology as it stands today (as of September 2023!) for informal or formal procedures. That is my perspective, and explaining it fully would take more space than a newsletter allows. To put it simply, though: it is not a solved problem. Detecting similarity to other submitted papers is a solved problem, and those tools can be relied upon. GenAI detection is not in the same class.
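To see why similarity detection is in a different class, consider that it reduces to deterministic overlap counting. Here is a minimal sketch in Python; this is a toy, and real products use larger corpora and smarter fingerprinting, but the replicability principle is the same:

    def ngrams(text, n=3):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def similarity(a, b):
        # Jaccard overlap of word trigrams: the same two inputs
        # always produce the same score, no matter who runs it.
        x, y = ngrams(a), ngrams(b)
        return len(x & y) / len(x | y) if (x | y) else 0.0

    paper = "the quick brown fox jumps over the lazy dog near the river"
    entry = "a quick brown fox jumps over the lazy dog by the river"
    print(f"{similarity(paper, entry):.2f}")  # 0.43, every time, on any machine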
It is, of course, understandable for teachers to request data; data is power, after all. But there is a counterargument: such percentages are not replicable, so they aren’t really data. Another tool run on the same sample might return wildly different results, which is not the case for similarity scores. That is because no known approach can reliably solve this problem space. Thus, if we did offer such a feature, we would enter the realm of fostering bias.
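To make the non-replicability concrete, here are two toy “detectors” I made up, each scoring the same passage with a different arbitrary heuristic. Real detectors use model-based signals instead, but the underlying issue is the same: the scores are not measurements of a shared quantity, so nothing forces them to agree:

    def detector_a(text):
        # Made-up heuristic: longer average words read as "more AI-like."
        words = text.split()
        return min(sum(len(w) for w in words) / len(words) / 8, 1.0)

    def detector_b(text):
        # Made-up heuristic: low vocabulary variety reads as "more AI-like."
        words = text.lower().split()
        return 1.0 - len(set(words)) / len(words)

    sample = "the model generates fluent text and the text reads smoothly"
    print(f"Detector A: {detector_a(sample):.0%}")  # 62% "AI-generated"
    print(f"Detector B: {detector_b(sample):.0%}")  # 20% "AI-generated"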
While an interesting curiosity, AI detection tools are perhaps as ethically challenging as using ELIZA for therapy. In both cases, the tool cannot do what it says on the tin.
