I hate to say it, but the idea is doomed to failure from the start.
Both human writing and AI writing are moving targets, and you have no real visibility into the mechanics of either one. By the time you test and validate any detector, it will be obsolete. You will never be able to measure an individual detector's effectiveness over time, because major new models ship every month or two. And the prevalence of AI writing is already influencing how real people write (especially young people who are just learning to write in the age of AI).
I’m not sure what the answer is here. But pouring time and money into a bad idea just because you don’t have a good idea is not a winning strategy.
That so many academics, school systems, teachers, and educators are falling for this snake oil is very alarming. The very people with the access and training to research a topic properly instead follow the latest apps and trends blindly, without ever validating that the tools don't cause massive harm.
“But to what extent has computer technology been an advantage to the masses of people? To steelworkers, vegetable-store owners, teachers, garage mechanics, musicians, bricklayers, dentists, and most of the rest into whose lives the computer now intrudes? Their private matters have been made more accessible to powerful institutions. They are more easily tracked and controlled; are subjected to more examinations; are increasingly mystified by the decisions made about them; are often reduced to mere numerical objects. They are inundated by junk mail. They are easy targets for advertising agencies and political organizations. The schools teach their children to operate computerized systems instead of teaching things that are more valuable to children. In a word, almost nothing that they need happens to the losers. Which is why they are losers.” ― Neil Postman, Technopoly: The Surrender of Culture to Technology, 1992