Like the myth of the frog that will not leave a pot of water as it is slowly brought to a boil.
It seems our problems with technology often share a common thread: we foolishly expect strategic solutions to inherently tactical problems.
A related problem is the idea of designing a car so that it cannot be started by a drunk driver. In emergencies, e.g., escaping a forest fire or a stalker in a bar's parking lot, you might want the car to start even if the driver is drunk. Neither the drunkstop technologists nor the car manufacturers want to be liable in those hypothetical situations (according to a story by WWJ), so what is to be done?
The assumption that AI will solve all tactical problems with machine learning alone is flawed from the start. A system's ability to continuously improve its own performance is valuable, but that alone does not guarantee it can meet every minimum necessary performance standard without human correction.
A hyperbola approaches its asymptote forever but never reaches it.
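To put the metaphor in symbols (a minimal sketch, taking the textbook hyperbola y = 1/x as an assumed stand-in for a system's error rate):

\[
  y = \frac{1}{x}, \qquad \lim_{x \to \infty} \frac{1}{x} = 0, \qquad \text{yet } \frac{1}{x} > 0 \ \text{for every finite } x > 0.
\]

Read that way, the error keeps shrinking as effort grows, but no finite amount of self-improvement drives it to exactly zero; if the minimum acceptable standard sits at the asymptote, the system never reaches it on its own.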