In today’s column, I am continuing my ongoing series closely exploring the use of generative AI as a generalized interactive chatbot that imparts mental health guidance.
But serious and sobering qualms exist. There isn’t a pre-check to validate that someone ought to be resorting to generic generative AI for such advisement. There isn’t any ironclad certification of the generative AI for use in this specific capacity. The guardrails of the generative AI might not be sufficient to prevent it from dispensing ill-advised guidance. So-called AI hallucinations can arise (as an aside, “AI hallucination” is a phrase I decidedly disfavor, for the reasons stated at the link here, but it generally connotes that generative AI can produce specious or fabricated answers). And so on.
All in all, you might declare that we are immersed in the Wild West of AI-based human mental health advisement, which is taking place surreptitiously yet in plain sight, and lacks the traditional checks and balances that society expects to be protectively instilled.
I’ve got a bit of an additional surprise for you. Consider a new facet that you might find notably intriguing and at the same time disturbing. It is the latest novel approach that veers into the mental health realm by controversially using generative AI in a rage-room capacity.