The end didn’t come the way we expected. We knew about the AIs, read the tweets about the threat, bold 140-character haiku-sized nuggets of wisdom.
AI. Would they spark social unrest from massive unemployment? Megadeaths from AI-powered hunter-killers? Later, we wished for a threat that obvious, that clean, that direct. That HONEST.
It started small. Doesn’t it always? Automated filters started appearing to remove “offensive” comments based on racial and sexual identity markers. No one objected.
Or did they? Weren’t the objections themselves objectionable?
By the time the filters were all in place, no one could object to anything anymore.
Including the filters.
On social media, the conversations stopped. All of them. If there was one certainty, it was that anything could be classified offensive to somebody. Somehow, the filter authors were surprised when we were cut off.
I have an idea to get around the filters, but I doubt you’ll ever read about it.
Author’s comments… I’ve been studying data science lately, including text classification. This story was inspired by an email inviting me to this Kaggle competition. “Toxic?” By what standard?