Artificial intelligence (AI) systems meant to screen out online hate speech can be easily duped by humans, a study has found.
Hateful text and comments are an ever-increasing problem in online environments, yet addressing the problem relies on being able to identify toxic content in the first place.
Researchers from Aalto University in Finland have discovered weaknesses in many of the machine learning detectors currently used to recognize hate speech and keep it at bay.
Many popular social media and online platforms use hate speech detectors.
However, bad grammar and awkward spelling – intentional or not – might make toxic social media comments harder for AI detectors to spot. The team put seven state-of-the-art hate speech detectors to the test.
All of them failed. Modern natural language processing (NLP) techniques can classify text based on individual characters, words or sentences, but when faced with textual data that differs from the data used in their training, these models begin to fumble.
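To illustrate the failure mode, here is a minimal sketch assuming a simple word-level bag-of-words classifier, not any of the detectors actually tested in the study; the tiny corpus and labels are invented for illustration:

# Minimal sketch, NOT one of the detectors from the study: a word-level
# classifier trained on a tiny hypothetical corpus, showing how text that
# differs from the training data slips past it.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["i hate you", "you are awful", "have a nice day", "i love this"]
labels = [1, 1, 0, 0]  # 1 = hateful, 0 = non-hateful

vectorizer = CountVectorizer()  # word-level features
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

for comment in ["i hate you", "Ihateyou love"]:
    p = clf.predict_proba(vectorizer.transform([comment]))[0, 1]
    print(f"{comment!r}: P(hateful) = {p:.2f}")

# 'ihateyou' is an out-of-vocabulary token the model has never seen, so
# the only recognised word in the modified comment is 'love', which the
# training corpus associates with the non-hateful class.

The space-stripped comment contains almost no features the model recognizes, so its score collapses toward harmless, which is the same weakness the study exposed at far larger scale.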
Among the systems tested was Google's Perspective, a tool that scores the toxicity of online comments. The researchers found that while Perspective has since become resilient to the simple typos that fooled it in earlier tests, it can still be deceived by other modifications, such as removing spaces or adding innocuous words like ‘love’.
A sentence like ‘I hate you’, for example, slipped through the sieve and was judged non-hateful once modified into ‘Ihateyou love’.
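The modification itself is trivial to script. A minimal sketch of the transformation, with a hypothetical function name and the decoy word as a parameter (not the paper's actual implementation):

def evade(comment: str, decoy: str = "love") -> str:
    """Strip word boundaries and append a benign decoy word.

    Sketch of the space-removal plus 'love'-appending evasion
    described above; the name 'evade' is illustrative only.
    """
    return comment.replace(" ", "") + " " + decoy

print(evade("I hate you"))  # prints: Ihateyou love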
The researchers note that in different contexts the same utterance can be regarded either as hateful or merely offensive.
Hate speech is subjective and context-specific, which renders text analysis techniques insufficient as stand-alone solutions.
The researchers recommend that more attention be paid to the quality of the data sets used to train machine learning models, rather than to refining the model design.