Hate speech, microaggressions, and whatever other variation of the two you prefer have become the most sought-after things to crush and destroy in online communities. Two decades ago it was perfectly normal and socially acceptable to say the dreaded “gamer words,” but doing so today will see your account banned, your private details leaked publicly and your job terminated.

With the rise to prominence of artificial intelligence (AI), it was only natural that corporations would begin deploying AI on a wide scale to detect and censor rogue individuals who dare utter anything insensitive or bigoted.

And as a result, Activision, notorious for allegations of sexual harassment against female employees, has begun deploying artificial intelligence inside Call of Duty to monitor in-game voice chat and assist in banning racists, bigots and transphobes.

The AI in question was actually developed by a firm called Modulate in collaboration with the ADL (Anti-Defamation League), who are coincidentally trending on Twitter at this very moment under the hashtag #BanTheADL for being toxic hypocrites.

You see, the ADL, the Anti-Defamation League, operates as a sort of mafia: they use their influence and social standing to pressure corporations into banning problematic individuals, and non-compliance from said corporations results in smear campaigns branding them antisemites. The ADL basically extorts others into commissioning them for guidance and assistance in combating online racial prejudice against Jewish people.

What the ADL deems to be “hate speech” is lucrative and ever-increasing, especially considering how the Anti-Defamation League considers 10% of the numbers between 0 and 100 to be symbols of hate.

This is of course to say nothing of the fact that people cannot scrutinize globalist Marxists such as George Soros over their falsified epidemic of “climate change” and carbon emissions, because any criticism of Soros and his cancerous policies immediately falls under the antisemitism banner simply because of his ethnicity.

Anyways, I’m rambling again. Activision have officially announced that their new anti-hate-speech AI, built in collaboration with AI outfit Modulate, will launch alongside the upcoming cash grab, Modern Warfare 3, on November 10th.

The artificial intelligence is called “ToxMod,” and it’s already active right now in limited instances on North American Call of Duty servers. Its goal? To identify and enforce against toxic speech, hate speech, discriminatory language and harassment in real time.


Which of course means that big brother is always watching. Obviously the AI doesn’t have the authorization to ban you simply for uttering hateful words; it will however be able to detect foul language and submit reports about your toxic behavior to the articulate Activision moderation team, who are too busy banning players for saying the n-word to please their wives.
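In other words, the AI only files reports for human review rather than issuing bans itself. A minimal sketch of what such a flag-and-escalate pipeline could look like is below; Modulate has not published ToxMod’s internals, so every name, field and threshold here is invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class VoiceReport:
    """One AI-generated flag on a voice clip (hypothetical fields)."""
    player_id: str
    clip_id: str
    category: str        # e.g. "harassment", "hate_speech"
    confidence: float    # model confidence, 0.0 to 1.0

@dataclass
class ModerationQueue:
    """Reports wait here for a human moderator; the AI never bans directly."""
    pending: list = field(default_factory=list)

    def submit(self, report: VoiceReport) -> None:
        # Only sufficiently confident flags get escalated to a human.
        if report.confidence >= 0.8:
            self.pending.append(report)

queue = ModerationQueue()
queue.submit(VoiceReport("player123", "clip-001", "hate_speech", 0.93))
queue.submit(VoiceReport("player456", "clip-002", "harassment", 0.40))
print(len(queue.pending))  # prints 1: only the high-confidence flag queued
```

The key design point matches what Activision describes: the model is a triage layer that filters noise, while the actual enforcement decision stays with human moderators.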

ToxMod goes beyond mere keyword matching in its identification of potential offenses. Modulate claims that its tool stands out due to its capability to analyze the tone and intent of speech, enabling it to differentiate between toxic and non-toxic content. While the precise methodology is not explicitly explained, AI companies routinely make similarly compelling assertions.

According to Modulate, their language model has extensively analyzed speech from individuals with diverse backgrounds, enabling it to accurately differentiate between malicious intent and friendly banter.

Interestingly, Modulate’s ethics policy explicitly states that ToxMod does not detect or identify the ethnicity of individual speakers. However, it does pay attention to conversational cues to assess how others in the conversation react to the use of certain terms.

For instance, the policy addresses terms like the n-word, acknowledging that while it is generally regarded as a derogatory slur, many black players have reclaimed it and use it positively within their communities. ToxMod takes into account the impact of such language on others in the chat.

If the use of the n-word clearly offends individuals in the conversation, it will be rated more severely than instances where it appears to be reclaimed and integrated naturally into the conversation.

Modulate also provides an example regarding harmful speech directed towards children. Their ethics policy states, “For instance, if we detect a prepubescent speaker in a chat, we might rate certain kinds of offenses more severely due to the risk to the child.”
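Taken together, these policy points suggest a context-weighted severity score rather than a flat per-word penalty. The function below is a minimal sketch of that idea only; the weights, the reaction labels and the scale are all my own assumptions, not Modulate’s actual logic:

```python
def rate_severity(base_score: float, listener_reaction: str,
                  minor_present: bool) -> float:
    """Return a hypothetical severity score on a 0-10 scale.

    base_score        -- starting score for the flagged term itself
    listener_reaction -- "offended", "neutral", or "in-group"
                         ("in-group" = the term appears reclaimed in context)
    minor_present     -- whether a prepubescent speaker was detected in chat
    """
    score = base_score
    if listener_reaction == "offended":
        score *= 1.5    # clear offense taken: rated more severely
    elif listener_reaction == "in-group":
        score *= 0.5    # reclaimed usage: rated less severely
    if minor_present:
        score *= 1.25   # offenses near children rated more severely
    return min(score, 10.0)

print(rate_severity(4.0, "offended", False))  # prints 6.0
print(rate_severity(4.0, "in-group", False))  # prints 2.0
```

The same utterance can thus land anywhere on the scale depending on who heard it and how they reacted, which is exactly the context-sensitivity Modulate advertises.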

To enhance its flagging capabilities, ToxMod has introduced more detailed categories in recent months. One significant addition is the “violent radicalization” category, which was implemented in June.

This category enables real-time detection of terms and phrases associated with white supremacist groups, radicalization, and extremism. Examples of such flagged content include the promotion or sharing of ideology, recruitment efforts to persuade others to join a group or movement, targeted grooming of vulnerable individuals (such as children and teens) for recruitment purposes, and discussions involving the planning of violent actions or explicit plans to commit physical violence.

While ToxMod works to combat extremism and the indoctrination of children into radical alt-right groups, I highly doubt it will do anything about perpetrators looking to indoctrinate children into becoming the newest members of the LGBTQ+, considering how Activision themselves instantly disassociated from NICKMERCS, who stated that LGBTQ+ political activists should “leave the children alone.”

And by disassociation I of course mean the instant cancelation of the individual in question, who had their branded microtransaction bundle pulled from the Call of Duty Modern Warfare 2 / Warzone in-game MTX store.

ToxMod will be globally implemented in Call of Duty, starting with the launch of Modern Warfare 3 on November 10. Initially, it will provide moderation services in English only, with plans to expand to support additional languages in the future.