Investigators Make Horrible Find At Twitter

After the primary elections in May 2020, then-President Donald Trump began tweeting his concerns about mail-in voting and the potential for election fraud. Twitter immediately began doing something no one in US history had done before: it began censoring the President of the United States.

Initially, Twitter censored Trump by adding supposed fact-check labels to his tweets. After the January 6 riot on Capitol Hill, however, it banned him outright. It wasn't long before the platform started taking down pages of prominent conservatives who questioned the election results, using a new algorithm and an army of fact-checkers. The mistakes that followed led some to question whether Twitter was really serious about accurate fact-checking.

Twitter Creates New AI with User Support for Fact-Checking

Early in the fact-checking process, Twitter CEO Jack Dorsey acknowledged criticism of the company's new fact-checking policy. He said the policy needed to be more transparent and verifiable, and he admitted the process was subjective. In response, Twitter created Birdwatch, an artificially intelligent system that draws on user input to make fact-checking less subjective.

However, one has to wonder how objective Twitter wants its fact-checking to be. On February 5, controversial YouTuber Tim Pool tweeted that the election was rigged. Twitter slapped a censorship label on the tweet and shut down comments. Yet Pool was directly referencing a Time Magazine article that brazenly backed up the tweet's premise.

In addition to the tweet accurately portraying the Time Magazine article, Birdwatch users largely agreed that the tweet wasn’t misleading. So, if most Birdwatch users believed the tweet was accurate, why did the algorithm override the users?

Questions About Objectivity

Now that Twitter is in the fact-checking game, conservatives, in particular, are questioning whether Twitter can be impartial, and to what degree. Another question arises as well: how can an algorithm that draws its information from a small pool of unvetted people accurately rate the truth?

Approximately 1,000 people are participating as fact-checkers in the Birdwatch pilot program. When a participant flags a tweet and adds a note explaining why the tweet may be misleading, it’s up to Birdwatch users to rank the note based on whether it was helpful – at which point the AI takes over.

But what expertise does someone need to participate in Birdwatch? Do participants have any experience fact-checking? How does the system distinguish genuinely helpful notes from notes written with partisan motives?

Unfortunately, the system is currently flagging too many legitimate tweets. Fact-checking is tedious, hard work that demands critical thinking. If a fact-checker is partisan, the job becomes that much harder, regardless of whether the fact-checker supports the tweet or not.

Perhaps the real issue is that Twitter is neither a publisher nor an expert in anything, let alone everything. Is the company really interested in the facts? Whatever the reason, Twitter is getting a lot of factually correct posts wrong.

Copyright 2021,