Six months after Facebook rolled out third-party fact-checking in the UK, early results are now in. The initiative, launched in January 2019 through a partnership with Full Fact, was designed to curb the spread of misinformation on the platform and introduce greater accountability around viral claims. On Tuesday, Full Fact released a detailed 46-page report assessing how the programme has performed so far, alongside a set of recommendations aimed at improving how fact-checking works on Facebook.

How Facebook’s Fact-Checking System Works

According to the report, fact-checkers review a queue of posts that Facebook users flag as potentially false. These posts are then prioritised based on how quickly they are spreading and whether they could cause harm, particularly in areas such as health or public safety. Once a claim is reviewed, fact-checkers verify the information using available evidence and publish a written assessment. Each piece of content is assigned one of nine ratings, including False, Mixture, False Headline, True, Satire, Opinion, and Not Rated. These ratings help determine how the content is treated on the platform, including whether its distribution is reduced.
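The workflow described above — a flagged-post queue prioritised by spread rate and potential harm, with a fixed set of ratings that determine whether distribution is reduced — can be sketched in code. This is a purely illustrative model, not Facebook's actual implementation; the field names, the prioritisation key, and the choice of which ratings trigger demotion are all assumptions.

```python
from dataclasses import dataclass

# Ratings named in the report; the full scheme has nine options in total.
RATINGS = {"False", "Mixture", "False Headline", "True",
           "Satire", "Opinion", "Not Rated"}

@dataclass
class FlaggedPost:
    post_id: str
    shares_per_hour: float  # how quickly the post is spreading
    harm_risk: bool         # e.g. health or public-safety claims
    rating: str = "Not Rated"

def prioritise(queue):
    """Order the review queue: harmful claims first, then fastest-spreading."""
    return sorted(queue, key=lambda p: (not p.harm_risk, -p.shares_per_hour))

def apply_rating(post, rating):
    """Record a fact-checker's verdict; return whether distribution is reduced.

    Which ratings demote content is an assumption for illustration only.
    """
    if rating not in RATINGS:
        raise ValueError(f"unknown rating: {rating}")
    post.rating = rating
    return rating in {"False", "False Headline", "Mixture"}
```

For example, a slow-spreading health claim would be reviewed ahead of a fast-spreading but harmless one under this ordering, mirroring the report's description of harm-based prioritisation.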

What Full Fact Has Published So Far

As of 1 July, Full Fact has published 96 individual fact-checking reports, all available publicly online. Topics range widely, covering everyday legal questions and animal welfare concerns, alongside more serious issues related to health and public policy. The organisation says the transparency of these reports is a positive step, but argues that the overall system still needs refinement to better protect users and improve trust.

Key Recommendations for Facebook

In its summary, Full Fact outlines ten recommendations for improving the programme. These include developing better tools to identify repeated false claims, providing fact-checkers with more detailed data on how content spreads, and introducing new ratings such as “Unsubstantiated” and “More context needed.”

Other suggestions include clearer guidance for reviewing posts that contain multiple claims, better classification for humorous content that is not satire or a prank, and greater transparency around Facebook’s plans to use machine learning in the process. One recommendation also calls for expanding the programme to fully cover Instagram content. Supporters argue these changes could strengthen the platform’s credibility and bolster user confidence when dealing with misinformation.

A Call to Government Action

Full Fact’s final recommendation goes beyond Facebook itself and targets the UK government. The report urges public authorities to take clearer responsibility for providing accurate, authoritative information in areas where misinformation could cause real harm, such as public health or emerging technologies like 5G. As scrutiny around online misinformation continues to grow, the report highlights that platform policies, independent fact-checkers, and government bodies all play a role in shaping a more reliable digital information space.