An insight into the Social Media Market View

2020 has been an unprecedented year, not only due to the disruption caused by COVID-19 but also in terms of how social platforms are handling political content and misinformation.

In May this year, Twitter fact-checked Trump for tweeting false information. What prompted the social giant to break the seal was a two-tweet tirade about the supposed dangers of expanding vote-by-mail during the coronavirus pandemic. Twitter's fact-check noted that ballots were only being sent to registered voters, not to all residents of the state, as the tweets had claimed.

In addition, during the Black Lives Matter movement, it was highlighted that certain Facebook groups were promoting hate speech along with other violations, which led to the #StopHateForProfit campaign and a widespread boycott of Facebook advertising.

The IPG project involved a 238-point audit across the major social media platforms, benchmarking each of them against a set of media responsibility principles.

As each social platform defines its responsibilities differently, the issue becomes more complex. As such, IPG Mediabrands has now established an industry-first scoring system to elevate industry norms.

These actions have raised awareness amongst IPG agencies' planner-buyers, making them better placed than ever before to advise clients. In turn, our clients have increased confidence in decisions regarding investment and partnerships, and are better equipped to navigate platforms of ever-increasing complexity.

Hate speech was flagged as a key principle where platforms held diverse definitions of what this meant. Whilst all partners had a definition, many didn't explicitly mention "racism", referring instead to "acts of hate based on race".

Pinterest set the bar for the number of protected group classifications and is the only platform to include weight, lower socio-economic status and pregnancy status. Twitter and YouTube have thorough enforcement policies and strike rules. Twitter is the only platform that has outlined enforcement or policies on hate speech and racism at the content, messaging and account level.

TikTok was reported to have the longest response times for following up on raised issues. It also had the lowest number of banned group classifications (Snapchat and LinkedIn scored the highest in this area).

Misinformation was another principle that highlighted disparate scoring across the different social platforms. Snapchat, Twitter and YouTube scored highest as they separate trusted news partners from unknown or non-credible sources.

Twitch and Reddit, however, rely on self-moderated groups and therefore risk misinformation due to their user-generated content models. Facebook, in comparison, scores above the benchmark average as it works with over 70 fact-checking organisations, significantly more than any other platform disclosed.

The UM view


All platforms should adopt more consistent and expanded definitions of hate speech and racism, working alongside an industry body such as GARM (the Global Alliance for Responsible Media).

All platforms should report on the prevalence of hate speech on their platforms on a regular basis. Only Twitter, YouTube, TikTok and LinkedIn do this currently.

Some platforms cite the uniqueness of their engagement model as a reason not to rely on moderation strategies specifically focused on hate speech. Whilst this may be true, a minimum commitment should be considered.

Facebook and TikTok do not label consistent sources of misinformation, which is a concern given the prevalence of such issues on their respective platforms. Reddit, Pinterest and Twitch do not employ or subcontract fact-checkers.

All platforms can be doing more to combat misinformation and disinformation, especially in group and forum environments.

All platforms except Twitch publish policy enforcement reports, though scope and cadence have significant room for improvement. These reports are largely self-assessments with limited engagement with independent third parties. All platforms therefore need to strengthen policy enforcement: we need to see them actively improving these policies and reports, becoming more transparent, and commissioning impartial, independent audits.


IPG Mediabrands has shared the partner scorecards with the platforms and discussed the results at a global level, and is looking to share the full report findings. We are holding the platforms accountable for addressing our insights and recommendations, and we encourage clients to consider whether their values align with how each platform is responding where it falls short. We are also liaising with platforms, such as Facebook, regarding their progress in meeting planned deliverables across their roadmap for change and constant improvement.

Facebook has released a tracker of proposed improvements; however, this commitment has no timeline. At this point in time, we understand that they are looking to work with a Big 4 audit firm. The scope of the audit is yet to be determined and we are awaiting more detail. This project is therefore expected to run for a longer period (an estimated 6-9 months), and the delivery date is vague, currently suggested to be some time in 2021.

Advertisers should, therefore, consider whether Facebook's response constitutes appropriate action for them to continue working in partnership. In the meantime, IPG Mediabrands is committed to reviewing our audit process regularly to monitor improvement over time. All platforms should be held to a similar account, which is why the level of detail is the same across partners.

We will continue to keep our clients informed and offer as much support as possible. We want to create an open dialogue about the improvement opportunities that our clients would like to see, and to push our partners to commit to accountable actions and dates for step change.

If you have specific questions, or are considering boycotting social media platforms, please let your UM team know. We would be glad to schedule an in-depth conversation with you and connect you with our brand safety team.