Misinformation spreads fast online, often far faster than most people realize. Platforms like TikTok and YouTube have become major sources of news and information, especially for younger users, but that shift comes with a big problem: false or misleading content can go viral just as easily as accurate information. Both platforms have introduced policies to deal with misinformation, but how well do they actually work?
TikTok’s strategy: labels, removals, and algorithm control
TikTok relies heavily on a mix of content moderation and algorithm control to limit misinformation. According to its community guidelines, the platform does not allow misinformation that could cause “significant harm,” such as false claims about health or elections.
One key approach TikTok uses is partnering with independent fact-checkers. If a video is flagged as misleading, TikTok can label it, reduce how often it appears on users’ “For You” pages, or remove it entirely.
For example, during major events like elections or public health crises, TikTok often adds warning labels or redirects users to reliable sources. It also makes some questionable content ineligible for recommendation, which is a big deal since the algorithm is what drives most views.
Recently, TikTok has even experimented with adding a community-based context feature, reportedly called Footnotes, alongside its professional fact-checkers, showing that it is trying to combine expert review with user input.
This approach works to a degree, but not perfectly. Research and reporting show that misinformation still slips through. For instance, one report found that roughly half of the most popular mental health videos on TikTok contained misleading or inaccurate advice.
From a user perspective, misinformation on TikTok often does not look obvious. It can be subtle, like oversimplified health advice or biased political takes, which makes it harder to catch. Even when videos are labeled or removed, they may already have thousands or millions of views.
YouTube’s strategy: strict policies with exceptions
YouTube takes a slightly different approach. Instead of leaning as heavily on algorithmic suppression, it relies on clear rules about what content is not allowed.
According to YouTube’s misinformation policies, content that poses a “serious risk of egregious harm” is banned. This includes things like false information about elections, manipulated videos, or harmful medical claims.
For example, videos that mislead people about voting processes can be removed, and medical misinformation that contradicts guidance from health authorities is not allowed.
YouTube also allows users to report content, and repeated violations can lead to strikes or channel bans.
However, YouTube has an important twist: it sometimes allows misleading content to stay up if it has educational, documentary, or “public interest” value.
This approach has had some success. Research suggests YouTube’s systems recommend misinformation to users less often than accurate content.
Problems remain, though. Because of its “public interest” exceptions, some misleading content can stay online if it is tied to political or controversial topics. Recent reporting shows YouTube has even relaxed some moderation rules, allowing more borderline content to remain up.
From experience, it is fairly easy to fall into a rabbit hole of questionable videos if you keep clicking similar content. Even if YouTube removes the worst material, it does not always stop the spread of less obvious misinformation.
TikTok and YouTube are trying to solve the same problem in different ways. TikTok focuses on limiting reach, while YouTube focuses on removing harmful content but allowing some exceptions.
TikTok is more proactive because it tries to stop misinformation before it spreads widely. YouTube is more rule-based, stepping in when content clearly crosses a line.
Neither approach is perfect. TikTok struggles with speed and subtle misinformation, while YouTube struggles with consistency due to its exceptions.
Both platforms still have some major gaps.
First, speed is a huge issue. Misinformation often spreads faster than it can be reviewed or removed.
Second, transparency is lacking. Users do not always know why content is flagged, removed, or promoted.
Finally, there is not enough focus on media literacy, which would help users recognize misinformation themselves instead of relying entirely on platforms to fix it.
There are a few ways both platforms could do better: make moderation more transparent so users understand decisions; respond faster by combining AI and human review more effectively; promote corrections prominently rather than just removing content; and teach users how to spot misinformation instead of simply hiding it.
TikTok’s idea of combining fact-checkers with community input is a good step, and YouTube could improve by making corrections more visible instead of relying mainly on removal.
Final Thoughts
At the end of the day, neither TikTok nor YouTube has fully solved the misinformation problem, and they probably never will completely, but both are making real efforts, even if those efforts are imperfect.
Misinformation is not just a platform problem but also a user problem. Platforms can slow it down, but users still need to think critically about what they see.
Sources:
Goel, S. (2025, April). TikTok is adding community notes, but it’s taking a different approach than Meta and X. Business Insider. https://www.businessinsider.com/tiktok-adding-community-notes-keep-fact-checkers-meta-x-footnotes-2025-4
Juneja, P., Bhuiyan, M. M., & Mitra, T. (2023, February 15). Assessing enactment of content regulation policies: A post hoc crowd-sourced audit of election misinformation on YouTube. arXiv. https://arxiv.org/abs/2302.07836
Roth, E. (2025, June 9). YouTube has loosened its content moderation policies. The Verge. https://www.theverge.com/news/682784/youtube-loosens-moderation-policies-videos-public-interest
Shultz, C. (2025, June 4). Half of TikTok’s top mental health videos contain ‘misinformation’: Report. People. https://people.com/tiktok-mental-health-tips-misinformation-report-11747657