TikTok and Instagram’s Misinformation Policies

TikTok states that its principles support diverse views and opinions and encourage open dialogue. To maintain a safe and trustworthy environment, the platform takes action against misinformation through independent fact-checking partners and databases of previously fact-checked claims. Its fact-checking partners are accredited by the International Fact-Checking Network. In 2022, TikTok continued investing in machine learning models, improved its detection of known misleading audio and imagery to reduce manipulated content, and invested in a detection program that flags new and evolving claims across the internet. Its fact-checking network targets four categories: health misinformation, misinformation that risks public safety or undermines public trust, environmental or climate misinformation, and election misinformation.

TikTok introduced the ‘Footnotes’ feature to combat misinformation. Footnotes lets users add informative context to videos and is a community-driven tool similar to Community Notes on X. While this system is being tested, TikTok’s fact-checking program will continue to run, meaning the platform uses a hybrid model that draws on community insight while also utilizing verified fact-checking organizations. Based on my experience with the platform, there are cases where you’ll see a disclaimer warning users to fact-check the content being presented. This disclaimer was common during the 2024 elections and the COVID-19 pandemic. Nowadays, these disclaimers seem to pop up on political and educational videos even when the content is educational and aims to inform the public about political issues. Full Fact, a team of independent fact-checkers, journalists, technologists, and policy experts, posted a video on TikTok addressing misinformation about the conflict in the Middle East. The video was removed for a community guidelines violation and flagged as “violent and criminal behavior.” In Full Fact’s case, the violation was identified by an automated system rather than human moderation.

TikTok’s platform seems to lack human moderation and global parity. During the height of the Palestinian movement regarding Gaza, I saw many creators’ videos removed for community guidelines violations, similar to Full Fact’s case. Many creators who use their platforms to spread awareness have resorted to loopholes, censoring certain words to avoid being restricted; a word like “shooting” becomes “pew pew.” The platform’s system is its own downfall. TikTok should place human moderation above machine learning models. The company could implement a process in which content flagged and ruled a violation of community guidelines undergoes a final review by human moderators. This would address the problem of AI automatically flagging educational content and removing it from platform feeds. TikTok could then push this educational content onto users’ feeds to inform the masses on essential topics, all while building an image as a credible platform. For the platform to truly combat misinformation, it must first address its anti-misinformation system.


Instagram is committed to reducing the spread of false information. It uses both technology and community feedback to identify posts and accounts that may contain false information, and it partners with third-party fact-checkers who review content. These fact-checkers are certified by the non-partisan International Fact-Checking Network. Instagram rates photos and videos for misinformation; if a post is rated ‘false’, it is demoted in the algorithm and removed from the Explore and hashtag pages, and its visibility in Feed and Stories is reduced. A post rated false is also labeled ‘false information’, and users can see why it was labeled that way. Instagram shares a cross-platform matching system with Facebook that automatically labels content posted on both platforms, as well as an image-matching system that finds copies of the same ‘false information’ image across the platform and labels them accordingly. Users are also able to report content as false information. Cross-platform matching is an effective tool for a company that owns two separate social media platforms, and the image-matching technology is a powerful way to label every copy of a piece of misinformation as it is reposted and spread through the algorithm.

Based on my experience with the platform, I do think these technologies have flaws in handling misinformation. In one of my previous blog posts, I used an Instagram post as an example of how quickly misinformation spreads. Many posts, like the false claim that Bill Gates’ lab-grown meat is cancerous, float around the platform’s algorithm without being labeled as false information or warning users about it. A close friend has told me how their partner sends them posts like these and insists that they need to avoid what is being described. Susceptible people, like my friend’s partner, are being influenced by posts that aren’t properly labeled as false information.

Instagram should keep content moderation policies in place rather than end its fact-checking program. Meta CEO Mark Zuckerberg has announced that the company is testing Community Notes, which draws on a broader range of voices to decide which content would benefit from additional context: Meta does not decide what gets rated, the community does.

It seems as if Instagram is going back on its commitment to reducing the spread of false information. Rather than focusing solely on Community Notes, Instagram should run a hybrid system that utilizes both third-party fact-checkers and Community Notes. The platform should also reconsider its practice of merely demoting posts and accounts in the algorithm, and instead fully remove content and accounts aimed at spreading misinformation. Simply demoting them from the feed still gives users a chance to come across such content, and users are likely to engage with it if they believe the false information being spread.

TikTok’s and Instagram’s policies to combat misinformation are deeply flawed and, in practice, are not working the way they should. TikTok’s system flags educational and anti-misinformation content as violent or otherwise in violation of community guidelines. Instagram’s system has blind spots and does not fully remove content flagged as misinformation. Both companies’ misinformation policies should also expand globally: to truly reduce the spread of misinformation, we need to address how people everywhere consume the media on their feeds. Misinformation is not an issue solely in the United States of America; it is an issue everywhere. My recommendations may move the needle on misinformation by fixing the flaws that undercut each platform’s commitment to reducing it. Users who debunk misinformation or use their content to educate the masses on current world issues shouldn’t be punished but rather uplifted.

References:

Jl. (2025, April 17). TikTok rolls out ‘Footnotes’ feature to boost context, fight misinformation in the US. DT News / The Daily Tribune, Kingdom of Bahrain. https://www.newsofbahrain.com/trends/111595.html

TikTok and the war on misinformation. (n.d.). Capitol Technology University. https://www.captechu.edu/blog/tiktok-and-war-misinformation

Why is TikTok penalising content designed to highlight misinformation? (2026, April 7). Full Fact. https://fullfact.org/technology/tiktok-penalising-misinformation-content/
