Author: Kerry Molly

  • The Dangers of a Trump-Jesus Deepfake

    The Dangers of a Trump-Jesus Deepfake

    This blog post explores the effects of transfer, synthetic media, and the confirmation bias that breaks reality. The primary target audience is non-expert adults who are politically moderate or politically curious. They are not locked into a rigid framework and are likely to ask questions like “Is this real?” and “How should I feel about this?” Those questions make them receptive to education. The chosen medium is my blog because there are no algorithms this piece needs to fight in order to gain reach. A blog post allows the audience to slow down, read the argument, process the information, and decide how they want to move forward. Unlike a quick informational video, it gives my target audience the time and space to absorb the information I’m providing. They are not rushed into making decisions.

    Earlier this month, President Donald Trump posted an image depicting him as a god. He appears to be healing an elderly man while a woman prays over them, and three other figures, among them a nurse and a military officer, look up at him. The background holds an American flag, angels descending from the sky, the Statue of Liberty standing right behind the group, and bald eagles soaring overhead.

    Whether this image was posted by the president to “troll” those against his administration or to pander to his followers is almost irrelevant. As AI technology continues to improve and become photorealistic, we are slowly witnessing a new genre of political iconography: AI-generated religious apotheosis. 

    On its surface, the picture of Trump as Jesus Christ might seem like an online satirical image uploaded by the President. However, for an average viewer, it can function as a potent psychological weapon. To understand the dangers of AI-generated religious apotheosis of political figures, we need to look into three mechanisms: transfer, deepfake technology, and confirmation bias. 

    Transfer

    The transfer technique in propaganda takes the credibility, sanctity, or emotional weight of a symbol and attaches it to a person or product. The AI-generated photo of the president is the transfer technique in its purest form. It’s not uncommon to see a political ad that shows a candidate standing by a church or helping those in need. The difference is that a generative-AI image presents the association as a literal visual claim.

    In the United States, 62% of Americans identify as Christians (Pew Research Center, 2025). When Christians stumble upon content of political leaders superimposed onto the visage of Jesus Christ, the imagery hijacks religious loyalty: the viewer subconsciously merges the holy figure and the political figure into one. This phenomenon is not persuasion but neurological short-circuiting. Researchers Michael Klincewicz, Mark Alfano, and Amir Ebrahimi Fard call this idea ‘slopaganda’: the use of AI-generated content to manipulate sentiment and build emotional associations that bypass rational agreement. As they document, “through repetition, those associations stick, even when the viewer knows the content is fake” (Klincewicz, Alfano, & Fard, 2025). Pushing out such content lets political figures, like the president, portray themselves however they see fit without waiting for the media to do it. They can pander to the image their supporters already hold of them, which is exactly what slopaganda describes. The article draws a striking connection to the Rorschach test: the president’s followers believe that he could, would, and is the kind of president who would do what these AI-generated videos and photos depict him doing.

    Deepfake

    A deepfake is an artificial image or video generated by a kind of machine learning called “deep” learning. The term covers any synthetic media in which a person is depicted doing something they never did. The photo of Trump as Jesus Christ is, in effect, a deepfake of divinity.

    Years ago, deepfakes had glitchy patches and garbled text. Thanks to advances in AI technology, these models can now produce 4K-resolution images with convincing texture and lighting. Modern deepfakes are far harder to spot than in previous years; they have even reached the point of having simulated heartbeats (Science Focus, 2026). “Modern text to image systems such as Stable Diffusion and DALL E can now generate images so realistic that they often appear completely natural, leaving little to no visible artifacts for traditional deepfake detectors to rely on” (Ameen & Islam, 2025). Even authentic photos are now susceptible to synthetic copies. Isaac Record and Boaz Miller argue that generative AI “allows ordinary computer users to create and widely share fictional worlds indistinguishable from the real world” (Record & Miller, 2025). The real slowly becomes dubious, and the fake becomes gospel.

    Confirmation Bias

    Confirmation bias is people’s tendency to seek out, or interpret, information in ways consistent with their existing beliefs (Casad & Luebering, 2026). As I mentioned above, the image works like a Rorschach test. Depending on who you are and where your views lie, the AI image can feel like a prophecy to those who support the president; dopamine can be released at the mere sight of a visual that matches your ideology, Trump as a persecuted savior. Those who do not support the president, on the other hand, are likely to see the image as a blasphemous horror. That said, some of Trump’s supporters react the same way as his critics. Former U.S. representative and Trump supporter Marjorie Taylor Greene commented on the image, “I completely denounce this and I’m praying against it!!!”

    The synthetic nature of the image is almost irrelevant. Neither side, pro-Trump or anti-Trump, is checking whether the image is real; both are using it to confirm what they already believe. That act is how AI accelerates polarization. AI isn’t creating the bias; it fuels a bias each side has already generated. Research shows that a voter’s political affiliation directly dictates how they process identical information: partisans were “open and forgiving of an in-party politician’s transgression but critical and unforgiving of an out-party politician’s identical transgression” (Lee & Romdhane, 2025). Another study found that simply telling someone a face belongs to a political ally or opponent changes how trustworthy that face is perceived to be (Cassidy, Hughes, & Krendl, 2022). AI-generated images of political figures as divinity can be weaponized to exploit exactly this vulnerability.

    Although these images are lies, their harm runs deeper than the lie itself. They damage three distinct domains: democracy, religion, and reality. They harm democracy by giving political figures a pass. When supporters genuinely connect their candidate to a divine being, undemocratic behavior becomes justified because the candidate is placed above the law; supporters are likely to excuse unlawful actions taken by someone they see as divinely sanctioned. The images harm religion by reducing Jesus Christ to a prop for a political figure’s idolatry. Once divine imagery is attached to a politician, anything that politician does, good or bad, reflects back onto the divinity and changes how the established image is perceived. And as images like these surface more frequently, they can condition us to stop caring about their effects; this is the damage to reality. As we “get used to” seeing AI-generated photos, we become less likely to sharpen the mental defenses we need against the rising tide of deepfakes. Generating fictional worlds indistinguishable from the real world erodes the very foundation of public reasoning: we cannot reason together, or act together, if we cannot agree on what actually happened.

    Defending against these mechanisms

    As society’s use of AI technology continues to advance, it is imperative that we defend ourselves against transfer, deepfakes, and bias. Below is a guide to resisting this kind of content:

    Check your own bias first. Before you react emotionally, ask yourself whether you would believe this content if it depicted someone else. If the answer is no, pause: your bias, not the evidence, is doing the believing.

    Lateral read. As noted above, pixel-level detection is currently failing, so do your own research instead: check who posted the content, check whether the account is known for satire, and look for content credentials.

    Pause. Rather than instantly resharing content you see on your social media feeds, wait. In that window, fact-checkers and other users have time to analyze the content and verify its validity. Slopaganda thrives on rapid, emotional repetition.
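    For readers who think in code, the three steps above can be sketched as a tiny triage routine. This is an illustrative sketch only: the field names, the checklist wording, and the one-hour threshold are my own hypothetical choices, not anything a real platform exposes.

```python
def triage_post(post):
    """Return a list of reasons to pause before resharing `post`.

    `post` is a dict with hypothetical keys (not a real platform API):
      - "matches_my_views": bool  (the gut check for confirmation bias)
      - "poster_verified_source": bool  (the result of lateral reading)
      - "has_content_credentials": bool  (provenance metadata attached)
      - "minutes_since_posted": int  (time fact-checkers have had to react)
    """
    reasons = []
    # Step 1: check your own bias first.
    if post.get("matches_my_views"):
        reasons.append("Bias check: it flatters what you already believe.")
    # Step 2: lateral read the poster and look for credentials.
    if not post.get("poster_verified_source"):
        reasons.append("Lateral read: the poster is not a verified source.")
    if not post.get("has_content_credentials"):
        reasons.append("No content credentials attached to the media.")
    # Step 3: pause long enough for fact-checkers to weigh in.
    if post.get("minutes_since_posted", 0) < 60:
        reasons.append("Pause: too new for fact-checkers to have reviewed.")
    return reasons
```

    An empty list means none of the three checks raised a flag; anything else is a reason to wait before hitting repost.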


    The AI-generated photo of Donald Trump can be written off as an internet meme, but it goes deeper than that: it was a stress test of the 21st-century mind. Content like this weaponizes transfer to hijack your reverence and exploits deepfakes to break your trust in visual media, in the hope that confirmation bias will carry it past your logical defenses. When content like this appears, remember that it’s not just a meme you’re seeing. It is psychological warfare, banking on you to worship what you see and dismiss the ethics behind it.

    Get educated and learn to defend yourself. 

    References

    Ameen, M. R., & Islam, A. (2025). Detecting AI-generated images via diffusion snap-back reconstruction: A forensic approach. arXiv preprint. https://arxiv.org/abs/2511.00352

    Casad, B. J., & Luebering, J. E. (2026, March 30). Confirmation bias. In Encyclopedia Britannica. Retrieved April 30, 2026, from https://www.britannica.com/science/confirmation-bias

    Cassidy, B. S., Hughes, C., & Krendl, A. C. (2022). Disclosing political partisanship polarizes first impressions of faces. PLOS ONE, *17*(11), e0276400. https://doi.org/10.1371/journal.pone.0276400

    Klincewicz, M., Alfano, M., & Fard, A. E. (2025). Slopaganda: The interaction between propaganda and generative AI. arXiv preprint. https://arxiv.org/abs/2503.01560

    Lee, S., & Romdhane, S. B. (2025). The politics of ethics: Can honesty cross over political polarization? Journalism and Media, *6*(1), 23. https://doi.org/10.3390/journalmedia6010023

    New Lines staff. (2026, April 14). Slopaganda comes of age. New Lines Magazine. https://newlinesmag.com/spotlight/slopaganda-comes-of-age/

    Pew Research Center. (2025, February 26). Decline of Christianity in the U.S. has slowed, may have leveled off (2023-24 Religious Landscape Study). https://www.pewresearch.org/religion/2025/02/26/decline-of-christianity-in-the-us-has-slowed-may-have-leveled-off/

    Record, I., & Miller, B. (2025, June 18). Ways of worldfaking: Identifying the threat and harm of synthetic media. Social Epistemology Review and Reply Collective, *14*(6), 57–65. https://social-epistemology.com/2025/06/18/ways-of-worldfaking-identifying-the-threat-and-harm-of-synthetic-media-isaac-record-and-boaz-miller/

    Science Focus staff. (2026). Deepfakes have heartbeats: Why AI-generated faces are now nearly perfect. BBC Science Focus. https://www.sciencefocus.com/news/deepfakes-have-heartbeats

    University of Virginia. (n.d.). Deepfakes. UVA Security, SAFE & Privacy. Retrieved April 30, 2026, from https://security.virginia.edu/deepfakes

  • TikTok and Instagram’s Misinformation Policies

    TikTok and Instagram’s Misinformation Policies

    TikTok’s stated principles embrace diverse views and opinions and encourage open dialogue. To maintain a safe and trustworthy environment, the platform takes action against misinformation through independent fact-checking partners and databases of previously fact-checked claims. Its fact-checking partners are accredited by the International Fact-Checking Network. In 2022, TikTok continued investing in machine learning models, improved its detection of known misleading audio and imagery to reduce manipulated content, and invested in a detection program to flag new and evolving claims across the internet. Its network targets four categories: health misinformation, misinformation that risks public safety and undermines public trust, environmental or climate misinformation, and election misinformation.

    TikTok also rolled out its ‘Footnotes’ feature, which aims to combat misinformation. Footnotes lets users add informative context to videos; it is a community-driven tool similar to X’s Community Notes. While the system is being tested, the fact-checking program will continue to run, which means the platform uses a hybrid model combining community insight with verified fact-checking organizations. In my experience with the platform, you’ll sometimes see a disclaimer warning users to fact-check the content being presented. This disclaimer was common during the 2024 elections and the COVID-19 pandemic. Nowadays, these disclaimers seem to pop up on political and educational videos even when the content is educational and aimed at informing the public about political issues. Full Fact, a team of independent fact-checkers, journalists, technologists, and policy experts, posted a video on TikTok addressing misinformation about the conflict in the Middle East. The video was removed for a community-guidelines violation and flagged as “violent and criminal behavior.” In their case, the violation was identified by an automated system rather than human moderation.

    TikTok’s platform seems to be missing human moderation and global parity. During the height of the movement around Gaza, I saw many creators’ videos removed for community-guidelines violations, much like Full Fact’s case. Creators who use the platform to spread awareness resort to loopholes, censoring certain words to avoid restriction; words like “shooting” become “pew pew.” The platform’s automated system is its own downfall. TikTok should place human moderation above machine learning models. The company could implement a process in which content flagged as a guidelines violation undergoes a final review by human moderators. That would address the problem of AI automatically flagging educational content and removing it from feeds. TikTok could instead push this educational content onto users’ feeds to inform the masses on essential topics, all while building its image as a credible platform. For TikTok to truly combat misinformation, it must first fix its anti-misinformation system.


    Instagram is committed to reducing the spread of false information. It uses both technology and community feedback to identify posts and accounts that may contain false information, and it partners with third-party fact-checkers, certified by the non-partisan International Fact-Checking Network, to review content. Instagram rates photos and videos for misinformation. If a post is rated false, it is demoted by the algorithm and removed from the Explore and hashtag pages, and its visibility in Feed and Stories is reduced. A post deemed false is also labeled ‘false information,’ with an option for users to see why it was labeled that way. Instagram has a cross-platform match with Facebook that automatically labels content posted on both platforms, plus an image-matching system that finds copies of the same ‘false information’ image across the platform and labels them as well. Users can also report content as false information. Cross-platform matching is an effective tool for a company that owns two separate social media platforms, and the image-matching technology is a powerful way to label reposted misinformation as it spreads through the algorithm.

    Based on my experience with the platform, these technologies have flaws in handling misinformation. In a previous blog post, I used an Instagram post as an example of how quickly misinformation spreads. Many posts, like the one claiming Bill Gates’ lab-grown meat causes cancer, float around the platform’s algorithm without being labeled as false information or carrying any warning for users. A close friend told me their partner sends them posts like these and insists they need to avoid what the posts describe. Susceptible people, like that partner, are being affected by posts that aren’t properly labeled as false information.

    Instagram should keep content moderation policies in place rather than end its fact-checking program. Meta CEO Mark Zuckerberg has announced that the company is testing Community Notes, which draws on a broader range of voices to decide which content would benefit from additional context; Meta does not decide what gets rated, the community does.

    It seems as if Instagram is backing away from its commitment to reducing the spread of false information. Rather than relying solely on Community Notes, Instagram should run a hybrid system that combines third-party fact-checkers with Community Notes. The platform should also rethink its practice of merely demoting posts and accounts in the algorithm. Instagram should move toward fully removing content and accounts that aim to spread misinformation; simply demoting them still gives users a chance to come across the content, and users who believe the false information are still likely to engage with it.

    TikTok’s and Instagram’s policies for combating misinformation are deeply flawed and are not working the way they should. TikTok’s system flags educational and anti-misinformation content as violent or guideline-violating. Instagram’s system is half-blind and does not fully remove content flagged as misinformation. Both companies’ misinformation policies should also expand globally: to truly reduce the spread of misinformation, we need to address how people everywhere consume the media in their feeds. Misinformation is not solely a United States problem; it is an issue everywhere. My recommendations may move the needle by fixing the flaws that undercut each platform’s commitment to reducing misinformation. Users who debunk misinformation or use their content to educate the masses about current issues shouldn’t be punished, but uplifted.

    References:

    Jl. (2025, April 17). TikTok rolls out ‘Footnotes’ feature to boost context, fight misinformation in the US. DT News (The Daily Tribune, Kingdom of Bahrain). https://www.newsofbahrain.com/trends/111595.html

    Capitol Technology University. (n.d.). TikTok and the war on misinformation. https://www.captechu.edu/blog/tiktok-and-war-misinformation


    Full Fact. (2026, April 7). Why is TikTok penalising content designed to highlight misinformation? Full Fact Blog. https://fullfact.org/technology/tiktok-penalising-misinformation-content/

  • Bill Gates’ cancerous lab-grown meat

    Bill Gates’ cancerous lab-grown meat

    Claim Analysis Assignment:
    Social media platforms spread misinformation quickly. “Rapid publication and peer-to-peer sharing allow ordinary users to distribute information quickly to large audiences, so misinformation can be policed only after the fact (if at all)” (APA, 2024). When people come across misinformation that aligns with their personal identity or values, they are more than likely to share their findings with the people around them. That creates a cycle of sharing misinformation that feeds the ongoing literacy crisis. It is imperative that we take proper steps to evaluate the information we see on social media; we can no longer trust everything that appears in our feeds.


    The claim in question concerns Bill Gates and “cancerous” lab-grown meat supposedly being pushed into our grocery stores. It comes from an Instagram post by @cakemenu, a page dedicated to the “biggest headlines in food” and “stories that everyone is talking about.” The claim is worth evaluating because of what it asserts: that lab-grown meat linked to aggressive cancer is being sold in our grocery stores. The subject matter, cancer and lab-grown meat, raises obvious concerns. To start the investigation into whether the claim is true or false, we search for “bill gates’ lab-grown meat linked to aggressive cancer.”


    The first thing on the results page is an AI overview of the situation. It states, “claims that lab-grown meat (cultivated meat) backed by Bill Gates is linked to ‘turbo cancer’ or aggressive cancer in humans are false and unsubstantiated, according to fact-checkers and food safety experts.” Although the AI overview “gives” us our answer, we want to practice lateral reading and SIFT in order to get answers from accredited sources.


    Looking back at the Instagram carousel, the second slide showed a news article titled “Cloned ‘Meat’ secretly flooding American food supply without labels,” allegedly written by Frank Bergman of slaynews.com. The next step is to look up the article and read what it says.


    I doubted whether the website even existed, but it does, and the “Cloned ‘Meat’ secretly flooding American food supply without labels” article is real. Lateral reading means checking what credible sources say about the author, the organization, and the claims, so my next step was to investigate the author and the organization. I started by reading the author’s ‘About Me’ page and the organization’s ‘About Us’ page. Based on both, I could hypothesize that Frank Bergman and Slay News hold conservative/Republican views.

    To confirm my suspicions, I looked at what others say about the author and the organization. I Googled ‘Frank Bergman’ and clicked the second link listed; based on the short descriptions, it seemed like the right one for learning about the author. The link led to a Science Feedback page about content from Frank Bergman, listing four of his articles flagged as misleading, inaccurate, or built on flawed reasoning. I read through the one about the COVID vaccine and saw that a preprint containing vaccine misinformation had been spread by outlets known to publish misinformation; Slay News and Frank Bergman were named among them.

    Next I Googled “Is Slay News credible.” The AI overview said it was not a credible site, but I chose to check what Media Bias/Fact Check had to say. Media Bias/Fact Check categorizes Slay News as a conservative news and opinion website that promotes misinformation and false claims.

    I also clicked on Miami University’s page ‘Avoiding Bad, Misleading, or Fake News: Evaluating News Sources: Resources.’ Scrolling down, I found the section on fact-checking websites and saw Media Bias/Fact Check listed among the trusted fact-checkers.

    My final verdict, based on these findings, is that the claim about Bill Gates and his cancerous lab-grown meat is false. By researching the article’s author and the organization’s reputation, we were able to conclude that what they were saying was false. To ease my mind, I also looked up evidence on lab-grown meat and its supposed connection to aggressive cancer; an article by Full Fact concluded the claims were false. Had we not taken these few extra steps to verify the claims, we would have played into the goals of websites like Slay News: spreading misinformation and creating public hysteria.

    The current administration tends to throw around random claims about our health, the economy, climate issues, and more, and it’s no secret that it has a following that blindly accepts its every move. To counter this, we must do our part in searching for the truth in a world full of lies and deceit. It’s concerning that this Instagram post has thousands of comments agreeing with the false claim, knowing the commenters did not take the time to validate what was being presented.

    References:
    American Psychological Association. (2024, March 1). How and why does misinformation spread? https://www.apa.org/topics/journalism-facts/how-why-misinformation-spreads

    Bergman, F. (2025, November 24). Cloned “meat” secretly flooding American food supply without labels. Slay News. https://slaynews.com/news/cloned-meat-secretly-flooding-american-food-supply-without-labels/

  • Evaluating Misinformation Educational Tools

    Evaluating Misinformation Educational Tools

    The News Literacy Project’s RumorGuard was created to teach users how to recognize misinformation. The project’s goal is to ensure students are skilled in news literacy before they graduate high school. Misinformation is everywhere, and RumorGuard highlights the importance of learning to tell what is real from what is not.

    The website’s landing page presents five factors of credibility: authenticity, source, evidence, context, and reasoning. To learn more about each factor, you’re taken to the “Techniques” section, which explains it further and pairs it with a game. For misinformation, it teaches the five types that exist: satire, false context, imposter content, manipulated content, and fabricated content. You’re then given a prompt and must decide which of the five types it is, with a description of the prompt to help you reach the correct answer. In the evidence section, I took a quiz that tested whether I could make sense of misused data. The site also teaches the art of lateral reading and how to use it to check whether a source is credible: with just a few open tabs, you can verify your sources before quoting the information you learned from them.

    RumorGuard is an effective tool for learning about misinformation and building the skills to combat it. The five credibility factors provide a clear, memorable structure that users can apply immediately (Ward, 2025). The site’s layout is user-friendly and doesn’t bombard learners with more information than they can absorb; it’s simple and clearly explained. The site also supports Checkology, an e-learning platform that deepens skills around misinformation, news media bias, reliable sources, and more. The use of real-world viral rumors shows users what misinformation in the media really looks like.

    Bad News is the second misinformation education game I played. It starts with the objective of gaining as many followers as you can through fake news. Its learning factors are as follows: impersonation, emotion, polarization, conspiracy, discredit, and trolling. The game shows how fake-news outlets exploit each factor. Bad News puts you in the mind of a purveyor of disinformation and lets you choose options that build your credibility even as you spread misinformation to your followers. While its examples are not drawn from real-world events, the reaction tweets are very believable; they read exactly like the responses you’d see in a Twitter thread today. By the end of the game, I had a score of 13,263. Bad News is another effective educational tool for sharpening critical thinking: users can take what they learned about manipulation tactics and apply it to what they see in Twitter threads. And while the game shows how easy it is to start spreading fake news, the tactics are well known; without the political or financial motivation to chase fame through misinformation, players pose no real risk (Roozenbeek & van der Linden, 2019). Adopting the mindset of a manipulator develops your critical thinking and media literacy and builds resistance to misinformation on social media.

     

    Educational games and interactive tools are an effective way to teach the masses about misinformation; they build the critical-thinking skills needed to address misinformation and media illiteracy. A tool like Bad News is a great introduction to how fake news spreads, but it shouldn’t be your only educational tool. Pairing it with something like RumorGuard will truly prepare you to combat the false narratives prevalent in the media. Learning the SIFT method (Tokar, 2025) and how to laterally read your sources are other important skills to build on. With the advancement of AI and the constant spread of AI content, sharpening your media literacy and critical thinking has never been more important.

    References:

    Roozenbeek, J., van der Linden, S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun 5, 65 (2019). https://doi.org/10.1057/s41599-019-0279-9

    Tokar, S. (2025, July 23). How educators can help students navigate misinformation. University of Rochester News Center. https://www.rochester.edu/newscenter/ever-better-educators-students-navigate-misinformation-660942/

    Ward, B. (2025, September 6). Misinformation — tools of the trade. Medium. https://medium.com/@traveling4one/misinformation-tools-of-the-trade-3c7b3ca19254

  • Blog Assignment 1.

    24-hour Media Diet

    This assignment runs through my media intake over the course of a Saturday, focusing on any questionable content I stumble on during my time on social media.

    6:45am — I woke up and checked for any important notifications. I opened TikTok and scrolled through the app, something I gravitate toward when I have the day off. I stumbled on videos about ab workouts that tone and define your core, and I reposted a video claiming Ariana Grande is teasing a new album despite being in the middle of launching her Eternal Sunshine tour. That could easily be misinformation, but I made no effort to fact-check it; I read a few comments and reposted. I believe this is mainly delusion.

    8:50am — I woke up again after dozing off watching TikToks. I checked my emails and started reading up on master’s programs I could pursue with a degree in Digital Audiences. One caught my eye at Gonzaga University: a master’s in Destination Management, which is in tune with my minor in Tourism Development and Management.

    9:45am — Driving home from my boyfriend's place, I made a pit stop at Starbucks. Using their mobile app, I joined a bonus offer worth 15 stars for ordering a blonde vanilla latte. Scrolling through the front page, I stumbled upon the Secret Poster Refresher, a Hannah Montana 20th Anniversary secret-menu special: a grande strawberry açaí refresher with no inclusions, two pumps of raspberry syrup, and raspberry cold foam.

    Although it is advertised as Hannah's drink, that framing is inaccurate: Hannah canonically despises raspberry.

    10:25am — I made a quick tuna melt for brunch while watching Casey's YouTube video "I'm working at a corner store..". I came across some bell peppers going bad, one with visible mold, and debated with my boyfriend about whether they were still usable. He argued that as long as you cut the mold off and don't cross-contaminate with the knife, they're safe to eat. I simply wasn't a fan of the idea even if it was safe, so we looked it up. We cut around the mold but ultimately decided to toss the peppers after finding fuzzy mold inside them.

    11:15am — We ate our meal and put on the anime Jujutsu Kaisen, wrapping up our rerun of season 1 before taking a nap.

    12:45pm — We got up and headed to the gym for our push day. My only media there was my Spotify playlists.

    3:00pm — We started prepping dinner, a pesto pasta. I took inspiration from Tini and followed her version of a creamy pesto pasta, adding a few ingredients of my own.

    6:25pm — After dinner, we started the co-op game A Way Out. We reached a scene where the characters climb a narrow vertical shaft to escape the prison, a technique called chimneying. When I first saw it in the game, I doubted how realistic the move was, but I have since seen many rock climbers use it on tricky, narrow sections. It still fascinates me that it is possible to do with a partner and not just solo.

    8:00pm — I read This Is How You Lose the Time War by Amal El-Mohtar and Max Gladstone for about 30 minutes before heading to bed by 8:30pm.

    Analysis: 

    On this particular day, I did not stumble upon much questionable content. Most of what I do question is fitness advice, which sends me on deep dives into which workouts are actually worth incorporating into my routine. Plenty of fitness influencers on the app give contradictory information, so it is important to do your own research on topics like these. One clear piece of misinformation was the Hannah Montana drink Starbucks pushed out for the 20th anniversary. The collaboration seems to have lacked actual research into what Hannah Montana dislikes and what drink truly captures who she is. As a consumer who liked Hannah Montana, I would not feel inclined to support a product with so little thought behind it.

    Something I have noticed, and what you may have picked up on, is that I will repost something on my TikTok page even if it might be false. I did not bother to do my own research on the AG8 album that is possibly coming; I blindly followed the couple of videos that talked about it and the comments that mentioned seeing it, and I believed it. I think fact-checking is important, but it is not something I will do for every single thing that lands on my feed. If a claim is highly questionable, I will research it myself; things like health, politics, and world issues are on that list. A possible AG8 album in the works is not something I will go out of my way to investigate deeply. A couple of videos and some agreeing comments are all I care to see, and if it gets disproven later down the line, that is okay with me; it is not a detrimental thing.

    The bell pepper debate, by contrast, is a good example of doing your own research. People tend to trust others because of their relationships, but it is always best to make your own informed decision by researching the question and choosing what is best for you. While it likely would have been fine for my boyfriend and me to eat the bell pepper, backed by my own research and his knowledge, I consciously made my own informed decision only after I had fact-checked what he told me.