AI in a Post-Truth Society

A screenshot of a new chat on the ChatGPT app with a question typed into the text bar: "How can I trust you?"

This week I wanted to write about an online squabble I witnessed that made me stop and think about the current state of our information environments. On the evening of January 27th, the White House Office of Management and Budget sent out a (now-rescinded) memo that directed a pause on all federal grants. This, understandably, caused a great deal of panic as thousands of institutions, research labs, nonprofits, and individuals on federal assistance of any kind (Medicaid, SNAP, student financial aid, etc.) attempted to ascertain how the pause would affect them personally. Since news outlets take some time to publish breaking stories, millions flocked to microblogging platforms like Bluesky, Twitter, and Threads to read in-the-moment updates of varying veracity; as a result, some active posters were caught in the crossfire for sharing unverified claims.

Alejandra Caraballo (@esqueer.net on Bluesky), an American civil rights attorney and clinical instructor at Harvard, posted a now-deleted series of screenshots that displayed a list of federal departments and grants that could be affected by the pause. Several users were quick to point out that quite a few entities listed in the screenshots would not actually be affected by the funding freeze. After several individuals requested her sources, she made an additional post in the thread explaining that the initial list of organizations was generated by ChatGPT, but she had then independently verified and collected them in a separate document. Unfortunately, hundreds of users replied to the original thread in outrage, pointing out that the damage had already been done and that, in a sensitive and frenzied situation, Alejandra had irresponsibly posted misinformation.

This situation led me to carefully consider how I use generative AI in my daily life, and the degree to which I trust the conclusions it arrives at. Arizona State University, my alma mater and current employer, was the first educational institution to partner with OpenAI in January of last year; as a staff member, I have access to ChatGPT Enterprise, a version of ChatGPT Plus that I don’t have to pay for out of my own pocket. I also have the new iPhone 16, which allows Siri to tap into ChatGPT to answer certain questions. My proximity to these models has led me to rely on them more for questions requiring general knowledge (“Is rubbing my eyes bad for them?”) as well as questions that help me compose (“What’s a good synonym for ‘panicked’?”). While these questions are fairly innocuous, relying on these models to make decisions or inform my opinions will, inevitably, make me the victim of misinformation.

A screenshot of an answer to a question asked of Siri. The answer reads: “A good synonym for ‘panicked’ is ‘frantic,’” followed by the disclaimer “Check important info for mistakes.”

Despite frequent and visible disclaimers to “check important information for mistakes,” many generative AI users have a tendency to place more trust than they should in the responses they receive, especially when those responses are delivered confidently and authoritatively. Anecdotally, I’ve had users tell me that they now rely on ChatGPT as an information source more often than Google. However, the fact remains that no matter how authoritative or believable the response, these models hallucinate as a byproduct of how they function; they will confidently produce information that is incorrect because they are finding patterns that don’t actually exist but appear as though they would logically follow, in a similar fashion to human conspiracy theorists. Essentially, large language models produce incorrect information because that information seems like it would be true; while not purposeful, the outcomes of hallucinations can often be quite insidious.

While there are other unforeseen consequences of generative AI models that I’ll likely cover in future posts (poor environmental effects, the displacement of artists, the misappropriation of their work for training data, etc.), I’d like to focus on this issue of misinformation and the erosion of trust. In Alejandra’s situation, she did eventually do her due diligence to verify which organizations would be affected by the federal funding freeze; the list generated by ChatGPT became a starting point from which she could work. While she should not have presented the initial ChatGPT response as an authoritative or exhaustive list, and though there may be disagreements around whether generative AI use is ethical in the first place, her use of ChatGPT in this instance appeared to be ethically sound. Alejandra had good intentions, but not all AI users do.

Fake AI social media profiles and AI spam have become exceedingly common on Meta platforms, with Meta even leaning into the trend by announcing their own AI character accounts before discontinuing them shortly after. The “Dead Internet Theory,” a conspiracy theory that claims most Internet activity is now generated by bots, has become creepily prescient. Additionally, much popular discourse after the 2016 American elections has promoted the belief that we are now in a “post-truth society,” meaning that facts have begun to matter less in decision making or the formation of new opinions than already-entrenched personal beliefs. This trend can be attributed to the widespread use of social media platforms, which has led to an explosion in misinformation, disinformation, and siloing (which I briefly touched on in my previous post). Given these pre-existing conditions in our digital information environment, the popularization of generative AI, and the seemingly bottomless amount of investment pouring into AI development, these products are poised to further erode trust in the information we consume.

While the drama surrounding Alejandra’s post caught my attention, this was not the first time, nor will it be the last, that someone presents a generated response as a factually accurate statement. ChatGPT, Gemini, Claude, and now DeepSeek all exist to serve one common purpose: to answer questions. They are literally question-answering machines, but we cannot fully trust the answers they provide unless we verify them ourselves. This of course calls their usefulness into question, but the current state of affairs in our society leads me to ask: at a certain point, will we even care to check?

We must each be cautious in the questions we ask, and suspicious of the answers we receive. This is the new reality of the post-truth society, accelerated by machines that appear oracular but are in fact perfunctorily seeking patterns that may or may not exist. I will be the first to admit that generative AI can be quite useful for a great number of tasks, but the trust we place in these models to be accurate and truthful will inevitably be betrayed. The veracity of the information we consume online has always been questionable, but it is more essential than ever that we exercise our critical thinking skills and challenge our own assumptions about what seems true and what actually is.

Let’s think about the future together.

Join the Friday Foresight Newsletter to receive insightful commentary like this every Friday!

Izaac Ocean Mansfield