
PRESIDENTIAL PARDONS OR AI LIES? MISINFORMATION RAMPANT IN AI-GENERATED FACT-CHECKS!

In today's rapid-fire era of information consumption, artificial intelligence systems such as OpenAI's ChatGPT and Google Search's AI function have become go-to sources for people seeking answers to specific questions. However, overreliance on such tools can spread misinformation, leading to widespread confusion and misinterpretation of facts, particularly in history, politics, and law, where precision matters.

A high-profile instance revealing the flaws in AI's presumed infallibility involves supposed presidential pardons of relatives. There have been widespread claims that several U.S. Presidents, namely Woodrow Wilson, George H.W. Bush, and Jimmy Carter, extended presidential pardons to their respective family members. However, rigorous research could not substantiate any of these claims, casting doubt on the reliability of AI as an information source.

For instance, the notion that President Woodrow Wilson pardoned his brother-in-law, Hunter deButts, or that George H.W. Bush offered his son Neil a similar reprieve, turned out to be a product of AI misinformation. The error was initially traced back to answers generated by ChatGPT.

When asked how many U.S. Presidents had pardoned their relatives, ChatGPT perpetuated this mistaken strand of history. While it accurately stated that Bill Clinton pardoned his half-brother Roger Clinton, it also erroneously claimed that Neil Bush was similarly pardoned by his father, George H.W. Bush, a claim that is simply false.

Troublingly, Google Search's AI function has also played its part in churning out these inaccuracies. It repeated the claims that Jimmy Carter pardoned his brother Billy and that George H.W. Bush pardoned his son Neil, while glaringly omitting the one factual instance: Donald Trump's pardon of Charles Kushner, his son-in-law's father.

So, what's the real harm? It lies in the design of these 'answer engines', which deliver answers without citing their sources, making it difficult for users to verify accuracy. This design not only exposes people to potentially incorrect or misleading information but also undermines the habit of critical engagement with sources.

The impact of such misinformation on the information environment is considerable. Whether the issue at hand is trivial or momentous, misinformation can distort beliefs, influence views and choices, and alter the narrative of significant events.

As we stand on the cusp of an era where information is primarily digital and reliance on AI services for information is rapidly increasing, it is crucial to develop more reliable systems. But as users, we must also remain vigilant whenever we turn to AI for answers, understand its limitations, and take deliberate steps to prevent the spread of misinformation.