I’ve published well over 5,000 words about ChatGPT getting things wrong. In one article, ChatGPT gave me the wrong date and title for a blog post about a Neil Gaiman novel. I know it was wrong because Gaiman responded to me with a link to the correct post.
In another article, I wrote about ChatGPT laundering an article from a small blog and attributing it to the United Nations. The only reason I could prove ChatGPT had plagiarized and obfuscated the source is that the blog’s author had made a factual error. Large language models are essentially built from plagiarizing text that humans wrote, and humans lie, get facts wrong, and are generally unreliable.
I also ranted a bit about the fact that AI-generated content will eventually start getting pulled into generative chat AIs. Blackhat SEOs and large media companies like CNET creating webpages with AI-written content will make it impossible for the AIs’ creators to filter for only human-written words.
Why ChatGPT and Bing Chat are so good at making things up
By: Benj Edwards, April 6, 2023, arstechnica.com
The article from Ars Technica covers some far more harmful examples of AI-created misinformation: an Australian mayor who found that ChatGPT falsely claimed he had gone to prison for bribery, and a “law professor who discovered that ChatGPT had placed him on a list of legal scholars who had sexually harassed someone.”
This and other reports have sparked a linguistic debate about calling ChatGPT a liar. Techmeme has aggregated some decent articles discussing this point.
The arguments for calling ChatGPT a liar are that it is an utterly unreliable arbiter of information, answering questions confidently and even faking citations. The arguments against are that ChatGPT and other AI tools aren’t people and thus cannot lie, because lying requires intent.
I just checked out of the “10 items or less” line at the store. If someone had 12 items in that line, it would matter more to me than whether they knew “less” should read “fewer.” I’m probably more pedantic than the next guy, but what matters here is that regular people understand that ChatGPT is not to be trusted.
The folks who want to humanize AI will do so anyway. To everyone else, call AI the liar that it is, because that’s what matters.