Recently, on a question-and-answer thread on Reddit, somebody asked about the exact heel height of a discontinued weightlifting shoe. “I’ve looked everywhere, but not even ChatGPT knows,” I remember them saying. It was true that the information was hard to find, but that’s all the more reason not to use ChatGPT. If you can’t fact-check the bot, the answers it gives are useless.
As we’ve explained before, ChatGPT is a text generator. It doesn’t “know” anything, and nothing guarantees that anything it says is correct. In fact, a lot of what it says is provably wrong. We’ve seen it make up exercises that are physically impossible, and it told Lifehacker writer Stephen Johnson that he had written specific articles that were not, in fact, his writing at all. AI can “hallucinate” facts and double down on them when pressed.
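To see why, here’s a toy sketch. This is nothing like ChatGPT’s real architecture, and the miniature “training data” is invented, but it shows the core idea: generation means picking a statistically plausible next word, with no truth check anywhere.

```python
import random
from collections import defaultdict

# Invented toy corpus -- imagine this is everything the model has "read."
corpus = (
    "beth writes about public health . "
    "beth holds a degree in biology . "
    "many health writers hold a degree in public health ."
).split()

# A bigram model: just count which words have followed which.
following = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    following[word].append(next_word)

def generate(start, max_words=8):
    words = [start]
    for _ in range(max_words):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # plausible, not verified
    return " ".join(words)

print(generate("beth"))
# Can output "beth holds a degree in public health" -- fluent, confident,
# and false in this toy world, because "degree in" is usually followed
# by "public health" in the corpus.
```

Run it a few times and “Beth’s” degree flips between biology and public health. The model has no idea which is true; it only knows which words tend to follow which, and that same sampling randomness is why regenerating a ChatGPT response can produce a different answer each time.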
For example, I asked it who Beth Skwarecki is. It got my job title and beat correct, and it knows I write for Lifehacker, but it keeps trying to credit me with a Master of Public Health degree. (Plausible guess, but no.) If you regenerate the response a few times, it will offer different universities I supposedly earned this degree from. None of them are universities I’ve ever attended.
In other words, you can’t tell whether an AI-generated fact is true or not by the way the text looks; it’s designed to look plausible and correct. You have to fact-check it. If you could get ChatGPT to tell you that a certain weightlifting shoe has a standard 3/4" heel, that certainly sounds like it could be correct, but if you can’t find that information elsewhere, you can’t check it—so you’re wasting your time.
ChatGPT is not a search engine
Now, there is such a thing as an AI-powered search engine. This is how Bing Chat works: It does an actual search to find information, then uses the AI to format that information as friendly text. For each factual thing it tells you, you can click on its source to see where that information actually came from. But the other AI chatbots out there, including ChatGPT, aren’t built that way.
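Bing’s actual pipeline isn’t public, but the general retrieve-then-cite pattern looks something like this sketch. The documents, URLs, and helper names are all made up for illustration:

```python
DOCUMENTS = [
    {"url": "https://example.com/shoe-specs",
     "text": "The Model X lifter has a 19 mm heel."},
    {"url": "https://example.com/shoe-review",
     "text": "The Model X lifter runs narrow in the toe box."},
]

def retrieve(query, documents):
    """Naive keyword match standing in for a real web search."""
    terms = set(query.lower().split())
    return [d for d in documents if terms & set(d["text"].lower().split())]

def answer(query):
    sources = retrieve(query, DOCUMENTS)
    if not sources:
        # With nothing retrieved there is nothing to phrase, so the system
        # can say "no results" instead of inventing a plausible spec.
        return "No sources found."
    # A real system would have an LLM rephrase the sources as friendly
    # prose; here we just quote them, each with a citation you can click.
    return "\n".join(f'{d["text"]} [source: {d["url"]}]' for d in sources)

print(answer("Model X heel height"))
```

The design matters: every claim is phrased from a retrieved source a human can click and check, and when nothing turns up, the system can admit it.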
And yet, ChatGPT is sometimes positioned as an alternative to search engines. Check out this Guiding Tech article praising it for not making you wade through pages of search results, or this CNBC article in which it’s judged as better than Google for providing safety information on medications. (Oh my god, do not use ChatGPT for medical advice.) But it really, really does not do the same job as a search engine, and can’t be used as one.
For example, car enthusiast Chris Paukert tweeted that he received an email fact-checking a quote of his about cars. The marketer who sent the email said that they “found” the quote through ChatGPT, and wanted to make sure it was real. Good on them for checking, because it turned out not to be anything Paukert had ever said or written. But why would they think a text generator is a good place to “find” quotes at all?
Yes, ChatGPT has been trained on a massive amount of data (sometimes described as “the whole internet,” although that’s not literally true), but that just means it has seen facts. It generates whatever wording is statistically likely given your prompt, so there’s no guarantee those facts will make it into its responses.
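To put rough numbers on that (the counts here are invented for illustration), imagine what a model’s training data said right after the phrase “the shoe’s heel height is”:

```python
from collections import Counter

# Hypothetical counts: the obscure true spec appeared once in training,
# while the industry-standard figure appeared constantly.
counts = Counter({'3/4"': 40, "19 mm": 1})

total = sum(counts.values())
for completion, n in counts.most_common():
    print(f"{completion!r}: p = {n / total:.2f}")
# '3/4"': p = 0.98   <- the plausible standard wins
# '19 mm': p = 0.02  <- the rare true fact, even though the model "saw" it
```

Having seen a fact once doesn’t make it the likeliest continuation, and the likeliest continuation is what you get.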
Using myself as another example, I asked the bot for the names of books I’d written. It listed five books, four of which are real but not mine, and one of which does not exist at all. I think I know what happened there: It knows I’m an editor, and that I write about fitness. So it credits me with books whose authors include “editors of” a fitness magazine, such as Runner’s World or Men’s Health.
So if you want to use ChatGPT to get ideas or brainstorm places to look for more information, fine. But don’t expect it to base its answers on reality. Even for something as innocuous as recommending books based on your favorites, it’s likely to make up books that don’t even exist.