One of the dangers of large language models (LLMs) is that they can confidently make assertions that are blatantly false. This raises concerns that they will flood the world with misinformation.