There is an arms race between AI that creates misinformation and AI that detects it. Indeed, the AI that creates deep fakes is trained using adversarial learning. In some cases it is possible to cross-check features: for example, the BBC routinely checks the backgrounds of photos against images of the real location, and if a large language model hallucinates academic references, these can be cross-checked against the journal's contents list. However, as fakes become more realistic this becomes harder; in particular, humans can be fooled, for example by long arguments containing many true but irrelevant facts. Furthermore, the most effective disinformation may be entirely true, yet leave the wrong impression. Probably the best guard against misinformation is being better informed oneself and knowing the provenance of information: does it come from a trusted source?
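As an illustration of the reference cross-check mentioned above, the minimal sketch below queries the public Crossref REST API for the closest match to a cited title and flags citations with no plausible match. The endpoint and response fields are real, but the score threshold, helper name, and example citation are illustrative assumptions rather than a production-ready verifier.

```python
import requests

def check_citation(title: str, threshold: float = 60.0) -> bool:
    """Return True if Crossref finds a plausibly matching published work.

    The score threshold is an assumption; Crossref relevance scores are not
    calibrated, so a real checker would also compare authors, venue and year.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    best = items[0]
    found_title = (best.get("title") or [""])[0]
    print(f"Closest match: {found_title!r} (score {best['score']:.1f})")
    return best["score"] >= threshold

# Hypothetical example: a reference produced by a language model
if __name__ == "__main__":
    cited = "Adversarial training of deep fake generators"
    if not check_citation(cited):
        print("No plausible match found - possibly a hallucinated reference")
```

Such automated checks only catch fabricated references; as the passage notes, they do nothing against disinformation that is factually accurate but misleading.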
Used in Chap. 20: page 326