This is completely false. The odds of an LLM predicting the text of a novel that is not in its training set are basically zero - you can experiment with this yourself. It is essentially the infinite monkeys on infinite typewriters scenario (only slightly more constrained).
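A quick back-of-envelope calculation shows why the odds are effectively zero. The numbers below are illustrative assumptions, not measurements: even granting the model a generously high average probability of 0.9 for the "correct" next token, exact reproduction of a ~100,000-token text vanishes geometrically.

```python
import math

p_per_token = 0.9          # assumed average probability of the correct next token (generous)
tokens_in_novel = 100_000  # rough token count for a full-length novel

# Probability of getting every single token right, end to end
p_exact = p_per_token ** tokens_in_novel
print(p_exact)  # underflows to 0.0 in float arithmetic

# Work in log space to see the actual magnitude
log10_p = tokens_in_novel * math.log10(p_per_token)
print(log10_p)  # about -4575.7, i.e. p ≈ 10^-4576
```

For comparison, there are roughly 10^80 atoms in the observable universe, so 10^-4576 is zero for any practical purpose.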
This is not to say that they couldn't write a novel, even a very good one - that is a completely different discussion.