No, because the LLM isn't just copying from the same text. Rather, it's "classifying" the text using self-attention, and then applying a simple Markov chain (supposedly). The classification is the hard part, because how do you know which text from the training data is "similar" to the prompt text?
From the blog post for example:
Original string: 'And only l'
Similar strings: 'hat only l' 's sickly l' ' as\nthey l' 'r kingly l'
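A toy sketch of that "find similar contexts, then behave like a Markov chain" idea. This is my own illustration, not the blog post's actual method: it scores every training-text window against the end of the prompt with difflib's string similarity, then tallies which character followed the most similar windows.

```python
import difflib
from collections import Counter

def similar_window_next_chars(training_text, prompt, context_len=10, top_k=5):
    """Fuzzy Markov chain step: score every context_len-char window of the
    training text by similarity to the end of the prompt, then collect the
    character that followed each of the top_k most similar windows."""
    context = prompt[-context_len:]
    scored = []
    for i in range(len(training_text) - context_len):
        window = training_text[i:i + context_len]
        # Similarity in [0, 1]; 1.0 means an exact match with the context.
        score = difflib.SequenceMatcher(None, context, window).ratio()
        scored.append((score, i))
    scored.sort(reverse=True)  # most similar windows first
    return Counter(training_text[i + context_len] for _, i in scored[:top_k])
```

Sampling from the returned counts gives the next character, so strings like 'hat only l' and 'r kingly l' would all vote for what follows 'And only l'. The self-attention mechanism is doing something far richer than `SequenceMatcher`, of course; that's exactly the "hard part" above.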
One idea is to feed some examples of this language into an LLM. Then you could take the programming part out of the loop entirely and just have natural language description -> program -> result. I'm thinking of something like this paper: https://arxiv.org/abs/2211.11559