As for practical limits: if you do the arithmetic naively, you'll generally need O(n) memory to pin down a region of size 10^-n (or 2^-n, or any other base). Getting away with less than O(n) memory seems to be the exception rather than the rule.
For instance, there's no known practical way to compute the 10^100th bit of sqrt(2), despite how simple the number is. (Or at least, a thorough search yielded nothing better than Newton's method and its variations, which must compute all the bits. It's even worse than π with its BBP formula.)
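To make the O(n) cost concrete, here's a minimal Python sketch (the name `sqrt2_digit` is mine, not a standard function) that extracts one decimal digit of sqrt(2). The only practical route is through an exact integer square root of a 2n-digit integer, so the working memory grows linearly with the position of the digit you ask for:

```python
from math import isqrt

def sqrt2_digit(n):
    """Return the n-th decimal digit of sqrt(2) after the point.

    Computes floor(sqrt(2) * 10**n) exactly via an integer square
    root of 2 * 10**(2n). The intermediate integer holds all n
    digits, so memory usage is O(n) in the digit position.
    """
    return isqrt(2 * 10**(2 * n)) % 10

# sqrt(2) = 1.41421356..., so the first few digits are 4, 1, 4, 2, ...
```

There's no shortcut analogous to a BBP-style digit-extraction formula here: the isqrt call has to materialize every digit up to position n before the last one can be read off.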
Of course, there may be tricks with self-similarity that can speed up the computation, but I'd be very surprised if you could get past the O(n) memory requirement just to represent the coordinates.