The answer is, “a number large enough to index far enough into a normal sequence to have a good chance of finding a match for a 1GB file.”
As the size of a number increases, the probability of finding that number in a given section of some normal sequence diminishes rapidly. The fact that you can address 4GB of memory with only 32 bits of information is irrelevant.
Say you want to find a match for a 4-bit number within some normal sequence. If you pick any 4 consecutive bits from that sequence, the probability that they all match your number is about 1 in 2^4, or 1/16. So your chances of finding your number starting within the first 16 places of the sequence are fairly good. To actually save any space, though, you would need to find it within the first 8 places, so that a 3-bit index suffices. And if you’re unlucky and the first match starts beyond the first 16 places, you need 5 bits to store the index.
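To see that 4-bit case in action, here’s a tiny simulation (my own sketch in Python, not anything from the original argument; the function and variable names are just illustrative). It uses a random bit stream as a stand-in for a slice of a normal sequence and reports where the pattern first appears.

```python
import random

def first_match_index(pattern_bits, stream_bits):
    """Return the index where pattern_bits first occurs in stream_bits, or None."""
    n = len(pattern_bits)
    for i in range(len(stream_bits) - n + 1):
        if stream_bits[i:i + n] == pattern_bits:
            return i
    return None

random.seed(1)                                        # arbitrary seed, just for repeatability
pattern = [random.randint(0, 1) for _ in range(4)]    # the 4-bit "file" we want to store
stream  = [random.randint(0, 1) for _ in range(64)]   # stand-in for a slice of a normal sequence

idx = first_match_index(pattern, stream)
print(f"pattern {pattern} first starts at index {idx}")
# On average the first match starts somewhere around position 16 (2^4);
# only when idx < 8 does a 3-bit index actually beat storing the 4 bits outright.
```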
A one gigabyte file is a sequence of 8,589,934,592 bits. Thus, the probability of an arbitrary subsequence of that length being a match would be 1 in 2^8,589,934,592, a phenomenally huge number. Even if a match exists within, say, the first sixteenth of that search space, after you’ve taken a few quintillion years to generate and check that many bits of your sequence, you wind up saving 4 bits of storage. (An index of 8,589,934,592 - 4 bits can address one sixteenth of a space of 2^8,589,934,592 positions.)
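For what it’s worth, that arithmetic can be sanity-checked in a few lines (again just my own sketch; the variable names are made up):

```python
# Back-of-the-envelope arithmetic for the 1GB case.
GB_BITS = 1024**3 * 8          # 8,589,934,592 bits in a one-gigabyte file

# You expect to search on the order of 2**GB_BITS positions before a match,
# so writing down the index takes about GB_BITS bits.
full_index_bits = GB_BITS

# Getting lucky in the first sixteenth of that space shaves off log2(16) bits.
lucky_index_bits = GB_BITS - 4

print(GB_BITS)                              # 8589934592
print(full_index_bits - lucky_index_bits)   # 4 bits "saved"
```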
Not a compression algorithm I’d place any bets on, that’s for sure.
Now, if you get astonishingly lucky and find your number within a few million places, what are the odds that the data you’re storing happens to be “digits 42,053,221 through 946,533,410 of pi”? 