Starting around post #8 in this thread there’s some discussion of how ChatGPT groks rot13.
I wondered if it was able to do so well because there were plenty of samples of rot13 in its training data, so I thought I’d give ChatGPT-4 a test on something that almost certainly did not exist as a word or token anywhere:
Translate the following nonsense word into Rot-13: Mawplophinating.
And it answered: “Znjcybucvangvat,” which interestingly enough is not quite correct. That string translates back to “Mawplohpinating”; note the transposition, the “ph” comes back as “hp.”
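For anyone who wants to check, a couple of lines of Python reproduce both the correct answer and the round-trip of ChatGPT’s reply (rot13 is its own inverse, so encoding a second time decodes):

```python
import codecs

word = "Mawplophinating"
# The actual rot13 of the nonsense word:
print(codecs.encode(word, "rot13"))               # Znjcybcuvangvat
# rot13 is its own inverse, so this decodes ChatGPT's answer:
print(codecs.encode("Znjcybucvangvat", "rot13"))  # Mawplohpinating
```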
I checked OpenAI’s free tokenizer, which apparently doesn’t have a GPT-4 version yet, and my original word tokenized as M, aw, pl, oph, inating. Notably, the letters that got transposed sit entirely inside that single “oph” token.
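If you’d rather check locally than use the web page, OpenAI’s tiktoken library exposes the encodings. This is just a sketch, and I’m assuming the free web tokenizer was using the GPT-3-era r50k_base encoding; GPT-4’s cl100k_base may split the word differently:

```python
# Sketch: inspect how the nonsense word splits into tokens.
# Assumption: r50k_base stands in for whatever encoding the
# free web tokenizer used at the time.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")
for tok_id in enc.encode("Mawplophinating"):
    print(tok_id, enc.decode_single_token_bytes(tok_id))
```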
I asked it to do the same nonsense word in rot-14, and it came back with “Nboxqmvqbwjupq,” which turned out to be nowhere near correct: just a jumble of characters when I tried to rot-12 it back (it’s even a letter short of the original). I noted the first two letters of it were rot-1, but the rest, who knows.
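For reference, here’s a quick rot-N helper to see what the correct rot-14 should have been, and to confirm that rot-12 undoes rot-14 (since 14 + 12 = 26):

```python
def rot(text, n):
    """Shift each letter n places, wrapping around the 26-letter alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + n) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

word = "Mawplophinating"
print(rot(word, 14))                   # Aokdzcdvwbohwbu -- the correct rot-14
print(rot(rot(word, 14), 12) == word)  # True: rot-12 undoes rot-14
print(rot("Nboxqmvqbwjupq", 12))       # Znajcyhcnivgbc -- ChatGPT's answer, still a jumble
```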