That, in fact, is exactly why I was using Austen as my trial text, and why I suggested it above. If you’re trying to turn, say, William Gibson into Classical Latin, you’ll crash into a terminological wall, as you note. So I deliberately avoided that by choosing source texts that (a) minimize superficially modern vocabulary and (b) maximize literary complexity, in order to challenge the bots’ ability to “understand” the language and map a Latin equivalent. The point is to expose the computational model, not to struggle with an archaic dictionary.
That said, there are of course subtle conceptual discontinuities between modern English and ancient Latin that add to the difficulty of translation. For example, there is famously no evidence in surviving Roman writing of a fully developed concept of internal guilt, i.e., the voice in your head that nags you when you’ve done something wrong. Theirs was an honor/shame culture, and their writing expresses guilt as worry about being caught, exposed, and humiliated. Given the subtle psychology involved, I’m unsure how a machine translator would even begin to approach that challenge.
But those issues are beside the point. The key takeaway from my tinkering is that the bots do not “translate” in the sense of knowing and applying equivalent semantic meaning; they “translate” by constructing statistical models and choosing the words and phrases most likely to correspond to the source expression.
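To make that concrete, here is a deliberately crude sketch of that mechanism in Python. Everything in it is invented for illustration (the phrase table, the probabilities, the Latin candidates), and real systems score whole sequences in context rather than isolated phrases, but the core move is the same: pick whatever the model rates most likely, with no notion of meaning anywhere.

```python
# Toy sketch only: the phrase table and probabilities below are invented
# for illustration, not taken from any real translation system.

# Estimated P(Latin phrase | English phrase), as a model might score it.
phrase_table = {
    "it is a truth": {"constat": 0.55, "verum est": 0.30, "veritas est": 0.15},
    "universally acknowledged": {"ab omnibus concessum": 0.60, "omnibus notum": 0.40},
}

def translate(english_phrases):
    """For each source phrase, emit the candidate the model scores
    as most probable. Pure statistics; no semantics anywhere."""
    latin = []
    for phrase in english_phrases:
        candidates = phrase_table[phrase]
        latin.append(max(candidates, key=candidates.get))
    return " ".join(latin)

# The Austen opening, in fragments a toy model can handle:
print(translate(["it is a truth", "universally acknowledged"]))
# -> constat ab omnibus concessum
```

Swap in vastly bigger tables and longer contexts and the output starts to look uncannily fluent, which is exactly why the guilt/shame problem above is so hard: nowhere in this pipeline is there a place where a concept could live.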