Trig Values Set At Four (Places)

Nope. And they no longer have any tables in them. Although statistics textbooks do.

When I took trig, in 12th grade in 1968, it was a full semester (not a full year) class. The second half of the year was a class called “Math Analysis”, which I think was just their name for pre-calculus. Nobody learned calculus in high school.

That old 1948 algebra book I mentioned is intended for a full-year college algebra class. It is far more thorough and in-depth than any dumbed-down modern textbook, although its choice of topics is not entirely the same as today's. Since that was before the days of pocket calculators (and even mainframe computers were not widely accessible), there was a lot of emphasis on numerical calculation with logarithms. There was also an entire chapter on working with approximate numbers. Nobody learns the rules for that anymore.

Sensitivity to numerical error is something that all programmers need to be exposed to, and preferably formally taught. When I went through undergraduate classes there were Numerical Methods (2nd year, compulsory), Numerical Analysis (3rd year, optional), and, at Honours (4th year) level, Advanced Numerical Analysis.
The compulsory course took students through machine epsilon and sensitivity to round-off and precision right at the start. The classic example was an order-20 polynomial with roots at each of the integers 1 through 20. A flip of the lowest-precision bit of a single coefficient sends several of the roots off into the complex plane. (I think I have remembered this OK, it was a veeery long time ago.)
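For the curious, here is a minimal sketch of that classic example (it is Wilkinson's polynomial) in Python with numpy, using Wilkinson's original perturbation of subtracting 2^-23 from the x^19 coefficient:

```python
import numpy as np

# Wilkinson's polynomial: p(x) = (x - 1)(x - 2)...(x - 20).
coeffs = np.poly(np.arange(1, 21))  # coefficients, highest degree first

# Wilkinson's perturbation: subtract 2^-23 from the x^19 coefficient,
# changing -210 to roughly -210.0000001192.
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23

print(np.roots(coeffs))     # close to the integers 1..20
print(np.roots(perturbed))  # several roots are now complex pairs
```

Roughly half the roots end up as complex-conjugate pairs, even though the coefficient only changed in about its 10th significant digit.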
Sadly, students for the most part disliked this subject. Too much like mathematics and not enough like having fun coding. It took most of us some hard-won experience to realise just how critical this stuff was to our day jobs.
These issues can come and bite you at any time. For years I coded stuff that just never involved floating point: automata of various forms, operating systems internals, virtual machine interpreters. One used floating point numbers to do things like print out average run times, and nothing more.
Then one suddenly finds oneself in the world of signal processing, potential fields analysis, and forward modelling of fields, and it all comes crashing in. You need to get up to speed, and those dimly remembered lectures start to matter.
The nature of sensitivity to precision is nuanced. At one end, people will exploit things like the small-angle approximation for sine. At the other end, you will be working hard to ensure a calculation stays sane, avoiding the edges where it blows up as angles get small. Things like degenerate geometries will drive you nuts.

Oftentimes, in those cases, it's the small-angle approximations that will save you. Though you might need a higher-order small-angle approximation if you have things like $x - \sin(x)$. Or worse, $\frac{1}{x - \sin(x)}$. Which just goes to show that understanding is more important than knowledge.
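To make that concrete, here is a small Python sketch of the catastrophic cancellation in $x - \sin(x)$ and the higher-order series fix; the crossover point at which you would switch from the library sine to the series is a tuning choice, not anything canonical:

```python
import math

def f_naive(x):
    # Direct evaluation: for small x, sin(x) agrees with x in the
    # leading digits, so the subtraction cancels most of them away.
    return x - math.sin(x)

def f_series(x):
    # Higher-order small-angle expansion:
    #   x - sin(x) = x^3/6 - x^5/120 + x^7/5040 - ...
    x2 = x * x
    return (x ** 3 / 6.0) * (1.0 - x2 / 20.0 + x2 * x2 / 840.0)

x = 1e-5
print(f_naive(x))         # only ~5 correct digits survive the cancellation
print(f_series(x))        # accurate to nearly full double precision
print(1.0 / f_series(x))  # and this keeps 1/(x - sin(x)) sane too
```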

Very simply: in one of the first examples in Numerical Analysis, the prof solved for the intersection of two lines, propagating the input error, and ended up with something like 0.003 +/- 0.005. He then asked what this means when that value is the leading coefficient of a quadratic. Basically, the error is large enough that we don't even know whether the parabola opens upward or downward, which is a rather relevant detail to consider for a solution.
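Something in that spirit is easy to reproduce; the numbers below are invented for illustration, not the prof's actual example. Two nearly parallel lines, where a 0.1% nudge to one slope moves the computed intersection by a factor of two:

```python
def intersect(a1, b1, a2, b2):
    # Lines y = a*x + b; intersection at x = (b2 - b1) / (a1 - a2).
    # The divisor is tiny when the lines are nearly parallel, so any
    # input error gets amplified enormously.
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

print(intersect(1.000, 0.0, 1.001, 0.01))  # x = -10.0
print(intersect(1.000, 0.0, 1.002, 0.01))  # x =  -5.0, from a 0.1% slope change
```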

At the middle or high school level? It sounds like no, although they do obviously learn it in the first year of university, or whenever they take Numerical Analysis.

This is a very clever observation and I would bet that that is the reason.