Did Any School Math Textbook Actually Include The Bit About Trains Traveling In Opposite Directions?

It’s not so much a distinction between what different institutions consider “beginning level” as it is a difference in the lowest-level math classes a particular institution offers, and/or where most incoming freshmen start out. This will depend, among other things, on how selective the college or university is.

Community colleges, and some other colleges and universities, offer what are known as “developmental” math classes for students who are not prepared for college-level math. These are things like beginning and intermediate algebra, and sometimes even pre-algebra, that students should have learned in high school, but somehow either don’t know or don’t remember. They’re typically not for college credit (they don’t count towards any degree and don’t transfer), but they need to be taken before a student is ready to take classes that are college level.

Then there are classes that are sort of in-between: you can take them in college and get college credit for them, but some students take them, or their equivalents, in high school. This would include things like more advanced algebra classes (sometimes called “college algebra” or “precalculus”), statistics, and calculus.

You are right: it seems it is not meant to be horrifying if you think about it physically, especially since only one axis is involved. Now, there is no answer key or anything, but my interpretation is that the student is asked to translate it into an algebraic problem along the lines of:

Find the smallest real number t such that the system

x² + y² + z² ≤ (0.1)²
(x - 10 + 2t)² + y² + z² ≤ (0.1)²

has a real solution (x, y, z).

Unless I misunderstood. And of course to “explain how semidefinite programming can be used to solve it”.
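If you wanted to check that interpretation numerically, here’s a minimal sketch using the cvxpy package (my choice, not anything from the problem; the variable names are mine too). cvxpy actually handles this as a second-order cone program, which is a special case of semidefinite programming:

```python
import cvxpy as cp

x = cp.Variable(3)   # meeting point (x, y, z)
t = cp.Variable()    # time

constraints = [
    cp.sum_squares(x) <= 0.1**2,   # point inside the first ball
    cp.square(x[0] - (10 - 2*t)) + cp.square(x[1]) + cp.square(x[2])
        <= 0.1**2,                 # point inside the second ball
]
prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()

# The balls first touch when their centers are 0.2 apart,
# i.e. 10 - 2t = 0.2, so the solver should return t ≈ 4.9.
print(t.value)
```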

I’ve taught math at four different universities. Every one of them had some form of a “College Algebra” class, but don’t let the title fool you: it was really high school algebra. A lot of college students really struggle with math, in my experience. I mean, really struggle, like having a lot of trouble doing arithmetic with fractions. So yes, universities absolutely do teach beginning algebra like community colleges do.

Where I currently teach, I was told when the “College Algebra” course was developed (I don’t know how long ago that was) the intention was for it to be advanced beyond high school algebra. Due to the nature of the students taking it, it did not work out that way; it evolved into a repeat of high school algebra. And many students really struggle with it.

Every college will have remedial courses, for students who aren’t ready to take real college classes, but are ready to pay tuition to the school.

It sounds like you might not be American? American colleges don’t usually offer a field of study called “classics”. Most American schools will require at least a few classes each in math, science, literature, history, etc. for all degree programs. Sometimes there are interdisciplinary courses designed to meet those requirements while still appealing to students interested in other topics, but you’ll still have to take something for them.

At my school, they had remedial classes for math and English that were numbered under 1000. They didn’t earn any college credit but were required if you didn’t do well enough on your placement tests. The placement tests were waived with high enough ACT or SAT scores. A dorm buddy of mine got stuck in the math one; he took Algebra I in high school but then skated by on consumer math classes for the rest of high school. I kinda wonder if he stuck it out for four years; we lost contact after freshman year.

I find that last part interesting, because every class I ever had that taught the matrix method treated it as more or less optional. We learned enough to see how it was equivalent to other methods, and then we just kinda stopped. You could use it if you liked it, but you could also just use any of the other methods to solve equations.

This is unlike adding systems of equations or solving for one variable and substituting. With those, we had to spend time actually using each method.

So, while I was good at math (I got the highest grade on my AP calculus test in high school), I never retained any info about solving simultaneous equations using matrices. Only one student I ever knew actually used matrices in their homework.

Similarly, math with matrices in general was just something we spent a few days on, and never really used again. With most everything else, we’d constantly get review problems. But matrices were just treated as not that important.

The addition and substitution methods get unwieldy when you have more than two or three equations and/or variables. That’s when matrix methods (Gauss-Jordan elimination) really come in handy: the basic procedure is the same no matter how many equations or variables are involved, and a computer can handle the calculations.
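For example, here’s what “let the computer handle it” looks like in practice: a hypothetical 3×3 system (coefficients made up for illustration) solved with NumPy, whose solve routine does the elimination via LAPACK. The call looks the same whether there are three equations or three thousand:

```python
import numpy as np

# Three equations in three unknowns:
#    x + 2y -  z = 3
#   2x -  y + 3z = 9
#   -x + 4y + 2z = 1
A = np.array([[ 1.0, 2.0, -1.0],
              [ 2.0, -1.0, 3.0],
              [-1.0, 4.0,  2.0]])
b = np.array([3.0, 9.0, 1.0])

print(np.linalg.solve(A, b))  # [x, y, z]
```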

If you had ever taken a class in Linear Algebra (which, I am told, is used quite a bit in some areas of engineering), you would have done a lot with matrices.

As I recall (and I’m remembering something a third of a century ago), we did **a lot** with matrices, with matrix inversion presented as the general way to solve simultaneous equations (and as the way anyone would actually solve any set of simultaneous equations that was more than just a toy problem).

I’m not convinced that is always the best way in terms of computational complexity or numerical stability. The documentation for Matlab says it uses LU or QR decomposition, for instance. Certainly students are supposed to know how to do Gaussian elimination by hand (unless anything a human can do counts as a toy problem…)
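As a concrete illustration of that point, here’s a sketch (using SciPy; the random matrix is just for demonstration) of how LU factorization is typically used in practice: factor once, then reuse the factorization for each right-hand side, rather than forming an explicit inverse:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
b1 = rng.standard_normal(5)
b2 = rng.standard_normal(5)

lu, piv = lu_factor(A)        # O(n^3) factorization, done once
x1 = lu_solve((lu, piv), b1)  # O(n^2) per right-hand side
x2 = lu_solve((lu, piv), b2)

# Explicit inversion gives the same answers here, but is typically
# slower and can be less accurate for ill-conditioned A:
print(np.allclose(x1, np.linalg.inv(A) @ b1))  # True
```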

Sorry. I should have said “matrix manipulation,” not “matrix inversion,” as the thing I was contrasting with doing the task by hand. The point is that it makes sense to abstract away the variables so that you are solving

Ax = V, not

x + y = 2, x - y = 12,

just as you learn the general form to solve Ax² + Bx + C = 0 after you’ve played around with solving problems like x² + 7x + 12 = 0 (something a human can easily solve by inspection).
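In code, that abstraction is exactly what the library interfaces present. A minimal sketch with NumPy, using the toy system above:

```python
import numpy as np

# x + y = 2,  x - y = 12  written as  A @ [x, y] = V
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
V = np.array([2.0, 12.0])

print(np.linalg.solve(A, V))  # [ 7. -5.]
```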

Really, math students (at any level) should be taught a lot more general algebra. I’d even argue that we should start teaching algebra before arithmetic. There’s plenty of algebra that doesn’t even involve numbers at all, and once you get the hang of general algebra, it’s easy to pick up the specifics, like algebra of numbers, or of operators, or of matrices, or of rotations, or whatever.

Your mention of computational complexity reminds me of something that happened when I was a grad student. A fellow I knew (a math major) was taking a computer class and wrote a program (probably Fortran; this was 1988) that used the most naive matrix inversion algorithm possible and applied it to a moderately large matrix (200 by 200, maybe). The campus Vax mainframe slowed to a crawl, and since we could see who owned the processes that were taking up the system resources, it was easy to see who was responsible: he and the PageSwapper were using 99% of the resources, and it was his program causing the page swaps.
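The post doesn’t say which algorithm he used, but if “most naive” meant computing determinants by cofactor expansion (the textbook definition), the cost grows like n!, which would explain the crawl. A hypothetical reconstruction:

```python
# Hypothetical reconstruction: determinant by cofactor (Laplace)
# expansion along the first row. Runtime grows like n!, so anything
# built on this applied to a 200x200 matrix would bury a shared machine.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]  # drop row 0, col j
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # -2
```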

Matrices are useful for other things besides solving systems of equations.

I took a class called Finite Math at the community college. It was a more-or-less business-oriented class – lots of topics dealing with exponential growth (of investments, e.g.), compound interest, present value and future value, stuff like that. Lots of other topics (long forgotten by me) dealt with matrices. Lots of useful things to do with matrices that didn’t have much to do with solving systems of equations.

One example I sort-of remember: A topic called Linear Programming. You have a linear function, and a set of constraints on the variables (expressed as inequalities the variables must satisfy), and you want to find the maximum or minimum value of the function while remaining within the constraints.

The graphical solution is to graph the inequalities, the boundary of each being a line; if the problem is well-formed, those boundary lines (and possibly also the x and y axes) form a polygon. The polygon and its interior are the valid points of the system. If the feasible region is bounded, the maximum and minimum values of the function will occur at vertices of the polygon.

This can also be solved by developing a matrix to describe the problem and reducing that matrix.
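That matrix-reduction procedure is essentially the simplex method. In practice you’d hand the problem to a solver; here’s a sketch with SciPy’s linprog on a made-up two-variable problem (note that linprog minimizes, so the objective is negated to maximize):

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
res = linprog(c=[-3, -2],                    # negated to turn max into min
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at the vertex (4, 0) with value 12
```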

Matrices can also be used to describe the connectivity of graphs, or to describe a sequence of events, and matrix products can be used to find the results of sequences of events (Markov chains). Stuff like that.
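For instance, here’s a minimal sketch (made-up transition probabilities) of a two-state Markov chain in NumPy, where powers of the transition matrix give multi-step probabilities:

```python
import numpy as np

# Hypothetical two-state Markov chain (each row sums to 1):
# state 0 = sunny, state 1 = rainy.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

start = np.array([1.0, 0.0])                 # start out sunny
print(start @ np.linalg.matrix_power(P, 3))  # distribution after 3 steps
```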

Matrices are also certainly much used in computer graphics, where operations on a picture such as translation, rotation, and shrinking or expansion can be viewed as matrix operations, which of course computers will happily do on thousands of points.
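A quick sketch of that idea (the points and angle are chosen arbitrarily): each transformation is a small matrix, and one matrix product applies it to every point at once:

```python
import numpy as np

theta = np.pi / 2                       # rotate 90 degrees counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([0.5, 0.5])                 # shrink by half

points = np.array([[1.0, 0.0],          # one (x, y) point per row
                   [0.0, 1.0],
                   [1.0, 1.0]])
print(points @ (S @ R).T)               # rotate, then shrink, all points at once
```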

My old 1948 book had an entire chapter on determinants, with lots of examples of things that determinants were good for (Cramer’s Rule) and how to compute them, but nothing about matrices other than determinants. No addition, subtraction, products, or any other matrix topics. I guess that’s what was considered important in those days.

And Cramer’s Rule, of course, is a horribly inefficient way of solving equations. It’s an interesting result, but I don’t think it has any practical applications at all.

This isn’t true and doesn’t even really make sense as stated.

Cramer’s Rule, in and of itself, is neither efficient nor inefficient. While naive implementations of Cramer’s Rule are generally inefficient (because they calculate determinants using minors), implementations exist that are O(n³), which is the same as Gaussian elimination.

Here’s a paper… A condensation-based application of Cramer’s rule for solving large-scale linear systems
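For anyone who hasn’t seen it, here’s what the naive version of Cramer’s Rule looks like in code (a sketch using NumPy’s determinant routine, applied to the toy system from upthread):

```python
import numpy as np

# Naive Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
# column i replaced by b. Fine for tiny systems; the cost explodes
# if the determinants are computed by cofactor expansion.
def cramer(A, b):
    d = np.linalg.det(A)
    xs = []
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        xs.append(np.linalg.det(Ai) / d)
    return np.array(xs)

A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([2.0, 12.0])
print(cramer(A, b))  # [ 7. -5.], matching np.linalg.solve(A, b)
```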

I had several of those, but they were all on “ditto” sheets. I don’t recall seeing it in a book. And I don’t know that they ever used Ypsilanti. It seems like I would have remembered that name. Our trains were generally leaving Boston…

While theoretically the complexity of solving a general system of linear equations is less than O(n³) (unless I’m misunderstanding something?), the standard routines like DGESV used by Matlab & friends are still O(n³), or at least they used to be, so apparently the theoretical speedup does not kick in at practical matrix sizes, or else there is some other reason to stick with the standard.

I’m not sure what you are saying here. What theoretical speed up?

I mean, without considering the practicality of any hypothetical algorithm, let’s consider the number of numerical operations required to solve a system. Starting from an algorithm for fast matrix multiplication (one that takes O(nᵝ) operations for some exponent β < 3), you can use a divide-and-conquer method to invert a matrix in the same order of magnitude of time. And if you can invert a matrix, you can solve a linear system. So even the existence of something like Strassen multiplication shows there is a theoretical speedup for large enough matrices.
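Here’s a sketch of that divide-and-conquer reduction (my own illustration, not from any source in the thread; it assumes the leading blocks and Schur complements are invertible, which is guaranteed for symmetric positive definite matrices). If the `@` products were computed with a Strassen-style routine, the recursion would inherit its sub-cubic exponent:

```python
import numpy as np

def block_inverse(M):
    n = M.shape[0]
    if n == 1:
        return np.array([[1.0 / M[0, 0]]])
    k = n // 2
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = block_inverse(A)
    S = D - C @ Ainv @ B                     # Schur complement of A
    Sinv = block_inverse(S)
    TL = Ainv + Ainv @ B @ Sinv @ C @ Ainv   # top-left block of the inverse
    TR = -Ainv @ B @ Sinv
    BL = -Sinv @ C @ Ainv
    return np.block([[TL, TR], [BL, Sinv]])

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8))
M = X @ X.T + 8 * np.eye(8)                  # symmetric positive definite
print(np.allclose(block_inverse(M) @ M, np.eye(8)))  # True
```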

The question is, does GESV ever attempt to do something faster than (2/3)n³?

I don’t believe so. My understanding is that it is O(n³), but I could be wrong.