Is this really only relevant if you have to manually graph a line or curve on graph paper, or does it have some mathematical significance? IOW is it a theoretical or practical matter?

The x-intercepts of a function are called its roots, and it’s almost impossible to overstate their importance. Entire branches of math were created just to study how many there are and what values they take.

The y-intercepts are the same thing, just for the inverse function (i.e., the y-intercept for sin(x) is the x-intercept for arcsin(x)).

As far as practical matters go, the roots are often the solution to whatever the problem happens to be. It might be where your artillery shell lands, or it might tell you what type of investment yields the best return.

Right. For instance, it’s fairly common for the independent variable (on the x-axis) to represent time, and the dependent variable (on the y-axis) to represent something that changes over time. In that case, the y-intercept is the “starting value” for the thing that’s changing, and the x-intercept is the time when that thing reaches zero.

Since **Dr.Strangelove** mentions a financial example, I would note this little idiosyncrasy: When I took an intro-level class in economics, I was confused by a lot of the graphs, until I figured out: In the field of economics (which I suppose includes financial topics), they tend to put the *independent variable* on the vertical axis and the *dependent variable* on the horizontal axis. Either that, or they have a very different idea than I do of what are the “independent” and “dependent” variables.

ETA: Example: The Laffer Curve. This curve looks vaguely like a parabola, but it may be drawn with the tax rate on either the horizontal or the vertical axis. The independent variable appears to be “Tax Rate” and the dependent variable appears to be “Resulting Revenue”. The graph shown at this link shows “Tax Rate” on the vertical axis and “Revenue” on the horizontal axis:

That is odd. I’ve always seen the Laffer curve oriented the other way, with tax rate on the X axis like a sane mathematician would do. When I search on Google, the first 3 results show it that way. The 4th shows it the other way.

That’s right. There is no consistency. But when I took the above-mentioned Intro Econ class, there were other graphs as well that had the independent variable on the vertical axis. This happens also with supply/demand/price graphs, which put Price on the vertical axis and supply or demand on the horizontal axis. Maybe it’s a bit ambiguous as to which is the independent variable and which is the dependent variable. See the several graphs in this introductory discussion:

This one, for example, among others on that page:

Not every plot is supposed to be a graph. E.g., in the following diagram, *both* the *x*- and *y*-axes are independent variables (draw a line through them to read your answer):

Another aspect of roots is that they provide a common way to “phrase” certain mathematical questions in a way that gives you access to other mathematical tools.

As an easy example: what’s the square root of N? Maybe you have a calculator–but how does the calculator do it?

We can turn the question around though and ask what the roots of f(x) = x^2 - N are. And from there we can bring in tools like the Newton-Raphson iteration, which takes an approximate root and gives you one that’s closer.
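As a minimal sketch of that idea (function names and tolerances are my own choices, not from the thread): Newton-Raphson applied to f(x) = x^2 - N repeatedly improves a guess, and for this particular f the update step simplifies to averaging x with N/x.

```python
# Newton-Raphson for sqrt(N), phrased as finding the positive root of
# f(x) = x^2 - N.  Each step is x -> x - f(x)/f'(x), which here
# simplifies to the average of x and N/x.
def newton_sqrt(N, guess=1.0, tol=1e-12):
    x = guess
    while abs(x * x - N) > tol:
        x = 0.5 * (x + N / x)   # x - (x^2 - N) / (2x)
    return x

print(newton_sqrt(2.0))   # converges to 1.41421356...
```

Convergence is quadratic: the number of correct digits roughly doubles each iteration, which is why a handful of steps suffices even from a crude starting guess.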

Or you might consider two curves, and wonder where they intersect–say, Ax^2+Bx+C=Dx+E. A little rearrangement gives you f(x) = Ax^2+(B-D)x+(C-E), which we can then find the roots of using the quadratic formula.
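That rearrangement can be sketched directly in code (the function name and example numbers are mine, chosen for illustration): subtract the line from the parabola and hand the resulting coefficients to the quadratic formula.

```python
import math

# Where does Ax^2 + Bx + C meet Dx + E?  Rearranged into
# f(x) = Ax^2 + (B - D)x + (C - E) = 0 and solved with the
# quadratic formula.  Assumes A != 0.
def intersections(A, B, C, D, E):
    a, b, c = A, B - D, C - E
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                          # the curves never meet
    r = math.sqrt(disc)
    return sorted([(-b + r) / (2 * a), (-b - r) / (2 * a)])

# Example: x^2 meets the line x + 2 at x = -1 and x = 2.
print(intersections(1, 0, 0, 1, 2))   # [-1.0, 2.0]
```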

Yes, it is basically a nomogram.

In general, I suppose the interpretation of the *x* and *y* axes/intercepts just depends on whatever problem you have set up. All kinds of various problems can be formulated in terms of zeros of some function, not least the Riemann Hypothesis.

As a more fundamental reply to the OP, a huge number of problems can be expressed as solving an equation for the condition where the result is zero.

Perhaps the most common question is to ask what value is needed so that two elements of the problem happen simultaneously.

Say we want to meet someone driving cross country, and we want to know when we will meet. We have an equation that describes the position of their car in terms of time, P_{friend}(t),

and we have another equation that describes the position of our car, P_{ours}(t).

So we want to work out when the two positions are the same. So:

P_{friend}(t) = P_{ours}(t)

or

P_{friend}(t) - P_{ours}(t) = 0

This may look like a trivial manipulation, but the important thing is that, as noted above, there is a huge body of knowledge which is concerned with finding the roots of equations. By assembling the problem into a single expression we start down the path of applying this knowledge. Often we discover that both components of the expression have very similar forms, can be manipulated into a single more tractable form quickly, and then well-known tools and rules can be applied to find the roots.

Say we have two bike riders in a race. The endurance rider is in front, but a sprint stage is about to start, and the sprint rider is expected to close in on the endurance rider. We want to find the time at which the sprint rider overtakes the endurance rider.

A rider’s distance is given by their head start, initial velocity, acceleration, and time. Call these h, v, a, t.

So d = at^2 + vt + h

Only the endurance rider has a head start. So:

d_{fast} = a_{fast}t^2 + v_{fast}t and

d_{slow} = a_{slow}t^2 + v_{slow}t + h

The riders meet when the distances are the same, so

a_{fast}t^2 + v_{fast}t = a_{slow}t^2 + v_{slow}t + h, or

a_{fast}t^2 + v_{fast}t - a_{slow}t^2 - v_{slow}t - h = 0.

Collecting terms:

(a_{fast} - a_{slow})t^2 + (v_{fast} -v_{slow})t - h = 0

which is a quadratic, something for which we have a well known, pre-canned, solution for finding the roots.

The roots of ax^2 + bx + c are \frac{b \pm \sqrt{b^2 - 4ac}}{2a}

We can substitute the coefficients of our problem into the well known solution and instantly get the time at which the sprint rider catches the endurance rider.

We can note that the general formula for the roots of a quadratic has a couple of nuances. It may give zero, one or two results. If the expression b^2 - 4ac is zero, there is one solution (when the riders meet). If it is negative, there are no real solutions (riders never meet). Otherwise there are two solutions. Context can tell us how to interpret two solutions. One solution may be in the past, in which case it can be ignored. Or it may be that the riders actually do meet twice in the future. Depends upon the initial conditions.
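The whole calculation, discriminant cases included, fits in a few lines. This is my own sketch with made-up numbers (and it assumes the two accelerations differ, so the coefficient of t^2 is nonzero):

```python
import math

# Solves (a_fast - a_slow) t^2 + (v_fast - v_slow) t - h = 0 for the
# rider problem above, keeping only non-negative times (meetings in
# the future).  Assumes a_fast != a_slow.
def overtake_times(a_fast, v_fast, a_slow, v_slow, h):
    a = a_fast - a_slow
    b = v_fast - v_slow
    c = -h
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # the riders never meet
    r = math.sqrt(disc)
    roots = {(-b + r) / (2 * a), (-b - r) / (2 * a)}
    return sorted(t for t in roots if t >= 0)

# Hypothetical numbers: sprinter accelerates at 2, endurance rider at 1,
# equal initial speeds, head start h = 25.  One root is in the past
# (t = -5) and gets filtered out.
print(overtake_times(2, 10, 1, 10, 25))   # [5.0]
```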

Solving a quadratic like this is probably the simplest non-trivial example of why the roots of functions are so important. There are a huge number of functions for which we have either closed form or useful numerical methods of calculating the roots. This is bread and butter stuff for almost every facet of mathematics, science and engineering.

The question is not limited to 2D graphs. But it quickly gets out of hand.

Nitpick: You missed a minus sign in the quadratic formula.

Imagine that…

Getting to slightly more advanced topics:

If you have a polynomial, how do you find the roots? **Francis_Vaughan** gave the quadratic formula earlier. That has been known for about 1400 years. But what about higher degrees–i.e., x^3 (cubic), x^4 (quartic), x^5 (quintic), etc.?

Formulas for cubic and quartic polynomials were found, with difficulty. But are there closed-form expressions for quintic and above? It turns out, no (at least not in general). There are some quintics with roots that cannot be expressed with radicals (square roots, etc.).

Évariste Galois essentially invented modern group theory to prove this, as well as discovering relationships between other branches of math. Unfortunately, he died in a duel at age 20.

Darn it. Too easy to lose them inside the \TeX.

Story of my undergraduate life, losing minus signs.

The lesson I learned in my undergraduate computer graphics classes was *always have an even number of sign errors*.

This reminds me traumatically of those problems we had in First Semester Algebra (9th grade) in 1966:

They were called “uniform motion problems” (everything traveled at constant speeds; no dealing with acceleration), and the whole class had a bear of a time getting to understand them.

Dropping a minus sign was my favorite go-to error when I was in that class. It most often happened in contexts like this:

-(3x^{2} + 7x - 8) → (3x^{2} - 7x + 8)

I once had this real-world problem to solve: Researchers staked out a cliff top overlooking a bay where whales and boats were common. The question at hand was: Do the boats disturb the whales? The observers took sightings of whales whenever they surfaced, noting the time and coordinates, and the coordinates on any nearby vessels. With a series of such sightings we could determine the path of the whale and boat. The dependent variable to be computed was the *closest point of approach of any whale to a vessel*. These data would be statistically analyzed to determine if the whales tended to alter their path to avoid vessels. My task was to develop the math to compute this and write the FORTRAN program to do it.

(ETA: And then the PI said I had to modify the algorithm to take the curvature of the earth into account when computing distances, for no better reason than because other researchers were doing that.)

You are talking about solution of polynomial equations by radicals—but, as you mention, this is not a mystery today because you can compute the Galois group of a given polynomial.

You do not always need to write down such an expression, though (especially if there isn’t one); e.g., say you are doing some engineering or whatever where \varepsilon is small, in which case you can solve x^5-x+\varepsilon=0 as

x = \varepsilon + \varepsilon^5 + 5\varepsilon^9 + \cdots

which is surely good enough, especially if you can truncate to first order, x \approx \varepsilon (all assuming x is also close to 0). Or, if you know all the coefficients, you can find roots numerically, etc.
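A quick numerical sanity check of that perturbative idea (the helper function and the choice of eps are mine): iterating x = eps + x^5 gives the series x ≈ eps + eps^5 + ..., and Newton’s method from x = 0 should land on essentially the same root.

```python
# Compare the perturbative root of x^5 - x + eps = 0 near x = 0 with a
# root found numerically by Newton's method.
def newton_root(f, df, x0, steps=50):
    x = x0
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

eps = 1e-3
numerical = newton_root(lambda x: x**5 - x + eps,
                        lambda x: 5 * x**4 - 1,
                        0.0)
series = eps + eps**5
# The two agree far beyond first order; the next series term is ~5*eps**9.
print(abs(numerical - series))
```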