When I first took Algebra I in 9th grade in 1965, this sort of thing was definitely drilled into us, but I, and I think most of the class, were perpetually confused by it at the time.
First of all, despite all the contradictory remarks already posted: Yes, there IS an element of pure arbitrary convention here: the notation sqrt(x) or √x is somewhat arbitrarily defined to refer only to the positive square root.
Thus, the statement: y = √25 can ONLY mean the positive root, simply because we’ve all agreed that the symbol √ means just that.
Yet the fact remains that, given the statement: y[sup]2[/sup] = 25, you can plug in y = 5 or y = -5 and it works. In this formulation, the artificial restriction is absent.
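(If a programming analogy helps: the sqrt function in just about any math library follows the same convention as the √ symbol. A little Python sketch, purely for illustration:)

[code]
import math

# The library sqrt, like the √ symbol, hands back only the principal (positive) root.
print(math.sqrt(25))        # 5.0, never -5.0

# Yet both values satisfy the squared equation.
for y in (5, -5):
    print(y, y**2 == 25)    # True for both
[/code]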
The same applies to the absolute value function, which was also a perennial point of confusion.
The convention gives us a way to specify exactly which square root(s) we are interested in using. If we just want to talk about the positive, or principal, root, the √x notation allows us to do that.
If we wanted to talk about the negative root, well, we have the notation -√x to handle that.
AND, if we actually want to talk about BOTH roots (which we sometimes do), we note with much joy and glee that we have the notation ±√x for that! So the notational conventions give us the flexibility to say exactly what we mean and mean exactly what we say!
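Spelled out in code, the three notations line up like this (again just a quick Python illustration, nothing official):

[code]
import math

x = 25
r = math.sqrt(x)        # √x  : the principal (positive) root only
neg_r = -math.sqrt(x)   # -√x : the negative root
both = (r, -r)          # ±√x : both roots, when that's what we actually mean

print(r, neg_r, both)   # 5.0 -5.0 (5.0, -5.0)
[/code]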
At the level of beginning algebra classes, this is perhaps most famously seen in the quadratic formula which gives us both solutions to the general quadratic equation ax[sup]2[/sup] + bx + c = 0 as:
x = ( -b ± √(b[sup]2[/sup] - 4ac) ) / (2a)
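And if you want to see the ± carried out mechanically, here's a minimal Python sketch; the name quadratic_roots is just mine, and it assumes the discriminant is non-negative so both roots are real:

[code]
import math

def quadratic_roots(a, b, c):
    """Both solutions of ax^2 + bx + c = 0 (assumes real roots, i.e. b^2 - 4ac >= 0)."""
    root = math.sqrt(b**2 - 4*a*c)
    return (-b + root) / (2*a), (-b - root) / (2*a)

print(quadratic_roots(1, 0, -25))   # (5.0, -5.0) -- the roots of x^2 - 25 = 0
print(quadratic_roots(1, -3, 2))    # (2.0, 1.0)  -- the roots of x^2 - 3x + 2 = 0
[/code]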
You’ll see a lot of the same thing, only even worse, with trigonometric functions and their “inverses”.