Well, in theory, you manipulate one equation until a single variable appears alone on one side of the equals sign, then substitute whatever is on the other side of the equals sign for that variable in every other equation. Repeat until you have only one equation left, in one variable. Solve for that variable, then back-substitute that value into the second-to-last equation to get another equation in one variable, solve for that variable, and repeat until all the variables are solved. But that’s unlikely to be practical in this case, because some of the variables appear squared.
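As a toy illustration of that substitute-and-back-substitute idea (not the system in question): take x + y = 5 and x - y = 1. Rearrange the first to y = 5 - x, substitute into the second to get x - (5 - x) = 1, so 2x - 5 = 1 and x = 3; back-substitute into y = 5 - x to get y = 2.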
Now, I’ve written programs to do this sort of thing numerically, but that was 25+ years ago and I’ve forgotten a bit … and it’s difficult to present the math without appropriate mathematical notation.
First reduce the number of equations as much as is reasonable. For example, use T1 + T2 = T to replace “T” by “T1 + T2” in each other equation. Then the T1 + T2 = T equation is no longer a part of the system, and you can set it aside (but you’ll need it after you’ve solved the remaining system, to calculate T). You might do the same thing using the Vrx = Vix + Aix*(T1 + T2) equation to get rid of Vrx, the Vry = Viy + Aiy*(T1 + T2) equation to get rid of Vry, and so on. At some point it will be impractical to reduce further, so go on to the next step.
Re-write the remaining equations so that all the terms are on the left side of the equals sign and each equation ends in “= 0”. If you were to plug in particular values for all the variables in these equations, you probably wouldn’t get zeros on the right side (unless you are particularly lucky). Instead, the left side of each equation would evaluate to some non-zero number. These numbers are called “residuals”. The goal is to find a set of values of the variables that reduces all the residuals to zero.
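In code, that step might look something like this minimal sketch, with two made-up equations standing in for yours:

```python
def residuals(x):
    # Each entry is the left side of one equation after moving every term
    # to the left of the "= 0".  Replace these with your actual equations
    # (in T1, T2, Vrx, and so on); x is just the list of unknowns.
    r1 = x[0] ** 2 + x[1] - 7.0
    r2 = x[0] - x[1] + 1.0
    return [r1, r2]
```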
Now create a new function that measures how well a particular choice of values for all the variables makes all the residuals zero. Mathematically, the simplest condition is just that the vector consisting of all the residuals must be all zeros, but this is often inconvenient for computers. Another possibility is requiring that the sum of the squares of all the residuals be zero; this function is zero if and only if each residual is zero, and it’s easily computed. Also, it is a function that not only has a zero at the desired point, it also has a minimum there; and minima are often easier to handle than zeros. And its result is a single number; it’s easier to compare two single numbers than it is to compare two vectors.
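Using the hypothetical residuals() above, the sum-of-squares version is one line:

```python
def objective(x):
    # Zero exactly when every residual is zero, and a minimum there as well.
    return sum(r * r for r in residuals(x))
```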
Next come up with a set of initial guesses for the value of each variable. These could be just plain guesses.
Now come up with a method of taking a set of choices for each variable and deriving from it a set of choices that is probably closer to the real solution. One way to do this is to make a small change (maybe 10%) in one variable, evaluate all the equations, evaluate your “how good is this solution” function, and see if the change improved things. If it did, save the change. If it did not, try the opposite change. If the opposite change improves things, save it; if it does not, save the original value of the variable. Repeat for all variables.
(There are lots of other ways of going from one set of choices to a better set of choices, but they are real hard to explain in text).
Repeat until no change produces any improvement. If the solution is good enough, you are done. If the solution is not good enough, make the size of your change smaller and repeat the whole process until the solution is good enough.
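Putting the last few paragraphs together (and reusing the hypothetical residuals() and objective() sketches above), the whole loop might look roughly like this; it’s an illustration of the idea, not tuned code:

```python
def solve(x, step=0.1, tolerance=1e-12, min_step=1e-9):
    x = list(x)
    best = objective(x)
    while best > tolerance and step > min_step:
        improved = False
        for i in range(len(x)):
            # Nudge variable i by about "step" times its current value
            # (fall back to an absolute nudge if it is sitting at zero).
            nudge = step * x[i] if x[i] != 0.0 else step
            for delta in (nudge, -nudge):
                trial = list(x)
                trial[i] += delta
                value = objective(trial)
                if value < best:        # the change improved things: keep it
                    x, best = trial, value
                    improved = True
                    break
        if not improved:
            step *= 0.5                 # nothing helped: make the changes smaller
    return x, best

solution, leftover = solve([1.0, 1.0])  # plain initial guesses
print(solution, leftover)
```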
This kind of procedure is usually guaranteed to converge to a real minimum of the “how good is this solution” function. Depending on your initial guesses, it may converge to a non-zero minimum, which is not a solution to the real problem; handling that rigorously is complex, but you can just try a significantly different set of initial guesses and repeat the whole procedure. There are many methods and tricks for making it converge faster. But maybe this will get you going.
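For the “it settled at the wrong minimum” case, one crude remedy is a multi-start wrapper around the solve() sketch above; again just an illustration, and the guess range here is arbitrary:

```python
import random

def solve_with_restarts(n_vars, attempts=20, tolerance=1e-12):
    # Reuses solve() from the sketch above; tries several random starting points.
    for _ in range(attempts):
        guess = [random.uniform(-10.0, 10.0) for _ in range(n_vars)]
        x, leftover = solve(guess, tolerance=tolerance)
        if leftover <= tolerance:
            return x            # every residual is (essentially) zero
    return None                 # only non-zero minima found; try wider guesses
```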
jrf