There’s a distinction here that’s easy to miss, so let me be absolutely explicit about it. It’s the distinction between “dimension” and “rank”.
Tensors exist in something called “space”, or more to the point “n-dimensional space”. A vector in n-dimensional space will have n components. For instance, normal space is 3-dimensional, so a typical vector like the velocity vector will have 3 components. In Relativity, we often deal in 4-dimensional spacetime, so the relativistic momentum vector (called the 4-momentum) has 4 components. In the linear algebra of quantum physics, you’ll often encounter vectors of any dimension: 2, 3, 6, or even infinite.
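To make the component-counting concrete, here’s a quick sketch in Python with numpy (my own illustration with made-up component values, not something from the discussion above):

```python
import numpy as np

# A vector in n-dimensional space has n components.
v = np.array([1.0, 2.0, 3.0])       # e.g. a velocity vector in ordinary 3-space
p = np.array([5.0, 1.0, 2.0, 3.0])  # e.g. a 4-momentum in spacetime (made-up values)

print(len(v))  # 3 components
print(len(p))  # 4 components
```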
Rank is something else entirely. As has been said, vectors are tensors of rank 1, which means that you can easily represent them by a 1-dimensional array of n components. For n = 3:
a[sub]1[/sub] a[sub]2[/sub] a[sub]3[/sub]
The subscript numbers are called indices, and they specify which component of a tensor you want. An index can take any value from 1 to n.
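In numpy terms that looks like this (my illustration; note that numpy counts indices from 0 rather than from 1):

```python
import numpy as np

n = 3
a = np.array([10.0, 20.0, 30.0])  # a rank-1 tensor: one index, n components

# One index picks out one component. The a_1 .. a_n above run from 1 to n,
# while numpy indexes from 0, so a_1 is a[0] and a_n is a[n - 1].
print(a[0])      # the component a_1
print(a[n - 1])  # the component a_3
```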
A tensor of rank 2 can be represented by a 2-dimensional array, also known as a grid or a matrix, of n[sup]2[/sup] components:
a[sub]1,1[/sub] a[sub]1,2[/sub] a[sub]1,3[/sub]
a[sub]2,1[/sub] a[sub]2,2[/sub] a[sub]2,3[/sub]
a[sub]3,1[/sub] a[sub]3,2[/sub] a[sub]3,3[/sub]
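The same grid as a numpy sketch (again my own example values, 0-based indexing):

```python
import numpy as np

n = 3
# A rank-2 tensor: two indices, n**2 = 9 components laid out as a grid.
a = np.arange(1.0, 10.0).reshape(n, n)

# The component a_{2,3} in the 1-based notation above is a[1, 2] here.
print(a[1, 2])
print(a.size)  # n**2 = 9 components
```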
If you have a good imagination, you can probably picture how you might represent a tensor of rank 3. It would have n[sup]3[/sup] components, and you would need 3 indices to specify a single component. (When you get comfortable with tensors, you don’t actually think of them as grids of components, but this visualization is useful for learning.) In general, a tensor of rank p in n-dimensional space will have n[sup]p[/sup] components. I’ve run into tensors up to rank 4 pretty frequently, like the Riemann tensor. I’m sure that higher-rank tensors are used, but it’s probably advanced stuff.
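The n[sup]p[/sup] counting is easy to check mechanically (a numpy sketch of my own):

```python
import numpy as np

n, p = 3, 3
# A rank-3 tensor in 3-dimensional space: p indices, each running over n values.
t = np.zeros((n,) * p)

print(t.ndim)  # the rank: 3 indices needed to specify one component
print(t.size)  # n**p = 27 components
```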
So here’s an instructive question. What’s the difference between a tensor of rank 2 in 3-dimensional space, and a tensor of rank 1 in 9-dimensional space? They both have 9 components. Why couldn’t we just write down the 9 components of our tensor above in a straight line?
a[sub]1,1[/sub] a[sub]1,2[/sub] a[sub]1,3[/sub] a[sub]2,1[/sub] a[sub]2,2[/sub] a[sub]2,3[/sub] a[sub]3,1[/sub] a[sub]3,2[/sub] a[sub]3,3[/sub]
It’s hard to explain succinctly, but you lose structure and meaning when you do this. Just as with matrices you want to be able to do operations like swapping rows and swapping columns, there are certain things you want to be able to do with tensors (like the transformation rules that g8rguy goes into), and those things require you to treat the components as having structure beyond a flat “set of numbers”. In particular, under a change of coordinates a rank-2 tensor in 3 dimensions transforms with one copy of the 3×3 transformation matrix for each of its two indices, which is not the same thing as a 9-component vector transforming with a single 9×9 matrix.
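One concrete way to see what the flat list costs you (my own numpy sketch): row and column operations only make sense with the two-index structure intact.

```python
import numpy as np

a = np.arange(1, 10).reshape(3, 3)  # rank-2: rows and columns mean something
flat = a.reshape(9)                 # the same 9 numbers, written in a straight line

# With the grid you can swap rows, or transpose (a[i, j] -> a[j, i]):
swapped = a[[1, 0, 2], :]  # swap the first two rows
print(a.T)

# The flat list has no notion of "row" or "column" at all; to do any of
# this you'd have to re-impose the two-index structure you threw away.
print(flat)
```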