Do people ever multiply by the 1 vector for summation?

I’ve been doing some math with a lot of vectors, and it occurs to me that in some cases it just looks notationally cleaner to use

y=v[sup]T[/sup]1

Instead of

y=Σ[sub]i=1[/sub][sup]n[/sup]v[sub]i[/sub]

Is this commonly used in any fields of math? I’d only do it in my own notes, not anything anyone else is going to see, since I’ve never seen it before. I was just wondering if this is ever used.
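
For what it’s worth, the equivalence is easy to sanity-check numerically. A minimal NumPy sketch (v is just made-up data):

[code]
import numpy as np

v = np.array([2.0, -1.0, 3.5, 0.5])   # made-up data
ones = np.ones(len(v))                # the "1 vector"

# v' 1 (the dot product with the all-ones vector) is the sum of v's components
print(v @ ones)   # 5.0
print(v.sum())    # 5.0
[/code]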

I use that notation all the time in my class notes for students, though I use ’ rather than a superscript T for transpose.

You could do that, yes, but I’ve never seen it done. My guess is that it’s not common because the 1 vector (I’m assuming by this you mean a vector containing all 1’s) isn’t basis-independent, and that property matters for some (but by no means all) applications.

If I wanted compact notation, I’d just leave the index and range implicit:

y=Σv[sub]i[/sub]

I don’t recall seeing your 1 notation in the wild.

It seems it is used

But it also seems to open the door to confusion. The first equation appears to be a dot product and so should be invariant under a change of basis, whereas the second equation clearly is not invariant. The reason for the discrepancy, of course, is that which vector you use as 1 depends on the basis.
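
A quick numerical illustration of that point, taking an arbitrary rotation as the change of basis (NumPy, made-up vectors):

[code]
import numpy as np

# An orthogonal change of basis: rotation by 45 degrees
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R = np.array([[c, -s],
              [s,  c]])

v = np.array([3.0, 1.0])
w = np.array([2.0, 5.0])

# A genuine dot product is invariant when both vectors are rotated...
print(v @ w, (R @ v) @ (R @ w))   # 11.0  11.0

# ...but the coordinate sum is not, because the all-ones vector of the new
# coordinates is a different vector than the all-ones vector of the old
print(v.sum(), (R @ v).sum())     # 4.0  ~4.243
[/code]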

The “one vector” to sum them all?

Me, I haven’t seen it previously, and would probably be confused for a moment if I encountered it without warning.

Same here.

The zero vector 0 is so-called, not just because it contains all zeros, but because it acts as a zero—that is, the additive identity element. By analogy, the “one vector” 1 should be the multiplicative identity—except, what would that be? What kind of multiplication are we talking about?

As a scalar, 1 is the multiplicative identity: 1x = x. For matrix multiplication, the identity is, of course, I, a square matrix with 1’s on the diagonal and 0’s elsewhere.
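
A small NumPy check of the contrast (the vector x is made up): I really is an identity for matrix multiplication, while the all-ones vector just sums.

[code]
import numpy as np

x = np.array([2.0, -1.0, 3.0])
ones = np.ones(3)
I = np.eye(3)

print(I @ x)     # [ 2. -1.  3.]  -- Ix = x, a genuine multiplicative identity
print(ones @ x)  # 4.0            -- as a product, the 1 vector just sums x
print(ones * x)  # [ 2. -1.  3.]  -- though for *elementwise* multiplication,
                 #                   the all-ones vector is an identity
[/code]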

This is the type of situation in which I use the 1 vector. Let x be a vector of random variables with variance-covariance matrix V. Then the weights w giving the minimum-variance weighted average of the x’s are the solution to

min[sub]w[/sub] w[sup]T[/sup]Vw subject to 1[sup]T[/sup]w = 1.

The solution is w = V[sup]-1[/sup]1 / (1[sup]T[/sup]V[sup]-1[/sup]1).

You can do this with summation notation, but the vector form is much clearer to those who know vectors. In particular, the expression Σ[sub]j[/sub]V[sub]ij[/sub][sup]-1[/sup] consistently confuses some students as to whether it means (Σ[sub]j[/sub]V[sub]ij[/sub])[sup]-1[/sup] or Σ[sub]j[/sub](V[sup]-1[/sup])[sub]ij[/sub].
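
For concreteness, here is a minimal NumPy sketch of that solution, with a made-up covariance matrix:

[code]
import numpy as np

# Made-up variance-covariance matrix (symmetric positive definite)
V = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
ones = np.ones(3)

# w = V^{-1} 1 / (1' V^{-1} 1), solving rather than forming the inverse
Vinv1 = np.linalg.solve(V, ones)
w = Vinv1 / (ones @ Vinv1)

print(w)          # the minimum-variance weights
print(w.sum())    # 1.0 -- the constraint 1'w = 1 holds
print(w @ V @ w)  # the minimized variance
[/code]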

I’ve seen it used productively in the context of deriving the normal equations for least squares. If you are positing a model of the form

y = ax + b

for some unknown scalars a and b, and you have samples of the form (x[sub]i[/sub], y[sub]i[/sub]) with i ranging from 1 to n, you can pack your samples into vectors in R[sup]n[/sup], and then all possible values of your model form a two-dimensional subspace of R[sup]n[/sup] with basis vectors x and 1.

Your error is then:

e(a,b) = y - (ax + b1)

The squared error is then minimized exactly when e(a,b) is normal to the plane spanned by x and 1.
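
To make that concrete, a minimal NumPy sketch with made-up samples; forcing the residual to be orthogonal to both x and 1 is exactly the normal equations:

[code]
import numpy as np

# Made-up samples (x_i, y_i)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Columns x and 1 span the subspace of all possible model values
A = np.column_stack([x, np.ones_like(x)])

# Normal equations A'A [a b]' = A'y: the residual is made
# orthogonal to both basis vectors of the subspace
a, b = np.linalg.solve(A.T @ A, A.T @ y)

e = y - (a * x + b)
print(a, b)            # fitted slope and intercept
print(x @ e, e.sum())  # both ~0: e is normal to x and to 1
[/code]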

I’ve seen 1 used on a few other occasions, but this is the only one I recall where it really clarifies the proof.

This is, in fact, the exact situation. (Well, a slightly modified derivation of least squares that accounts for some additional noise, but still.)

This abbreviated notation is quite commonly used in statistics.

ETA: Even more abbreviated, it would commonly be written simply as

Σv

where it is understood that v is a vector (actually, just a list of data points) to be summed.