Vector Calculus Problem (Divergence, Curl, Gradient)

Can anyone tell me what the error is in this proof?

http://s2.postimg.org/j9w9q0o15/final_problem_2.png

My teacher posted a solution (which I can post too if you guys are interested) that’s much, much more complicated than this. He just marked this wrong and said something about mixing vectors and vector operations such that I was fundamentally wrong. Part of me wonders if my answer didn’t match his solutions manual so he just marked it wrong.

Anyhow, I am now unable to ask the professor what I did wrong.

Thanks

What he means is that you’ve treated ∇ as if it were a vector, when in fact it isn’t a vector (or even a vector operator), it’s just a symbol.

Maybe I am missing the nuance, but I am seeing quotes like this Wikipedia excerpt about computing curl(F):

Source: http://en.wikipedia.org/wiki/Curl_(mathematics)

The problem I have: if we define curl(F) = ∇ × F and then use the cross-product notation (as quoted in the Wikipedia article above) to find the curl, so that we are using a vector cross product to compute it, how is that fundamentally any different from what I’ve done?

Thanks for the help!

To expand: ∇(v), ∇⋅(v) and ∇×(v), where v∈V, are all functions going from V to V i.e. they’re operators on the vector space V (though even that’s not quite the full story). So whilst (if we omit the brackets as is usually done) ∇v, ∇⋅v, ∇×v are all members of V, it really makes no sense to treat ∇ as if ∇∈V.

And I should make it clear that V here is a vector tangent space at each point.

But the curl isn’t defined to be ∇ × F ; rather, it’s defined in such a way that “∇ × F” is a useful way of remembering how it’s calculated. Since ∇ by itself isn’t actually a vector, ∇ × F isn’t actually a cross product of vectors; it’s an abuse of notation. So you can’t just assume that things that are true of vector cross products (like your “recall” statement on line 3) are true of ∇ × F.

FWIW, googling turned up this proof. Does that look like your teacher’s solution?

D’oh, and of course ∇(v) and ∇v in post #4 should read ∇[sup]2[/sup](v) and ∇[sup]2[/sup]v.

Of course ∇ is just a symbol. So are F, A, B, and C. ∇ is still a vector, though, just like all those others are (or if you prefer, the thing that the symbol ∇ represents is a vector).

The problem isn’t in treating it like a vector; the problem is in assuming that it commutes. It doesn’t: a∇ is in general different from ∇a. This means that many identities you’ve learned that were derived for objects that do commute will no longer hold for ∇.
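To see that non-commutativity concretely, here is a minimal one-dimensional sketch in SymPy (the names a and f are my own illustrative stand-ins): applying "multiply by a, then differentiate" versus "differentiate, then multiply by a" to a test function leaves a product-rule term behind.

```python
# Minimal sketch: "∇a" vs "a∇" in one dimension, using SymPy.
import sympy as sp

x = sp.symbols('x')
a = sp.Function('a')(x)   # the multiplying field
f = sp.Function('f')(x)   # a test function to apply the operators to

# "∇a" applied to f: multiply first, then differentiate (product rule fires)
nabla_a = sp.diff(a * f, x)

# "a∇" applied to f: differentiate first, then multiply
a_nabla = a * sp.diff(f, x)

# The difference is the leftover product-rule term f * (da/dx), not zero
leftover = sp.simplify(nabla_a - a_nabla)
```

The leftover term is exactly f·(da/dx), which is why identities derived for commuting factors can fail once ∇ enters.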

Just to be very clear, curl(F) and ∇ × F are different notation for the same thing. Similarly for grad(phi) and ∇phi, and for div(F) and ∇⋅F. One isn’t defined by the other.

This. For example, in the second-to-last step, you said that F(∇[sup]2[/sup]) = ∇[sup]2[/sup]F, which is false. Moreover, the BAC-CAB rule implicitly requires rearranging the order of the vectors, which you can’t do with ∇. I’m a professor myself, in physics instead of math; we’re a lot less stringent about rigor over here, but even I would have marked this wrong.
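For what it’s worth, the identity the OP is presumably proving, curl(curl F) = grad(div F) − ∇[sup]2[/sup]F, can be checked componentwise with a computer algebra system. A sketch using SymPy’s vector module (the field component names are illustrative):

```python
# Sketch: verify curl(curl F) = grad(div F) - laplacian(F) componentwise.
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# A fully generic vector field with undetermined components
Fx, Fy, Fz = (sp.Function(name)(x, y, z) for name in ('Fx', 'Fy', 'Fz'))
F = Fx*N.i + Fy*N.j + Fz*N.k

def vec_laplacian(V):
    # componentwise scalar Laplacian, reassembled into a vector
    cx, cy, cz = V.to_matrix(N)
    lap = [sum(sp.diff(c, v, 2) for v in (x, y, z)) for c in (cx, cy, cz)]
    return lap[0]*N.i + lap[1]*N.j + lap[2]*N.k

lhs = curl(curl(F))
rhs = gradient(divergence(F)) - vec_laplacian(F)

difference = sp.simplify((lhs - rhs).to_matrix(N))
print(difference)  # → Matrix([[0], [0], [0]])
```

The cancellation works because SymPy canonicalizes the order of mixed partial derivatives, which is exactly the "components commute" fact the rigorous proofs lean on.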

I assume that your professor’s proof involved writing out the Cartesian components of each piece and re-assembling them into the appropriate expressions. However, there’s also a method involving something called the Levi-Civita symbol that allows you to do it in just a few lines, and the proof would even look a tiny bit like what you wrote down. So it would be possible to “rigorize” your proof, if you wanted to.
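For reference, here is a sketch of that Levi-Civita route for the curl-of-curl identity, using the contraction ε[sub]ijk[/sub]ε[sub]klm[/sub] = δ[sub]il[/sub]δ[sub]jm[/sub] − δ[sub]im[/sub]δ[sub]jl[/sub] and the summation convention:

```latex
\begin{align*}
[\nabla\times(\nabla\times\mathbf{F})]_i
  &= \epsilon_{ijk}\,\partial_j\,(\epsilon_{klm}\,\partial_l F_m) \\
  &= (\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl})\,\partial_j\partial_l F_m \\
  &= \partial_i(\partial_j F_j) - \partial_j\partial_j F_i \\
  &= [\nabla(\nabla\cdot\mathbf{F}) - \nabla^2\mathbf{F}]_i .
\end{align*}
```

Note that the partial derivatives are never moved past one another out of order; the only rearrangement is of the Kronecker deltas, which are honest commuting scalars.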

For the gradient of a scalar field, a∇ isn’t the same as ∇a because a∇ is an operator while ∇a is a vector (field); and that, in turn, is because grad() = ∇ is itself an operator.

The difference isn’t because ∇ is an operator. All of the vectors in that problem are operators. Heck, scalars are operators. The difference is because, unlike many operators, ∇ is a non-commutative operator.

I recall once coming across a situation where a vector or matrix derivative operator needed to be on the right side matrix-wise, but on the left derivative-wise.

For example it had to be
F ⋅ D
for the matrix multiplication to be correct, but D was a derivative operator, operating on F.

Yes, but more specifically: for a scalar field, φ; ∇φ is a vector field, whereas φ∇ is a function that takes a scalar field to a vector field.

I would basically accept it, maybe with a couple points off, followed by an in-person discussion of notation and the relevant issues… Yes, there’s some futziness about order of multiplication, but if they’d just written the identity as “A x (B x C) = (the sum over i of A[sub]i[/sub] B C[sub]i[/sub]) - (A . B)C”, and then noted that the components of A and B commute in this case (reducing this to B (A . C) - (A . B) C), there’d be no problem.

I mean, I’m super not fond of the way multivariable calculus is usually taught, with all the implicit use of Hodge star and inner products all over the place (amounting to a failure to really track units properly), but if you’re going to work within such a perspective, you might as well at least take advantage of the things it brings to the forefront, like that identity.

I also suspect the professor’s reaction is basically unthinkingly saying something like “No, you can’t treat ∇ as if it’s a vector; it’s just a coincidentally convenient mnemonic device to sometimes write things as though it were, but it’s very important to all the same insist that it isn’t!”, when, in fact, there are perfectly reasonable senses in which you can imagine ∇ to be such a thing (everything taking place in a module over the non-commutative ring generated by scalars and differential operators), the realization of which is a great boon in fluently carrying out the relevant manipulations.

Better yet: working over a ringoid (in the sense of a category enriched over abelian groups), rather than limiting oneself strictly to rings.

For any objects A, B, and C in a ringoid, there are maps from Hom(B, C)[sup]3[/sup] x Hom(A, B)[sup]3[/sup] to Hom(A, C) and to Hom(A, C)[sup]3[/sup] corresponding to the dot and cross products, respectively, defined in the ordinary way. In the same way, we have generalized analogues of multiplication of 3-vectors by scalars, and so on. And these all satisfy all the ordinary identities as demonstrated in the ordinary way, so long as one takes care to write products with the factors maintained in sensible order.

Furthermore, given any ring R and module M over R, we can form a ringoid with two objects 1 and *, such that Hom(*, *) = R, Hom(1, *) = M, 1 is a terminal object, and multiplication is given in the obvious way.

In particular, we can take R to be the (commutative!) ring of linear combinations of differential operators of arbitrary degree, and M to be the scalar fields on which these operators act. Then we can interpret ∇ as an element of R[sup]3[/sup] = Hom(*, *)[sup]3[/sup], and any vector-valued field F as an element of M[sup]3[/sup] = Hom(1, *)[sup]3[/sup], and apply our dot/cross/etc. product identities as we’d like.
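Here’s a small executable sketch of that interpretation (the names and the sample field are my own illustration, not from the thread): ∇ as a triple of differentiation operators, F as a triple of scalar fields, with the "dot" and "cross" products defined so the operator factor always stays on the left.

```python
# Sketch: ∇ as an element of R^3 (differential operators), F as an element
# of M^3 (scalar fields), with products keeping operator factors on the left.
import sympy as sp

x, y, z = sp.symbols('x y z')

# ∇ as a triple of differentiation operators
nabla = (lambda g: sp.diff(g, x),
         lambda g: sp.diff(g, y),
         lambda g: sp.diff(g, z))

# a sample vector field F as a triple of scalar fields (illustrative choice)
F = (y*z, x**2, sp.sin(x*y))

def cross_op(D, V):
    # formal cross product; each operator is applied to its component, in order
    return (D[1](V[2]) - D[2](V[1]),
            D[2](V[0]) - D[0](V[2]),
            D[0](V[1]) - D[1](V[0]))

def dot_op(D, V):
    # formal dot product: sum of each operator applied to its component
    return sum(d(v) for d, v in zip(D, V))

curlF = cross_op(nabla, F)   # componentwise this is curl F
divF = dot_op(nabla, F)      # and this is div F
```

For this particular F, divF comes out to 0 and curlF to (x·cos(xy), y − y·cos(xy), 2x − z), matching the textbook curl and divergence.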

And we would genuinely have that curl(F) = the cross product of ∇ and F, div(F) = the dot product of ∇ and F, and grad(G) = the “vector times scalar” product of ∇ and G, in this framework. And the reasoning of the OP is perfectly fine (making the minor order of factors corrections noted above) in this way. And much clearer and more illuminating than what their professor probably demanded they write out instead.

Not that one need know anything about ringoids, or any other such concretely axiomatized algebraic structure, to write the OP’s proof. One just needs to recognize that the same formal manipulations by which one saw that A x (B x C) = … in the first place would, in the same way, establish the desired result were the terms in the appropriate places in the argument interpreted as the appropriate differential operators. The discussion of ringoids is only to soothe those who have been mis-trained to look away from this sort of thing.

Very interesting, Indistinguishable! I have to admit I can’t entirely follow your category-theoretic reasoning, but basically you’re saying that the vector fields form a module(?) over the scalar fields, which is a submodule of a module that also contains a member easily identified as ∇, and for which there exist generalized cross and dot products?

Of course, the del operator must have some fairly strong vector-like qualities, otherwise you wouldn’t be able to treat it as a vector at all; it’s good to better understand where these come from.

I agree with Indistinguishable’s use of additive categories (what he called ringoids), but then you’d better prove that the BAC - CAB formula is valid in that generality. After all, for vectors B(A.C) = (A.C)B, and if B = del, the right-hand side is meaningless. Or rather, it is an operator rather than a vector. While category theory would clarify the situation, category theory is not typically taught before advanced calculus.