integral of cot^2 x

Not a homework question, a test question that I already took.

After the test it suddenly hit me that I could have converted it into csc^2, but I did it in a totally convoluted way on the test.

Can someone check whether I’m right?

I did it via integration by parts

du=2cot csc^2

so xcot^2 - 2Sxcot(csc^2)dx

second integration by parts

v= S cot(csc^2)

I made u one csc, and du csc(cot), and I inverted the preceding sign.

By now it’s too hard to organize it in textbook form or transcribe it from paper, so whatever answer I got from the above is what I got on the test. Maybe I committed an arithmetic error (like a - instead of a + in one or more instances), but I won’t lose all credit for that.

Was at least my procedure correct?
Something tells me I’m not allowed to use the Power Rule with squares of trigonometric functions, but I hope I’m wrong.

Wolfram|Alpha is usually more helpful than this. Apparently, someone programmed in ‘-x - cot(x) + C is the antiderivative of cot(x)^2’ and the program thinks that is a single step, so it’s completely opaque how it got that result.

Anyway, you can use it to verify the answer you get by hand.
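If you’d rather check locally than via Wolfram|Alpha, one option (a sketch using the third-party sympy library, which nobody in the thread actually mentions) is to differentiate the candidate antiderivative and confirm you get the integrand back:

```python
import sympy as sp

x = sp.symbols('x')

# Candidate antiderivative reported by Wolfram|Alpha (plus a constant)
F = -x - sp.cot(x)

# Differentiate and compare against the original integrand cot(x)^2;
# a result of 0 means the antiderivative checks out
difference = sp.simplify(sp.diff(F, x) - sp.cot(x)**2)
print(difference)
```

This “differentiate and compare” check sidesteps whatever convoluted route you took on the test: it only cares whether the final answer is right.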

As a calculus teacher, my advice is to work it out both ways to completion and see if you get the same answer. It’s a rewarding exercise and good practice with integration by parts.

Most calculus problems can be solved by more than one method. However, recognizing which method is most straightforward for a given problem is a valuable skill. Developing that skill is never a waste of time.

I’m pretty sure that’s the easiest way to do it.

I didn’t check it too carefully, but it looks circular—like you’ll just get right back where you started.

But be warned: It’s possible to work the same problem more than one way and get answers that look different but are really equivalent, especially when trig functions are involved.

True, that +C can really confuse things. Depending on how you do it, the integral of sin(x)cos(x) can be

sin^2(x)/2 + C, or -cos^2(x)/2 + C, or -cos(2x)/4 + C.

All equivalent.
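The three standard antiderivatives of sin(x)cos(x) — sin^2(x)/2 (substituting u = sin x), -cos^2(x)/2 (substituting u = cos x), and -cos(2x)/4 (via the double-angle identity) — really do differ only by constants, which the +C absorbs. A quick sketch using sympy (my choice of tool, not the poster’s):

```python
import sympy as sp

x = sp.symbols('x')

# Three forms of the antiderivative of sin(x)cos(x), depending on method
F1 = sp.sin(x)**2 / 2        # substitution u = sin(x)
F2 = -sp.cos(x)**2 / 2       # substitution u = cos(x)
F3 = -sp.cos(2*x) / 4        # double-angle: sin(x)cos(x) = sin(2x)/2

# Each pair differs only by a constant, which the +C absorbs
print(sp.simplify(F1 - F2))
print(sp.simplify(F1 - F3))
```

The printed differences are plain numbers with no x in them, which is exactly the “mistake” the extra-credit problem asks students to find.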

I used to solve this different ways, get the three different answers, and assign my students the extra-credit problem of finding my “mistake”.

I don’t like memorizing antiderivatives, so my approach would be to convert everything to sines and cosines and go from there. I also never use the quotient rule, and turn everything into the product rule.

Is there anything stating that you cannot use the power rule with trig funcs?

Like, what’s the deriv of cos^2? -2sin cos? Doesn’t seem quite right but it might work?

The rules for differentiation are quite general, and can be used to fairly easily find the derivative of almost anything you can write down. That won’t generally help you with integrals, though, unless you just happen to find something that differentiates into something closely resembling what you have.

No, there’s nothing preventing you from doing this, as long as you understand how to apply the power rule properly to such cases; luckily, it appears that you do understand how to apply the power rule properly.

It is right.

If you like, think of this as combining the power rule with the chain rule. The power rule tells you how to differentiate x^n [its derivative is n * x^(n-1)]; the chain rule lets you extend this to differentiating f(x)^n for more complicated functions [its derivative is n * f(x)^(n-1) * the derivative of f(x)], in precisely the way you did.
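The cos^2 example from a few posts up can be checked the same way — power rule plus chain rule gives 2·cos(x)·(-sin(x)). A quick sympy sketch (again, my tooling choice, not something from the thread):

```python
import sympy as sp

x = sp.symbols('x')

# Power rule + chain rule: d/dx [cos(x)]^2 = 2*cos(x) * (-sin(x))
derivative = sp.diff(sp.cos(x)**2, x)
print(derivative)  # -2*sin(x)*cos(x)
```

So the guess of “-2sin cos” upthread was exactly right.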

Each term is a basic integral, and integration results in -cot(x) - x + C.

In reply to the original post, there are of course many ways to carry out that integral, depending on what exactly one happens to have memorized/recognize off the bat (e.g., one way is to happen to have memorized the integral of cot^2; or, essentially equivalently, to guess -x - cot(x), differentiate it, and see that it works [my preferred method of integration…]).

I’ll outline one way to carry out this integral for someone who doesn’t know directly about anything other than integrating/differentiating (co)sine, integrating by parts, and integrating by substitution. (Basically, for someone who doesn’t recognize off the bat how to integrate 1/sin^2. (E.g., me))

First, expand cot^2(x) into cos^2(x)/sin^2(x).

We’ll integrate by parts, taking cos(x) as “u” and cos(x)/sin^2(x) dx as “dv”. du, then, is of course -sin(x) dx. And as for v, well…

To figure out v, we need to integrate cos(x)/sin^2(x) dx. Making the substitution w = sin(x), this becomes the integral of 1/w^2 dw = -1/w = -1/sin(x).

Plugging this back into our integration by parts, we find that the integral of cot^2(x) dx is cos(x) * -1/sin(x) - the integral of -sin(x) * -1/sin(x) dx. That is, -cot(x) - the integral of 1 dx. That is, -cot(x) - x (up to an additive constant, of course).
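The steps above — choosing u = cos(x) and dv = cos(x)/sin^2(x) dx, finding v by the substitution w = sin(x), then assembling uv - ∫v du — can be sketched symbolically (using sympy, an assumption of mine, to stand in for the hand computation):

```python
import sympy as sp

x = sp.symbols('x')

u = sp.cos(x)                             # the "u" part
dv_integrand = sp.cos(x)/sp.sin(x)**2     # the "dv" part (times dx)

# v: the substitution w = sin(x) turns this into 1/w^2 dw = -1/w = -1/sin(x)
v = sp.integrate(dv_integrand, x)
du = sp.diff(u, x)                        # -sin(x)

# Integration by parts: u*v - integral of v*du
result = u*v - sp.integrate(v*du, x)
print(sp.simplify(result - (-sp.cot(x) - x)))  # 0: matches -cot(x) - x
```

Note that v·du = (-1/sin(x))·(-sin(x)) = 1, which is why the leftover integral collapses so nicely to x.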

Nope. You took a very straightforward problem and made a mess of things using integration by parts. The idea behind integration by parts is that you end up with a more doable problem, not a worse one. The right approach is to just convert cot[sup]2[/sup]x to csc[sup]2[/sup]x - 1 using basic trig identities. Then it becomes a trivial problem. Better luck next time.
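For completeness, the identity route is two lines: 1 + cot^2 = csc^2 rewrites the integrand, and then csc^2 integrates to -cot while 1 integrates to x. A sympy sketch of both steps (my tool choice, not part of the thread):

```python
import sympy as sp

x = sp.symbols('x')

# The identity: cot^2(x) = csc^2(x) - 1 (rearranged from 1 + cot^2 = csc^2)
identity_gap = sp.simplify(sp.cot(x)**2 - (sp.csc(x)**2 - 1))
print(identity_gap)  # 0: the identity holds

# Both terms are then basic integrals: csc^2 -> -cot, and 1 -> x
F = -sp.cot(x) - x
check = sp.simplify(sp.diff(F, x) - sp.cot(x)**2)
print(check)  # 0: -cot(x) - x differentiates back to cot^2(x)
```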

The OP didn’t end up with a worse problem than they started with, per se; they just, as Thudlow pointed out, made the classic blunder of immediately following up a first integration by parts by the particular second integration by parts which precisely undoes the first one, so as to end up back where they started; no progress made, but no anti-progress, either (except for the time lost).

Actually, I suppose you’re noting that their first integration by parts gave them a worse problem, which is fair. So, never mind the previous post.

I have seen some integrals which are done using some trick involving integrating by parts twice to get something almost the same as the original integral, and then solving it algebraically. I don’t remember exactly where that trick comes up, though.

You may be thinking of integrating e[sup]x[/sup] sin x or e[sup]x[/sup] cos x.

What happens there is different from what the OP is trying to do, however: since his u and dv in the second integration by parts are just the v and du from the first IBP, his second IBP essentially undoes his first one.

Right; the trick Chronos refers to is useful for a general class of examples: integrating a product of two functions which are each multiples of their own nth derivatives. In that case, integrating by parts n times yields a relation which can be solved for the desired integral*. But it’s vital that the selection of parts in each integration by parts builds on the previous one, so that you end up n-fold integrating one part and n-fold differentiating the other, rather than alternately integrating and differentiating the same parts over and over in exact oscillation.

[*: Though it’s worth observing that this general class of examples is just as well-handled by noting that in such a case, the integrand decomposes into a linear combination of exponential functions, which can be integrated straightforwardly. E.g., e^x cos(x) = the average of e^((1 + sqrt(-1))x) over both sqrt(-1), and thus its integral is the average of 1/(1 + sqrt(-1)) e^((1 + sqrt(-1))x) over both sqrt(-1), which is to say, e^x * the average of cos(x) and sin(x)]
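That footnote’s decomposition can be made concrete: writing e^x cos(x) as the average of e^((1+i)x) and e^((1-i)x), each exponential integrates directly, and averaging the results gives e^x (cos(x) + sin(x))/2 — the same answer the twice-by-parts trick produces. A sympy sketch (tooling is my assumption):

```python
import sympy as sp

x = sp.symbols('x', real=True)
I = sp.I

# e^x cos(x) as the average of e^((1+i)x) and e^((1-i)x)
avg = (sp.exp((1 + I)*x) + sp.exp((1 - I)*x)) / 2
decomposition_gap = sp.simplify(sp.expand_complex(avg - sp.exp(x)*sp.cos(x)))
print(decomposition_gap)  # 0: the decomposition holds

# Integrate each exponential directly (e^(ax) -> e^(ax)/a), then average
antideriv = (sp.exp((1 + I)*x)/(1 + I) + sp.exp((1 - I)*x)/(1 - I)) / 2
target = sp.exp(x) * (sp.cos(x) + sp.sin(x)) / 2
print(sp.simplify(sp.expand_complex(antideriv - target)))  # 0
```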