Given \frac{\partial w'\Sigma_a w}{\partial p}, can \frac{\partial w'\Sigma_b w}{\partial p} be found?
\Sigma is a matrix and w is a vector; w is obtained by minimizing w'\Sigma_a w subject to Aw \le p.
The optimization package I use to calculate w also gives me \frac{\partial w'\Sigma_a w}{\partial p}.
My ultimate objective function also involves w'\Sigma_b w, using the same w obtained earlier. To optimize my ultimate objective function, I need to find \frac{\partial w'\Sigma_b w}{\partial p}.
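For concreteness, here is a minimal sketch of the kind of setup I mean, with cvxpy standing in for my actual package, random positive definite data standing in for \Sigma_a, and the fully-invested equality written as two inequality rows. The derivative the package hands me is (up to sign) just the dual variables of the constraint:

```python
import numpy as np
import cvxpy as cp

n = 4
rng = np.random.default_rng(0)
M = rng.normal(size=(n, n))
Sigma_a = M @ M.T + np.eye(n)      # random positive definite stand-in

# A w <= p: "fully invested" (sum w = 1, written as two <= rows),
# then an upper and a lower bound on each weight.
A = np.vstack([np.ones(n), -np.ones(n), np.eye(n), -np.eye(n)])
p = np.concatenate([[1.0, -1.0], 0.5 * np.ones(n), 0.1 * np.ones(n)])

w = cp.Variable(n)
con = [A @ w <= p]
prob = cp.Problem(cp.Minimize(cp.quad_form(w, Sigma_a)), con)
prob.solve()

# Envelope theorem: for min_w w' Sigma_a w s.t. A w <= p, the sensitivity
# of the optimal value to p_i is minus the i-th dual variable.
dval_dp = -con[0].dual_value
print(prob.value, dval_dp)
```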
Playing around with the chain rule I have come up with the following:
\frac{\partial w'\Sigma_b w}{\partial p}=\frac{\partial w'\Sigma_b w}{\partial w}\frac{\partial w}{\partial w'\Sigma_a w}\frac{\partial w'\Sigma_a w}{\partial p}=2\Sigma_b w\frac{\partial w}{\partial w'\Sigma_a w}\frac{\partial w'\Sigma_a w}{\partial p}
The only part of the last bit I can’t wrap my head around is \frac{\partial w}{\partial w'\Sigma_a w}; I’m not even sure it makes sense.
After banging my head against this problem I am tending towards the belief that it does not have an analytic solution. Wondering if anyone here can confirm it.
To clarify, are \Sigma_a and \Sigma_b two different matrices? How, if at all, are they related? Or is the a (or b) an index, and if so, where’s the other index? And w' is the transverse of w?
I assume Chronos means to ask if w′ is the transpose of w.
Other things to know that might help. Are the two Σ matrices symmetric and/or positive (semi)definite? What do we know about the A matrix? Is it positive, for example?
\Sigma_a and \Sigma_b are both positive definite covariance matrices, each obtained from a different risk model. w' is indeed the transpose of w.
The A matrix, combined with p, is a specification of linear inequality constraints. The first row of A is all 1s, and the first element of p is 1. This means that the result of the optimization is constrained to be “fully invested”. In my case, every other row of A contains a single nonzero entry of 1 or -1, and the corresponding values of p represent upper and lower bounds for w.
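For example, with just two assets and bounds l_i \le w_i \le u_i on each weight, the constraint system Aw \le p would look like
A=\begin{pmatrix}1&1\\1&0\\-1&0\\0&1\\0&-1\end{pmatrix},\qquad p=\begin{pmatrix}1\\u_1\\-l_1\\u_2\\-l_2\end{pmatrix}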
Ultimately I am trying to find the set of constraints that leads to the most different minimum variance portfolios, where w_a is the minimizer according to \Sigma_a and w_b is the minimizer according to \Sigma_b, and then both w_a and w_b are evaluated according to \Sigma_a. In plain English, I want to find the set of constraints where these two risk models are in maximum disagreement.
The software I am using to minimize w_a'\Sigma_a w_a and w_b'\Sigma_b w_b also gives me \partial (w_a'\Sigma_a w_a)/\partial p and \partial (w_b'\Sigma_b w_b)/\partial p respectively. But I also need to know \partial (w_b'\Sigma_a w_b)/\partial p in order to optimize my ultimate objective function.
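In code, the thing I ultimately want to optimize over p looks roughly like this (cvxpy again standing in for my actual software, min_var_weights and disagreement being made-up names, and the difference of the two variances being just one way to score the disagreement):

```python
import numpy as np
import cvxpy as cp

def min_var_weights(Sigma, A, p):
    """Minimize w' Sigma w subject to A w <= p; return the optimal w."""
    w = cp.Variable(Sigma.shape[0])
    cp.Problem(cp.Minimize(cp.quad_form(w, Sigma)), [A @ w <= p]).solve()
    return w.value

def disagreement(p, Sigma_a, Sigma_b, A):
    """How differently the two risk models pick their minimum-variance
    portfolios, with both candidates evaluated under Sigma_a."""
    w_a = min_var_weights(Sigma_a, A, p)   # min-variance under model a
    w_b = min_var_weights(Sigma_b, A, p)   # min-variance under model b
    # w_b' Sigma_a w_b is the cross term whose p-gradient I am missing.
    return w_b @ Sigma_a @ w_b - w_a @ Sigma_a @ w_a
```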
Isn’t your problem easily soluble using Lagrange multipliers? Why do you think there is no “analytic” solution? Anyway, I would try solving the 2x2 case by hand first.
I am not sure that it is as easy as that, since it is an optimization of an optimization.
The reason I don’t think there is an analytic* solution is that I haven’t been able to find one.
I don’t have any professional interest in this question, more about satisfying my own curiosity, which means I have plenty of time to think about it. I think your 2x2 suggestion is a good one.
*By “analytic” I mean a simple closed form solution. Might be using the term incorrectly.
OK, after trying again, I think I have found the part that makes this impossible to derive.
Earlier I mentioned the following progression \frac{\partial w'\Sigma_b w}{\partial p}=\frac{\partial w'\Sigma_b w}{\partial w}\frac{\partial w}{\partial w'\Sigma_a w}\frac{\partial w'\Sigma_a w}{\partial p}=2\Sigma_b w\frac{\partial w}{\partial w'\Sigma_a w}\frac{\partial w'\Sigma_a w}{\partial p}
and I said that the only term I don’t have a solution for is \frac{\partial w}{\partial w'\Sigma_a w}
The problem is that f(w)=w'\Sigma_a w is not an invertible function: more than one input can lead to the same output (for instance, w and -w always give the same value). So it doesn’t make sense to talk about the gradient of the inverse, since the inverse does not exist.
So I guess this means the best I can do is some sort of evolutionary algorithm.
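Something along these lines, perhaps, continuing the sketch from my earlier posts (scipy’s differential_evolution standing in for a genetic algorithm, with placeholder search ranges for each p_i):

```python
import numpy as np
from scipy.optimize import differential_evolution

# differential_evolution is population-based and derivative-free, so the
# non-smoothness of the objective is not fatal. Only the bound rows of p
# are searched; the fully-invested rows stay fixed. Real code would also
# need to guard against p values that make the inner problems infeasible.
def neg_disagreement(p_bounds):
    p = np.concatenate([[1.0, -1.0], p_bounds])
    return -disagreement(p, Sigma_a, Sigma_b, A)

n_bounds = A.shape[0] - 2
result = differential_evolution(neg_disagreement,
                                bounds=[(-1.0, 1.0)] * n_bounds,
                                seed=0)
print(result.x, -result.fun)
```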
I am afraid I am not really with you. What I meant was, e.g., let your problem be to minimize 3x^2 + 2xy + 3y^2 subject to Ax + By \le p. Note that without the constraint there is a global minimum of 0 at (0,0). If your constraint is, e.g., x+y\le p, then for p\ge0 the global minimum is achievable. When p is negative the constraint binds, so minimize the Lagrangian 3x^2+2xy+3y^2-\lambda(x+y-p), giving x=y=p/2 and \lambda=4p. I may have screwed up a sign somewhere, but the point is that you can figure out x and y, and whatever function you want of x and y, in terms of p, and then differentiate it with respect to p.
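And if you want a machine to check that algebra, a few lines of sympy will do it. This is just the active-constraint case (p negative), with arbitrary symbol names:

```python
import sympy as sp

x, y, p, lam = sp.symbols('x y p lambda', real=True)
f = 3*x**2 + 2*x*y + 3*y**2
L = f - lam*(x + y - p)           # Lagrangian with the constraint active

# Stationarity in x and y, plus the active constraint x + y = p
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), x + y - p],
               [x, y, lam], dict=True)[0]
print(sol)                        # {x: p/2, y: p/2, lambda: 4*p}

m = sp.simplify(f.subs(sol))      # optimal value as a function of p
print(m, sp.diff(m, p))           # 2*p**2, 4*p  (matches lambda)
```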
After optimizing this with a genetic algorithm, I focused on a single element of p, varied its value a bit, and calculated the objective function at each point.
I’d like to paste a screenshot here but I am told I am not allowed to embed media items in a post.
But if I could, you would see an extremely non-smooth objective function. The genetic algorithm did manage to zero in on the minimum (given the values of all the other elements of p), at least.
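In lieu of the screenshot, the scan itself is easy to reproduce, continuing the sketches above (the index and the scan width here are arbitrary choices):

```python
import numpy as np

# Vary a single element of the optimized p and record the objective.
# The curve this traces out is the non-smooth function described above:
# it kinks wherever the active set of either inner problem changes.
p_best = np.concatenate([[1.0, -1.0], result.x])
i = 2
for delta in np.linspace(-0.05, 0.05, 21):
    p_scan = p_best.copy()
    p_scan[i] += delta
    print(delta, disagreement(p_scan, Sigma_a, Sigma_b, A))
```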
So I think this settles it; an analytic solution cannot be found since a gradient function does not exist.
You can’t post an image directly to the board. But you can upload it to a photo hosting site (I think imgur is the most commonly used one), and then link it from there. If your link consists of just the URL by itself on a line, then Discourse will automatically embed the image.