# Efficient algorithm for minimal sum

If I have the sum
x(d_1,d_2,…,d_N) = sum_{k=1 to N} a_k d_k + sum_{k=1 to N} sum_{n=1 to N} b_{k,n} d_k d_n

where each d_k = +1 or -1, and the coefficients a_k and b_{k,n} are real (positive or negative), is there an efficient algorithm to find the smallest x(d_1,…,d_N)? Or is it provable that the only way to do so is to evaluate all 2^N possible {d_k} assignments and pick the smallest x?
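For concreteness, here's the brute-force enumeration I mean, as a small Python sketch (the function names are just my own) — this is the baseline any smarter method would have to beat:

```python
import itertools

def x(d, a, b):
    """Evaluate x(d) = sum_k a_k*d_k + sum_{k,n} b_{k,n}*d_k*d_n."""
    N = len(d)
    linear = sum(a[k] * d[k] for k in range(N))
    quad = sum(b[k][n] * d[k] * d[n] for k in range(N) for n in range(N))
    return linear + quad

def brute_force_min(a, b):
    """Try all 2^N sign vectors; exact, but only feasible for small N."""
    return min(itertools.product((1, -1), repeat=len(a)),
               key=lambda d: x(d, a, b))
```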

I assume there must be some intelligent pruning that can be done.

Is anyone familiar with these sorts of problems? In general, is there some math message board on the web where these questions are better suited? In the old days, I think there was alt.math, but I’m not sure what the best forum for these sorts of questions is these days.

This is very close to being a nonconvex quadratically constrained quadratic program, which is NP-hard in the general case, so yes, you will likely need to consider approximation algorithms if you want a polynomial-time solution. Problem 11.24 of Boyd and Vandenberghe deals with a convex approximation scheme for a very similar problem, so you might consider looking at that.

Edit: If you’re not familiar with semidefinite programming and you need an answer quickly, I suggest generating a bunch of possible solutions at random and performing hill climbing from each of them. You can probably get a pretty good answer with about N[sup]2[/sup] random starting points.
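A minimal sketch of that random-restart hill climbing in pure Python (names are my own choosing). The flip-gain formula uses the fact that flipping d_k only changes the terms involving d_k — and the diagonal term b_{k,k}*d_k^2 not at all, since d_k^2 = 1 either way — so testing one flip costs O(N) instead of re-evaluating the whole sum:

```python
import random

def x(d, a, b):
    """Evaluate x(d) = sum_k a_k*d_k + sum_{k,n} b_{k,n}*d_k*d_n."""
    N = len(d)
    return (sum(a[k] * d[k] for k in range(N))
            + sum(b[k][n] * d[k] * d[n] for k in range(N) for n in range(N)))

def hill_climb(d, a, b):
    """Flip one sign at a time whenever it lowers x; stop at a local minimum."""
    N = len(d)
    improved = True
    while improved:
        improved = False
        for k in range(N):
            # Change in x if we flip d_k -> -d_k (diagonal b_{k,k} cancels out):
            delta = -2 * d[k] * (a[k] + sum((b[k][n] + b[n][k]) * d[n]
                                            for n in range(N) if n != k))
            if delta < 0:
                d[k] = -d[k]
                improved = True
    return d

def random_restart_min(a, b, restarts):
    """Hill-climb from several random sign vectors; keep the best local minimum."""
    N = len(a)
    best = None
    for _ in range(restarts):
        d = hill_climb([random.choice((1, -1)) for _ in range(N)], a, b)
        if best is None or x(d, a, b) < x(best, a, b):
            best = d
    return best
```

Each restart only guarantees a local minimum, which is why you want many of them (hence the N[sup]2[/sup] starting points).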

One more update after the edit window:

Note that your problem can be expressed in standard form as minimizing d[sup]T[/sup]Bd + a[sup]T[/sup]d subject to d[sub]i[/sub][sup]2[/sup] = 1 for all i, where B is the matrix of the b[sub]k,n[/sub] coefficients. That will help if you do decide to go the semidefinite programming route. Also, the reference should be problem 11.23 of Boyd and Vandenberghe, not 11.24, and the problem considered there is a special case of this problem. I believe that special case is NP-complete, which makes the general problem NP-hard; and since the decision version of the general problem is clearly in NP, it would be NP-complete as well.
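To make the standard form concrete, here's a sketch (function name is mine). One nice consequence of the d[sub]i[/sub][sup]2[/sup] = 1 constraint: the diagonal of B contributes only the constant sum_k b_{k,k}, so you can zero the diagonal without changing which d is optimal:

```python
def quad_form(d, B, a):
    """The standard-form objective d^T B d + a^T d."""
    N = len(d)
    return (sum(d[i] * B[i][j] * d[j] for i in range(N) for j in range(N))
            + sum(a[i] * d[i] for i in range(N)))
```

For every sign vector d, the version of B with its diagonal zeroed gives a value exactly trace(B) lower, so the minimizer is unchanged.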