Sometimes in a combinatorics course they will use the principle of inclusion-exclusion to solve problems like this. It's pretty straightforward and it works, but the computational complexity increases quickly as the number of parts in the composition grows.
The method I prefer uses generating functions. Here's the method:

1. Write down a generating function for each part of the composition.
2. Multiply them together.
3. The coefficient of the x[sup]n[/sup] term is the answer you seek.
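Those three steps can be sketched in a few lines of Python. The ranges [2, 4], [1, 2], [1, 2] below are hypothetical (the original problem statement isn't quoted in this thread); they're chosen only so the coefficients line up with the counts listed later in the thread.

```python
def range_poly(lo, hi):
    # generating function for one part: x^lo + x^(lo+1) + ... + x^hi,
    # represented as a list of coefficients indexed by exponent
    return [0] * lo + [1] * (hi - lo + 1)

def poly_mul(a, b):
    # ordinary polynomial multiplication on coefficient lists
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def count_compositions(n, ranges):
    # multiply the per-part generating functions, then read off
    # the coefficient of x^n
    poly = [1]
    for lo, hi in ranges:
        poly = poly_mul(poly, range_poly(lo, hi))
    return poly[n] if n < len(poly) else 0

# hypothetical ranges for a composition into 3 parts
ranges = [(2, 4), (1, 2), (1, 2)]
print([count_compositions(n, ranges) for n in range(4, 9)])  # [1, 3, 4, 3, 1]
```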
The 3x[sup]5[/sup] term means the answer is 3. We also calculated a few other things along the way.
For completeness:
There is 1 composition of 4 into 3 parts satisfying the given conditions.
There are 3 compositions of 5 into 3 parts satisfying the given conditions.
There are 4 compositions of 6 into 3 parts satisfying the given conditions.
There are 3 compositions of 7 into 3 parts satisfying the given conditions.
There is 1 composition of 8 into 3 parts satisfying the given conditions.
In other words, the generating function of interest is (1 - x)[sup]-k[/sup] times the product of (x[sup]min[sub]k[/sub][/sup] - x[sup]max[sub]k[/sub] + 1[/sup]) over all the various ranges [min[sub]k[/sub], max[sub]k[/sub]].
The (1 - x)[sup]-k[/sup] factor encodes what would happen if there were minimums of 0 and no maximums; the number of such compositions of N into k parts would be (N + k - 1) choose N [the number of ways to insert k - 1 dividers among N items; this is often called the “stars and bars” argument].
The product of (x[sup]min[sub]k[/sub][/sup] - x[sup]max[sub]k[/sub] + 1[/sup]) factor encodes the inclusion-exclusion reasoning Lance Turbo referred to (each of these individual (x[sup]min[sub]k[/sub][/sup] - x[sup]max[sub]k[/sub] + 1[/sup]) factors says to first count all the compositions with a minimum of min[sub]k[/sub] at the appropriate point, and then subtract out those compositions with a value higher than max[sub]k[/sub] at that point, leaving only the ones we’re interested in).
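That factored form turns directly into a count: expand the product by taking either x[sup]min[sub]k[/sub][/sup] (sign +1) or -x[sup]max[sub]k[/sub] + 1[/sup] (sign -1) from each factor, and use stars and bars for the coefficients of (1 - x)[sup]-k[/sup]. A sketch, again with hypothetical ranges since the original problem isn't quoted:

```python
from itertools import product
from math import comb

def count_ie(n, ranges):
    # coefficient of x^n in (1 - x)^(-k) * prod(x^lo - x^(hi + 1)):
    # each factor contributes x^lo with sign +1 or x^(hi + 1) with sign -1,
    # and the coefficient of x^(n - m) in (1 - x)^(-k) is C(n - m + k - 1, k - 1)
    k = len(ranges)
    total = 0
    for choice in product(*[((lo, +1), (hi + 1, -1)) for lo, hi in ranges]):
        m = sum(exp for exp, _ in choice)
        sign = 1
        for _, s in choice:
            sign *= s
        if n >= m:
            total += sign * comb(n - m + k - 1, k - 1)
    return total

print(count_ie(6, [(2, 4), (1, 2), (1, 2)]))  # 4
```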
The computational complexity in this inclusion-exclusion blows up, sure, but only in the same sense that the polynomial multiplications Lance Turbo uses can blow up as well: the resulting (and thus, more importantly, the intermediate) polynomials can grow to have exponentially many terms in the number of factors. C'est la vie…
Although you may wish to count complexity from some other perspective than as a function of k alone (e.g., if the min-max ranges are all narrow, then the resulting polynomials will stay small as well), in which case that particular kind of exponential blow-up may not be relevant.
(In fact, this seems to me more natural now: we can take the computational complexity in an intuitive sense to be proportional to the sum of the sizes of the allowed ranges for each term (or, rather, to this + 1, but whatever…).)
Lance’s beautiful method is very general, but can get unwieldy if you want to do a relatively simple problem by hand. Here’s a method that can handle up to one exclusion. Forcing an urn to have at least x balls is handled trivially: just assume those balls are pre-placed (not free to relocate) and ignore them. So OP’s two examples, nominally using five balls, are really about just 2 free balls and 1 free ball, respectively.
OP’s first enumeration of six is simply C(4,2) = 4!/(2!2!) = 6, where 2 is the number of urns minus 1 and 4 is free balls plus urns minus 1. This is what Indistinguishable mentions as “stars and bars.”
Now you have 1 free ball instead of 2, so the answer is just C(3,2) = 3. We can ignore the “k3 <= 2” constraint when there are only five balls total. With ten balls total (and ignoring k1 <= 3 and k2 <= 3) the answer would be
C(8,2) - C(6,2) = 28 - 15 = 13
where the C(6,2) excludes the cases forbidden by “k3 <= 2”. With more than 1 exclusion, you have the pain of “inclusion-exclusion.”
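A brute-force check of the ten-ball case is easy. The mins (2, 1, 1) below are assumed values (the original problem isn't quoted in this thread), chosen to match the 6-free-balls bookkeeping above; note C(8,2) - C(6,2) = 28 - 15.

```python
from math import comb

# assumed constraints: k1 >= 2, k2 >= 1, 1 <= k3 <= 2, k1 + k2 + k3 = 10
brute = sum(
    1
    for k1 in range(2, 11)
    for k2 in range(1, 11)
    if 1 <= 10 - k1 - k2 <= 2
)
print(brute, comb(8, 2) - comb(6, 2))  # both 13
```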
Thanks very much for the information and explanation.
So the real problem* I need to solve is to compute these values for a sequence of N values and a fixed set of min/max values.
As an example, I need to compute this value for each value of N where 14 <= N <= 74 and k = 7 with the min/max values as follows.
Min: (2,2,1,1,1,2,1)
Max: (11,3,11,11,35,4,57)
I understand the explanation, but as you’ve mentioned this is going to blow up very fast. Do you know of, or can you think of, any approach I could use to estimate the value?
In case you’re curious, I have an adaptive AI and I need it to adapt its processing based on the sum of the values produced for each N. I don’t actually need it to be exact; an estimate would be more than sufficient. I had been using straight combinations as an overestimate and that worked fine, but I’m working on a new set of problems (including the one above) and the overestimation is far too much.
Well, it blows up fast in terms of the size of the collection of min-max ranges, but not so fast that a computer couldn’t easily handle examples of that size. In that example, the relevant expanded polynomial has a little over 100 terms, which is easy-peasy.
I’m curious about this new wrinkle, though, that what you’re actually interested in is summing this function over a range of N values… But that simply amounts to another instance of the same problem (summing over N ranging from A to B is like adding a new variable which can range from 0 through B - A, and demanding a total sum of B on the nose).
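That slack-variable trick can be checked directly on the example above: summing the counts for N = 14 through 74 equals the single coefficient of x[sup]74[/sup] after appending one extra part allowed to range over [0, 60]. A sketch:

```python
mins = (2, 2, 1, 1, 1, 2, 1)
maxs = (11, 3, 11, 11, 35, 4, 57)

def poly_mul(a, b):
    # ordinary polynomial multiplication on coefficient lists
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# product of the seven range polynomials
poly = [1]
for lo, hi in zip(mins, maxs):
    poly = poly_mul(poly, [0] * lo + [1] * (hi - lo + 1))

direct = sum(poly[14:75])             # sum the counts for N = 14..74
slack = poly_mul(poly, [1] * 61)[74]  # one extra part ranging over [0, 60]
print(direct == slack)  # True
```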
Today’s computers are blindingly fast. I whipped out a C program, running the most obvious algorithm, which produced the 61 answers almost immediately.
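septimus's program isn't quoted, but the "most obvious algorithm" is presumably something like the following Python sketch: multiply the seven range polynomials together and read off coefficients 14 through 74.

```python
mins = (2, 2, 1, 1, 1, 2, 1)
maxs = (11, 3, 11, 11, 35, 4, 57)

poly = [1]  # coefficient list, index = exponent
for lo, hi in zip(mins, maxs):
    factor = [0] * lo + [1] * (hi - lo + 1)  # x^lo + ... + x^hi
    out = [0] * (len(poly) + len(factor) - 1)
    for i, a in enumerate(poly):
        for j, b in enumerate(factor):
            out[i + j] += a * b
    poly = out

# the 61 answers
for n in range(14, 75):
    print(n, poly[n])
```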
(Given my recent track record, it’s probably at best even-money that the program is bug-free. :rolleyes: )
Thanks septimus. I’m sticking with my “it is very early in the morning and I have not had caffeine yet” story. I really should have thought about it a bit more before replying.