I’m working on a simple 3D Bin Packing algorithm and I’m trying to figure out how well it performs compared to existing algorithms.
Most of the articles I’ve read compare their algorithms to the “optimal” number of containers used, but to make that comparison I’d need to know what “optimal” is for my test set (a real-world set of item and container sizes).
Questions:
1 - Does anyone know fill percentages of the resulting containers for the various algorithms? (By fill percentage I mean packed item volume over container volume - see the sketch after these questions.)
2 - “Optimal” seems very dependent on the ratio of the sizes of the items to the sizes of the containers. Any thoughts on how I can easily factor this in when deciding whether the current algorithm is “good enough” (short of actually packing hundreds of containers)?
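For concreteness, the fill percentage in question 1 is just total packed item volume over the total volume of the containers actually used. A minimal sketch (function and argument names are mine, purely illustrative):

```python
def fill_percentage(packed_items, containers_used):
    """Fill % = total packed item volume / total volume of containers used.

    packed_items: (length, width, height) for every item that was packed
    containers_used: (length, width, height) for each container actually used
    """
    item_volume = sum(l * w * h for l, w, h in packed_items)
    container_volume = sum(l * w * h for l, w, h in containers_used)
    return 100.0 * item_volume / container_volume
```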
Sometimes it’s very easy to prove that a packing is optimal: divide the volume of your container by the volume of the objects you’re packing and round down to an integer. That gives an absolute upper bound on the number you can fit in. If you can come up with a packing that achieves that number, you know for certain it’s optimal.
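In code, that bound is a one-liner; a minimal sketch, assuming all items are the same size (names are illustrative):

```python
import math

def volume_upper_bound(container_dims, item_dims):
    """floor(container volume / item volume): an absolute upper bound on how
    many identical items can fit in one container. A packing that achieves
    this count is provably optimal."""
    cl, cw, ch = container_dims
    il, iw, ih = item_dims
    return math.floor((cl * cw * ch) / (il * iw * ih))
```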
Usually, though, you’re not so lucky, and “optimum” in practice just means “the best we were able to do with the best algorithms running for as long as we could afford to let them run.”
Seems like 2) is going to involve the Golden Ratio somehow.
Size of item relative to bin size is going to be a huge factor, of course.
Are they all the same shape, at least? And do the different-sized boxes have the same proportions?
Just curious- is this a school project, or are you building a real-world bin packer?
Chronos:
We’re not so lucky as to be able to do that; based on manual packing experience, we’ll have wasted space even with an optimal solution.
aNewLeaf:
Real world - cubing for shipping: footwear, candles, boxed items, etc.
All different sizes and proportions mixed in all combinations.
My other constraint is that the algorithm needs to be maintainable by people who don’t do this type of work very often (or ever), so I’m trying to figure out whether simple is good enough.
Well, or else the best packing available from trying every possibility. NP-complete doesn’t mean “can’t be computed”; it just means there’s no (known) polynomial-time way to do it.
If it were me, I’d start with some small sets, exhaustively solve them, determine how long your algorithm takes on them (compared to others), and then start scaling up. With bin packing, results are likely noisy for really small containers (where you may only be able to get one object in each), but once you start really “packing,” a pattern is likely to emerge.
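A sketch of what “exhaustively solve” could look like. This treats the problem as one-dimensional by volume only (geometry is deliberately ignored), so its answer is a lower bound on the container count any real 3D packing can achieve - a useful yardstick on small test sets. The names and the pruning choice are mine:

```python
def min_bins_exact(item_volumes, bin_capacity):
    """Exhaustive search: try placing each item in every open bin or a new one.
    Exponential time, so only viable for small test sets."""
    items = sorted(item_volumes, reverse=True)  # big items first prunes sooner
    best = [len(items)]  # worst case: one bin per item

    def assign(i, bins):
        if len(bins) >= best[0]:
            return  # can't beat the best packing found so far
        if i == len(items):
            best[0] = len(bins)
            return
        for b in range(len(bins)):
            if bins[b] + items[i] <= bin_capacity:
                bins[b] += items[i]       # place item i in open bin b
                assign(i + 1, bins)
                bins[b] -= items[i]       # undo and try the next option
        bins.append(items[i])             # or open a fresh bin
        assign(i + 1, bins)
        bins.pop()

    assign(0, [])
    return best[0]
```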
If I recall my operations research days correctly, this was called the “back packer’s problem” or “knap-sack problem” or something like that. There is a lot of published research on the subject, particularly in the field of dynamic programming.
aNewLeaf:
“Are there a fixed number of box sizes?” - 6 shipping container sizes, about 30 footwear item sizes and about 1,000 non-footwear item sizes
“Can you simply pre-compute each packing option and store them in a table?” - For footwear, I could probably do something like this by pretending each item is one of a smaller number of sizes and keeping some pre-calculated configurations (see the sketch below). I may end up doing that for footwear items, but it can’t work for non-footwear because there are too many different item sizes.
“75% isn’t bad. Getting that up to 86% will be a challenge.” - That is exactly the type of info I am looking for - if the next 5% can’t be achieved without a more elaborate algorithm that might be beyond whoever maintains it, then I may just stop at this point. Where did you get the 86% number? My googling didn’t find too many people discussing fill percent.
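Roughly what I have in mind for the footwear table - every size and count below is made up, just to show the shape of the idea:

```python
# Snap each shoe box to the nearest canonical size that contains it, then
# look the packing up in a table pre-computed (or hand-measured) offline.
CANONICAL_SIZES = [(33, 20, 12), (35, 23, 13), (38, 25, 14)]  # cm, smallest first

# (canonical size, container id) -> items per container; numbers invented
ITEMS_PER_CONTAINER = {
    ((33, 20, 12), "C1"): 24,
    ((35, 23, 13), "C1"): 20,
}

def snap_to_canonical(dims):
    """Round a box up to the smallest canonical size that contains it."""
    l, w, h = sorted(dims, reverse=True)
    for cl, cw, ch in CANONICAL_SIZES:
        if l <= cl and w <= cw and h <= ch:
            return (cl, cw, ch)
    raise ValueError("item larger than any canonical size")
```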
IIRC, the classic knapsack problem is only one-dimensional. This is three-dimensional, which makes it that much worse.
And NP-hard problems aren’t absolutely impossible to solve perfectly, but for any reasonable input size, they’re generally so close to impossible that it makes no practical difference.
86% is the next bell curve bracket. But I really just pulled it out of the air.
As Tim R Mortiss said, this is a variant of the knapsack problem, and there’s a lot of published analysis out there.
Time vs problem size vs computational power vs code frailty vs cost vs running expense, yada yada yada.
I’ve had good luck solving some similar problems using Simulated Annealing. Never thought about applying it to a knapsack situation, but it seems appropriate at first glance.
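For anyone curious, the shape of it would be roughly this - a hedged sketch that again treats capacity as volume only, with “move one item to a random bin” as the neighborhood move (all names and constants are illustrative):

```python
import math
import random

def anneal_packing(item_volumes, bin_capacity, steps=20000):
    """Simulated-annealing sketch for bin packing (volume only, for brevity).
    State: a bin index per item. Cost: bins used + penalty for overfull bins.
    Move: reassign one random item to a random bin."""
    n = len(item_volumes)
    state = list(range(n))  # start with one item per bin

    def cost(assign):
        loads = {}
        for item, b in enumerate(assign):
            loads[b] = loads.get(b, 0) + item_volumes[item]
        overflow = sum(max(0, v - bin_capacity) for v in loads.values())
        return len(loads) + 10.0 * overflow / bin_capacity

    current = cost(state)
    for step in range(steps):
        temp = max(0.01, 1.0 - step / steps)  # simple linear cooling schedule
        i = random.randrange(n)
        old = state[i]
        state[i] = random.randrange(n)
        candidate = cost(state)
        # always accept improvements; accept worsenings with Boltzmann probability
        if candidate <= current or random.random() < math.exp((current - candidate) / temp):
            current = candidate
        else:
            state[i] = old  # revert the move
    return state, current
```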
I’ve used that method for similar things too (pick-path optimization), but in this case the algorithm has to stay relatively simple given the skills of the people who will maintain it.