Consequentialism is the least bad system on offer. None of them are that useful for deciding big questions, since one can find inconsistencies even in the Mickey Mouse thought experiments used to justify them. Now try scaling that up to actual real-world issues. Great, everyone’s a consequentialist. Is interventionist war good or bad? Always, never, sometimes? If it’s gray, how do you decide? You can’t predict the future. The fog of war says good friggin’ luck even understanding the present. And we can pull a Zhou Enlai and say it’s still too early to decide about the French Revolution.
I think moral frameworks are reverse-engineered to agree with our pre-existing subjective moral intuitions. Because everyone dips a toe in each one, they are practically useful tools for creatively attacking enemies (“X is always wrong!” or “He should have known Y would happen if he did X!”) or defending allies (suddenly everyone’s a relativist, constrained by a unique situation and unable to have predicted anything).
And there are no surprises, no discoveries. No one sits down and says, “Wow, I used to think X was immoral, but after becoming a negative utilitarian, I guess it’s peaches and cream.” Maybe a robot would.