Is Harari right about AI being mostly negative?

I enjoyed reading Harari’s views on anthropology. But after reading Graeber, whose work was far more comprehensive, it became obvious that Harari is adept at summarizing widely held views, yet some of his ideas are overly simplistic.

To that end, how can people prevent the problems Harari delineates? It does seem highly likely that financial companies will seek to profit from algorithms they don’t really understand, despite the problems quantitative strategies have already caused. But many investors will avoid such products if they can.

As for war and weapons, are material restrictions powerful enough to make much difference? And if a major problem with AI does manifest, how long might it take to appear?