Leopold Aschenbrenner is a wunderkind (for lack of a better word) who worked at OpenAI on the superalignment team.
The article is very long, about 160 pages, so I don't expect people to read the whole thing. Here's a short synopsis.
AI is growing in power by orders of magnitude, for multiple reasons: it went roughly from a preschooler level with GPT-2, to a middle schooler level with GPT-3, to a competent high schooler level with GPT-4.
He feels this trend will continue, and that around 2027/2028 we will have AGI and entry-level ASI. He feels that by this point the world's Western democracies will ally and pursue a Manhattan Project-like goal of reaching ASI before China does. The fear is that AGI and ASI will provide such dramatic military supremacy that they become an issue of national security. He feels trillions will be spent on AI research and security, both to reach ASI first and to stop nations like Russia or China from stealing AI secrets, lest China achieve AI supremacy first and gain military supremacy through it.
He feels the 2030s will see about a century's worth of scientific and technological progress compressed into a decade, since that is the first decade we will have ASI. There are obviously still real-world bottlenecks.
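The extrapolation behind those dates is simple enough to sketch. The numbers here are illustrative, roughly in the spirit of the essay's orders-of-magnitude-of-effective-compute argument, not figures quoted from it:

```python
# Illustrative back-of-the-envelope, not Aschenbrenner's actual numbers.
# Assumes ~1 order of magnitude (OOM) of "effective compute" gained per year:
# roughly half from raw scaling, half from algorithmic efficiency gains.
OOMS_PER_YEAR = 1.0

def ooms_gained(start_year: int, end_year: int) -> float:
    """Total orders of magnitude of effective compute gained between two years."""
    return (end_year - start_year) * OOMS_PER_YEAR

# GPT-2 (2019) to GPT-4 (2023): ~4 OOMs, the "preschooler to high schooler" jump.
print(ooms_gained(2019, 2023))  # 4.0
# Running the same rate forward from GPT-4 gives another jump of similar size
# by 2027, which is where the AGI-by-2027 guess comes from.
print(ooms_gained(2023, 2027))  # 4.0
```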
I know this isn't much different from predictions made by people like Kurzweil, but I've been wondering for years whether the world's wealthy democracies will mount a Manhattan-style project to develop AI before nations like China get it first. The CHIPS Act and the limitations on silicon chips being sold to China are just the start of this (hopefully non-violent) war.
But he feels we are in the calm before the storm, and that governments will be forced to take AGI seriously a few years from now. Right now AI is still kind of just a neat tool; it's not yet something that can dramatically alter the balance of national power, as it will be in 3-4+ years.
Unless we discover cold fusion in the next year or two, the energy demands of high-level AI are going to far outstrip any nation’s ability to generate electricity, and that’s gonna be a hard ceiling to break through IMO. We’re already at the point of literally reactivating Three Mile Island to try to keep up with Microsoft’s need for electricity - there’s only so much blood you can squeeze out of this turnip.
Aschenbrenner claims that there may be attempts to build data centers in the Middle East because of the abundance of natural gas there. He has reservations, though, because the region is full of authoritarian dictatorships. He also discusses how China's capacity for adding new electricity generation is far beyond anything the US has demonstrated, since US electricity demand has been mostly flat for decades.
The report projected that US data centers will consume about 88 terawatt-hours (TWh) annually by 2030, while current worldwide energy usage by crypto is around 120 to 240 TWh annually.
A whole lot, but not insurmountable in the near term. It may also be that new algorithms and dedicated chip designs make AI more energy efficient over time, but that will probably just be met with increased usage. In the end it will settle into the sweet spot of whatever the market will bear, while probably burning our planet to a crisp with global warming.
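For context, the quick arithmetic (the ~4,200 TWh figure for total current US generation is my own rough number, not from the report):

```python
# Quick sanity check on the TWh figures quoted above. The US generation
# total is my own rough estimate of current annual output, not from the report.
US_GENERATION_TWH = 4200                      # approx. US electricity generated per year
DATA_CENTERS_2030_TWH = 88                    # projected US data center use (from the report)
CRYPTO_LOW_TWH, CRYPTO_HIGH_TWH = 120, 240    # worldwide crypto estimate quoted above

print(f"Data centers, 2030: {DATA_CENTERS_2030_TWH / US_GENERATION_TWH:.1%} of current US generation")
print(f"Crypto, worldwide:  {CRYPTO_LOW_TWH / US_GENERATION_TWH:.1%} to {CRYPTO_HIGH_TWH / US_GENERATION_TWH:.1%}")
# Data centers, 2030: 2.1% of current US generation
# Crypto, worldwide:  2.9% to 5.7%
```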
A Manhattan-scale effort and international cooperation are mutually incompatible. Period. Human politics will prevent it.
Now would such an effort be desirable? Probably. Will it happen? No. The US will do one thing, the EU will spend a decade debating workshare and siting, and meantime the Chinese will be doing their damnedest to steal it all.
AGI is Artificial General Intelligence, i.e. an AI that can do as broad a range of tasks as a human can.
ASI is Artificial Super Intelligence, or AI that is better than humans at some task.
We have ASI. We have had ASI for decades. In 1997 computers first beat the world’s top chess player, and they’ve gotten much better since then. The best chess AI in the world is an ASI, in the narrow field of playing chess. ChatGPT is an ASI in the narrow but perhaps somewhat wider field of spitting out Facebook spam articles.
Just because our specialized AI is getting more and more Super(ior) to human intelligence in those narrow fields does not mean we can assume that Generalized intelligence is coming any time soon. That seems to be a difference of kind, not scale.
All of this is just BS.
Tech company CEOs have always wildly overstated the capacities of their products.
I anticipate an AI (really, a neural-net) that is assigned to open doors.
And will slam them on people's noses.
I wonder about AI use in warfare. It seems fairly obvious it can revolutionize things like drone warfare, take over flying jets, stuff like that.
But there are other things it could do: perhaps new forms of biological warfare, nanobots, maybe new types of nukes or different delivery methods. The AI could conceive of a lot of things, but for obvious reasons the U.S. might ban their use. Of course, China may not have those same scruples. And if they, or whatever enemy, think we are about to surpass them forever, it might be tempting to preemptively flip the chessboard over and press reset with nukes.
Huh. I use it almost every day in my life. Today I used it to research an SDMB question about baseball (it pointed me in the right direction, so after fact-checking I got the answer I needed in about 30 seconds, versus the 5-10 minutes it would have taken me); I used it to help remind me of some tip-of-the-tongue words; I used it to help me with a bread recipe and scaling it properly; and I used generative AI while Photoshopping a client's images. ChatGPT-4o has gotten so good versus just a year ago. I even use it to analyze my photography and talk my way through my images and how to build on them and expand on them creatively. It's a fantastic tool if you choose to learn how to use it.
I've noticed in the last couple of years that Google's AI has gotten much better at digging up information from scientific papers, which has helped me out quite a bit.
Yeah, it's pretty incredible technology. I'm especially impressed by its ability to analyze pictures. I've done stuff like take a picture of my bookshelf or someone else's bookshelf and have it guess what those books imply about the person and their profession; or I've taken a picture from the car and it was able to guess that I was in Chicago (I gave it a hint by leaving the Sears Tower way off in the background of the photo) on the Dan Ryan heading north; or had it translate pictures. (Yes, there are apps specific to that, too, but ChatGPT-4o will do all of it.)
And this is just the infancy stage of "retail" AI, or whatever you want to call it. Three years ago I couldn't do any of this. It blows my mind, and I love pushing the AI and seeing what it can and cannot do. I'm not sure I believe in AGI in 2027/28, but it'll be interesting nonetheless. I did not expect to be living in this future. These are capabilities I thought my kids might eventually see, not grow up with.
And why do you think this is a good thing?
After all, America’s 1% totally control AI. Those people are, to be blunt, dead to all human feeling and compassion.
Far more than AI, I fear the fundamentally evil men of the business community. And what they will use AI for.
I don't really think that this approach to "AI" (LLMs are more glorified chatbots than actual AI) has anything close to the potential being talked about. It'll never come close to human intelligence, much less exceed it. I'm quite sure that human-level or better AI is possible; I just consider this method a dead end. More a gimmick than a real advancement; "artificial hallucination," not "artificial intelligence."
Yeah, even if I bought the idea that this so-called AI is capable of even approaching human intelligence (I don’t), it’s far, far too energy hungry to advance much farther.
Whatever it does, it makes my life easier and more enjoyable in the use cases I use it for. Your mileage obviously varies. I don’t particularly care if it’s “true” artificial intelligence (whatever that means)–it’s really useful to me, increases my productivity, and is fun to play around with, to boot. And it’s hallucinating less and less as time goes on. The current LLMs are much better about being factual and offering citations than the ones even a year ago, much less two years ago. They still have a long way to go, but for “glorified chatbots,” I am absolutely gobsmacked by what they can do. But the pushback is understandable. It happens with every new technology.
For the purposes of this thread it does matter if it’s “true AI” or not, since it’s rather important to the issue of whether it can equal or exceed human intelligence.
My gut feeling is that, with the current state of technology and the approach being used, it's going to take another technological breakthrough to get from a good simulacrum of human intelligence to something even more complex. As I've stated before, I don't believe these near-future predictions of AGI. I still think that's decades away, at least. But this is just gut feeling, not based on any particular expertise in the industry. (I have worked a little in helping train and refine models via RLHF, reinforcement learning from human feedback, but that doesn't give me technical insight into the models; I've just spent many, many hours reading, rating, critiquing, and refining LLM responses. And they have gotten so much better even in the few months I've been involved on various projects.)
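For anyone curious what all that rating and critiquing feeds into: the standard setup trains a reward model on pairwise human preferences. Here's a minimal sketch of the usual Bradley-Terry-style loss (the textbook version, not anything specific to the projects I worked on):

```python
import math

def reward_model_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise preference loss used to train RLHF reward models.

    The reward model is pushed to score the human-preferred response
    higher than the rejected one: loss = -log(sigmoid(chosen - rejected)).
    """
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the model already ranks the preferred answer higher, the loss is small:
print(reward_model_loss(2.0, -1.0))  # ~0.049
# If it has them backwards, the loss is large:
print(reward_model_loss(-1.0, 2.0))  # ~3.049
```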