I signed up for the $20 subscription to test out GPT-4o, and I must say, I was impressed with the little I've played around with it so far. It was able to figure out this photo, for instance, which many of my human friends were not able to do:
I agree that AGI is probably still decades away, but what current generative AIs can do with text, music, and graphics I did not expect to see in my lifetime. Hell, if you told me this type of technology was just a few years away when I was holed up with everyone else during 2020 Covid, I would’ve thought you were insane. Not a day goes by that I’m not absolutely flabbergasted by the amazing technological leaps in this field of the last two or three years.
For funsies, I sent ChatGPT (GPT-4o) over to look at this thread (giving it only the URL) and come up with an illustration. It still has a little way to go, but I'm easily amused:
What’s fun is that you can actually feed it URLs now and ask it to summarize threads and the like. I haven’t done a deep dive into how accurate it is – it won’t provide quotes, for instance – but it seems to get the sentiment of various posters correct when I quiz it.
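For anyone curious what that looks like outside the ChatGPT app, here's a rough sketch of the same idea through the API: the chat API doesn't browse on its own, so you fetch the thread yourself and ask the model to summarize poster sentiment. The URL and model name are just placeholders, and this assumes the `openai`, `requests`, and `beautifulsoup4` packages plus an `OPENAI_API_KEY` environment variable:

```python
# Rough sketch: summarize a forum thread's sentiment via the OpenAI API.
# The chat API does not fetch URLs itself, so we download the page first.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

THREAD_URL = "https://example.com/some-thread"  # placeholder URL

html = requests.get(THREAD_URL, timeout=30).text
thread_text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Summarize the overall sentiment of each poster in this thread."},
        {"role": "user", "content": thread_text[:100_000]},  # crude length cap
    ],
)
print(response.choices[0].message.content)
```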
Yes, it’s very impressive, which is why it’s unsurprising that people suspect it of being intelligent when they haven’t used it enough to see its limitations in reasoning and adaptation.
Have you tried Suno? Some of the music it generates is better than a lot of human-made music, IMO. It’s gotten that good.
I expect to see a lot more cool stuff with AI: specialized applications. But a general intelligence isn’t going to come from the current tech. It may come from a brain simulation project, but progress on that is slow, with many hard problems.
GPT-4o is also available for free, but the rate limit is very low on the free tier.
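If you're poking at it through the API rather than the web app, the usual workaround for a low rate limit is a retry-with-backoff loop. A minimal sketch, assuming the official `openai` Python package (which raises `RateLimitError` on an HTTP 429):

```python
# Sketch: retry with exponential backoff when the rate limit is hit.
import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def ask(prompt: str, retries: int = 5) -> str:
    delay = 2.0
    for attempt in range(retries):
        try:
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except RateLimitError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)  # back off and try again
            delay *= 2
    raise RuntimeError("unreachable")
```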
What’s most amazing to me is how sudden this all seems to me as a consumer. In 2020, I would have found an AI that simply generates a lead sheet with plausible melody and harmony to be amazing, yet we seem to have completely skipped over that and gone straight to fully orchestrated arrangements with singing and even AI-generated lyrics (or use your own!). It’s absolutely wild!
While we’re worrying about AI and what it COULD do, we might worry a lot more about humans because of what they HAVE done and continue to do despite all of the dire consequences. Pollution, environmental destruction, unchecked aggression, greed, and the thirst for power might sum them up. We just can’t seem to help ourselves.
Wouldn’t it be ironic if, instead of destroying humanity, AI saved humanity from itself?
“The Day the Earth Stood Still” - 1951
Klaatu - “We have an organization for the mutual protection of all planets and for the complete elimination of aggression. The test of any such higher authority is, of course, the police force that supports it. For our policemen, we created a race of robots. Their function is to patrol the planets in spaceships like this one and preserve the peace. In matters of aggression, we have given them absolute power over us. This power cannot be revoked. At the first sign of violence, they act automatically against the aggressor. The penalty for provoking their action is too terrible to risk. The result is, we live in peace, without arms or armies, secure in the knowledge that we are free from aggression and war, free to pursue more… profitable enterprises. Now, we do not pretend to have achieved perfection, but we do have a system, and it works. I came here to give you these facts. It is no concern of ours how you run your own planet, but if you threaten to extend your violence, this Earth of yours will be reduced to a burned-out cinder. Your choice is simple: join us and live in peace, or pursue your present course and face obliteration. We shall be waiting for your answer. The decision rests with you. Gort, berenga.”
My understanding is that the free version of GPT-4o is being rolled out gradually. I got an invite to try it once, but it’s still not appearing as a choice in the drop-down.
Yeah, I can see AI helping us solve the pollution and climate problems by generating better tech. It could help us eradicate many diseases, make it easier to end poverty and achieve abundance (I think if you just redistributed current wealth today, no one, not even the poorest Africans, would starve or be homeless), and tackle a bunch of other hard problems once it becomes advanced enough.
Reminds me of the Deus Ex ending, where JC Denton chooses to merge with a benevolent AI to rule a world fraught with human-made problems. I don’t know, I might actually prefer an AGI that I know has been built with sound objective functions (ones that maximize good outcomes for humanity and minimize suffering) to the power-hungry, psychopathic, or narcissistic politicians we often end up with today.
In the near term, I can also see AI being used by militaries to build a deadlier and more devastating military arm. Although if wars end up being fought between armies of robots, that might mean far less death and suffering than wars between human armies.
Oh, if only I had a nickel for every time I’ve seen a Doper expressing their views by sitting on a stage with a stop sign growing out of their shoulder, waving their seven-fingered hand about while others shouted for poinrree porestuel and FUTURE.
At any rate, I find it goddamned amazing we live in a time where we can just point an AI to a thread and tell it to draw it and it does so, even if there’s still a lot of silliness involved. Fucking incredible.
AI is not going to solve the problems of human selfishness or greed. Because any such answers will never be implemented. Due to selfishness and greed. There is not going to be some aha moment that solves those things. Selfishness and greed are features, not bugs.
I would be more concerned about malevolent AI that is doing the bidding of someone that is selfish or hateful. All you need to do is tweak the inputs just a bit. Salesperson AI, Nazi AI. All eminently achievable. Swamp people with opportunities to click on the philosophy of their choice. With people’s intelligence being further dumbed down by AI doing content swamping, it’s not something I foresee working out well.
I’m reviving this particular AI thread (as opposed to other options) because I brought up open source a few times in it. More and more powerful AIs are becoming open source, which makes it harder and harder to regulate (or “pause”).
Seems like we might end up in an ok spot, at least in the short term. On one hand, LLMs do seem to be running out of steam. They are getting better, but there’s been no obvious superintelligence breakthrough, except in the sense that no one has as much across-the-board knowledge as the top LLMs. But in any one niche, they don’t match humans. No SkyNet, at least for the time being.
On the other hand, open-source models are closing the gap, and even better, can be run locally. So those of us that care will be able to run good models on our own systems with minimal snooping or restrictions. That is also a good thing. We may actually reach a point where, aside from speed, you really don’t gain much advantage with closed models.
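For the “run it locally” part, here’s a minimal sketch using Hugging Face transformers. The model name is just an example; pick whatever open-weight model your hardware and the license allow, and once the weights are downloaded nothing leaves your machine:

```python
# Sketch: run an open-weight model locally with Hugging Face transformers.
# Assumes the transformers (and accelerate, for device_map) packages are installed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weight model
    device_map="auto",  # use a GPU if one is available
)

prompt = "Summarize the plot of The Day the Earth Stood Still in two sentences."
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```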
We could easily end up with a StupidNet however, that causes some major disaster because so-called AI was given a job it couldn’t handle.
Really, the relentless Terminator-inspired focus on “evil rebellious AI” is somewhat dangerous, given how it makes people outright ignore the much more likely possibility of AI causing harm because it obeyed orders, screwed up, or both.
True, but stupid humans get put in those positions already, and when they screw up, there’s rarely any recourse. Stupid human does stupid thing. News at 11. Or, maybe they aren’t even stupid, they just decided to flip off the fuel supply for both engines a few seconds after takeoff for some unknown reason.
An AI model has at least somewhat repeatable results and can be trained not to do the stupid thing in the future.
Who, or what, is regulating anything? For example, you know how a robots.txt file is supposed to regulate the constant, ubiquitous, massive data scraping? It is ignored. Also, many people (have to) use “AI” in their jobs all the time, including downloading, training, and using generative models, usually completely legitimately, the occasional news agency/lawyer using it to make up bullshit notwithstanding. There is no regulatory form users have to fill out. Perhaps someone is asking researchers to go through an ethics committee like they [are supposed to] have to in order to experiment on animals? Only that does not apply to computer science.
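For what it’s worth, honoring robots.txt is purely voluntary. A well-behaved scraper checks it with a few lines of standard-library Python like the sketch below (the site and user-agent string are placeholders), and nothing at all stops a crawler that skips the check:

```python
# Sketch: how a *well-behaved* scraper would consult robots.txt before fetching.
# Standard library only; compliance is entirely on the honor system.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()

url = "https://example.com/some/thread"
if rp.can_fetch("MyScraperBot/1.0", url):
    print("robots.txt permits fetching", url)
else:
    print("robots.txt disallows fetching", url)
```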