You can insist all you want that the only thing that matters is what you like or find interesting, but what I’m telling you is that the qualities that make a piece enjoyable to you exist because another human enjoyed creating it.
You can protest all day that you Do Not Care For Art, but nothing a computer makes will ever be interesting or likeable enough for you to pay $5 and hang it on your wall, or go to a cinema and watch. You will not anticipate it, you will not be impressed, you will not be excited, you will not care.
And I’m telling you that you do not have a single clue about how or why I might enjoy something. You seem to have some weird “Schrödinger’s Art” conception where people can neither enjoy nor not enjoy something until they know whether or not it was created by a human. Only after the correct provenance of a piece is learned does your appreciation collapse into like or dislike. Which is, in a word, dumb.
It has been years (and generations of hardware) since I played with CGI myself. At the time I had only a 4-core CPU and no hardware GPU acceleration in Maya, and even relatively simple scenes with several raytracing-related features turned on could take hours per frame.

CGI rendering is “embarrassingly parallel”: in theory you could render each pixel in a frame on its own CPU core. In practice the frame is divided into blocks and assigned to individual cores; watching it render is like watching square jigsaw pieces being dropped into a frame one at a time. When I say it would take an hour per frame, I mean an hour of waiting, but in my case four core-hours. If I had a 1-core CPU with the same clock speed and instructions per clock, that would have meant 4 hours of waiting; with a 64-core CPU, under 4 minutes.

Modern Pixar-level CGI reportedly takes a day or multiple days per frame. That’s “waiting time”; I have no idea how many cores are working on a frame at the same time, but it would be literally months’ worth of core-hours per frame. Cinema-quality CGI uses several orders of magnitude more energy per frame than AI. The astonishing thing about AI is how extremely little power is needed to get similar results.
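The core-hour arithmetic above can be sketched in a few lines of Python. The numbers are the hypothetical ones from this post, and the sketch assumes perfect scaling with no parallelization overhead:

```python
# One hour of wall-clock waiting on 4 cores = 4 core-hours of work.
core_hours_per_frame = 1 * 4

def wall_clock_minutes(core_hours, n_cores):
    """Ideal wall-clock render time for an embarrassingly parallel job."""
    return core_hours / n_cores * 60

print(wall_clock_minutes(core_hours_per_frame, 1))   # 240.0 -> 4 hours on one core
print(wall_clock_minutes(core_hours_per_frame, 64))  # 3.75  -> under 4 minutes on 64 cores
```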
The cost of a minute of 3D animation can vary widely, from roughly $1,000 to more than $20,000. Typically, animation is divided into three levels:
Basic: $1,000–3,000 per minute.
Mid-level: $3,000–10,000 per minute.
High-quality cinematic animation: $20,000+ per minute.
This pricing structure allows the cost to be adapted based on client needs and the complexity of the project.
This is what you’ve understood because clearly this is the argument you want to have, but it’s not what I said.
You’re as confused about that as you are about your units of energy:
A watt itself is joules per second, so a statement like “the movie would take 2.76 megawatts to produce” is nonsense. It’s like saying it took me 70 mph to travel to New York. For a 10-hour car trip we’d just multiply out the time to come up with 700 miles. For energy, the unit is the joule, but since that’s not the everyday term of art for energy usage, instead of 3.6 gigajoules we just say 1 megawatt-hour.
But you’re right to observe that the power use of AI is seriously overestimated. If we stipulate that you meant 2.76 megawatt-hours to make an entire feature film, and that your calculations are otherwise correct, that’s about as much energy as a 1-hour flight on a commuter jet, which doesn’t even match what’s required for the entire crew on The Avengers to get out of bed on any given morning and commute to the filming location. I didn’t check all your assumptions but they seem in the order-of-magnitude ballpark.
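For anyone who wants to check the units, here is the conversion in a few lines, using the figures quoted in this thread:

```python
# Power (watts) is a rate; energy (joules, or watt-hours) is what you consume.
# 1 W = 1 J/s, so 1 MWh = 1e6 W * 3600 s = 3.6e9 J = 3.6 GJ.
SECONDS_PER_HOUR = 3600

def mwh_to_gigajoules(mwh):
    """Convert megawatt-hours of energy to gigajoules."""
    return mwh * 1e6 * SECONDS_PER_HOUR / 1e9

print(mwh_to_gigajoules(1))     # 3.6 GJ, as noted above
print(mwh_to_gigajoules(2.76))  # ~9.94 GJ for the whole hypothetical film
```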
Power consumption is not a meaningful criticism of AI; everyone gets this wrong, and you got it right in concept (if not the units).
That is absolutely 100% what you said. For example:
Do you know how many times in my life I have looked at an image and thought “Boy, I bet a human enjoyed making that. That’s what makes it interesting!” (For the record, that number is zero.)
Tell me, can you enjoy looking out a window at a forest? After all, that forest wasn’t created by a human as a statement. If your answer to that is “yes”, then you, too, can appreciate a visual that isn’t a human creation. Your insisting that you know I, personally, am only capable of enjoying looking at something made by a person is bizarre. I can enjoy looking at a forest. I can enjoy looking through a telescope. And I can enjoy an AI-generated image.
I don’t believe that’s true — and, if it’s false, I figure we’ll find out eventually, when it happens — but let’s say, for the sake of argument, that you’re correct.
What about works where another human gave it a running start?
Like plenty of Americans, I go to the cinema and watch movies where they bring an existing comic-book superhero to the big screen, or where they’ve done a variation on the James Bond formula, or whatever. And sometimes the original creators are dead when the project starts, but my point is: sometimes the adapted screenplay gets nominated for an Academy Award — because the humans who are following in the wake of some other human apparently do that good of a job of playing around with that other human’s original ideas.
It seems you don’t think an AI will ever generate an original screenplay of quality comparable to those that get nominated for Oscars. Do you think an AI could ever generate an adapted one that’s that good?
Again, this is the zinger you’d like to argue, but it’s not what I’m saying. You seem to think I’m saying you have to know who made it, or that a human made it, in order to appreciate it. What I’m saying is that the qualities that create appeal exist because of the human’s choices and intent in making it, which are special because they are influenced by a human’s perception of the world and what they think other humans will find interesting about it.
We’re not consciously aware of these things, though critics try to tease them out. We experience them at a minimum as like or dislike. If that’s all the appreciation you need, that’s fine, but you shouldn’t assume this is purely about the merits of the visual. It’s never about the piece in itself.
It’s not about the pure visual though. Forests are nice because of what they are, not because of the visual. Looking at nature in person is enjoyable and interesting. A high-resolution photo of a forest isn’t interesting unless the photographer has made interesting choices about how to image it.
When I look at Jupiter through my crappy 200x telescope, it’s a very rewarding experience to use astronomical knowledge to locate it, and some technical knowledge to visualize it, and the satisfaction of participating in this ancient art and confirming that all of the science and technology is real, and Jupiter really is up there, and it’s a thing I can see and inspect. The actual image of Jupiter that I’m seeing is crap. But that doesn’t matter, because again: the visual isn’t the point.
Yes, I’m aware that you think humans possess an innate special qualia that is beyond reproduction or emulation. I, on the other hand, do not. I believe that absolutely, positively everything about humanity is simply data points that can be arbitrarily closely emulated with a sufficiently advanced dataset. And the emulating system doesn’t have to “know” the specific internal state of a human mind to black-box it, any more than an image generator has to know the mechanics of a camera to generate photorealistic images. Data is data. I think Voyager’s analogy of the moving goalposts on computers playing chess is precisely the correct one. People insisted each stage couldn’t happen, until it did. Saying “a computer can never do x better than a human” happens over and over until computers nearly inevitably start doing x better than a human.
Your commitment to strawman arguments is becoming awe-inspiring, as is your misunderstanding of fundamental concepts like “qualia” or “megawatts.” It’s becoming an art form in itself.
Faith is an interesting and necessary part of the human experience, but not really a basis to continue a rational discussion.
Obviously computers can create things that look technically sophisticated, or imitate nature well, and can exceed what humans have historically been able to produce. To that degree they impress, and will only become more impressive. You and I are simply not going to connect on whether people enjoy art because it’s an interesting human interpretation of the world or because it’s a real good picture of a tree.
However, I will walk back from a total dismissal of AI art. I shouldn’t be that absolute, because a lot of human intent does go into it. You have a vision, you craft some prompts; this requires some skill and intentionality. But when I look at it, more often than not the fingerprints of the tooling are all over it. It looks disjointed, it looks assembled. I find myself wishing the person had just learned Photoshop or picked up a paintbrush, because it seems like the person might have had an interesting vision, but the computer just got in the way of expressing it. I’d rather see a bad painting of your vision than the world’s best prompt output, because the vision matters a lot more than the fidelity of the representation.
I was only considering the point of view of the skeptics, not how chess got developed. I took my first AI class from Patrick Winston in 1971, and my, how he chortled about a chess program beating a skeptic.
As for intelligence, the question is whether modern programs are chess-intelligent the way people are, or whether human chess experts are chess-intelligent the way the best machines are. A lot of non-AGI AI is doing things that used to be considered hallmarks of human intelligence, but no longer are, thanks to their being duplicated by clearly nonintelligent machines.
I think the part that causes me to look cock-eyed at the anti-AI people is that while the AI may be doing the actual image/music composition, there’s indisputably some definite skill and vision involved in getting the prompts and what-not right in order to have the AI create the image/song/whatever that the artist(?) envisions.
Is that not art? How does that differ from artists who have whole workshops of underlings doing the scut work of producing art? I’m really unclear on that part; it doesn’t seem a whole lot different to me if I’m a painter and I have others actually finish the majority of my artwork vs. commanding an AI do it for me.
Of course, this does assume that the artist in question is engaging in an iterative sort of process to generate their art, and is using the AI to do the parts they can’t/won’t do. To take it to an extreme, is a disabled person who can’t physically paint/sculpt/play music, but who has fantastic vision and ideas, not an artist because they use the medium they can use to produce their art? I’d argue they’re every bit as much of an artist as someone else who can physically put paint on canvas.
Ultimately this is the “what is art?” debate taken one step further than it used to be. I’d argue that it’s not in the actual action of painting/playing/chiseling/molding, but rather in the vision and intent. And by extension, who cares how you get to the finished product?
I will agree that lazy prompts and the resulting AI slop isn’t art. It’s just images and sound that cheap and/or lazy people use to fill in slots in video games, magazines, websites, etc… But someone who merely uses it as a tool to achieve their artistic vision? That’s as artistic as anything else.
Chess is a particularly bad example, because it’s a deterministic game. You don’t need AI to find the optimal solution to any given situation, just enough computing power.
A lot of “AI is doing things that used to be considered as hallmarks of human intelligence, but no longer are thanks to them being duplicated by clearly nonintelligent machines.” is more a situation of machines being able to ingest and recall colossal amounts of information and evaluate it very quickly.
For example, machine learning can be trained on as many medical images as you can throw at it- millions even. And you can train it to very accurately identify certain conditions from those images. That doesn’t mean it’s “intelligent” in any sense, just able to evaluate a huge body of data and revise its algorithm for identifying something based on what it finds out.
Meanwhile a good doctor can look at a lot fewer images, understand what say… cancer looks like and why it looks that way, and be able to diagnose people in much the same way. That’s actual intelligence.
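To make the “revise its algorithm based on examples” idea concrete, here is a deliberately toy sketch: a nearest-centroid classifier over invented two-number “image features.” Everything here (the features, the labels, the data) is made up for illustration; real medical-imaging models are vastly more complex:

```python
# Toy illustration (made-up data): a classifier "trained" by averaging examples.
def train(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in s] for lbl, s in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is nearest (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

# Hypothetical training set: each "image" reduced to two invented features.
training = [([0.9, 0.8], "suspicious"), ([0.8, 0.9], "suspicious"),
            ([0.1, 0.2], "benign"), ([0.2, 0.1], "benign")]
model = train(training)
print(classify(model, [0.85, 0.7]))  # suspicious
print(classify(model, [0.15, 0.2]))  # benign
```

More examples shift the centroids, which is all the “revision” amounts to here; there is no understanding of why the features look the way they do.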
You can’t really say that when we don’t even know what understanding and intelligence are or how they work in a biological sense. What (current) AIs lack is the consciousness/self-awareness that humans have (and also don’t remotely understand). But who the hell knows how things work at the level below consciousness.
Just one example: I try to recall a name that I haven’t thought about in years. Even calling it “thinking about it” is an inadequate way of putting it; it is more like trying to keep my mind blank of anything, hoping the information pops into my consciousness. Sometimes that works, but more often I give up and move on to thinking about other things, until 10 minutes or 10 hours later the name suddenly pops clearly into my awareness, seemingly out of nowhere. That means some process has been going on inside my brain, continuing to search for that piece of data for minutes or hours without my conscious awareness, until the data is found and bumped upstairs.
Another example: food cravings, especially known in pregnant women. You crave specific foods that contain specific nutrients that you are short on. You don’t consciously know you are short on the nutrient. You don’t know which food has the nutrient. But something below your consciousness does, and because you need riboflavin, a hankering for a hunk of cheese is passed up to your consciousness. It seldom even occurs to you to wonder why you, out of the blue, want cheese.
Your brain is filled with unconscious subroutines that process or create the thoughts and desires that get passed up to the conscious layer. Who is to say how similar those subroutines are to AI training? The “recognize cancer” subroutine in a doctor’s brain may train completely differently than it does in an AI, or it may simply do the same thing more effectively, needing fewer examples.
More to the point, people under 35 have lost the ability to care.
I still think watching the 1910 version of Frankenstein is cool. They constructed an effigy that could move with about Muppet-level technology, filmed it burning, then spliced the footage in backwards, so it looks like the thing is assembling itself out of the air. And I think it’s cool because I imagine coming up with that idea in 1910, and seeing it in an audience in 1910!
I tried to explain this to an acquaintance of mine who is in his 20s, and generally receptive to things like this-- he loves the 1938 Wizard of Oz, the 1956 10 Commandments, and those old mechanical banks, but he just couldn’t get into the head of someone seeing that Frankenstein when film itself was new.
I may post more later, but I do want to assure you guys that young people do care about AI a lot. AI slop is a key phrase. They actually look down a bit on those who can’t tell the difference anymore. And will teach you how to tell.
I was just watching a thing about how the developer of a midrange game studio got attacked for saying any AI at all was used in the development of their games. Like it’s a huge mess. There are so many people drawing huge lines in the sand, to the point that companies are afraid to admit to even minimal AI usage.
And AI suspicion is rampant. People are now tending to think anything that seems remotely fake is probably AI, rather than exaggeration or any other type of fakery. And they hate it.
The ubiquity of AI is considered just another part of enshittification. That’s not to say there is nothing out there that is good, but it’s the dreck, the slop, that proliferates. Because it is so easy to produce.
That’s part of the tragedy. It’s much easier to make the bad stuff. With humans, putting in that much effort can inspire you to try to improve. With AI, not so much.
whoa that post got long, and i didn’t even remotely make my main point. that would take energy i don’t have
Probably the latter. Although the original question was just whether they are intelligent and I think the answer is clearly yes.
Again I think the distinction between brute force and the deep learning systems is significant here.
With brute force, chess engines did not know chess any better than we did; they applied heuristics that we coded in, and could beat us only because they could see further, and not blunder.
With deep learning, we didn’t code in any heuristics*; they learned strategies themselves and now the best human players learn new strategies from watching AI games or even getting AI instruction.
And yes, it’s strategies, not calculation: the engine might sacrifice a piece without being able to see when or how it will get the material back or mate the opponent – it simply values its position better after the sacrifice.
\* I’m simplifying a bit for clarity. Leading chess engines often use a mix of algorithms and systems.
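To illustrate the brute-force half of that distinction, here is a minimal minimax sketch over a hand-built toy game tree; the leaf scores stand in for the human-coded heuristics, and the tree and numbers are made up:

```python
# Brute-force approach in miniature: minimax search over a toy game tree,
# with a human-coded heuristic scoring the leaves. A real engine searches
# millions of chess positions this way; the tree here is tiny and invented.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: the heuristic score we coded in
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: our three candidate moves, each with two opponent replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # 3: the best we can guarantee against best replies
```

A deep-learning engine effectively replaces those hand-coded leaf scores with a learned evaluation function, which is where the “it simply values its position better” behavior comes from.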
For the near future (let’s go with January 19, 2038), everything innovative or creative will come from the author/director who will be human. Any enhancements to screenwriting tools, illustrations, and storyboarding will come from programming and be incremental updates to Blender or Grammarly.
For an extreme and maybe bad example, imagine you’re writing Huckleberry Finn, and next he’s on his way to blow up the Death Star, your ScreenwriterShop program might look at your synopsis and perhaps other things and tell you to stop writing when you’re drunk (or something). Somewhere there’s a boolean function doesThisMakeSense() that triggers this, and someone wrote it.
Quantum computing isn’t going to come with creativity. Just movies (or IRL) that AI decides humans are unworthy. Dunno why Stephen Hawking thought it would take Quantum-powered AI Deep Thought to figure that out.
The other thing I’ve read in this thread is about how much more power will be needed to produce a photo-realistic movie that people will not avoid watching.
I have the solution. I’ve had it since 9th grade, so 45 years or so: Clean Nuclear Fusion (that is not instantly turned into rail guns or other weaponry). Word is, we shall have it the day after twenty years from now.
At least it will make an entire Data Center filled with NVIDIA, Nokia, and Alcatel-Lucent tech cheaper to run.
There was a fascinating article in the New Yorker a few months back by a doctor, about how a specialized diagnostic AI did as well as experts on a particular set of symptoms designed to be tricky. (ChatGPT did not.) Humans are very good at training (learning) a whole bunch of stuff. Is the expert diagnostician who has learned from many examples all that different? They understand the underlying issues better, for sure (the AI doesn’t understand that at all) but they are trained on a lot fewer cases. And get tired.
As for chess not being a sign of intelligence, I suspect most would agree these days, but philosopher Hubert Dreyfus back when I was in college was sure it was.
Later chess programs did not use just brute force. They had end games encoded, and I think also openings. But they were mostly an example of heuristics for searching in a search space. I never did chess, but I did do many other things using search space heuristics.
So the real question is how different we are from LLMs at the deep level. Claude Shannon, back in the early 1950s, took a passage from a mystery novel and asked his wife (a mathematician) to guess it, basically doing what LLMs do. She started with “the.” She was obviously not correct in all her guesses, but she did guess the passage with relatively few bad guesses. When we write, how often does the next word or next sentence come from pure creativity, and how often is it influenced by all the books we’ve read?
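Shannon’s guessing game is easy to sketch: predict each next word from counts of what followed it before. The corpus here is made up, and real LLMs use vastly more context than one preceding word, but the principle is the same:

```python
# Shannon-style guessing: pick the most likely continuation seen in training.
from collections import Counter, defaultdict

# Invented toy corpus standing in for "all the books we've read".
corpus = ("the butler did it because the butler had the key "
          "and the butler knew the house").split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def guess_next(word):
    """Most frequent word observed after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(guess_next("the"))  # "butler": its most common successor here
```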