I'm missing something about AI training

Sure, so what’s your point? We know it’s in the dataset the model was trained on. We know the model can’t recreate it via prompting for it. Knowing how the training works, it’s easy to deduce why it’s not creating it. You have access to the code if you want to look at it.

If you’re trying to get at something you should just say it.

You made a categorical statement about how the image generation works. The images it’s trained on have nothing to do with how it works. That is the code.

I have a basic idea of how machine learning works and how the image generation works. @griffin1977’s statement is probably at least as close, if not closer, to my understanding. Unless you can demonstrate that you have a better understanding of how it actually works than others - by having some familiarity with the actual code or at least the theory - his statement is as valid as yours.

Another painting strongly represented in AI is Starry Night. So much so that asking for something in the style of van Gogh very often results in something blended with Starry Night in preference to any other van Gogh work. Back in late 2023 I made a number of images on various AIs asking for van Gogh’s Starry Night. As you can see, they very often know what it should look like in broad details, but never get it exactly. (The one with the chimp was asking for something like a chimp and a giraffe in the style of van Gogh, not a chimp holding a copy of Starry Night.)

(Incidentally, today’s conversation led to the creation of another new image I quite like: Starry Starry Shark.):

It looks like you got hit with one of Dall-E’s traits that I don’t like. Or perhaps the interface that you are using. In my playing around, I found that if you ask for something with a particular detail, like a shark, it decides that you want sharks, so it keeps putting them in.

In my first use, I had a couple of details that kept popping up, even days later. It didn’t clear out completely until that particular interface disappeared.

No, I did the shark on purpose after getting a total failure at the original painting.

Another example from late 2023: I prompted for something like a 1970s stereo system or something else in that general area. I didn’t ask for specific albums. But Bing elaborates on prompts in the background so I got images with four covers for ELO’s Out of the Blue. All similar to each other but not identical, and not identical to the actual album art.

That’s what lossy means; it’s just a technical expression for “wrong”. Yeah, it doesn’t look like a traditional lossily compressed image with lots of noise and fuzziness, but that’s because the algorithm used to reconstruct it from the encoded version is very clever, and very different from a traditional image compression algorithm.

But it’s no less the result of a computer program that has reconstructed an image from an encoded version and done a bad job because the encoding is too heavily compressed.

We know that much, with a high degree of confidence, about how the human brain/mind works. The brain stores some key features extracted from examples of what it has experienced, and when asked to recall, it pulls up those key features and backfills the gaps, creating close-enough approximations. A simple illustration: your brain will store an image of a square by focusing on the corners in some detail, and recreate the sides. Yes, brains are predisposed to store some key features over others, and learn to prioritize others with experience. The brain also does much more than that, and no consensus exists on how all that much more gets done.

But in this limited sense computer neural networks and brains are similar: they create an abstract compressed archetype and follow rules to recreate an imperfect approximation of the original.
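The square illustration can even be sketched in a few lines of code - a toy of my own, obviously nothing like how a real brain or neural network does it: keep only the corners as the stored "key features", then backfill the sides by interpolation.

```python
# Toy sketch of "store the corners, recreate the sides" (my own
# illustration, not how any real brain or model works).

def compress_square(outline):
    """Keep only the key features: points where the direction changes."""
    corners = []
    for i, point in enumerate(outline):
        prev = outline[i - 1]
        nxt = outline[(i + 1) % len(outline)]
        heading_in = (point[0] - prev[0], point[1] - prev[1])
        heading_out = (nxt[0] - point[0], nxt[1] - point[1])
        if heading_in != heading_out:  # direction changed: a corner
            corners.append(point)
    return corners

def reconstruct(corners, points_per_side=4):
    """Backfill the sides by straight-line interpolation between corners."""
    outline = []
    for i, (x0, y0) in enumerate(corners):
        x1, y1 = corners[(i + 1) % len(corners)]
        for t in range(points_per_side):
            f = t / points_per_side
            outline.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return outline
```

A 16-point square outline compresses down to just its 4 corners, and for a perfect square the backfilled reconstruction happens to be exact; for anything with curves or irregular sides it would only be a close-enough approximation.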

So it could have given me a picture of a dinosaur eating ice cream and it would have just been a lossy interpretation of Renoir-TwoSister.png? Ok, then.

Lossy is a technical expression for losing information, not changing information.

Frankly, if you lose enough information that rebuilding the file results in a picture with completely different composition, it’s silly to even call it the same file. These Two Sisters pictures suggest that the original photo was compressed down to “two girls wearing hats, a railing, a basket of flowers, Impressionist style”. Yes, impressively compact, but it’s not really the same as a .png of the actual painting.
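To make "losing information" concrete, here's a toy lossy codec of my own (nothing like a real compression algorithm): quantize values into coarse buckets, then decode each bucket to its midpoint. Two different inputs can land in the same bucket, and once they do, the difference between them is unrecoverable.

```python
# Toy lossy "codec" (my own sketch, not a real compression algorithm):
# store each value as a coarse bucket index, decode to the bucket midpoint.

def lossy_compress(values, step=32):
    """Quantize: many nearby inputs collapse into one bucket index."""
    return [v // step for v in values]

def lossy_decompress(buckets, step=32):
    """Rebuild an approximation; the exact original is gone."""
    return [b * step + step // 2 for b in buckets]

original = [7, 100, 101, 200, 255]
restored = lossy_decompress(lossy_compress(original))
print(restored)  # [16, 112, 112, 208, 240]
# 100 and 101 both come back as 112: that difference is lost for good.
```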

First, a story

When I was a teen, Mom took a stained glass class. She commented that if you dropped a square of glass on the ground so that it broke, then put the pieces back together - that was not art.

OTOH, my father was into photography, and took a couple of photography magazines. In those magazines, they said that the key to photography was to take a lot of pictures and choose the best. There are photographers who achieve art via this process.

I had a bunch of different ideas bouncing around in my head about software and statistics and such, but I realized that it all boiled down to initiative.

Emphasis mine. You are initiating the transaction, not the AI.

Show me a computer that does anything unprompted and then we can talk about true AI.

IOW, if a human is involved

IOW, when a human is involved

The reasons I think images will be bland:

Every AI image that I’ve seen that wasn’t the result of someone putting a lot of effort and input into creating the image has been bland.
Think about this: the image AIs need a huge database of images. You know who has a huge inventory of images? Hallmark.
And I still think that what the AIs are doing is averaging the feed images into a new image. That tends to rub the irregularities out.
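For what it's worth, the smoothing effect of literal averaging is easy to demonstrate. To be clear, this is a toy of my own; whether real image models "average" in any meaningful sense is exactly what's being debated:

```python
# Toy demonstration that averaging rubs the irregularities out
# (real image models are more complicated than a literal average).

def spread(pixels):
    """Crude measure of irregularity: max value minus min value."""
    return max(pixels) - min(pixels)

# Two "images", each a single row of wildly irregular pixel values.
img_a = [0, 255, 10, 240, 5, 250, 20, 230]
img_b = [255, 0, 240, 10, 250, 5, 230, 20]

average = [(a + b) / 2 for a, b in zip(img_a, img_b)]

print(spread(img_a), spread(img_b))  # 255 255 - both very irregular
print(spread(average))               # 2.5 - the extremes cancel out
```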

The reason I think they could be discordant is that I saw the results of crochet patterns that were created with a neural network. Now, 90% or more of all crochet patterns are symmetrical on at least one axis. But every item resulting from that neural network looked more like coral than crocheted items.

Apparently they are getting better, enough so that there is a problem in the crochet community of people creating a pattern with ChatGPT which, when actually used, doesn’t work at all, has to be modified in some way, or at least doesn’t make what the seller claims.

Sure, I said before that I assume a human is involved as a default. Someone is prompting, after all. When people are debating if AI art is “art” (be it music, images, etc) it’s all stuff that a human prompted for.

As for “bland”, that’s totally subjective anyway.

Everything that humans do is based on copying the work of countless other humans before them. Even if you just bang away randomly on a piano, you’re using data gathered from centuries of Western music: All Western music contains only notes in the same ratios as the notes on a piano keyboard. Pianos are made the way they are, in fact, to match the notes of umpteen-zillion previous pieces of art. Remove any single song from the training set, and that remains true, but the aggregate effect is the same.

Even more so if, for instance, you play only the white keys. Those keys are white because you can, using only those keys, reproduce the frequency ratios of most Western songs. Or there are other (though less common) songs whose frequency ratios can be reproduced entirely using the black keys.
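Those ratio claims are easy to check with a few lines of arithmetic. In 12-tone equal temperament every semitone multiplies the frequency by 2^(1/12), and the white-key intervals land very close to the simple whole-number ratios Western harmony is built on:

```python
# Checking the keyboard-ratio claim: in 12-tone equal temperament every
# semitone step multiplies frequency by 2**(1/12).

def freq_ratio(semitones_up):
    """Frequency ratio after moving up some number of semitones."""
    return 2 ** (semitones_up / 12)

# White-key intervals above C: G is 7 semitones up, E is 4, the next C is 12.
print(freq_ratio(7))   # ~1.498, almost exactly a 3:2 perfect fifth
print(freq_ratio(4))   # ~1.260, close to a 5:4 major third
print(freq_ratio(12))  # 2.0, the octave
```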

Likewise if you put a meter on your song. If you’re trained in the Western musical tradition, there’s a 99+% chance that any song you write will be in one of about five different time signatures, because those five time signatures, between them, account for over 99% of your training data.

And if your training data had been different, you’d be producing different music. Lots of places in the world feature music with different tone ratios, or different time signatures, or nothing we’d recognize as a time signature at all.

Human-written music, in other words, is undoubtedly copied, in concept if not in details, from other songs the human is trained on. How is that different from what a computer does?

Meh.

Yes, kids come pre-wired to ask “why” and then refine the level of their questioning through development. The AI would have to be prompted to start the process and to repeat with new questions based on the results of the last series of inquiries.

If the AI was hardwired with that initial prompt in it, then it is very comparable. Okay, its designers are its creators rather than evolution or a deity, if such is your belief. So?

No, it’s not. Humans can copy music, but they can also produce music that is inspired by other music but does not copy it. As a society we’ve spent centuries coming up with a well-defined line for when a human writing music crosses over from one to the other. (Oasis singing What’s the Story Morning Glory does not constitute copying the Beatles, even though clearly inspired by them; Oasis singing I Am the Walrus does.)

If we say a computer program (again, a completely deterministic series of simple instructions that predictably carry out operations on their input data) has that ability to be inspired by other music but not copy it, what is it about that series of assembly-language instructions that imbues it with that ability? And what is it about my MP3 encoder that does not give it that ability?

Can you answer what it is about human brain cells interacting and messaging each other that gives it that ability?

Can you explain to me how those physical processes are not deterministic?

Sure, that line was mentioned in the court cases referenced way back when: When you can point to where Song A took from Song B you potentially have a case for the line being crossed.

If an AI or mp3 encoder or magic fairy rock makes music where you can point out segments that Song A took from Song B, then you have a case. If the AI/mp3 encoder/fairy rock writes songs that might have come from training but you can’t actually point out “This part was taken” then you don’t.

Nope, not even slightly, and neither can anyone else. But that’s irrelevant; we do think humans are special and can be responsible for things (like copying or being inspired by music, or committing crimes) that inanimate objects cannot. If you are assigning these human qualities to series of machine instructions, then you’ll need to explain why, and why other collections of machine instructions don’t have those qualities.

Again, no one can. But regardless of where the philosophical debate on whether the universe is deterministic ends up (almost certainly it will never be settled), the human brain, unlike a computer program, is not deterministic in any practical sense: a human brain will never react the same way to the same set of inputs, no matter how carefully you set them up.

What you are asking for is a master’s level thesis on the nature of intelligence and creativity.

And an explanation of how neural networks work, with the code behind it.

I have a 40 year old degree in psychology, and am a programmer, but I don’t know either of those subjects in the depth that you seem to need.

I know this: intelligence is about far more than the ability to pattern-match. And the “learn ai” portion of the w3schools section on python is all about statistics.

Your honor, whatever gives you the idea I’m responsible for these pirated MP3s!? Sure, I encoded the entire Rolling Stones back catalog as MP3s and copied them to a public server, but I’m not responsible. Why would I be? That server only sent those MP3s out because some criminal reprobate entered the search terms “Rolling Stones Let It Bleed”. They are responsible; I had nothing to do with it!