Artificial Intelligence in storing phone numbers but not in reactive conversation

Odd thread title, I know, but I didn’t know how else to describe it. My college success (which is more like a poor man’s philosophy class) teacher (lacking any comp sci experience) recently went on a tangent in class about Artificial Intelligence being “intelligence” (the definition meaning information, rather than any cognitive process) that is no longer used by humans, GIVING the intelligence to machines. As such, AI is also relative.

He used the example that storing numbers on your cell phone and not memorizing them is AI because we no longer use the “intelligence” that is remembering phone numbers; we’ve handed that job to machines (those of us who DON’T use the feature, as such, don’t have artificially intelligent cell phones in that regard). On the other hand, a conversing robot is NOT AI because it requires our input to converse with us. It is not “replacing” our own intelligence and is therefore not AI.

Now, this thread is in GQ because this isn’t about the merits, that’s another rant and/or thread. I’m simply wondering if he’s mostly unique in this regard or if there’s actually some school of thought or psychology society (he’s a Dr of social psychology) that adopts this mantra. Meaning, is there a name for this school of thought and where was it founded? Or is he just a strangely non-conventional thinker?

If anyone wants to argue against the theory, start a GD thread. I don’t advise it, though, considering I have a feeling any opposition to this will meet little resistance (though the Dope has surprised me before).

Sounds to me like less of a strangely non-conventional thinker, and more of a dumbass. If this is a school of thought, it’s extremely obscure.

You could use that argument against countless machines as well. I got tired of counting seconds all day so I bought a watch. I got tired of waiting for the roosters to crow so I bought an alarm clock. Most importantly, I don’t use my cell phone enough to program in phone numbers. I write them into a simple notebook just like people have used since the phone was invented. Ask him to explain the fundamental differences between a small electronic screen versus paper that does the same thing.

There is a famous story about Albert Einstein, in the early 1950s, being asked for his own phone number, and having to look it up in the phone book. He said that he didn’t burden his mind with remembering such minor items. (Presumably he had more important things to think about!)

Under your professor’s thinking, would this mean that Einstein was partially an AI construct, his mind combined with the local phone book? Bah!

I hope you aren’t taking too many courses from this “professor”.

It’s known as ‘Opposite Day’. I’m not sure where it was founded, but it was popularized by Spongebob Squarepants.

It’s been explored for decades in sci-fi, at least. I can remember reading at least four separate stories exploring the idea of externalizing memory or even thought to computerized devices. (Different from AI, as previous posters noted).

I wish I could recall more, but the only name I remember specifically is a young-adult book Devil on my Back. They typically revolve around the loss of such devices, and how it affects the protagonist.

In one, mental access was available wirelessly to city inhabitants, one of whom decides to visit a nearby city (with detrimental effects on his confidence and abilities). In another, the idea of PCs and automated programs has been expanded so powerful programs can be run embedded in, say, glasses. If you have automated stock-market, email-replying, scheduling, dictionary-lookup, face-recognition-and-name-lookup, etc. etc. at your disposal 24/7, someone talking to you on a day you’ve had your glasses stolen is going to be talking to a completely different person.
And in yet another, destruction of such external devices is a punishment for crimes that the newly-deprived person might not even recall.

I’m making my attempt at being nice! (I agree with you however)

Well, no, he would say the phone book is partially an AI construct. I think his argument was that we get the phone to dial the number without ever seeing it. I don’t know; once it stopped making sense, his arguments became hard to retain in memory.

And no, I think this is all I’ll have him for. Funny thing was, we weren’t even supposed to have this class; it was canceled because last year pretty much every single student that took it complained the entire course was utter bullshit. They pulled it and then sneaked it back in at the last second. The only other thing I may have him for is Media and Society, which at least sounds like something in, you know, his field. I still hope I don’t, but it’s the only other class I can find that I would take.

Hmm, it’s an interesting concept, when not trying to argue that it’s AI.

Anyway, I’ve looked and looked and finally decided that this does not exist anywhere else on the interwebs. The closest I could find to something that counts that sort of knowledge as AI was computationalism; however, the dichotomy of simple information vs consciousness strikes me more as dualism (albeit reversed). I dub it the still completely wrong and non-descriptive name Computational Dualism. Until anyone gets any insight on the question, that is.

Thanks for the replies.

I believe by “AI” he means “augmented intelligence.” It’s a real term, and it means exactly what the professor is going on about, but I don’t think it’s ever abbreviated AI.

No, he made a point of the word “artificial.” (Machines storing stuff and allowing us to use it without our access is “artificial” because we don’t have it anymore.) He even started the conversation saying “most people think artificial intelligence is…”

Maybe he saw Augmented Intelligence abbreviated AI once and decided to rationalize some BS explanation?

What he’s onto is Extended Cognition.* He’s wrong to confuse this with artificial intelligence, though.

-FrL-

*To be clear, whether there is such a thing as extended cognition (in any scientifically or technologically useful sense) is very controversial. I say yes.

Nice one. My contribution was going to bring up how the idea sounds like Andy Clark’s work, but you beat me to it. Squarely in the realm of philosophy, I believe.

While related to AI, I think it’s totally incorrect to define AI in those terms – certainly, it’s not a common definition.

That looks like it, he got a little farther (machines outright replacing humans), but over all it reads pretty similarly. In fact, I agree with the concept itself, but I was a little jaded when he presented it as an (incorrect) alternate definition of AI.

In a small complaint thread on the message board we’re required to post on for the class, I’ve requested that he rethink his views on AI, with the revelation that there is an existing term for the ideas he was presenting, so as not to muddy existing terminology.

Either I’m fighting ignorance or making an enemy of an instructor, oh well. Nice catch though, that was an interesting link. Out of wonder though, are Augmented Intelligence and Extended Cognition relatively equivalent concepts? They sound similar, but Wikipedia doesn’t have an article on either for reference purposes so I’m curious.

No. Extended cognition is a philosophical idea that criticizes the notion that our mind is inside our head. It basically says such a view doesn’t make sense. We’re not necessarily “one with the universe,” but we are one with everything we interact with. The brain is just a hodge-podge of cells that don’t even touch each other, made out of subatomic particles that don’t touch each other either. The essence of the mind is a pattern of cause and effect, and that pattern encompasses things around us almost as much as those inside us. (Although obviously we have a pretty low-bandwidth link to the outside compared to all the network traffic inside.)

Augmented intelligence isn’t really about philosophy. It’s just about augmenting intelligence.

Don’t know what your professor was saying. Sounded like, “only things that augment intelligence, and only things that replace/improve the intelligence that we have but don’t add anything new, are AI.” I think he was making some smart-ass point about definitions (as in: everyone who keeps using this word is wrong!), except he was the one who was wrong. I think he’ll be pretty embarrassed and then pissed if you correct him. Don’t hold your point too hard and make it worse. If you see that he realizes he was mistaken, don’t remind him of it again. Don’t tell anyone else (like on your class message board). Ahh, face it, you’re getting a D.

Just in case you need a source to fall back on, the standard AI reference is Russell and Norvig. The table of contents alone should make it clear that AI has more to do with conversing robots than storing phone numbers.

I’m not completely sure whether Alex_Dubinsky is joking here. Also, you know your prof better than I. But I want to note that I’ve only ever encountered one Academic* who I would suspect would engage in this kind of behavior. Academics by and large (my experience is mostly in the humanities) do not mind being corrected at all. Many, in fact, welcome it. Even from undergrads.

YMMV, it depends on how you do it, I don’t know your prof, and all other caveats apply.

-FrL-

*This was the instructor for a class my wife was taking, so that’s second-hand information, and grains of paranoid salt should be added into the mix as well.

I was mostly joking. But it’s one thing to correct a professor. Another to correct him of an embarrassing, asinine mistake, publicly. Depends on his self-esteem, really. Or on whether he’ll even see he’s wrong.

Okay, just wanted to know. Going by the terms alone, they sound similar, or related (augmenting intelligence via, say, implants may extend the things governing your cognition through machines).

And the “everyone is using the word wrong!” thing wouldn’t surprise me, though; he IS the developmental English teacher (luckily, I tested out of the developmental classes… WAY tested out of them).

Actually, it’s everyone else that’s doing the chastising; my post looks like a beacon of kindness compared to the other stuff. I just wanted to give him a reason in a nice “this is a request, you’re entitled to your opinion, but due to this evidence…” sort of way. It’s hard to explain without seeing the stuff (it’s private and all), but trust me that I was very cordial in doing so; I don’t want to fail the class. (Actually, if we get a D we technically get the credit, but they make us retake the class anyway. It’s annoying, so “D” is definitely not an option.)