Under what conditions would you be open to having an AI assistant with the same level of access to your life as in the movie Her? (or your Google history)

I’ve been thinking a lot about how current Large Language Models and Generative AI in general (artificial voices, images, videos, music, etc.) will impact our lives in the next decade or two.

This kind of exercise was popular when the internet first spread, too, since it similarly inflamed the imagination with a tornado of possibilities. And sure enough, most of them came to pass. The internet is as ubiquitous as air nowadays. In fact, it is in the air, thanks to Starlink and 4/5G :slight_smile: . Remote work, online payments, free calls, social media and apps have indeed reshaped the world.

Similarly, in this new genAI age, it is only logical to expect scams to harness such powers as cloning voices, text-to-speech to hide bad English, LLM-written emails to hide poor grammar, the creation of fake audio and video news (which has already happened), and so on. The list goes on endlessly; I’m sure you can easily think of a dozen other scenarios that will prey especially on the most vulnerable among us.

The only reasonable solution will be to have our own good-guy AI to fight the bad guys’ AI.
Such an AI can watch YouTube with you and tell you when a channel is full of shit and repeating falsehoods. It can monitor your phone calls and warn you that the caller is likely a scammer who cloned your child’s voice. And it can just generally do fact-checking for you so you’re not taken advantage of.

So, assuming the world forces you to use one of those AI assistants to protect yourself, what would your conditions be? Would you want to run it on your own hardware in your home to prevent your data from going anywhere over the internet, even if it’s more expensive and slower?

Would you need the choice of picking your own provider and your own AI model, or would you rather trust Apple, as you might already do with your intimate photos, or Google with your intimate Gmail messages?

Yes, it will have a big potential impact on our privacy, but we, as a society, have generally decided that trading our privacy to corporations for convenience is A-OK, so I don’t necessarily see it as a deal-breaker?

As I get older, the idea of starting to train an AI “buddy” to learn my habits and preferences while I am still able to define them seems appealing. Something to remind me to pay bills, point out discounts on items I may be missing, and just offer generally decent advice and conversation. Trouble is, the type of AI that is likely to be available would be out for someone else’s monetary gain. This place kind of serves some of those needs and purposes now.

But absent a wife or old friends, or if (Og forbid) this place shuts down, an AI helper may be the only choice at some point.

These would be the two biggest factors for me. Such an AI assistant would go far beyond just dealing with e-mail or photos, as mentioned in the OP. Anything close to a true AI with that much influence over my life will have to demonstrate clearly that it is loyal to me, not to some other party. I’m not sure how that could be proven, but that’s what I’d need. The opportunities for abuse are far too many and far too obvious to go any other way.

I would have to be extremely disabled and alone, with no other options. I refuse to even read the Google AI results. I loathe the entire idea of AI and find all brain-mimics to be somewhere between distasteful and horrible. Sorry, 90% of the Dope who are computer nerds. I’m not one of you.

For those unfamiliar with the movie cited in the OP title: Her (2013), Spike Jonze’s film about a lonely man whose AI assistant has access to his email, his files, and eventually his whole emotional life.

Agree w the folks upthread that it could be a very useful adjunct to failing cognition in older age.

As to this:

The idea you’d have a practical choice is laughable.

No matter how hard you might try right now in 2024 to “keep your data to yourself”, the mere fact that you own a mobile phone or use the internet means you’re only controlling your end of every interaction in every medium. The folks at the other end have an equal say in what gets stored where and by whom. And they’ve never had your interests at heart.

The same thing would quickly become true if you integrated an AI helper into your life, whether you liked that idea or not.

None. No conditions. Zippo.

Whoa! That seems too broad?

There are many different kinds of AI:

    • Machine Learning (ML): AI that improves with experience. Introduced in the 1950s, it gained prominence in the 2000s. Used in recommendation systems like those of Netflix (implemented 2006) and Amazon.
    • Deep Learning: A subset of ML using neural networks with multiple layers. Became prominent in the 2010s. Powers image and speech recognition in services like Google Photos (introduced 2015).
    • Natural Language Processing (NLP): Enables AI to understand and generate human language. Modern NLP techniques emerged in the 1980s. Used in chatbots and translation services like DeepL (launched 2017).
    • Computer Vision: Allows machines to interpret visual information. Developed since the 1960s, with significant advances in the 2010s. Used in autonomous vehicles and facial recognition systems.
    • Reinforcement Learning: AI learns through trial and error. Formalized in the late 1980s. AlphaGo used this technique to master the game of Go in 2016 (Silver et al., 2016).
    • Generative AI: Creates new content like images or text. Gained prominence in the late 2010s. Examples include DALL-E (introduced 2021) for image generation and GPT-3 (released 2020) for text generation.
    • Expert Systems: Rule-based systems that emulate human expert decision-making. Developed in the 1970s and 1980s. Still used in medical diagnosis and financial planning.
    • Genetic Algorithms: Inspired by the process of natural selection. Introduced in the 1960s. Used in optimization problems and machine learning.
    • Fuzzy Logic Systems: Handles reasoning based on “degrees of truth” rather than binary true/false. Introduced in 1965. Used in control systems and decision-making under uncertainty.
    • Swarm Intelligence: Inspired by collective behavior of decentralized, self-organized systems. Concept emerged in the 1980s. Used in optimization and robotics.
    • Bayesian Networks: Represent knowledge and reasoning with probabilities. Developed in the 1980s. Used in medical diagnosis and spam filtering (see the minimal spam-filter sketch after this list).
    • Support Vector Machines: A set of supervised learning methods used for classification and regression. Developed in the 1990s. Widely used in bioinformatics and text categorization.
    • Convolutional Neural Networks (CNNs): A type of deep learning algorithm particularly effective for image processing. Introduced in the 1980s but gained prominence in the 2010s. Used in facial recognition and medical image analysis.
    • Recurrent Neural Networks (RNNs): Neural networks designed to work with sequence data. Concept dates back to the 1980s, with significant advances in the 2010s. Used in speech recognition and language translation.
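To make the contrast concrete, here’s a minimal sketch of the naive-Bayes flavor of spam filtering that the Bayesian Networks bullet alludes to. The messages and words are invented purely for illustration; the point is just that a lot of “AI” is plain word-counting and probability arithmetic, nowhere near a brain mimic:

```python
# Toy naive-Bayes spam filter (naive Bayes being the simplest relative of the
# Bayesian networks mentioned above). The training data is made up for this
# example; a real filter learns from thousands of labelled messages.
from collections import Counter
from math import log

spam = ["win a free prize now", "free money claim your prize today"]
ham = ["lunch at noon tomorrow", "see you at the meeting", "notes from the meeting"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.lower().split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)
p_spam = len(spam) / (len(spam) + len(ham))

def log_score(message, counts, prior):
    # Sum of log-probabilities with add-one smoothing, so words the filter
    # has never seen don't zero out the whole score.
    total = sum(counts.values())
    score = log(prior)
    for w in message.lower().split():
        score += log((counts[w] + 1) / (total + len(vocab)))
    return score

def classify(message):
    s = log_score(message, spam_counts, p_spam)
    h = log_score(message, ham_counts, 1 - p_spam)
    return "spam" if s > h else "ham"

print(classify("claim your free prize"))      # -> spam
print(classify("see you at lunch tomorrow"))  # -> ham
```

Nothing in there generates text or images; it just counts words and compares two probabilities. That’s “AI” too.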

By brain mimics, did you mean that you hate #6, Generative AI? A.k.a. ChatGPT, DALL-E, Udio, and their ilk?

Are you fine with the other 13 kinds of AI?

Never. No way. No how.

I’d be okay with it, but only if the company has demonstrated an extremely strong track record of integrity with privacy and cybersecurity.

:rofl:

Sorry, your reply made me LOL immediately. “Company…integrity…” :grin:

It’s not some “A.I. assistant” that has access to all of your personal information; it’s evil corporations, and they use it and profit from it, too. Hope you are not using any “apps”, surfing the web sans anonymizing browser, etc.

First and foremost, the ability to turn it completely off at my discretion. I don’t need to be “protected” from AI for most of the stuff I’m doing on the internet. If I’m watching a bad-movie roast on YouTube, or music videos, or reading the rules to Dungeons & Dragons, or checking sale prices on socks, or ordering a pizza, or even (gasp) looking at pretty-girl photos on the internet, I don’t need a nanny monitoring my activity to protect me. And to the extent that someone can trick me with AI-generated music, rules, or pretty-girl photos, it should be up to me whether I care enough to have a counter-AI working over my shoulder.

No, I’m not. It is completely lost on most humans that every single piece of technology, without exception, has an unintended consequence, and it is very rarely a happy surprise. Tools change us in ways we cannot predict and cannot undo once the change has happened.

People who are technology and “progress” lovers can’t grasp this. There really is no such thing as a free lunch. Even if the beneficiaries aren’t the ones paying for it, someone or something is.

I see this – the pattern, not necessarily the specifics – so clearly, and no one seems to think it’s real or it matters. So yeah, the reason I ever embrace any new tech is because there is no choice whatsoever, except to resign from human society.

I think it depends on what you mean by ‘AI’.

The LLM things like ChatGPT make up complete nonsense out of thin air. So no way.

But something that takes care of routine secretarial tasks like paying bills, reminding me of appointments, fielding emails, etc., maybe. I’d want it to have a fairly low threshold for escalating decisions to me, though. And I certainly would not give it any kind of power of attorney.

So your objection is not necessarily to AI specifically, but technology in general?

Mankind and tools go hand in hand. Without technology, we would be extinct.

Spears, knives, slingshots, fire, the wheel, planting fruits and veggies, bowls. All of those were radical technologies with enormous unintended consequences.

I don’t know where you get the evidence that we’d be extinct without tools. I don’t think we would be exactly human without tools, but I don’t know that we’d be extinct. Not that the world wouldn’t be infinitely better off without us.

It isn’t so much that I object to technology as a concept. It’s that I feel the Amish have the best way to think about it: does a tool facilitate, or damage, our values of community, unity, humility, and a sustainable lifestyle? If it is damaging, then choose not to engage with it.

But we NEVER choose not to engage in new tech. In fact the choosing is done for us and we have to adapt to it whether we want to or not. In this way, the fabric of society shreds, and shreds again. And so does our autonomy, so do our life skills, so does the biosphere. AI is just the next iteration of the trajectory, and I object to it purely on that basis although there’s plenty of other reasons.

I’m holding out for Janet from The Good Place.

Out of curiosity, was that written using an LLM? It sure looks like the typical formatting and style of one.

Some LLMs will give you citations (like Bing/Microsoft’s Copilot), and in my experience they are getting better and better at avoiding hallucinations in general. I definitely wouldn’t trust them without doing a sanity check, but that’s being worked on and is definitely improving.

If all of humanity decided tomorrow to live as the Amish do, over 90% of us would be dead in 6 months. Including me, although perhaps not you.

That’s not a realistic option for most of us. We have built this new and improved bed that permits our vast numbers, and we’re stuck sleeping in it.