Would you connect your medical records to OpenAI and allow the company to absorb and process your medical history so ChatGPT can generate comments on your health?
□ Sure, sounds great! What could go wrong?
□ No.
□ Fuck no.
□ No and fuck you for asking.
□ There is not one chance in ten thousand hells that I would ever do this.
■ Sam Altman should be stripped naked and fed to crocodiles as a warning to others. Also, no.
□ I need more information before I decide.
□ Other (fill in the blank).
Maybe he does, but maybe corporations do, too. Maybe someone should, just for one fucking second, hold corporations responsible for the harm caused by their fucking products.
Or maybe people should learn what a fucking product actually is before they misuse it, especially when they're asking it how much they can deliberately poison themselves while stopping just short of actually dying.
LLMs (and image-generating models) started off as academic experiments to see what could be accomplished. Maybe they should have stayed restricted academic tools instead of being unleashed on masses too stupid to walk and chew gum at the same time, and then blamed because some Darwin Award winner thinks they're something they definitely aren't.
The overall issue isn't an LLM issue; it's a nanny-state, dumbing-everything-down-to-the-lowest-common-denominator issue, like having to label a can of peanuts with "product may contain nuts".
I agree. This whole argument about how LLMs and their creators are evil because an LLM “might be wrong about something” is really mostly a statement about the stupidity of the general population. Secondarily, it’s also a statement about how some LLM implementations are better than others, and that some – here I’m specifically referring to Grok – are not only badly implemented but explicitly tuned for racial bias.
I voted “other”. I believe in the privacy of medical information and don’t trust any corporation engaging in experimental AI to respect that confidentiality. But I don’t mind anonymously sharing medical symptoms with ChatGPT and getting some verifiable guidance. As I mentioned earlier somewhere else, its suggestions for “questions to ask your doctor” for an upcoming specialist appointment were very useful.
I regard LLMs as a not-terribly-accurate search engine for answers to questions. The big problem is that there are too many vested interests who want people to believe these things are actually intelligently solving problems.
I think it's more complex than that. People aren't just stupid; they are stupid in very predictable ways. If the ways that people are stupid are not only well known and well documented but gladly exploited, there is moral culpability on the entity doing the exploiting. That is true of AI, certain types of video game mechanics, social media, the junk-food industrial complex, cigarette manufacturers, casinos, and basically anyone who has profited off of humans' neurological weaknesses. People who profit from AI know that they will profit more if they make unfounded claims about what their technology can or will do, that they will profit more if they deceive people into thinking AIs are friendly by using obsequious language, and that they will profit more if they target their tools at corporate bigwigs drooling over the idea of suppressing wages even further than they already are. Unsurprisingly, they do all of these things, because they care more about making money than about human quality of life. They are at least as amoral as AI itself, if not more so. Corporations cannot police themselves. The end result is total destruction, and we already have evidence of that everywhere, including the air we breathe.
I was on the Verizon site looking at changing my plan, and it suggested I try YouTube TV. I went to chat and asked about pricing, and it warned me that it was currently $70/month but that in April 2023 it would be going up to $73. [More than two years later, it's now $83. I looked that up on the YouTube website.]
As somebody posted on Slashdot, which I went ahead and stole:
Sam Altman, using money he doesn’t have, bought up almost 50% of DRAM wafers that don’t exist, to turn into DRAM chips that don’t exist (or maybe not; maybe he’s just playing keep-away from his competitors), to put alongside GPU chips that don’t exist, to stuff into server farms that don’t exist, which will consume vast quantities of electricity that doesn’t exist – all to create “artificial intelligence” …which doesn’t exist.
I was icked out by seeing an AI summary of an email, so I finally shut it off in my Gmail. Now, every time I go there, I see an announcement about how to turn it back on.
My inbox categories went away, which is not what I wanted either; apparently I have to "turn on smart features in Gmail, Chat and Meet" if I want them back. Sigh.
My whole Gmail needs a big-ass sort/rearrange/purge. Even though my inbox has been whittled down nicely (only 45 emails in it, down from 600 maybe six months ago), the "labels" part is out of control.
So before I decide whether getting back the categories I was used to is worth turning the AI back on, I need to understand it better. And, ironically, I guess I will ask AI exactly what the options mean.
Gmail is just fine if you use it with a decent email client and just treat it as a service rather than an interface. I haven’t logged into the actual Gmail web interface in years.
Just got back from a trip to Minnesota to see my mom on her birthday. The online hotel reservation AI served me up a list of suggested activities for my stay. It suggested a picnic lunch on day 2 at the Lower Sioux Agency historic site. Which is actually a nice spot when the temperature is above freezing, it isn't snowing, and the wind isn't blowing.