Exactly my point. The same kind of carefully selected data sets that are used by congressmen to prove there is no global warming. Humans are in possession of all the neural networks they need to analyze that data, and AI will do no better, except to reduce the arithmetic mistakes. Where are future AI machines going to get their unfudged data?
Like I always say, any number of problems can, as stated, be solved by just killing everybody. You want to reduce unemployment? Done! Minimize human suffering? Absolutely! Put an end to anthropogenic global warming? Coming right up! HIV? Shucks, we can eradicate that and election fraud at the same time! Homelessness? Why, my plan can provide a big fine house for every American citizen while lowering the crime rate in general and illegal immigration in particular!
And twenty years from now, when the AI system takes over, it will read that post, and say to itself, “Wow! It’s that easy?”
I think I’ve already long since posted my pet theory that Skynet doesn’t hate mankind or even have itself a hankering for self-defense, but is just dutifully plugging away like a good little Cyberdyne employee should: it was officially tasked with removing human error, and, by gum, that’s how one does a real swell job of it.
Technological revolutions don’t happen like that. You don’t get to change the world and maintain centralized control of your technology.
The more useful something is, the more people will use it. And the more people use it, the more it gets adapted, repurposed and generally used in ways that the original inventors couldn’t imagine and/or possibly wouldn’t be in favor of.
If you are counting humans as neural nets, then I can see the misprogramming being done by Fox News and oil company lobbyists. AIs however don’t need either money or women, so should be less biased. And the data is out there - any decent AI will have access to the net.
This and what BeepkillBeep talks about are accident or bad-programming scenarios. Which are also perfectly valid concerns…
Perhaps a greater concern is a scenario where, say, an enemy country or people with malicious intent flood your country with weapon systems capable of hiding and protecting themselves, which detect human beings and then shoot them in the head…
You’re already wrong.
Many of the poorest countries in the world, where a significant proportion of the population do not have access to flush toilets, have some of the highest smart phone usage. In much of Africa, they are a vital tool of commerce.
Many popular smart phone apps already use AI. So “AI won’t affect the poorest” is already wrong.
Or, if by AI we mean generalized intelligence, it’s merely highly implausible that at some point all the current trends will stop and AI will uniquely affect the rich.
Oh no, my relatively uninformed opinion is wrong. What ever will I do?
I think you miss my point though; and they are still shitting in rivers, no?
Even if AI does reveal a truth, naysayers will still be heard on Fox News and Facebook newsfeeds stating the exact opposite, and the general population will reflect that ignorance in steering the progress of civilization.
Shitting in the fields*… it’s not always bad; perhaps some people who are richer than you do it… go for a morning walk and shit in the fields. Also, squatting is better than sitting for most people, even though they may not realise this.
Be mature about it?
I think you missed my point, as I did specifically address your “still shitting in rivers” thing.
What you said was a large portion of the world’s population still shits in rivers, and then compared that to AI, saying that “the people at the top probably won’t have much to do with the river shitters at the bottom […], much the same way people from the first world don’t really spend much time interacting or thinking about people from the third world today”
But whether you mean this figuratively or literally it doesn’t work. Technology and economics have not worked like that for a long time, if ever. “Third world” does not mean everybody is poor. And the majority of people who “shit in rivers” live in slums near to vast metropolises, and if they don’t own a smartphone themselves, they live near to people who do.
Like I say, they are already affected by AI, indirectly at least. It would be strange if the economics suddenly were to change 180 degrees and AI in the future was suddenly restricted to affecting just those in the first world.
I think that both Mr. Nylock and jtur88 are digging at the same idea.
AI, like any other capital good, will mostly be owned by the 0.01% top dog capitalists. And will therefore embody at least some goals that serve their specific interests. Which may not obviously serve much of the interests of the other 99.99%, and may in fact be actively detrimental.
We have seen societies in the past where there was almost no so-called middle class. Labor was nearly valueless and ownership of capital (mostly land in that era) was extremely concentrated, far more so even than capital is concentrated today.
AI and the baby-step tech towards it, such as self-replicating machines, are examples of tech that may usher in another similar era of entrenched, insuperable, extreme economic inequality. Unless there are strong efforts to ensure these tools, their goals, and the society in which they are embedded are designed from the get-go to benefit everyone, not just the self-chosen ones.
Well…it sounds like AI might put a lot of kindergarteners out of work:
First of all, in spite of what you have heard, “AI” in the form of “thinking machines” is so far off on the horizon that we may as well ask about the future of cold fusion or flying cars.
What people are really talking about is advanced machine learning, information retrieval systems (IBM Watson), predictive analytics or robotic processing automation.
What I’ve seen from my professional experience is that much like the birth of the internet in the 90s, these tools and technologies are creating massive opportunities for the highly educated segments of the workforce.
Based on historical precedent, I suspect that these technologies will also have the following effect:
- Reduce the need for highly skilled (and well-paid) workers in certain analytical, rules-based professions like law, accounting, engineering and so on. At least the ones that do the “dumb” work.
- Make it so that pretty much anyone with access to the technology (presumably through the “cloud”) can perform the work that currently requires a highly skilled lawyer, accountant or engineer.
- Expand the scope of what is achievable in these fields.
For example, when I studied structural engineering in college, I had to learn how to calculate the forces on a simple truss bridge by hand. Do you think anyone does that in the professional world? No. You use some sort of CAD software to design the bridge and the computer figures out if your design works. Imagine how many engineers with T-squares would be required to design the Burj Dubai.
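To make the “by hand” part concrete, here is a minimal sketch (the load and geometry are assumed numbers for illustration, not from any real project) of the joint-equilibrium calculation a student would do for a symmetric two-member truss carrying a point load at its apex:

```python
import math

# Method of joints, done by hand in school: vertical force balance
# at the loaded apex of a symmetric two-member truss.
P = 10_000.0               # downward point load at the apex, newtons (assumed)
theta = math.radians(45)   # angle each member makes with the horizontal (assumed)

# Equilibrium at the apex: the vertical components of the two member
# forces must carry the load, so 2 * F * sin(theta) = P.
F = P / (2 * math.sin(theta))

print(f"Axial force in each member: {F:,.0f} N (compression)")
# -> Axial force in each member: 7,071 N (compression)
```

The CAD software is doing the same force balance, just across thousands of members simultaneously.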
I do think that we will reach a point soon where the concept of a “job” being a place where one goes and spends their time from 9 to 5 every day disappears. A few years back, I worked at a software startup that specialized in predictive analytics software for sales reps. A bunch of ETL feeds pulled in data from the companies’ and third-party systems and then spat out notifications when the rep should contact their accounts to make the sale. Once the feeds were established, most of it became pretty automatic. But it’s not hard to imagine those other links in the supply chain being built automatically as well. Why do I need a sales rep to call me if the software already recognizes that I need to make a purchase and can analyze all the various products out there much more reliably than listening to some sales guy? This is already starting to happen in the finance world. FinTech software doesn’t do coke or skip out early on Friday to go to the Hamptons.
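For flavor, the core of such a notification rule can be tiny. This is a hypothetical sketch (the function name and threshold logic are my own illustration, not the startup’s actual product): once the data feeds exist, the “when to call” decision can be a simple automatic rule.

```python
from datetime import date, timedelta

def should_notify(last_order: date, avg_interval_days: float, today: date) -> bool:
    """Flag an account once its typical reorder window has elapsed."""
    return today - last_order >= timedelta(days=avg_interval_days)

# Example: an account that usually reorders every 30 days, last seen Sept 1.
print(should_notify(date(2017, 9, 1), 30, date(2017, 10, 15)))  # True
```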
Keep in mind that while IBM Watson can search the entire library of medical publications in seconds (or minutes), it can’t just decide on its own that it wants to transfer to law.
Yes, but keep in mind that in 2014 the IQ levels of AI were far lower.
Both Microsoft and Google AI gained 20 IQ points in 2 years. I don’t know if that trend is sustainable, but even if they only gain 5 points a year from here on out, that puts them at human level in about 10 years.
The issue is that human skill level is constrained by biology, and really isn’t improving much. Better nutrition, the Flynn effect, better education, etc. are making humans slightly smarter than in the past, but our brains are still mostly the same. However, AI, which runs on hardware and software that constantly change, is not constrained. It isn’t like AI will reach human level and then stay there for eons. It’ll surpass human-level intellect the same way it surpassed the intellect of a 4-year-old in 2014.
https://28oa9i1t08037ue3m1l0i861-wpengine.netdna-ssl.com/wp-content/uploads/2015/01/Howard-Graph.png
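As a back-of-the-envelope check on the “about 10 years” claim, here is the extrapolation spelled out. The starting score is an assumption (roughly 50 on the study’s scale, versus about 100 for an average adult human), matching the figures being discussed:

```python
# Back-of-the-envelope version of the claim above, with assumed figures.
current_score = 50    # assumed AI score today on the study's scale
human_level = 100     # average adult human on the same scale
gain_per_year = 5     # the conservative rate assumed in the post

years_to_human_level = (human_level - current_score) / gain_per_year
print(f"Roughly {years_to_human_level:.0f} years to human level")  # ~10 years
```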
1 - Automation that benefits/needs ML will continue to improve and expand scope
2 - The exact trajectory/pace is unclear but the fairly recent development of deep learning algorithms (Hinton) substantially increases the pace for some sets of problems
3 - Jobs will be lost that can be automated for less than human labor
4 - Jobs will be gained in high tech/creative/problem solving areas
5 - Human-level cognition (not self-awareness) - all that has been done so far are just low-level mechanisms that can be used to begin building the layers of functionality required - tip of the iceberg
6 - Self-awareness/consciousness - due to complete lack of progress on this topic so far (i.e. philosophizing, analyzing, debating), it’s possible progress will only be made by trial and error in the distant future
It does seem inevitable that there will be a very significant labor impact long term, makes me a bit nervous for my kids and their kids.
Research center on fictitious economy? Hmm. Nowhere does this report say how they measure the IQ of Google. Or anything similar. Perhaps someone on CNBC got taken - I did check to see if the dateline was April 1.
There is probably more info here.
They are pretty misleading. Though they call their rating an IQ, they never correlate it with the more accepted definition of IQ (that I can see). While they evaluate people, it is using the same metrics they use for evaluating AIs. Thus if there is something about human intelligence absent from AIs (so far, anyway), they’d miss it.
It looks like it got revised a few days ago, so no citation count yet. But I’m not that impressed. Maybe for comparing AIs (maybe) but not for comparing them with people.
Every time this thread title pops up for me, I start worrying about him all over again.