I think we’re in AI spotting territory now. I expect it to actually move in the opposite direction: become more and more indistinguishable from human content. I feel you have this exactly backwards. We shall see.
Not least because human-written content will start mimicking AI-generated content once it’s ubiquitous enough.
I agree. AI will only get smarter. People seem to be getting stupider.
That’s a good observation. When I was working last spring on training AI models using Reinforcement Learning from Human Feedback (RLHF) – one of the big things that led to the leap in LLM performance – I did notice that, as much as humans were providing input to train AIs to be better, the AIs certainly did seem to influence how humans wrote their improved responses. I felt like there was some sort of "AI Style" developing, much in the way there’s an Associated Press style.
It’s not that the AI content isn’t well written. Can I tell what is AI on the Reddits where AI posts are frequent? Yes. Some of that is style, and some of it is that the intent is to produce reactions, clicks, rage bait. Those are the more obvious and more relevant tells.
There are Reddit subs like AITAH (Am I The AssHole) where people purportedly write in with questions about their own behavior and their reactions to others. The ones obviously written by AI are those engineered so that the responses exclusively conclude that the OP is not the AH, and that the cartoonish, one-dimensional person behaving outrageously in the story is, because those stories reliably provoke the most outrage, upvotes, and response comments. Why a purported OP would come to such a site with such a question is irrelevant, because of course there is no real OP; the story is made up. The point is to get the upvotes, get the clicks.
I am sure AI will be very adept at providing any daily news stream that you wish to select. Will this news stream actually be informative for you? Or just provide a dopamine rush and more clicks? What do you think?
I think it can be either.
I am sure those of you who are following this space have heard about the latest AI sensation: DeepSeek from China, which has apparently produced a world-class AI model at a fraction of the cost of what OpenAI and others in the US are spending.
I am still trying to wrap my head around all this, but it seems an extraordinary development and perhaps the first really big AI achievement from China. It’s rather remarkable that DeepSeek has released an open-source model; I wonder whether it obtained permission from the Chinese government to do this and what that implies about Chinese strategy on AI.
https://chat.deepseek.com/sign_in
You can check it out yourself; you just need a Google ID. I have asked it a few questions and it seems pretty decent, certainly in the same ballpark as ChatGPT, and that seems to be the consensus around the web too.
Naturally it has some limitations; I asked it a question about Tiananmen Square and it straight up refused to answer. But it isn’t pure PRC propaganda either; I asked it to analyse the Hong Kong protests from a variety of perspectives and it gave a pretty neutral answer, including the perspectives critical of Chinese government policy.
It appears that the recent FrontierMath benchmark, where o3 achieved a score of 25% that impressed many (yours truly included), was secretly bankrolled by OpenAI, which moreover had access to much of the problem set and solutions (but vowed, scout’s honor, not to use it in training). Additionally, this was not disclosed to the contributing mathematicians, several of whom have since expressed reservations regarding the lack of transparency.
Now, of course, it probably won’t ever be clear whether OpenAI actually cheated on the test. But I think that this whole episode goes to show that when evaluating claims of AI companies with billions on the line regarding the performance of their models, a grain of salt is always appropriate…
- Time
- Other opportunities.

Time: the replacement of jobs took place gradually, over a relatively long span of time, permitting people to adjust, retrain, and seek other opportunities for gainful employment.
The end goal of the AI hype bubble is to do something like that, everywhere, extensively, very quickly.
Maybe humanity will still find a way to cope with the change, but the other possibility is that some majority of businesses shed the majority of their employees, and there are no opportunities for all those redundant people to seek, because the same thing is happening everywhere else. The companies can’t not do it: if they are last to do it, they lose their competitive edge. But when they do it, they’re contributing to a situation where the general pool of paying customers no longer has money to buy the company’s product or service, because more people are out of work.
What we’re seeing happen now may be different from the societal upheaval of past cases simply because of the scale and speed at which it could happen, and because of the size of the human population now.
With the advent of Quantum Computers, it isn’t possible to overhype AI. Standard computers, no matter how powerful, rely on transistors that can only exist in two states, 0 and 1. Quantum Computers rely on electrons that can exist in countless states all at once. They are basically "everything, everywhere, all at once". This gives Quantum Computers computing power that is far, far beyond even 100 standard supercomputers linked together. They are still in early development, but they may very well open the door to the multiverse.
Things will adjust. It’s just that sometimes part of the adjustment is that you need guillotines.
OK, I apologize if this goes a bit off on a tangent, but this is just compounding hype with more hype (FWIW, I’m a researcher in quantum computation at a large German research facility). But the gist of it is: quantum computers are good for a few tightly circumscribed applications, and they don’t simply accelerate everything by doing ‘all possible computations at once’, or whatever the usual gloss is.
First of all, quantum systems (not just electrons, different architectures use different physical realizations of qubits) are also just in one state at any given time. However, those states can be superpositions—that is, combinations of 0 and 1, with either of these possibilities obtaining with some probability upon measurement. This yields a massive increase in complexity: while you have to keep track only of n bits in the classical case, the number of parameters needed to keep track of a quantum state of n qubits is exponential in n—adding a qubit doubles the possibilities: one qubit has two, two qubits four, three qubits eight, and so on.
So to classically simulate a quantum system, increasing the size of the latter linearly (e.g. adding another qubit) necessitates an exponential increase (doubling) in the size of the classical simulator. Thus, quantum systems are, in general, very hard to simulate on a classical device. But the quantum system itself does it efficiently: thus, quantum systems can solve at least some classically hard problems systematically faster than classical ones.
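If you want to see that blow-up concretely, here’s a quick Python sketch (a toy illustration of my own, nothing more): it builds the equal superposition you get by applying a Hadamard gate to every qubit, then just counts what storing larger states would cost a classical simulator.

```python
import numpy as np

def uniform_superposition(n_qubits):
    """Statevector after a Hadamard gate on each of n qubits:
    2**n complex amplitudes, all equal in magnitude."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

# Small case: 3 qubits -> 8 amplitudes, each outcome has probability 1/8.
state = uniform_superposition(3)
print(state.size, abs(state[0]) ** 2)  # 8 0.125

# Memory a classical simulator needs just to *store* the state,
# computed without actually allocating it:
for n in (10, 20, 30, 40, 50):
    gib = 2 ** n * 16 / 2 ** 30  # complex128 = 16 bytes per amplitude
    print(f"{n} qubits: 2**{n} = {2 ** n:,} amplitudes (~{gib:,.1f} GiB)")
```

Every added qubit doubles that storage; by 50 qubits you’re already at petabytes, which is exactly the linear-to-exponential mismatch described above.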
The question is, which problems are those? Now, in complexity theory, the open questions generally outnumber the answers. But it is, at least, strongly conjectured that quantum computers don’t just generally solve any ‘hard’ problem faster than classical computers do (well, not exponentially faster, at least). So far, we’ve figured out that there’s a limited class of problems where there is a provable (up to certain complexity-theoretic conjectures) speedup, with factorization being the prime example (not least because if you can solve that efficiently, you can also break most current cryptographic systems).
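To give a feel for why factoring is the headline example, here’s another toy sketch of my own: it factors semiprimes by naive trial division, whose running time is exponential in the bit-length of the number. (Real classical attacks use far better algorithms, but those are still super-polynomial, whereas Shor’s algorithm on a quantum computer is polynomial.)

```python
import time

def smallest_factor(n):
    """Naive factoring by trial division: ~sqrt(n) steps,
    i.e. exponential in the bit-length of n."""
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

def next_prime(n):
    """Smallest prime >= n, using the same naive test."""
    while n < 2 or smallest_factor(n) != n:
        n += 1
    return n

# Semiprimes of growing size; the last case already takes on the order
# of a second in pure Python, and every ~20 added bits multiplies the
# work by roughly a thousand.
for k in (4, 5, 6, 7):
    p = next_prime(10 ** k)
    q = next_prime(p + 1)
    n = p * q
    t0 = time.perf_counter()
    smallest_factor(n)
    print(f"{n.bit_length():3d}-bit semiprime: "
          f"{(time.perf_counter() - t0) * 1e3:10.3f} ms")
```

Scale that same curve up to the 2048-bit numbers used in RSA and you’re in "not before the heat death of the universe" territory, which is what makes Shor’s algorithm such a big deal.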
What about AI? Is there a speedup? The answer is, it’s complicated. There are theoretical speedups for various machine learning algorithms, but the problem is that ML generally is useful when working on large datasets, which is exactly where quantum computing falters. The feature that gives quantum computing its power, the presence of superpositions, is incredibly fragile, and becomes more and more difficult to sustain the more qubits you have. But you’d need lots of qubits to encode the huge datasets used in machine learning, so these requirements pull in opposite directions. Getting classical data into the system, and getting classical data out, is a huge challenge as well, and indeed might easily negate any speedup the bare algorithm yields.
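To put rough numbers on that tension, here’s a back-of-the-envelope sketch (my own accounting, using the common textbook assumptions that amplitude encoding stores one data value per amplitude and that generic state preparation costs on the order of N gates):

```python
import math

def amplitude_encoding_costs(n_samples):
    """Rough accounting for amplitude-encoding a classical dataset:
    N values fit into ceil(log2(N)) qubits, but preparing that state
    generically takes O(N) operations (an estimate, not a hard bound)."""
    qubits = math.ceil(math.log2(n_samples))
    prep_operations = n_samples
    return qubits, prep_operations

for n in (1_000, 1_000_000, 1_000_000_000):
    q, ops = amplitude_encoding_costs(n)
    print(f"{n:>13,} samples -> {q:2d} qubits, ~{ops:,} loading operations")
```

The qubit count grows only logarithmically, which sounds great, but the loading work grows linearly with the data, and that alone can swallow, say, a quadratic algorithmic speedup whole.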
So at present, it’s unclear whether there is a practical speedup to be expected for AI from quantum computing, and even if there is, it’s highly unlikely to manifest in the near future.
Well, what I posted I got online from the physicist Michio Kaku, but in fairness to him and to scientists like Neil deGrasse Tyson, their goal is to get their online viewers interested and excited about science. They also have to dumb it down so that lay people like me can get some idea of what they are talking about.
So, are you saying that in order to calculate "everything, everywhere, all the time", you’d need an infinite number of qubits, and it would be so incredibly complex as not to be possible in any practical sense?
Thank you for the time and effort you put into that wonderful post!
This was a very informative post. Thanks for taking the time.
I love when this happens on the SDMB. An actual expert steps in.
Well, there was also experimentation with ternary computers, which are non-quantum, but I guess not "standard", as binary is all that really exists today. (IIRC, ternary logic, among other things like the ubiquity of binary components, was a pain in the ass compared with binary.) I have read that there is still potential for ternary computers, and LLMs are one potential use. But I have only the most basic knowledge of this.
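Since ternary came up: the classic variant is balanced ternary, with digits -1, 0, and +1, which the Soviet Setun machine used back in the late 1950s. Here’s a minimal Python sketch of the conversion (function name and examples are just mine):

```python
def to_balanced_ternary(n):
    """Integer -> balanced-ternary digits (-1, 0, +1), least
    significant digit first. Zero is represented as [0]."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # a digit 2 becomes -1 plus a carry
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits

print(to_balanced_ternary(5))   # [-1, -1, 1]:  9 - 3 - 1 = 5
print(to_balanced_ternary(-7))  # [-1, 1, -1]: -9 + 3 - 1 = -7
```

The LLM connection you may have read about is, I suspect, the recent "ternary weight" quantization work (BitNet b1.58 is the usual example), where every weight in the network is restricted to exactly those three values.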
Without going on too much of a rant, Kaku’s statements regarding quantum computing seem to be the product of either near-total ignorance on the subject or a willingness to say whatever gets his face in front of the greatest number of cameras and sells as many books as possible. Somebody always worth listening to on the subject of quantum computing is Scott Aaronson, and you can read his remarkably even-handed review of Kaku’s latest book here.
I’m not sure what you mean, exactly. As noted, there are fairly sharply defined use cases for quantum computing, where we expect it to outperform classical computers by orders of magnitude; for anything else, you’ll be better off just using a classical computer. Most of these use cases, like factoring large numbers, are still far out of reach of current quantum computers. One issue is that you need qubits of very high quality that you can control very well, which is only possible if you use techniques of error correction, which in turn greatly increases the number of necessary qubits: from on the order of thousands to millions. Current high-end devices typically have a few hundred available qubits (not counting special-purpose systems like quantum annealers, which have a couple of thousand qubits but are usable only for certain sorts of problems).
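To show roughly where that jump from thousands to millions comes from, a back-of-the-envelope calculation (the numbers are my own rough assumptions, using the common rule of thumb that a surface-code logical qubit costs on the order of 2 * d**2 physical qubits at code distance d):

```python
def physical_qubits_needed(logical_qubits, code_distance):
    """Very rough surface-code overhead: about 2 * d**2 physical
    qubits (data plus ancilla) per logical qubit. Real estimates
    vary a lot with error rates and architecture."""
    return logical_qubits * 2 * code_distance ** 2

# Factoring-scale machines are often quoted at a few thousand
# logical qubits; published estimates use code distances around 25-30.
for d in (15, 25, 35):
    total = physical_qubits_needed(4_000, d)
    print(f"distance {d}: ~{total:,} physical qubits")
```

Compare that with the few hundred physical qubits on today’s best devices and you can see why "still far out of reach" is putting it mildly.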
Whether there are relevant problems where present or near-future quantum computers can yield a benefit is still very much an open research problem. There are highly artificial problems for which arguably a speedup exists, but these aren’t relevant for any practical applications (and subject to revision, as people keep coming up with clever classical algorithms to challenge any claims of quantum advantage). Unfortunately, that’s not really a message that seems to sell very well.
(And I’m never quite sure how to graciously acknowledge such posts, but I’m happy if what I write is of use to someone.)
(And now back to deflating AI hype!)
I see what you did there
Appreciate that link!
Darn, I guess my dream of discovering the secrets of creation on a brand-new quantum computer desktop in the convenience and comfort of my home has been dashed.
I know this isn’t a very scientific thing to say, but it kind of looks like a space-age chandelier. LOL