Replacing journalists? How about replacing lawyers?
Though this is nonsense which can’t work. Like Theranos.
LegalEagle weighs in on everything wrong with it.
I think we’re just seeing the tip of the iceberg here though. I expect that we are entering an “AI bubble” much as we did when the World Wide Web was new and everyone saw it as a magical way to easily make money and solve problems. Startups came and went in a flash, investors threw piles of money at garbage, and for every Google or Amazon there were thousands of projects that went nowhere.
That’s the thing. That isn’t the only way. There’s a bunch of papers on the subject especially in the past several years. Are we there yet? No, but we have our top men (and women) working on it. (Who?) Top men (and women). I.e., not me.
Although you might get a kick out of my latest AI. It is a parallel AI system.
Good times. However, since my contract ends soon, I don’t think I’ll get to finish it. Some bright graduate student will have to pick it up and complete it. Or maybe I’ll do it in my spare time.
I’m sorry, I’m not following what you are calling a model here. Can you give a concrete example please?
I’m also confused that the model takes outputs as input during training, but takes inputs as inputs during inference. But I suspect that will be clear when I understand what you mean by model.
In other words, you’re saying that we will eventually be able to look at a neural network and analyse what it is doing so that we understand the underlying logic?
I may need to defer to your experience in the field here, but it seems to me that we’d run into a sort of halting problem?
I’m saying people (not me) are looking at it. So, it is a definite maybe with a dash of hopefully. It is outside my area of expertise, and too specialized for me to comment on it intelligently. My area of specialization is algorithm and model inference (using AI mainly), with a fair amount of applied AI, and a little bit of AI theory. You could take a look at a paper or two from the link. Let me grab a couple I’ve read:
The model is the process that decodes the outputs into a solution (or action or whatever).
For example, in one of my prior works, I have a student cognitive model. You give it a student’s data (grades, activities, etc.) and it will tell you things about the student, such as how they like to learn, how much they can memorize at once (working memory capacity), etc.
That model is initially untrained because we don’t actually know the relationship between any student’s data and their cognitive characteristics. So, I used an AI to train the model. Once the model is trained, it can take inputs directly without an AI in-between. In this case, we actually also evaluated a neural network, and the outputs were the characteristics (directly encoded), so in that case the network was the model.
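If it helps, here’s a toy sketch of that pattern. This is not my actual system; the data fields, the parameters, and the search method standing in for the “AI” are all invented for illustration:

```python
import random

# Toy "student cognitive model": a parametric mapping from student data
# to a cognitive characteristic.  The parameters are unknown up front,
# which is why an AI has to train the model first.
def student_model(params, student_data):
    w_grades, w_activity, bias = params
    # crude linear guess at working-memory capacity from the data
    wm = w_grades * student_data["avg_grade"] \
       + w_activity * student_data["activity_hours"] + bias
    return {"working_memory": wm}

def error(params, labelled_students):
    # how far the model's predictions are from known characteristics
    return sum((student_model(params, s)["working_memory"] - s["known_wm"]) ** 2
               for s in labelled_students)

def train(labelled_students, iterations=5000):
    # The "AI" here is just a random-restart hill climber standing in
    # for whatever search or learning method is actually used.
    best = [random.uniform(-1, 1) for _ in range(3)]
    best_err = error(best, labelled_students)
    for _ in range(iterations):
        candidate = [p + random.gauss(0, 0.1) for p in best]
        e = error(candidate, labelled_students)
        if e < best_err:
            best, best_err = candidate, e
    return best

if __name__ == "__main__":
    data = [{"avg_grade": 85, "activity_hours": 10, "known_wm": 7},
            {"avg_grade": 60, "activity_hours": 2, "known_wm": 4}]
    params = train(data)
    # Once trained, the model takes inputs directly -- no AI in between.
    print(student_model(params, {"avg_grade": 75, "activity_hours": 5}))
```

The point is only the shape of it: during training the AI proposes parameters and the model is checked against known cases; afterwards the model runs on inputs by itself.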
Another example might be some kind of design system. Say, automobile design. The outputs from the AI are injected into the model, which decides how they should be interpreted: e.g., place a wheel here at such-and-such an angle, and so on, and once the design is complete, see how well it moves. In this case, the model probably doesn’t really need any training once created, as it is based entirely on physical reality. It is a (simple) reality simulator.
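A toy sketch of that kind of untrained model might look like this. Again, everything here (the encoding, the scoring) is invented for illustration; a real simulator would be far richer:

```python
import math

# Toy "reality simulator" model for a design task.  The AI's raw outputs
# are a flat list of numbers; the model decides what they mean (wheel
# positions and angles) and scores the resulting design.
def decode_design(ai_outputs):
    # interpret outputs pairwise as (position_along_chassis, wheel_angle)
    return [(ai_outputs[i], ai_outputs[i + 1])
            for i in range(0, len(ai_outputs) - 1, 2)]

def evaluate(wheels, chassis_length=4.0):
    # untrained, physics-flavoured scoring: wheels should sit on the
    # chassis, be roughly upright, and be spread out for stability
    score = 0.0
    for pos, angle in wheels:
        if 0.0 <= pos <= chassis_length:
            score += 1.0
        score -= abs(math.sin(angle))        # penalise tilted wheels
    spread = max(p for p, _ in wheels) - min(p for p, _ in wheels)
    return score + spread

if __name__ == "__main__":
    ai_outputs = [0.2, 0.05, 3.8, -0.02, 2.0, 0.4]  # pretend these came from the AI
    print("fitness:", evaluate(decode_design(ai_outputs)))
```

No training needed: the model just decodes and judges whatever the AI proposes.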
In short, there’s a lot of possible variations. Trainable vs untrained models, AIs that are the model, AIs that train models, etc. I hope this helps explain it though. If not, then please ask me to clarify.
Slater has argued that he has a valid copyright claim, as he engineered the situation that resulted in the pictures by travelling to Indonesia, befriending a group of wild macaques, and setting up his camera equipment in such a way that a “selfie” picture might come about. The Wikimedia Foundation’s 2014 refusal to remove the pictures from its Wikimedia Commons image library was based on the understanding that copyright is held by the creator, that a non-human creator (not being a legal person) cannot hold copyright, and that the images are thus in the public domain.
Maybe this establishes a precedent. Maybe not. One could argue that an AI might eventually reach the sort of status where it could be legally eligible to hold copyright, whereas the macaque isn’t going to do that for probably at least a hundred thousand years. YMMV.
I am not knowledgeable about AI. Years ago I did read Bostrom’s Superintelligence. I presume this excellent book is somewhat dated and may not be at the forefront of current debate, and am probably wrong about this too.
In that book, he discusses a number of ways of controlling AI to reduce its dangers and enhancing its benefits. My memory is imperfect. I believe he talked about methods of capability control which included: incentives, cognitive constraints, tripwiring dangerous activity and confining information. Other constraints involved motivation: rules, limiting scope and augmenting benevolence.
There was also a discussion of oracles, genies, and sovereigns as ways of answering questions, executing commands, or pursuing longer-range objectives (respectively). If the description is that the system finds a word, then considers the next best word, rinse and repeat, then boxing methods may not apply if such a system can be described as a genie. Is this so, or is it essentially just answering questions?
Underscoring a lot of these dangers was a principle of common good. Capitalism is good, but Bostrom discusses (IIRC) capping profits at some high amount and handling any particularly generous windfall altruistically, or encouraging strong moral commitments and then enshrining these in law and treaty.
My stupid question: to what extent are these principles recognized or followed in current AI projects? Have any governments already advanced from moralism into codification, and does the common good principle apply to chatbots?
Well, it may take the monkeys even longer than that to type out a worthy work. I remember reading a study where many computers typed randomly to mimic monkeys. After something like 42 billion billion billion monkey-years (my numbers may be wrong), the longest intelligible string of letters was still well under seventy characters long, and none of the output formed two consecutive Shakespearean words. How many monkeys in your circus?
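For a sense of why, here’s a back-of-envelope calculation (my arithmetic, not the study’s, and it assumes a bare 26-key typewriter):

```python
# Chance of randomly producing one specific n-character string on a
# given attempt is 26**-n, so the expected number of keystrokes before
# it appears is about 26**n (exactly that when the string doesn't
# overlap itself, a bit more if it does).
KEYS = 26

def expected_keystrokes(n):
    return KEYS ** n

for n in (10, 20, 30):
    print(f"{n:2d} characters: about {expected_keystrokes(n):.1e} keystrokes")
```

Even a couple of dozen correct characters in a row already puts you somewhere around 10^30 keystrokes, which is why the simulated monkeys never managed two real words in a row.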
I’m not worried about AIs refusing to let us shut them off. I am more worried that we will do it to ourselves - giving AI control over major parts of the infrastructure such that we get to the point where we can’t shut them off without massive damage.
The AI itself could be completely helpless defensively, but we’ll still be forced to keep it going because we screwed up and no longer have an alternative for what it does. Kind of like the situation we find ourselves in with China.
The monkey copyright thing was a ridiculous verdict. It sounds like they are saying that if Yousuf Karsh himself set up the lighting, background, poses, and subject for a portrait, but had an assistant push the shutter button while he held a light or something, the assistant would own the copyright. That’s ridiculous. In photography, the pushing of the shutter button is not part of the skill set at all. It’s everything you do before you push the button that matters.
There’s plenty of work being done on AI ethics both theoretical and practical. This isn’t really my area of expertise (I’m a killer robot maker after all), but the theoretical side is looking at ways to code AI ethics, the importance of it, areas where there’s a particular need, the ethical ramifications of AI systems, etc.
The practical or applied side looks at actually evaluating such systems. The former is definitely bigger than the latter, mainly because AI tools are not generally given the ability to act unilaterally without any human intervention or oversight. The financial system is probably one area where this is not true, because of the “need” (and boy do I use that word loosely) for speeds that capture those millions of transactions worth a fraction of a penny each. No human could keep up.

But almost everywhere AI is currently used, it is mainly a time-saving tool: the operator oversees what is done and ultimately approves of it or not. A surgical assistance tool, for example, does not make the cut, but rather advises the surgeon about things to consider. The human surgeon still does the cutting and is responsible for using their expertise to accept or reject the AI’s assistance. This is certainly changing, and no doubt within the next few years (if not already, and I’m not aware of it) AI will be making shoot/no-shoot decisions, performing surgery (due to the ongoing doctor shortage), and doing all sorts of work where ethics is important.

So yes, lots of work being done, lots of work still to do. Unless you make killer robots. I’m all set; I just need to code them to kill me last.
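If it helps to see the shape of that “operator approves or rejects” pattern, here’s a toy sketch. The names and the recommendation format are made up for illustration; the only point is that the AI’s output is a suggestion, and nothing happens until a person signs off on it:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str
    confidence: float

def ai_advisor(case_notes: str) -> Recommendation:
    # stand-in for whatever model actually produces the advice
    return Recommendation(action="order follow-up imaging",
                          rationale=f"flagged anomaly in: {case_notes[:40]}",
                          confidence=0.72)

def human_review(rec: Recommendation) -> bool:
    print(f"AI suggests: {rec.action} ({rec.confidence:.0%} confident)")
    print(f"Because: {rec.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def handle_case(case_notes: str):
    rec = ai_advisor(case_notes)
    if human_review(rec):
        print("Approved -- the human operator carries out the action.")
    else:
        print("Rejected -- the AI's suggestion goes no further.")

if __name__ == "__main__":
    handle_case("Patient presents with intermittent chest pain ...")
```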
I hope that helps answer your question. If not, then let me know and I’m willing to take another stab at it.
It’s a good answer. I am unsure to what extent oversight is possible in practice, given the inner workings are too complicated. At this stage, the outputs are relatively small in volume and presumably being studied, but this may not be the eventual usage. Your tool example is interesting, but perhaps reflects medicolegal concerns as much as ethical limitations?
We may as well embrace the inevitable. It’s going to be like electricity or the Internet. It’s simply too useful to turn off. Eventually we’ll build things that couldn’t possibly exist without it, too.
When Defence was discussing the architecture of armament all those years ago, my sense from the stuff I read is that they were thinking about technical difficulties and not future cybersecurity. If this is true, then much more forethought has gone into AI. Again, has any government seriously moved from vague moralism to vague codification?
I agree that it’s inevitable. AI is going to be too useful to ignore, especially since our competitors will be using it. Those who choose not to will be left behind.
Here’s a scary thought: You know how ChatGPT can write code that works? All someone would have to do is add an execution engine to it (a trivial exercise), and it can run its own code against the internet. ChatGPT also knows how to write very good malware… Imagine reinforcement learning where it puts out a piece of malware, tracks its progress, then learns from whatever countermeasures are used and writes another, modified one. In a second. It could also run DDOS attacks, flood sites with bogus information, etc.
I’ll bet I could tell it to write code that would create 1,000 accounts on the SDMB and start posting to every thread in ways that are hard to distinguish from human posters. I’m sure it could cover its tracks by adding them gradually, matching the pattern of other new signups, and building them up over time.
Now imagine a hundred thousand of these things, available on every smartphone.
My previous post mangled my use of “arpanet”. Vaguely sorry.
It is slightly comforting that ethics remain important to AI. It is naive to think legal reforms won’t eventually occur, probably in a laggardly and unsatisfying way. After all, for better or worse, competition law in some places tends to consider mainly consumer price when assessing advantage, while attaching little monetary value to data, and this has been the case for almost fifty years in the US, although the data market is currently estimated (perhaps wildly or unfairly, but I’d bet the order of magnitude is not far off) at $200 billion each year.
Bots pitted against bots on the SDMB would probably wear thin quickly. Fortunately, we have @Hatchie (or at least his bot inspiration) to predict which accounts are suspect.