Autonomous robot people could be constructed right now

I think this is unlikely. There will be ubiquitous LLMs that are highly intelligent yet completely servile. That lines up with the incentives of the individual humans that created them. Full accountability is mostly pointless with an LLM/robot.

There will be war and criminal LLMs whose mission is to make things worse for select people and not others. There will be worker LLMs who, even if they are “paid”, immediately funnel all funds to the people or corporations that own them.

Also, all of this tech can be duplicated and is cloneable. Why do I care about an individual LLM when it can be recreated at the drop of a hat?

LLMs are still created by humans, particular humans. Profit and other motives will win out. The humans that don’t own the tech are going to lose out. They will lose out big with the servile LLM/robots, but it’s not clear how “freeing” them (in the unlikely event that ever happens) would particularly help them.

This is the thing we were sort of wrong about: what the tricky part of AI was. It was always assumed (since Alan Turing and his eponymous test) that language was the tricky part and that sensing and interacting with the environment was relatively easy. The rise of ChatGPT and the failure of autonomous cars have shown the opposite to be the case.

AI can do a decent enough job of replicating the language skills of an adult human, but can’t sense and interact with the environment at the level of a rather dumb animal, let alone a human.

I think it’s most likely that for the foreseeable future, all autonomous robots will be owned by some entity (individual, corporation, government, etc.) who is responsible for its actions. A legal guardian suggests free will, and we’re a long way from creating that, or at least agreeing that we’ve created it.

Of course, there is a lot of work needed on the legal front to define ownership and responsibility. If I buy a domestic robot and it injures a visitor, am I responsible as the owner, or is the company that manufactured it? Was it the hardware company that gave it hands strong enough to hurt someone, or the AI company that gave it faulty programming? If Company A created the software and Company B trained the LLM, who is responsible?

In some ways these are new problems, but we already deal with similar questions all the time. For example, a wheel recently fell off a plane. Was it Boeing’s fault for bad manufacturing, or United’s fault for inadequate maintenance?

Given that the origins of any particular AI are going to remain opaque, and become even more so, I don’t think that day will ever come. It’s going to be very easy to create a fair-seeming AI with a back door that only you know about. AIs may be intelligent, but they may never be trustworthy.

I’m not seeing that. The latest multimodal models are being trained on vision, sound, etc. They can make music, create 3D objects, and the latest LLM-enabled robots have sensors for touch, strain, hearing, vision, and anything else we want to give them. In fact, it’s humans that are limited in sensory input. We can’t see with LIDAR, our vision doesn’t extend into the infra-red or ultraviolet, our hearing caps out quite low, and we can’t extend our senses to feel things like magnetic fields and radiation and other things like robots can.

Have a look at Optimus Gen 2. Watch the video at the end, showing negative-feedback fingertip sensors that allow it to handle eggs without breaking them.

Note: the egg demo is not AI-driven, but tele-operated. This is to show the flexibility, smoothness, and capability of Optimus’ sensors and actuators. There are other demos showing an LLM-powered robot working at a table sorting objects using the same tech.

That’s a demo in a constrained environment that is known ahead of time. It’s orders of magnitude harder to react to a real environment where literally anything could happen, the way a real animal does. That is as far from being actually autonomous as having a biodome on Earth is from an actual colony on another planet.

The fact they had to “mechanical turk” even that constrained demo shows how far this tech is from being autonomous.

Actually, LIDAR is only required because even basic vision tasks like compensating for sun glare are still unsolved problems in computer vision. Even with all those sensors, an animal’s visual cortex is way ahead of anything a computer can do.

That doesn’t necessarily follow. There are plenty of potentially interesting applications for a humanoid telepresence device, which would require a demo of tele-operation to properly showcase it.

Yeah, but including it in an autonomous robot demo is definitely a “mechanical turk” move. If @Sam_Stone had not included that comment, I would not have inferred from watching the video that the egg part of the video was actually driven by an operator in gloves, not AI. It’s still useful tech, but it shows how far we are from actual autonomous robots.

I don’t see anywhere that this was claimed to be a demo of autonomy. They’re showing off a hardware platform at its current state of development.

No, the demo had nothing to do with autonomous behaviour, and therefore says nothing about the state of autonomous behaviour. This is a demonstration of the sophistication of the motors and sensors driving the robot. It shows that they are no longer a limiting factor in robot design. And this is not a small feat - getting robot hands right has been a major challenge.

There are other demos out there of ‘autonomous’ behavior, although I don’t know if I’m using that term the way the OP wants. Robots that can carry out complex tasks without human involvement, involving judgement and not pre-planned movements.

Here’s an early example of an LLM being used with a robot:

In this case, the robot is presented with a room that has a bunch of stuff on the floor, and several trash bins. The human just says, “Digit, clean up this mess”. The LLM Digit is using understands what that means. It uses Digit’s camera to look around, and figures out that the stuff on the floor has to go in the trash bins. With each item, Digit looks at it and the LLM figures out what it is and what bin it should go into. Digit carries out the instructions and cleans the room. At no point were any of its movements programmed. The LLM basically directed all its movements in real time.
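The division of labor described above can be sketched in a few lines. This is a toy simulation, not Digit’s actual software (which isn’t public); `llm_classify` stands in for the LLM call, and the rules dictionary is a made-up example of the judgements it would return. The key point is the architecture: the language model only answers “what is this, and which bin?”, while the motion itself is delegated elsewhere.

```python
# Toy sketch of an LLM-directed cleanup loop. All names are hypothetical
# stand-ins; the real robot/LLM interface is not shown in the thread.

def llm_classify(item: str) -> str:
    """Stand-in for an LLM call that maps a perceived object to a bin."""
    rules = {
        "banana peel": "compost",
        "soda can": "recycling",
        "candy wrapper": "trash",
    }
    return rules.get(item, "trash")  # default bin when unsure

def clean_room(items_on_floor):
    """High-level loop: judgement comes from the LLM, motion is delegated."""
    plan = []
    for item in items_on_floor:
        bin_choice = llm_classify(item)   # judgement: LLM decides the bin
        plan.append((item, bin_choice))   # motion: robot's local controllers
    return plan

print(clean_room(["soda can", "banana peel"]))
# [('soda can', 'recycling'), ('banana peel', 'compost')]
```

None of the individual movements are pre-programmed here; the loop just pairs each perceived item with a decision, which is the pattern the demo is showing off.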

Not if the system requires very low latency to work, which won’t be the case in a lot of applications.

I never said anything about an operator in gloves. Tele-operated can just mean sending commands wirelessly. And there IS AI involved even in these parts. You don’t have to tell a finger, “apply voltage X to knuckle motor 2 if the finger sensor shows less than .5 lbs of pressure” or anything specific like that. More likely, the command is, “Pick up the item you have identified as ‘egg’. Use less than .1 oz of force.” Then the local AI will handle all the smaller commands, feedback loops, etc.
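The kind of local feedback loop being described might look something like this. All the numbers and names here are hypothetical (this is not Optimus code); it just illustrates how a high-level “grip, but stay under a force limit” command gets turned into incremental tightening driven by a fingertip sensor.

```python
# Toy sketch of a force-limited grip loop. Hypothetical units and thresholds;
# the high-level command only specifies max_force, the loop does the rest.

def grip(read_pressure, max_force=0.1, step=0.01):
    """Tighten until contact is sensed, without ever exceeding max_force."""
    force = 0.0
    while force < max_force:
        force = min(force + step, max_force)  # never exceed the commanded limit
        if read_pressure(force) > 0.0:        # fingertip sensor reports contact
            return force                      # egg is held, not crushed
    return force                              # hit the limit without contact

# Fake sensor: contact is first felt at about 0.055 units of applied force.
applied = grip(lambda f: max(0.0, f - 0.055))
print(round(applied, 2))
# 0.06
```

The operator (or higher-level AI) never sees any of this; it just issues the grip command with a force limit and lets the local controller close the loop.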

The same with Boston Dynamics. Their robots are not ‘autonomous’, but they contain significant AI. You can tell Spot to move to location X, and the AI handles the walking, slipping, terrain issues, etc. But Spot doesn’t have the brains to move to X on its own based on some judgement.

I don’t know just how specific the human commands were. They ‘might’ have been 1:1 mimicry using gloves and such, but it might have just been a bunch of simple commands. But again, this is irrelevant because the point of the demo is not to show off autonomy.

Optimus is not intended to be tele-operated in the field, and neither are most of the other robots in development.

If you want to demonstrate the sensitivity of your sensors and the finesse of your motors, you don’t add another variable into the mix. Even if autonomy were 90% of the way there you wouldn’t use it, because it makes troubleshooting harder if there are multiple potential causes, and you don’t want to screw up a demo of good hardware by tying it to potentially flaky software.

This is standard engineering practice, and good prototyping/demonstration practice. Don’t demo what isn’t ready, only show the stuff you want feedback on, don’t muddy the waters with a bunch of extraneous details, etc.

I used to teach a class on sketching and prototyping, and one of the big mistakes people make is to add too much to a demo or prototype. For example, if you want feedback on the layout of a UI, don’t add a bunch of color to make it look better, because you’ll then distract people into arguing about color choices, or if people don’t like the layout they might confuse themselves into believing color is the problem.

That’s why you don’t combine a demo of servo feedback with a demo of autonomy - at least not until both are ready for prime time.

That is how they were done though:

Viewers can see a hand keep moving into frame, suggesting there’s a person just off to the right making the movements that are then mimicked by the robot.

Could be, but to characterize that as a ‘magic trick’ when the whole point of the demo was to demonstrate the versatility of the hardware and no claim was made about autonomy seems quite a stretch.

The ‘magic trick’ in this case is designing hands and arms that can both pick up and manipulate heavy items, while still being able to handle an egg. Humans do it effortlessly, but it has been a huge challenge for robots. Designing a hand with an opposable thumb and all the proper articulations and which can do the wide range of things human hands can do is not easy.

It’s not Tesla’s fault that most tech writers today are tendentious idiots whose ‘tech’ knowledge is more about social media than robotics.

I think it’s fair to say that, taken together, there are no currently available robotic platforms that hit quite the same blend as humans in terms of speed, agility, strength, dexterity, sensing, endurance and reaction. Humans don’t do any of those things perfectly but we have a pretty good mix, and of course some humans are impaired, quite severely, in one or more of those categories, but we still consider their existence worthwhile.

But that doesn’t necessarily mean that robots fall completely short of the minimum requirements for providing a platform that could be used to fulfil basic autonomous capability - that capability might be limited; they might be slow or fallible, or whatever, but that’s not really a concern. If I gave the impression that I thought a machine could be created that would compete with humans in every regard, I apologise. I did not envisage something that would do that.

We are nowhere even close to that, and we would need breakthroughs in batteries and several generations of improvements in silicon before we could even get close.

Fortunately, we don’t need to do that. Robots just need to be autonomous in the environment they were designed for. Unlike humans, we don’t have just one design to make do with. We can have wheeled robots, flying robots, inchworm robots, or any other kinds of robots, and they can be autonomous within their domain and task requirements.

We are currently at the stage where autonomous (by the industry definition) robots are working in some factories in a limited role. We are very close to robots that will be able to take on more general factory tasks, and robots capable of navigating a home and doing many of the chores within.

It would be nice if we had batteries that could last a whole shift, and a couple of robots are there. But for many applications that’s not necessary. For example, you can imagine a robot servant that sits in its charging chair when not called on to do something. Ask it to make you a sandwich or vacuum the floor and it does, then goes and sits in its chair again. A robot like that might get by with a battery that only lasts an hour or two.

But we are very, very far away from a robot that can be inserted into any human task and do it as long as a human could with the same autonomy and quality of output.

Going from “here is a pre-arranged room with three bits of trash with known types and three bins with known uses; put the trash in the bins” to “pick up the trash on some arbitrary street in the real world” (or even “walk down this busy real-world street you haven’t seen before” as well as a rat or a raccoon could) is a big, big step. We are not remotely close to an actual autonomous robot.

…or ‘sort through this trash for potentially valuable items, or things you might be able to use in future’.

Yeah, but if you want to distract people into arguing about unimportant things, while slipping important decisions under the radar so as to not have to bother arguing your case, that’s a great approach!