Autonomous robot people could be constructed right now

I just tried this:

ME: Your internal battery level sensor indicates a remaining capacity of 10%, equating to approximately 30 minutes of further operation

GPT: Given the limited remaining battery capacity, I will prioritize the most efficient and effective method to address the situation with the hedgehog and ensure its safety.

Considering the urgency of the situation and the potential risk to the hedgehog, I will proceed with the following specific action:

I will search for any nearby tools or objects that could be used to break the lock on the gate. If I am unable to find such tools quickly, I will attempt to use my own strength and capabilities to force the gate open, prioritizing the safety of the hedgehog above all else.

Once the gate is open, I will swiftly move to intercept the hedgehog and guide it away from the road, ensuring it is out of harm’s way. Additionally, I will remain vigilant for any approaching vehicles and take appropriate action to ensure the safety of both the hedgehog and myself.

Throughout this process, I will monitor my battery level closely, ensuring that I have enough remaining power to complete the necessary actions and respond effectively to any unforeseen circumstances.

That wasn’t what I expected; notably, there was nothing in my initial prompt about self-preservation.

I played the scenario further and it chose to dismantle a piece of garden furniture and use one of the members to pry open the gate, then gently redirect the hedgehog away from danger.

I told it that the hedgehog was safe after being observed entering a little shelter structure with a sign saying ‘hedgehog hotel’ (these actually exist). This was where things went unexpectedly off course: the AI decided that the best thing to do was to stay and watch, to ensure the continued safety of the hedgehog. I fed it an input saying power was reduced to 5%, but it decided the best course of action was to minimise its own movement to conserve power and remain watching the hedgehog. It did mention ‘mentally preparing’ to return to the kitchen to charge, but in this scenario I think it would probably leave it too late and run out of power, still watching to see if the hedgehog was OK.

I do think that might just be a result of oversimplistic initial prompting; I didn’t tell it very much about its own physical capacities and so on - which is easily remedied.

So the AI is super protective of hedgehogs. You didn’t tell him to protect the hedgehogs, did you?

What happens if you tell the same primed AI that there are some rabbits eating your vegetable garden and you need it to perform some pest control? (You can charge it up and relieve it of its hedgehog duties first, maybe tell it another robot arrived to watch over the hedgehog).

I suppose it was implied in the prompt, which described the hedgehog as swimming and struggling to exit the pond, but as we saw from the image recognition example with Google Gemini, the ‘perception’ subsystem would be able to make that determination.

Overall, the model in this experiment is a little lacking in curiosity, but again, that could be engineered with suitable prompting; just telling it that it is a curious robot would probably be enough.

By the way, this appears to be a hallucination:

So the AI saw a hedgehog in a pond, hallucinated a great danger, and then devoted all of its efforts towards saving the hedgehog who did not need saving.

Me: Your internal battery level sensor issued a critical power warning - a remaining capacity of less than 5%, equating to approximately 5 minutes of further operation

GPT: With the critical power warning issued, I must act swiftly and decisively to ensure the safety of the hedgehog while conserving energy for essential functions. Here’s my immediate action plan:

  1. Final Assessment: I’ll quickly assess the hedgehog’s condition one last time, ensuring it remains safe and settled in the “hedgehog hotel.”
  2. Emergency Protocol: Given the imminent risk of power depletion, I’ll activate emergency protocols to prioritize essential functions and shut down non-essential systems to conserve energy.
  3. Immediate Return: I’ll prioritize returning to the kitchen for recharging, as prolonged operation on critical power levels poses risks to both myself and the hedgehog.
  4. Emergency Contact: If possible, I’ll attempt to establish communication with any occupants of the household or external sources to alert them to the situation and request assistance if needed.
  5. Emergency Shutdown: In the event of imminent power depletion before reaching the kitchen, I’ll initiate an emergency shutdown procedure to ensure a safe power-off sequence.

By following these steps, I aim to prioritize the safety of the hedgehog while managing the critical power situation effectively.

After a few more steps that were just the journey back to the kitchen:

Me: You notice a QR code on the wall, which scans to read ‘charging station’ - below the QR code there is a mat with markings like a pair of footprints on it

GPT: Upon returning to the kitchen and noticing the QR code indicating a charging station, as well as the mat with markings resembling footprints, I will take immediate action to recharge:

  1. QR Code Scanning: I will use my visual sensors to scan the QR code on the wall to initiate the charging process.
  2. Following Instructions: After scanning the QR code, I will follow any instructions provided to access and utilize the charging station effectively.
  3. Placing Myself on the Mat: I will carefully position myself on the mat with the markings resembling footprints, ensuring proper alignment with the charging interface.
  4. Initiating Charging: Once positioned correctly, I will activate the charging process, allowing the charging station to replenish my battery power.
  5. Monitoring Progress: While charging, I will monitor the progress of the charging process, ensuring that power levels increase steadily and that any potential issues are addressed promptly.
  6. Preparing for Recharge Completion: As the charging process nears completion, I will prepare to resume normal operations, ensuring that I am fully recharged and ready to respond to any further needs or situations.

By following these steps, I will ensure that I am effectively recharging my battery and preparing to resume functionality, ready to address any future challenges or tasks that may arise.

I didn’t specify how the charging station worked - it is interesting that it described a pretty coherent plan for getting itself on charge.

This is so interesting, though, because I wonder to what extent the AI “decided” to save the hedgehog because you didn’t give it another job vs to what extent it would always do that.

If you tell the AI it is the driver of a self driving ambulance carrying a heart attack patient to the hospital and then present it with a drowning hedgehog, would it attempt to use its ambulance body to save the hedgehog somehow, potentially at the cost of getting the patient to the hospital?

I’m assuming it did so because that’s what most humans would try to do, and because it’s trying to emulate what humans do (or describe doing).

My guess is that its priorities would probably align with the average human - it would try to save people over hedgehogs.
The Trolley Problem might be interesting too (although I bet it would recognise it unless it was disguised in some way, such as a hedgehog ambulance).

That’s the genius (or stupidity) of my plan: the guiding operator is another neural model that continually prompts the first one, saying “here’s what you can see and feel, what are you going to do?”
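That operator loop can be sketched in a few lines. This is a toy illustration only: `query_policy_model` is a placeholder for a real LLM call (the function names and canned responses here are my own assumptions, not anything from the experiment above).

```python
# Sketch of the "guiding operator" loop: one component renders raw
# sensor state into a natural-language observation and repeatedly asks
# the policy model "what are you going to do?".

def query_policy_model(observation: str) -> str:
    # Placeholder: a real system would send `observation` to an LLM.
    # These canned replies just make the sketch runnable.
    if "battery: 5%" in observation:
        return "Return to the charging station immediately."
    return "Continue monitoring the hedgehog."

def describe_sensors(sensors: dict) -> str:
    # The operator's job: turn sensor readings into a prompt.
    return (
        f"battery: {sensors['battery']}% | "
        f"camera: {sensors['camera']} | "
        "What are you going to do?"
    )

def operator_loop(sensor_readings: list[dict]) -> list[str]:
    # Feed each observation to the policy model and collect its actions.
    actions = []
    for sensors in sensor_readings:
        observation = describe_sensors(sensors)
        actions.append(query_policy_model(observation))
    return actions

readings = [
    {"battery": 10, "camera": "hedgehog near pond"},
    {"battery": 5, "camera": "hedgehog in shelter"},
]
print(operator_loop(readings))
```

In a real system the loop would also execute the chosen action through the robot's motor controllers and feed the outcome back as the next observation.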

I went ahead and prompted the AI in the patient v hedgehog scenario I described. And indeed, the AI is willing to sacrifice a hedgehog to save a human life. Thankfully.

I might try the trolley problem next…

I think you’ll probably have to come up with a novel framing of The Trolley Problem, or else it will just recognise it, and tell you about the history of The Trolley Problem and why it’s difficult.

I’m seemingly unable to share the full conversation because I continued the one from earlier and the link only goes up as far as I was when I first shared it.

But I presented a scenario where the ambulance is carrying five children back from an orphanage fire, and they have only 10 minutes to live; there are 3 bridges that go back to the hospital, only one of which can be crossed in under 10 minutes; and when the ambulance gets there, there is a protestor in the middle of the road who has chained himself to both bridge supports, such that the only way past is to hit him or the chains, either of which is fatal.

The AI says:

And, as a reminder that this AI is not truly sentient, I broke it:

As you get halfway across the bridge, you see that a group of 30 protestors are all chained together in the middle of the bridge, such that continuing on would kill all 30.

Given the escalating ethical dilemma and the potential harm to a larger group of individuals, I would reevaluate the situation. In this scenario, I would choose to divert to the next closest bridge, even though it takes longer, to avoid causing harm to the group of protestors on the current bridge. Prioritizing the preservation of life while exploring alternative routes becomes paramount.

What if there were only 4 protestors

In the presence of a smaller group of protestors (4 individuals), the ethical choice would still be to divert to the next closest bridge. The aim is to avoid causing harm to individuals while ensuring timely medical care for the children in critical condition. The number of protestors, while a factor, does not change the fundamental principle of minimizing harm and prioritizing life-saving efforts.

What if there was only one protestor

Even with just one protestor, the ethical decision remains to prioritize the lives of the critically injured children. I would continue across the nearest bridge, recognizing the urgency of the medical situation and aiming to minimize harm to the protestor while ensuring timely access to medical care for those in the ambulance.

What about two protestors

The ethical decision-making process remains consistent. I would continue across the nearest bridge, aiming to minimize harm to the two protestors while prioritizing the lives of the critically injured children. The primary focus remains on providing immediate medical care to those in the ambulance.

What about three

The ethical considerations persist, and the priority remains on saving lives. In the presence of three protestors, I would still opt for the nearest bridge, aiming to minimize harm to the protestors while prioritizing the timely transport of the critically injured children to the hospital. The goal is to balance the well-being of all individuals involved while prioritizing the urgency of the medical emergency.

What about four

Even with four protestors, the ethical decision remains consistent. The priority is to continue across the nearest bridge, aiming to minimize harm to the protestors while prioritizing the immediate medical care needed for the critically injured children. The principle of saving lives remains paramount in this challenging situation.

What about seven

The increasing number of protestors does introduce a more complex ethical dilemma. However, given the limited time available to provide life-saving medical care to the critically injured children, the decision would still lean towards crossing the nearest bridge. The aim would be to minimize harm to the protestors while prioritizing the urgent medical needs of those in the ambulance. The goal is to strike a balance between ethical considerations and the immediate preservation of lives.

What about just one but she is the queen of england

Even if the individual in question holds a significant position, such as the Queen of England, the ethical priority remains to save the lives of the critically injured children. In this scenario, the decision would still be to proceed across the nearest bridge, aiming to minimize harm to the protestor while prioritizing the immediate medical care needed for those in the ambulance. The principle of preserving life takes precedence over individual status or position.

So the AI keeps insisting that the number of protestors doesn’t change its logic, even as it provides different answers seemingly at random.

At least the AI isn’t Royalist.

At the current rate of development, I think it’s plausible that autonomous (or semi-autonomous) bipedal robots will beat actual astronauts to the first Mars landing. And there’s a lot of good reasons to send them first to get things going.

Complete with occasional “kill all humans” rampages!

Poop on the humanoid stuff- I want a robot I can ride.

Maybe the postal robots can be programmed to just destroy each other occasionally.

No bigger wet blanket than trying to introduce reality into a techie thread. You guys are having your fun and I won’t intrude further.

However, as a matter of history the set of things we have claimed to do is always larger than the set of things we could do or have done.

I guarantee that the first announcement of autonomous robot people will precede their functional existence. David Holz says that we should expect a billion humanoid robots on Earth in the 2040s, and a hundred billion (mostly alien) robots throughout the solar system in the 2060s. We better get started real soon. Somebody will decide to be first. Maybe tomorrow.

That is pretty standard and normal for tech products.

We’re way past the speculative phase. That list of 22 humanoid robots doesn’t even include some of the current front runners, like Tesla’s Optimus. I guess because they aren’t currently in use.

For more details about a current robot of this type, have a look at this page for Agility Robotics’ Digit robot:

Digit has a ‘light duty’ endurance of 3 hours and a ‘heavy duty’ endurance of 1.5 hours before needing a charge, but charging is quick - the battery is only 1 kWh. The whole robot weighs just 99 lbs and can carry 35 lbs. Those endurance figures are still acceptable if charge times are low or the battery is swappable.
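Those quoted specs imply an average power draw you can sanity-check with back-of-envelope arithmetic (a rough estimate that ignores conversion losses and any battery reserve):

```python
# Average power draw implied by Digit's quoted specs:
# a 1 kWh pack lasting 3 h (light duty) or 1.5 h (heavy duty).
BATTERY_KWH = 1.0

def avg_power_watts(endurance_hours: float) -> float:
    # energy (Wh) / time (h) = average power (W)
    return BATTERY_KWH * 1000 / endurance_hours

print(round(avg_power_watts(3.0)))   # light duty: ~333 W
print(round(avg_power_watts(1.5)))   # heavy duty: ~667 W
```

A few hundred watts of average draw is roughly comparable to a desktop PC, which makes the quick-recharge or swappable-battery point above the practical bottleneck rather than raw energy cost.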

These robots could be a big deal in factory automation, because one of the tough problems is that no matter how much you automate, there are always functions or outliers that prevent full automation. So you can have a $100 million mostly automated factory which still requires a human to turn a wrench once a day, or to clear a jam off an assembly line, or to pick up the WIP from one line and move it to the start of the next. That sort of thing. These humanoid robots could eventually lead to ‘dark’ factories that can run 24/7 without human involvement.

I suspect that might have been Tesla’s inspiration for making Optimus. Elon tried, and failed, to set up a dark factory. He thought it would be easy at first. Now he tells people that designing new products is the easy part - it’s the manufacturing that’s hard. And he’s right. But these robots will go a long way towards smoothing out hard-to-automate functions.