If you can't remember it, then it didn't matter?

It could be very difficult as a reverse engineering problem. But what if the mechanism were built into the design? It doesn’t have to be limited in what it can do, only in that it must do it in a way that can be undone.

ETA: And there’s also the part about pulling the plug. Use the same hardware, but reload it with a different OS.

You’re still assuming that it’s possible to build a meaningfully intelligent agent with a system that allows this. My argument is that this may not be possible (or might be possible but intractable in practice). I think that is the more probable scenario.
ETA: I know I’m just sounding contrary. I’m a bit busy right now, I’ll elaborate on the whys later.

It’s ok. I’m interested in your reasoning. I take the approach that sentience is possible now with conventional technology (in capability, if not in speed and memory capacity), and that it’s just a matter of software. Not necessarily an efficient approach, but whatever can be done with better hardware could still be simulated with a Turing machine. And given a long enough tape, a human could be simulated with a Turing machine.

But you bring up a point. Is there a difference between a machine which can be selectively erased and one that can’t? After all, we’re just machines of the latter type.

No, to accept AI being possible at this point you pretty much have to accept that a Turing machine (or more precisely, a computer with a von Neumann architecture) can simulate the human brain. The trick is that even with current “old news” technology, just assuming a modern computer, there are a bazillion ways to go about it (in theory), and whether easily erasable, traceable memory and behavior is possible in a sentient computer depends heavily on which AI models you think can, in principle, be sentient.

Part of the problem is Bonini’s Paradox, which essentially says that the more complex your model or system gets, the more incomprehensible it becomes. Another potential problem is the Frame Problem, and more specifically the relevance problem: building a detector to determine which memories are relevant to a specific heinous situation is considered intractable right now. Not to mention that if you could build such a detector, you could probably use it to kill-switch the robot before said heinous act anyway.
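To make that concrete, here’s a toy sketch (all names invented, purely for illustration) of what a naive relevance detector would look like. The loop is the easy part; the relevance test itself is the piece nobody knows how to write tractably:

```python
# Toy sketch of a naive "relevance detector" (invented names, not any real
# system). The scan is trivial; relevant_to() would have to consider every
# way the event *could* bear on every stored trace, which is exactly what
# the frame/relevance problem says we can't bound.

def naive_relevant_memories(memories, event, relevant_to):
    """Scan every stored trace and keep the ones the event touches."""
    flagged = []
    for trace in memories:             # potentially billions of traces
        if relevant_to(trace, event):  # no known tractable way to write this
            flagged.append(trace)
    return flagged
```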

Like I said, I can go into a lot more detail later, but the gist is that even if you assume current technology, your specific philosophy of AI and the methods you use will inevitably determine whether such a solution is even possible.

I think it is, and apparently I started it, which I’d forgotten…oh irony, you are a cruel mistress… :smiley:

Thanks for the answers everyone, reading with interest.

Total agreement; I’m another “Strong AI” proponent.

Heck, a group at MIT showed that you can build a working computer out of Tinkertoys! They built a working contraption that resolved “Tic Tac Toe” game configurations. It could certainly be scaled up to emulate a modern Pentium chip – just a hell of a lot slower, and taking up more space!
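(For the curious, the computation itself is tiny; here’s a rough minimax sketch of the sort of thing the Tinkertoy machine does mechanically with sticks and spools. My own toy code, not the MIT design.)

```python
# Minimal minimax for tic-tac-toe. Board is a 9-element list of 'X', 'O',
# or None, indexed 0-8 left-to-right, top-to-bottom.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from X's point of view: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None
    best = None
    for i in moves:
        board[i] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[i] = None
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, i)
    return best

# Example: X to move and can win immediately at square 2.
board = ['X', 'X', None,
         'O', 'O', None,
         None, None, None]
print(minimax(board, 'X'))   # -> (1, 2)
```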

I’ve never been convinced that there is anything in the human brain that cannot be emulated with existing electronic components. (Roger Penrose says otherwise, but I think he’s full of hoss poop.)

Okay, to avoid this post taking 17 pages of text, I’ll illustrate the trivial case:

We make a physics simulation that perfectly recreates the physics of a human brain, complete with a system for mapping certain relevant bits of I/O (camera to sight, etc.) onto the correct percepts – at the physical level.

This wouldn’t be satisfying from an AI perspective, since we’ve learned nothing about intelligence, but it IS a computer that can think exactly as well as a human (imo it’s the trivial case of Strong AI). We likely would not be able to nuke this system’s memories in any meaningfully targeted way.
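To show what I mean by “no meaningfully targeted way,” here’s a deliberately crude caricature (invented names, obviously not a real simulator): the emulation’s entire state is one undifferentiated blob of physical variables, and nothing in it is labeled “the memory of event X.”

```python
import numpy as np

# Crude caricature of the "trivial case" (all names invented): the whole
# emulation is one flat vector of physical variables advanced by a physics
# step. Nothing in the state is labeled "memory".

class BrainEmulation:
    def __init__(self, n_state_vars, seed=0):
        rng = np.random.default_rng(seed)
        # every membrane potential, ion concentration, etc., flattened together
        self.state = rng.standard_normal(n_state_vars)

    def step(self, sensory_input):
        # stand-in for the real physics update, which would apply physical law
        # to the full state, with camera pixels etc. injected at the sensors
        self.state = np.tanh(self.state + sensory_input)

    def delete_memory_of(self, event):
        # The point: there is no such operation. No identifiable subset of
        # self.state corresponds to "the memory of event"; whatever trace the
        # event left is smeared across everything.
        raise NotImplementedError("no targeted erase exists at this level")
```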

If this doesn’t make clear what I mean, I can start going into the non-trivial, down-the-AI-rabbit-hole cases, but we’ll start with that.

I love the example… But I’m lost as to where you’re going with it. What point, exactly, are you trying to make with it?

If we had such a construct – a working emulation of a human brain – and we did horrible things to it, then reset it to its state prior to our actions, it would not “remember” the horrible things. I guess we would all agree that, given the emulation, we could do this.
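(The whole-state version of this is trivially easy in software, even if targeted erasure isn’t. A hedged sketch, with made-up names, of the checkpoint-and-rollback I’m imagining:)

```python
import copy

# Whole-state checkpoint and rollback: the blunt instrument that IS easy
# for a simulation, even when targeted erasure isn't. "emulation_state"
# stands for whatever blob of variables the emulation carries.

def checkpoint(emulation_state):
    """Copy the entire state -- every variable, no selectivity."""
    return copy.deepcopy(emulation_state)

def rollback(snapshot):
    """Return a fresh copy of the saved state, as if the interval never happened."""
    return copy.deepcopy(snapshot)

# before = checkpoint(state)
# ... do horrible things to the emulation ...
# state = rollback(before)   # it retains no trace of the interval
```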

To my way of thinking, the suffering is gone. It happened – and that imparts moral shame to the people who did it. But there is no victim! The emulation has no memory of the experience. To the world at large, it is indistinguishable from never having happened at all. There is no objective test by which the wrongdoing can be established. It’s a “perfect crime,” with no traces.

Suppose I break into your house. But I don’t take anything, don’t do any harm, don’t leave so much as a fingerprint. You never learn of it. The neighbors don’t see it happen. Your home security system is never triggered; no camera takes my picture.

I’m the only one who knows… And I have a heart-attack and die…

Did a crime happen? How, exactly, does your epistemology establish this?

Is the distinction here one of omniscience vs. limited knowledge? Are we purporting some kind of “God’s Eye View,” in which, under “absolute objectivity,” an event happened? In contrast, am I depending too much on an interpretation based on subjective experience?

There are thousands of historical events which we will never, ever learn of. Spies in WWII. Clever thefts and lucky murders. Novels written but then burned before anyone could read them.

What does it mean to say “these things happened?” What does it mean to say “they didn’t?”

Actually, I was trying to make the point that it was a system so complex that we couldn’t delete the memories, because I assumed it would be intractable to store that much state information :p.
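Rough back-of-the-envelope on why I assumed that, with very round numbers and not meant as a serious estimate:

```python
# Very rough order-of-magnitude arithmetic (round numbers, not a serious
# estimate). Ignore molecular detail and store just one number per synapse:
synapses = 1e14           # on the order of 10^14 synapses in a human brain
bytes_per_synapse = 8     # a single 8-byte value each (very generous to us)

petabytes = synapses * bytes_per_synapse / 1e15
print(f"~{petabytes:.1f} PB per snapshot")   # ~0.8 PB per checkpoint
# ...and a physics-level emulation would carry far more state than one
# number per synapse, for every checkpoint you wanted to keep.
```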

I’m not sure where I stand on the issue. My intuition about actual torture is that there are always side effects. If you torture someone and remove the memories, you’re still potentially corrupting the perpetrator into someone who likes torture or thinks it’s okay. If you do horrible things to a person or a computerized brain, there are still side effects on the body. I’m not sure where I stand on purely psychological torture, though.

As for the breaking-into-the-house example, I think it’s bad, but only mildly bad. My argument is one of chance more than anything: if somebody HAD come home and discovered you, it likely would have caused them an immense feeling of unease. I think that’s what makes it an immoral act, though as stated it’s victimless. (I suppose if you want to be super nitpicky, there was wear-and-tear damage to their floor and the point of entry.)
Anyway, my posts in this thread were mostly to say to TriPolar that I wasn’t convinced we could practically make an AI where we could just delete a memory related to an event. I still don’t think “just save state and reset it” is something that will ever really happen, but who knows.

Oops! I took away exactly the opposite; apologies.

If it helps any… I feel morally uncomfortable torturing characters in video games, or writing fiction in which seriously horrible things happen to entirely fictional characters. My empathy threshold is crossed, even for “persons” who exist only in my imagination.

(And horror movies are right out!)

So, FWIW, “It didn’t happen” isn’t a sure-fire defense against a moral reaction.

Even if you erase the memory of the act, the act itself still had consequences, including the consequence of erasing someone’s memory. We don’t remember the dinosaurs. That doesn’t mean they didn’t exist.

But we have evidence of the dinosaurs… As I understood it, the question involved events for which there is no evidence. The memory is erased, and the guys who did it aren’t talking. Where is the victim?

If we show the victim videotape of his torture, even though he doesn’t remember it, he now sees evidence of it having happened, and has the right to resent it.

If no videotape was made?

It’s another approach to Cartesian doubt: how do you know you’re not dreaming, or insane, or in a sim? How do you know you weren’t a super secret agent on a Mars mission…and then made to forget?