Downloading Your Consciousness Just Before Death.

I work with them too and of course we choose specific sequences because we know that we can map our problem onto that sequence of transformations and gain value. And yes, when we run a program we have an intended interpretation, absolutely.

For example, I once wrote a simulation of a distribution center and even though the output at each time slice was just a bunch of 1’s and 0’s, when I interpreted the set of 1’s and 0’s in just the right way, I could see a bunch of dots on the screen that I happened to know represented cartons on the conveyor and people moving stuff around etc. (even though the dots only kind of looked like the things I knew they represented).

But that doesn’t eliminate other interpretations that could be applied. Which means one of the following:
1 - All sequences of 1’s and 0’s represent consciousness
2 - No sequences of 1’s and 0’s represent consciousness (it’s created by something else)
3 - The act of interpreting 1’s and 0’s creates consciousness
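Just to make the “same bits, different readings” point concrete, here’s a quick sketch (purely illustrative, not taken from my actual simulation) of one 32-bit pattern read three different ways:

```python
import struct

# One 32-bit pattern, read under three different interpretations.
# Nothing in the bits themselves picks out one reading over another.
bits = 0x42290000
raw = bits.to_bytes(4, byteorder="big")

as_unsigned_int = int.from_bytes(raw, byteorder="big")  # 1109983232
as_float = struct.unpack(">f", raw)[0]                  # 42.25
as_text = raw.decode("latin-1")                         # 'B)' followed by two NUL bytes

print(as_unsigned_int, as_float, repr(as_text))
```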

The argument isn’t that we can’t create programs that mimic the learning and problem solving that humans do to interact with their environment. Nobody is arguing that can’t be done (I’ve created artificial life simulations where the creatures learned how to find food and avoid dangers).

The argument is specifically around consciousness, which “feels” to most people like something other than just a machine performing input+transformation+output. We don’t know how to describe it in the same way we know how to describe the details around my creatures or building a bridge etc…

That is the part that is up for debate: Is consciousness merely transformations of symbols? Is it only created by biological systems? And so on.

No it’s more like saying:
“All we need to do to create REAL wheels on the car, is to click on specific buttons on the in-car display in just the right order and when you do that wheels will suddenly exist. Don’t ask us how it works, we can’t explain it at all, and we’ve never even seen it done, but we are certain that the wheels will pop into existence once you hit the right sequence on the display.”

That is what is being proposed by the idea that computation alone will give rise to consciousness.

Who said it’s impossible to do?

The entire point of working through the problem is to be able to do it. The only ways to do something are:
1 - Figure out how to accomplish it
2 - Don’t figure it out; just keep trying random things and hope there is a test that can be done to see whether you arrived at your destination.
For example, every time someone set out to build a new tallest building, they first worked out all of the details that they knew would result in a building that would not collapse on itself or be unable to withstand the force of the wind, etc.

They did that by solving all of the problems required to arrive at the final product, not by randomly building tall buildings and hoping one would stand.

That is the exact point of working out what is consciousness and how can it be created.

Sure, if you duplicate a brain, using the same medium, so you can be sure you don’t miss any detail that may be important (because again, you don’t know which details are important and which ones aren’t), then you should get a good result.

But the goal is to try to duplicate it using a different medium, and for that, you do need to know which details are important.

I can tell you the general approach the Watson team took to getting Watson to where it was:
1 - They started with an understanding of the desired solution:
Given a topic plus a word/phrase clue, find the data element from general human knowledge that fits the game’s rules/patterns for an answer.

2 - Design the program
Break the problem down into sub-problems
Create programs that solve the sub-problems
Using the solutions from the sub-problems, perform higher-level processing to find the final answer

3 - Iterate through training, testing and adjusting their system
Test the system, compare the desired result to the actual result (at multiple levels)
Adjust components so that the actual result matches the desired result more frequently

For Watson, there is a clear and deterministic path from idea to solution.
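To give a feel for what step 2 looks like in practice, here’s a minimal sketch of the “break into sub-problems, then combine” shape. The function names and the toy knowledge store are my own inventions for illustration; this is not IBM’s actual DeepQA code:

```python
# Illustrative sketch only: the "parse clue -> search knowledge -> score
# candidates -> combine" shape described above. All names are hypothetical.

def parse_clue(category: str, clue: str) -> dict:
    """Sub-problem 1: analyze the clue's language and decide what kind
    of answer is being asked for."""
    return {"category": category, "keywords": clue.lower().split()}

def search_knowledge(parsed: dict) -> list:
    """Sub-problem 2: pull candidate answers from a store of general
    human knowledge (here, a stand-in dictionary)."""
    toy_knowledge = {"moons of mars": ["Phobos", "Deimos"]}
    return toy_knowledge.get(" ".join(parsed["keywords"]), ["unknown"])

def score_candidates(parsed: dict, candidates: list) -> list:
    """Sub-problem 3: rate each candidate against the game's
    rules/patterns for an acceptable answer."""
    return [(candidate, 1.0 / (rank + 1)) for rank, candidate in enumerate(candidates)]

def answer(category: str, clue: str) -> str:
    """Higher-level processing: combine the sub-problem solutions
    and pick the best-scoring candidate."""
    parsed = parse_clue(category, clue)
    ranked = score_candidates(parsed, search_knowledge(parsed))
    best = max(ranked, key=lambda pair: pair[1])[0]
    return f"What is {best}?"

print(answer("ASTRONOMY", "moons of mars"))  # -> What is Phobos?
```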

But, if they had set out to create consciousness, please answer the following questions:
1 - How would they do step 2 when they don’t know how to create consciousness? Creating a system that answers questions is a problem they could solve; I could, and so could begbert2. We can write down the specific steps, the sub-problems to be solved and the things that need to be built to achieve the result.
2 - This is the most important question to answer about the comparison:
How would the Watson team do step #3 where they test the system and adjust it where it’s not working well enough?
How would they measure whether their system is conscious?
How would they know whether it worked or didn’t work?

You didn’t actually answer my questions. You described in a rough outline how Watson might have been designed, but that wasn’t what your original questions – and my transliterated version of them for Watson – were about. It was about your challenge of determining from observation where certain key qualities or capabilities reside, and how they relate to strings of 1s and 0s. I can’t answer that about consciousness, and neither can you about Watson, and that parallel is precisely the point of my previous response.

Compounding this logical error, you then challenge me to define how to create and test an artificial consciousness. This just rehashes old territory. We don’t know at this point how to do either, but a few decades ago we wouldn’t have had a clue how to build Watson, either, or specifically, its DeepQA engine. All we can say about consciousness is that it must be an emergent property of the mind, and hence of its physical instantiation, because there’s nothing else it can be. And to reiterate Marvin Minsky’s comments about it, consciousness is probably overrated, because we see it as some profound self-insight, but it’s mostly delusional and wrong insofar as it reflects the actual workings of the mind. Thus it may turn out to be more of an evolutionary biological curiosity than anything we care about in AI. But to the extent that we can fully emulate the brain, yep, there it will be, for all it’s worth – which probably isn’t much, beyond biological survival value.

4 - When there’s an ongoing execution loop that processes input and generates responses while updating an internal state, it creates consciousness. Because it’s the act of doing this that causes the consciousness rather than magical spells cast by the mere existence of the substrate, the material that the substrate is composed of is utterly irrelevant, be it neurons, cells, atoms, or digitally simulated objects.

Reason #187 why the box example is stupid beyond belief is because it’s talking about a single, one-off calculation. Odds are extremely likely that it resolves and terminates in less than a second. There is no chance whatsoever that any meaningful form of consciousness operates like this; consciousness is ongoing. It has internal state which is updated based on input from the senses, and its only outputs are actions it triggers in the body controlled by the consciousness. It’s pretty darned self-contained, all things considered.
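For what it’s worth, the bare “ongoing loop that updates internal state” structure I keep talking about is, schematically, nothing more exotic than this (a toy skeleton, obviously not a consciousness):

```python
import time

# Toy skeleton of the ongoing-loop shape: read input, update internal
# state, trigger actions, repeat. The point is the control flow, nothing more.

state = {"ticks": 0, "last_input": None}

def read_senses(state):
    # Stand-in for whatever the system's input channel actually is.
    return {"stimulus": state["ticks"] % 3}

def update_state(state, senses):
    state["ticks"] += 1
    state["last_input"] = senses["stimulus"]
    return state

def act(state):
    # The only outputs are actions triggered in the "body".
    if state["last_input"] == 0:
        print("tick", state["ticks"], "- responding to stimulus")

for _ in range(6):   # in the real claim, this loop simply never stops
    senses = read_senses(state)
    state = update_state(state, senses)
    act(state)
    time.sleep(0.01)
```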

As I’ve noted repeatedly, consciousness is a behavior, not an object that sits static on a shelf. A “sequence of 1s and 0s” is not a behavior; it’s static. Of your three options the only one that has a hope of emulating a consciousness is the third, but it’s not as simple as a random observer interpreting the 1s and 0s any which way they like - the 1s and 0s have to be interpreted as program code by something that can operate a continuous execution loop. Why? Because consciousness is continuous and ongoing. It’s a behavior, so something has to be behaving. 1s and 0s don’t behave on their own; something must be running them.

And that’s the only interpretation that causes cognition. It doesn’t matter if the binary printout of the cognition code makes a lovely work of art when interpreted as a jpeg - it doesn’t matter if, when dumped as hexadecimal input into a weather simulation program, it results in a tornado emulation. Doing that is like using a physical brain for sweetbreads - it doesn’t matter if it could have been doing cognition if used right; the way you’re using it, it produces no cognition.

Consciousness is an ongoing behavior. And in a physicalist universe it’s nonsensical to claim it can only be done by biological systems - biological systems are just arrangements of nonbiological atoms dancing around to a specific tune based on specific rules. Which are just arrangements of generic subatomic particles that move around because of specific simple rules.

Physical reality is just a giant stack of “emulations”. Human bodies are actually just a bunch of organs and such interacting based on physical rules. Organs are just a bunch of cells interacting based on physical rules. Cells are just bunches of molecules interacting based on physical rules. Molecules are just bunches of atoms interacting based on physical rules. Atoms are just bunches of protons and neutrons and electrons and such interacting based on physical rules.

It’s all just simple rules acting on astonishingly simple stuff. Why shouldn’t that be emulatable? Emulate the particles and get atoms (and everything else) as an emergent behavior. Simple. Sure you need umpty-kajillion terabytes of memory to store all that, but that’s just a hardware detail, and not a problem to a theoretical proof.
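As a cartoon of what “emulate the particles” means, here’s a hedged sketch: two point charges, one simple attraction rule, positions updated in tiny time steps. A real emulation would be unimaginably bigger, but the shape is the same:

```python
import math

# Cartoon of "simple rules acting on simple stuff": two point charges in 1D,
# a Coulomb-style force law, positions nudged forward in small time steps.
# The toy constant k is an arbitrary choice, not the real Coulomb constant.

dt = 0.001
k = 1.0

particles = [
    {"pos": -1.0, "vel": 0.0, "charge": +1.0, "mass": 1.0},
    {"pos": +1.0, "vel": 0.0, "charge": -1.0, "mass": 1.0},
]

for step in range(1000):
    a, b = particles
    r = b["pos"] - a["pos"]
    direction = math.copysign(1.0, r)            # unit direction from a toward b
    f_on_a = -k * a["charge"] * b["charge"] * direction / (r * r)
    # Newton's second and third laws, crudely integrated.
    a["vel"] += (f_on_a / a["mass"]) * dt
    b["vel"] += (-f_on_a / b["mass"]) * dt
    a["pos"] += a["vel"] * dt
    b["pos"] += b["vel"] * dt

# The opposite charges have drifted toward each other.
print(particles[0]["pos"], particles[1]["pos"])
```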

Let’s be really clear - if you believe that there is any physical matter that has behaviors that can’t be emulated, you’re positing that that matter doesn’t obey the laws of physics. Because by the laws of physics matter does what its particles tell it to do, and the interaction of particles is emulatable.

I’m getting really, really tired of people not getting that cognition is a behavior. And I’m getting really, really tired of people not getting that by emulating the entire brain from the ground up you get the baby along with the bathwater unless brain matter is breaking the laws of physics and doing literal magic.

I’m pretty confident that that’s the whole point of HMHW’s argument. It gets a bit lost in our discussions of the other thousand ways his example is the stupidest thing ever, but his whole thrust is that due to this multiple-interpretation bullshit, recursive loops occur and cause…something…to be contradicted and disappear in a puff of logic. What he’s disproving I’m not sure - the structure of his argument really has one thing available to disprove: all calculations anywhere. His argument, if it works (which it doesn’t), proves that there are no such things as calculations. Sure, he’ll tell you that only cognitive calculations are disproven, but his box example doesn’t mention cognition at any point, so he clearly doesn’t know what he’s talking about.

In any case, he’s very specific that there’s something explicitly magical about physical matter that makes cognition possible, which emulation can’t do - calculations and conventional brain activity have nothing to do with it, it’s all about the magic. Which is why emulating minds, according to him, is impossible.

Funny, when I want to figure out how to do something, I just google it, find an example, and copy it. Sure I may have to translate it into a different language from the example, a different medium if you will, but I can still translate it without knowing why it works.

You do indeed have to make sure you get all the important behaviors of your substrate emulated - this is why the arguments about smooshing the frames of a movie together hard enough to generate an emulation were so stupid. You gotta get the important stuff.

But you don’t gotta get the unimportant stuff.

Which raises the question of what’s important?

Now, normally this would be an incredibly stupid question - I’m proposing emulating subatomic particles to the fullest of their behavior for goodness sake; clearly everything up from that comes along as the emergent behavior of their interactions. However there’s one thing that that wouldn’t include: sub-subatomic particle behaviors, like the quarks and gluons and who-knows-what simmering under there. Normally we could be quite confident that these things have no significant effect on cognitive function beyond what they impart through their effects on the behavior of the protons and neutrons and electrons, which we’re simulating directly - however there might be some lunatics out there that claim that random events from the sub-sub layer are captured by the brain and used as a random number generator of sorts to permute cognitive behavior.

I happen to think this is stupid, though, so I ignore that crap.

Going back to the emulation, it’s worth noting again that if we did know which aspects of brain function drive consciousness, we could save a lot of effort and not emulate all the subatomic particles, instead only emulating what matters. But for the purpose of this discussion I need only note that if we do emulate all the subatomic particles, then we must automatically get all emergent behaviors, which must include cognition, unless cognition is caused by something in the brain that breaks the laws of physics and does literal magic. Continuously operating yet completely undetected physics-breaking magic.

Your question was a tangent from my question about sequences of 1’s and 0’s in the first place.

The simple answer to your tangential question is the exact same answer I had already provided begbert2 when I described a specific instantiation of mapping 1’s and 0’s to an interpreted problem: my distribution center simulation.

So your question didn’t really make any sense (I’m not clear what point you are even countering or trying to make).

Having said all of that:
“Which sequences of 1s and 0s caused Watson to answer Jeopardy questions better than Ken Jennings or Brad Rutter?”

The sequence of 1’s and 0’s that represented Watson in its final state. That seems pretty obvious; I’m not sure why you are asking that question.

Exactly. Yet you maintain that consciousness can be created by this mechanism without knowing how to do it.

That sounds like an alchemist.

Of course we did.

We didn’t know how to do it cost effectively or very fast but it was very much an engineering problem, not a problem where we didn’t know how to get from A to B.

The simplest solution involves lookup tables and brute forcing around tricky areas.
It’s true that neural networks have allowed us to effectively work around many of these problems where the logic-based/step-by-step approach is not known, but that doesn’t mean you can’t use tables and brute force if you want to.
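As a toy example of the “table plus brute force” idea (nothing to do with any real AI system): play tic-tac-toe perfectly by trying every move and caching every position, rather than encoding any actual strategy:

```python
from functools import lru_cache

# Brute force plus a lookup table: try every move, recurse, and memoize,
# so each position is solved exactly once. No strategy is encoded anywhere.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def best_move(board, player):
    """Return (score, move) for `player`; lru_cache is the lookup table."""
    opponent = "O" if player == "X" else "X"
    best = (-2, None)
    for i, cell in enumerate(board):
        if cell != " ":
            continue
        new = board[:i] + player + board[i + 1:]
        if winner(new) == player:
            score = 1
        elif " " not in new:
            score = 0
        else:
            score = -best_move(new, opponent)[0]
        best = max(best, (score, i))
    return best

# Perfect play from the empty board is a draw, so the score comes out 0.
print(best_move(" " * 9, "X"))
```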

So, if our computer is 9,000,000,000,000,000 sets of playing cards being moved around by a mechanical arm according to our instructions, all of those playing cards will be conscious?

Are all your atoms conscious?

Emergent.
Property.

What is your point?

That sets of cards and atoms are identical?

That anything made out of the atoms in our brains can also be made out of sets of playing cards?

Or that it’s all so easy that it doesn’t matter what we use, we could just use neutrons to fashion our conscious brain and that that would work as well as any other ingredient?
Simplifying something to that degree isn’t a logical analysis or argument; it’s just ignoring the problem and operating on faith that you can create your cake by mixing 17,000 photons together.

My point is that you asked how things that aren’t conscious could be made into something that is. The answer is, the same way that lots of bits that can’t propel themselves can be assembled into a car that can: when things are working together in concert they can do things that none of the parts do individually.

Again, just like creating a tall building, or building your car, or creating Watson, we know how to solve those problems.

We know how to describe the end result (car that rolls, building that doesn’t collapse), we could draw up plans and show them to someone else and they could understand what the machine would do.

We can take the final goal and break down the problem into sub-parts, just like in Watson (parse language, search relevant data, etc.) and design how each of those sub-parts will work and we can design how they come together to create something at the next layer of abstraction (e.g. a car vs the components of a car).

The entire point is that we don’t know how to do any of that for consciousness. We don’t know how to take components at any level of abstraction and describe how the combination of those components will result in consciousness (short of your one example, which is to exactly duplicate our biology, but that’s not really making any progress with creating it on a computer).

So what? I’m not arguing that I have the solution to creating demonstrable AI, I’m just arguing that it’s provably possible if the universe is materialistic and nonmagical. Different issues.

Agreed that if you duplicated the biology to some level of precision then you would get consciousness (most likely); it might need to be duplicated at the lowest physical level.
But downloading into something like the computers we use today, that’s where it’s not provably possible. Nobody really knows if it’s possible and we can’t even describe it in a way that would allow us to prove it one way or another using math.

wolfpup likes the theory that says computation alone can give rise to consciousness, and HMHW countered that with a logical argument by pointing out that there is no link between the computation a system performs and the interpretation of the computation, because the very definition of computation states that it is not based on the meaning of the symbols.
So, in summary, nobody is saying it’s impossible to duplicate consciousness using some other medium; the point is that nobody even knows which mediums can support consciousness. Furthermore, if someone proposes that they “know” that medium X with attributes Y+Z is adequate to create consciousness, then they need to be able to show the path from the components to the end goal, like we do with literally everything we build, including bridges, tall buildings, Watson and everything else that we designed and built.

And it might not - as in, almost certainly not. A neuron can probably be treated as a unit object, and it’s far above the lowest physical level. The chemical solutions and such that (I assume) the brain is steeped in would have to be emulated at a lower level, but could certainly be dealt with at a far higher level than subatomic. It seems likely to me that they could be abstracted as well.

Now, there are around 100 billion neurons in a human’s brain, so you probably won’t be able to save that to a floppy disk, or even a CD-ROM. That’s still only 0.1 teraneurons, though, so it’s not outside the realm of possibility for larger storage devices - and that’s assuming that you need all those neurons to stay conscious. I suspect not - that’s the whole brain, including memory storage and other sorts of sensory processing that would be incidental to the operation of consciousness itself. Heck, maybe it really is just all about execution loops, and the ‘consciousness’ part of the brain is negligible as compared to the data handling parts.
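Back-of-envelope, and purely for scale (the synapse count and the bytes-per-synapse figure below are assumptions for illustration, not measurements):

```python
# Rough scale check on the storage question above. Both per-neuron figures
# are illustrative assumptions, not real estimates.

neurons = 100e9             # ~100 billion neurons, as noted above
synapses_per_neuron = 1e4   # commonly quoted order of magnitude (assumption)
bytes_per_synapse = 8       # say, one double-precision weight (assumption)

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{total_bytes / 1e12:.0f} TB")   # ~8000 TB: petabyte territory, not floppy territory
```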

Since when are we talking about computers we use today? I thought this thread opened referencing Star Trek.

I’m pretty sure at least some people here are arguing that you can’t mechanize a mind, no way, no how, specifically because magic. Just my opinion.

If you were going the route of simulating an existing brain’s details, you would need to go down to a pretty low level. For example, a single neuron is really an entire neural network all by itself. There is nonlinear pre-processing in the dendrites, localized spiking in the dendrites, signals flowing forward and backward; it’s really a very complex object performing many computations.

In addition there are significant things happening at an even lower level, for example: the circadian clock in mammals is based on a feedback loop of gene expression within the cell.

It’s pretty interesting reading this stuff, scientists are discovering new complexity constantly.

I said “like” today, as in the same basic principle. Because Star Trek isn’t real, it’s tough to logically discuss whether one of its fictitious computers could support consciousness unless we qualified it and said “it’s just like today’s computers but might be faster/bigger/etc.”.

There is a distinction between these two things:
1 - You can’t “mechanize” a mind (in general)

2 - Specific theory X about how to mechanize a mind is not a valid theory due to logical flaw Y, therefore, if we are going to be convinced that we can mechanize a mind, we better find a better theory, one that doesn’t have a fundamental flaw.
Those are not the same thing. To me, it seems like most of the posting has been focused on the second one.

How? I see no indication that it is a behavior. Behaviors are observable cause/effect response patterns. So far, we can find no reliable way to observe consciousness or determine that it does anything significant. Sure, we could program a computer to exhibit the signs of consciousness, but that would just be scripted output.

Please explain in what way it is a behavior.

Alright, I have a bit of downtime and a spot with WiFi, so I’m going to try to address at least what I’ve seen so far being directed towards me. If anybody feels I’ve overlooked their argument, feel free to point me towards it, and I’ll try to answer once I get some time!

No. I mean, not even remotely, and it’s bizarre to me how you could honestly think that’s a fair statement of my argument. Whether something is a person, or a process in the universe, is obviously not dependent on any kind of interpretation. The behavior of the box does not change with the interpretation; only what we take that behavior to mean. This isn’t different in any way from interpreting the word ‘gift’ in different ways, and nothing in any way threatening for the existence of the universe, or things beyond particles and the like, follows from it.

No, that would be panpsychism. While IIT is compatible with it, it doesn’t depend on it. The point is that there’s a well-defined, calculable quantity—the integrated information—on which the theory says conscious experience depends, or which is at least correlated to conscious experience. You can calculate that quantity for a brain, and for a (conventional) computer: the brain has a high value, the computer doesn’t. Thus, the brain supports conscious experience, the computer doesn’t.

It’s simply a perfectly ordinary physical quantity; and, as we have seen with the example of mass, one that doesn’t carry over to a simulation of a system.

Well, I gave the argument that a really well crafted compression algorithm will be able to deduce the basic laws of physics behind what a movie shows, and use them for compression—after all, a law is really just a way of fitting a formula with few parameters to a set of data points. This has already happened—I gave the examples above.
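As a toy rendering of that idea (illustrative only; no claim that real codecs work this way), here is the ‘movie’ of a falling ball reduced to a key frame plus a law, with every other frame regenerated on demand:

```python
# Toy version of "a key frame plus a law" standing in for all the frames.
# The numbers and the frame rate are arbitrary choices for illustration.

g = 9.8                                          # the "law": constant downward acceleration
key_frame = {"height": 100.0, "velocity": 0.0}   # the initial conditions
dt = 0.1                                         # time between frames

def decompress(key_frame, n_frames):
    """Regenerate every frame from the key frame plus the law of motion."""
    h, v = key_frame["height"], key_frame["velocity"]
    frames = []
    for _ in range(n_frames):
        frames.append(round(h, 2))
        v -= g * dt
        h += v * dt
    return frames

print(decompress(key_frame, 5))   # the "movie", reconstructed on demand
```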

So if that sort of thing happens—a compression algorithm compresses a movie down to a single ‘key frame’—the initial conditions—plus the laws that suffice to deduce the further states of a system from that, would that then count as a simulation? If so, at what point will that simulation spontaneously give rise to the features that we take the movie to lack—things like mass, consciousness, and so on? When does compression of a movie give rise to a simulated universe?

We’ve really already got a perfectly cromulent word for the former—it’s called ‘physics’. But of course, you can choose to label things all you want, it won’t change the argument a bit.

It’s an odd line of argument for you to take, though. After all, you, as a computationalist, should consider ‘interpretation’ to be computation, as well—as all that minds do must be computation. So ‘computation*’ ought to be ‘computation + computation’—which I would think makes it rather ‘computation’, as well.

So once more—what is it that happens to take what you call ‘computation’—what I call ‘physics’—to make it into something like my f? And how, since after all, f must be a computation on your terms, too—otherwise, you would have to hold either that minds do something that makes your ‘computation’ into my computation that doesn’t boil down to computation, or that somehow ‘computation + computation’ produces something that’s not computation—could you build a machine that computes f?

So there is something that isn’t computation, but rather, that is computation*; but if computation* does not boil down to computation simpliciter, then, of course, computationalism is straightforwardly false, since there are non-computational facts about the world—facts which would, for instance, not show up in a simulation of it. Else, if you could create a simulation of computation*, then it would just be computation (as, obviously, it in fact is, absent silly attempts to obfuscate matters).

You are, amazingly, after all this virtual spilled ink, still completely missing the point. First of all, there is no fact of the matter regarding which number is produced, absent an interpretation; hence, that number is neither even, nor odd. Of course, you can add a light that only comes on for particular configurations—say, (off,off,on), (off,on,on), (on,off,on), and (on,on,on), for example. But to claim that this indicates the odd numbers would just be another layer of interpretation—without any interpretation, the light just comes on whenever the final lamp does, for example.

Nothing needs to be interpreted by the box for this light to come on. You just wire it in parallel with the final one of the original box—and done!
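Rendered as a toy bit of code (nothing hangs on the details), the point looks like this:

```python
# Toy rendering of the three-lamp box plus the extra light described above.
# The light is wired to the final lamp; whether that "means" the output is
# odd is a further layer of interpretation laid on top.

def box(switches):
    # Stand-in for the original box; the particular wiring is arbitrary.
    a, b, c = switches
    return (a ^ b, b ^ c, a ^ c ^ True)

def extra_light(lamps):
    # "Wire it in parallel with the final one - and done!"
    return lamps[-1]

lamps = box((True, False, True))
print(lamps, extra_light(lamps))

# Only under the reading "the lamps are binary digits, final lamp = 1s place"
# does the extra light track "the number is odd":
value = sum(int(lamp) << i for i, lamp in enumerate(reversed(lamps)))
print(value, "odd" if value % 2 else "even")
```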

And of course, any attempt to get the box, or further computations, to make some interpretation, will fail in exactly the same way: without the supposedly interpreting additional mechanism being itself interpreted, it’s not doing any interpreting at all.

Thanks! It’s been awesome, so far.

Well, there’s low and there’s low. It seems there’s about a hundred trillion atoms in each human cell, so you can get a teeny tiny bit of savings by operating just at the cellular level rather than the subatomic level.
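Spelling out that “teeny tiny bit of savings” with the rough figures quoted in this thread (the total brain cell count is an order-of-magnitude assumption on my part):

```python
# Atom-level objects versus cell-level objects, using the rough numbers above.
# cells_in_brain is an order-of-magnitude assumption (neurons plus glia).

atoms_per_cell = 1e14      # ~a hundred trillion atoms per human cell
cells_in_brain = 1.7e11    # assumption: ~100 billion neurons plus roughly as many glia

atom_level_objects = atoms_per_cell * cells_in_brain
cell_level_objects = cells_in_brain

print(f"atom-level objects: {atom_level_objects:.1e}")
print(f"cell-level objects: {cell_level_objects:.1e}")
print(f"reduction factor:   {atom_level_objects / cell_level_objects:.0e}")
```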

I’m pretty sure that Turing completeness pretty much says that if you can do something on one computer then you can do it on pretty much any other computer, presuming that your computers have sufficient memory. It just might be slower than frozen molasses and require server rooms the size of mars, but the job can get done.

HMHW has repeatedly made reference to IIT, including linking an article where Christof Koch insists that computers cannot ever be conscious - not because there’s such a thing as a magic soul, but instead because physical matter is magic and consciousness is only created when physical matter is arranged in a certain way, like spell components in Harry Potter.

So yeah, I’m pretty sure HMHW thinks it can’t be done. (Which to be honest is neither here nor there, but it does show that naysayers apparently do exist.)

Because I’m a materialist, basically.

Consider a person who happens to possess a brain. They’re a conscious, self-aware entity. Now consider that person dying peacefully in their sleep. They are no longer conscious and self aware, but their brain is still there. All the parts are still there. Between the last moment of consciousness and the moment after when the lights have gone out, nothing vanishes from existence. Therefore, necessarily, the issue isn’t which material is or isn’t in a person’s brain; it’s what that material is doing.

Spiritualists of course posit that there’s an extraphysical soul that is manipulating the body like a puppet and which departs for parts unknown at death, but that’s not me.

Nah, I’m sure that you can build an artificial brain, or whatever way you put it (so is Christof Koch, of course). But you can’t do it by computation, any more than you can do it by writing, or any other form of description or modeling. The map, quite simply, is not the territory; and no matter how many maps you glue together, and no matter how complicated those maps are: it never will be.

Whether something is a person is absolutely a matter of interpretation, as humanity’s interaction with slaves and pets will amply demonstrate.

And I don’t think your argument is in any way threatening for the notion that computers could be sentient, so that does indeed line up.

Which computer running which software, again? It’s my understanding that Turing completeness argues that it’s not what you’re running it on, it’s what you’re doing with it that matters. As long as you have sufficient memory and no time limits, no problem.

Just to make sure I’m clear - I’m not arguing that every computer program is sentient (unless they are). I’m arguing that it’s necessarily possible that a sentient computer program could be created that would run on Turing compatible machines. I’m not arguing that it’s already happened or anything.

You do realize that “integrated information” is defined relative to the number of states things can take and how they communicate with one another, right? It’s ripe for simulation. And it wouldn’t even be simulated “integrated information” - it’d be actual integrated information, because the probability distributions and communications would actually be happening within the mechanics of the simulation.
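Here’s a crude toy in that spirit: how much information a tiny two-node system generates over and above its parts, computed from nothing but its states and its update rule. This is emphatically not the real IIT phi calculation, just an illustration that quantities of this general kind fall out of a simulated system’s own mechanics:

```python
import math
from collections import Counter
from itertools import product

# Crude toy: information the whole system carries about its next state,
# minus what its parts carry individually. Not IIT's actual phi, just the flavor.

def mutual_information(pairs):
    """I(X;Y) in bits, treating each (x, y) pair in the list as equally likely."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def integration(update):
    """MI of the whole (state -> next state) minus the per-node MIs."""
    states = list(product([0, 1], repeat=2))
    whole = [(s, update(s)) for s in states]
    node_a = [(s[0], update(s)[0]) for s in states]
    node_b = [(s[1], update(s)[1]) for s in states]
    return (mutual_information(whole)
            - mutual_information(node_a)
            - mutual_information(node_b))

coupled = lambda s: (s[1], s[0] ^ s[1])        # the nodes feed each other
disconnected = lambda s: (s[0], s[1])          # the nodes ignore each other

print(integration(coupled))       # 2.0 bits: the whole does more than its parts
print(integration(disconnected))  # 0.0 bits: no integration
```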

By IIT, as it is described, you just need a sufficiently complicated program. Unless you’re the sort of IIT theorist who thinks that physical matter has magical properties when the spell components are arranged properly, which is certainly another way to go about it.

No, for two reasons. The less solid reason is that movies are necessarily a limited view of the universe, and you could never deduce that “key frame”. This goes triple if the movie has continuity errors, of course, but in any case you’d never be able to deduce enough of the universe to fill in the unfilmed stuff that is presumed to take place between and outside of the filmed scenes.

But that’s all a negligible problem compared to the real problem: that a simulation is something that actively occurs over a span of time. You could in theory analyze a movie to deduce the rules of physics that you could plug into a simulation, but the starting state and interaction rules themselves are not a simulation, because they’re static. A simulation isn’t a simulation until it’s running - until state is being represented that is being continually updated based on the rules of the ‘universe’.

Of course if your compression algorithm is also a simulator that kicks things off based on what it analyzed that’s a different matter, but that’s not the conventional definition of “compression algorithm”. But I will meet you in the middle and concede that all compression algorithms that are also simulators are, indeed, also simulators.

I created separate definitions of “computation” and “computation*” because your entire argument is based on them not being the same. Of course interpretation in real life isn’t some magical outside force independent of logic or reason - it follows logical, calculation-style rules as well. It’s just in your argument it’s apparently some kind of magical, outside force, which honestly is ridiculous.

And would you freaking define “computes f”, please? Because you’re right, the interpretation is necessarily just more computation, and thus there’s no difference whatsoever between the observer in your example and my example where the box interprets its own output as even or odd. Yet you refuse to acknowledge that as interpretation - so what makes the outside interpreter that you do acknowledge so special?

Would you mind defining “interpretation”, please? Because I write programs that interpret data all the time. You can refuse to believe that all you want, but it doesn’t change what I do for a living.

Then the outside interpreter isn’t doing any interpreting either.

In real life, interpretation is done based on rules. On a system. On a calculation. In your example you’re explicit about that, even. So at what point does that become “interpretation”? When you choose which interpretation calculation to use? The box I described did that.

So if not that, what do you imagine “interpretation” is?

-It just occurred to me that your answer might be “because the interpreter is conscious”. In which case your argument would be “We know the box isn’t conscious because it’s not doing interpretation, and we know it’s not doing interpretation because it’s not conscious”.

There indeed is circularity there, but it’s not a problem for the box.

I cannot help but put all the argument aside and bask in the reflected happiness of knowing that somebody is enjoying a vacation. :slight_smile:

What exactly is your point?

I stated “like today’s computers” and you objected because you thought Star Trek computers were an option and I said the term “like” still works.

And now you respond with the above???

Please be clear:
What exactly do you want to add to the conversation by introducing some fictitious Star Trek computer?
If it is “like” current computers then there is zero benefit to introducing them; the logic of the debate remains the same.

If they are not “like” current computers, meaning they have some qualitative difference, like the ability to perform hyper-computations, then sure, feel free to start discussing how that changes everything, but I doubt it will be a productive conversation.

I mostly don’t want to be constrained to the current memory size of the average tablet, because that’s the kind of constraint that could naturally follow from saying we’re talking about “today’s computers”. (I’m also assuming that the machine’s pretty fast, though that doesn’t actually matter - if it takes the computer a second to simulate a second or a decade to simulate a second, either way the simulated objects/entities will only experience a second.)

My argument is somewhat lazily presuming I have the classic theoretical Turing machine with infinite tape - it won’t actually require infinite memory, of course, but the assumption is that however much memory it needs, that much memory will be available.

But I’m not presuming magic computers, no. Just that I don’t have to fit this puppy on an iPod classic.