Is there an isomorphic mapping between interpretations? In your inputs there seems to be. But not in your output, since otherwise f’(2,1) would be 4. We’d have to see the entire mapping to know for sure, though, since a mapping that just substitutes 6 for 4 and 4 for 6 would be fine.
If you allow inconsistent mappings, then you can’t say anything about anything.
If we knew the internals, then we could tell if they are equivalent - usually. But in this example it is impossible to know.
But wouldn’t you agree that no matter how good the simulation of a black hole, there is no actual gravity produced in the real world, just a bunch of numbers that happen to map to the same set of transitions that the measured particles of a black hole would go through if we could measure them?
Do you accept that a simulation of a computer running a program produces the same results as the actual computer running the program? That’s a more relevant example here.
This is ridiculous. I don’t have to know anything about what a computer does internally in order to check what it computes. I know that my calculator computes the square root, because if I enter a number, punch the SQRT-button, and read off the result, that result will be the square root of the number. What goes on inside is just a means to achieving that end; which means are used is wholly irrelevant.
But fine. Don’t let it be said I’m not doing my best to be maximally accommodating. First, for convenience, here’s my box again:
-----------------------------
| |
| (S11)(S12) |
| (L1)(L2)(L3) |
| (S21)(S22) |
| |
-----------------------------
Now, here’s the complete wiring diagram of my box, together with an explicit discussion of every possible case (I’m home sick at the moment, and evidently have too much time on my hands):
___
S12--+-| |
| | A1|-----------------------------------------X L3
S22-+--|___|
|| ___
||___________| |
|____________| B1|--+
|___| |
|
___ | ___
S11--+-| |-----+---------| |
| | A2| | | | A3|---------------------X L2
S21-+--|___| | +--|___|
|| | | ___
|| | |__________| |
|| |_________________| B3|--+
|| |___| |
|| ___ | ___
||___________| | +--| |
|____________| B2|-----------------------| C1|---X L1
|___| |___|
S11 S12 | S21 S22 || L1 L2 L3
-----------------------------
d d | d d || x x x
d d | d u || x x o (A1 yields h --> L3 o)
d d | u d || x o x (A2 yields h, A3 yields h --> L2 o)
d d | u u || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
d u | d d || x x o (A1 yields h --> L3 o)
d u | d u || x o x (B1 yields h, A3 yields h --> L2 o)
d u | u d || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
d u | u u || o x x (B1 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
u d | d d || x o x (A2 yields h, A3 yields h --> L2 o)
u d | d u || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
u d | u d || o x o (A1 yields h --> L3 o, B2 yields h, C1 yields h --> L1 o)
u d | u u || o o x (B1 yields h, A3 yields h --> L2 o, B2 yields h, C1 yields h --> L1 o)
u u | d d || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
u u | d u || o x x (B1 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
u u | u d || o x o (A1 yields h --> L3 o, B2 yields h, C1 yields h --> L1 o)
u u | u u || o o x (B1 yields h, A3 yields h --> L2 o, B2 yields h, C1 yields h --> L1 o)
In the above, lines are wires, ‘+’ are wire joins, S11-S22 are the switches, and L1-L3 are the lamps. A switch, if flipped up (u), emits a high-voltage signal ‘h’; in the down state (d), it emits low voltage ‘l’. A box of type A emits a high-voltage signal ‘h’ if either one, but not both, of the wires connecting to it on the left carries ‘h’. A box of type B emits ‘h’ if and only if both of the wires connecting to it on the left carry ‘h’. Finally, a box of type C emits ‘h’ if one or both of the wires connecting to it carry ‘h’. If a lamp receives ‘h’, it lights up (o); if not, it remains dark (x).
Now, the physical behavior of the box is completely specified (I trust you don’t need me to specify the internal wiring of the A-C boxes as well—although it would be trivial to do so). The table lists the light-pattern for any given switch-pattern, as well as giving the reason why each lamp is on (giving the corresponding reason why a lamp remains dark is a trivial matter I did not bother with).
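For concreteness, the three box types can be rendered as a minimal Python sketch (my rendering of the behavior just described; the function names are mine, not anything in the box):

def box_A(left, right):
    # type A: emits 'h' iff exactly one input carries 'h'
    return 'h' if (left == 'h') != (right == 'h') else 'l'

def box_B(left, right):
    # type B: emits 'h' iff both inputs carry 'h'
    return 'h' if left == 'h' and right == 'h' else 'l'

def box_C(left, right):
    # type C: emits 'h' iff at least one input carries 'h'
    return 'h' if left == 'h' or right == 'h' else 'l'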
I’ve given the mappings in my post #93, but for convenience, let me repeat them here.
So, first, take ‘u’ to mean ‘1’, ‘d’ to mean ‘0’, ‘o’ to mean ‘1’, and ‘x’ to mean ‘0’. Then, interpret the first two switches, S11 and S12, as one binary number, switches S21 and S22 as another, and L1-L3 as a three-bit binary number. Then, the table above becomes:
x1 | x2 || f(x1, x2)
-----------------------
0 | 0 || 0
0 | 1 || 1
0 | 2 || 2
0 | 3 || 3
1 | 0 || 1
1 | 1 || 2
1 | 2 || 3
1 | 3 || 4
2 | 0 || 2
2 | 1 || 3
2 | 2 || 4
2 | 3 || 5
3 | 0 || 3
3 | 1 || 4
3 | 2 || 5
3 | 3 || 6
In other words, the device computes binary addition.
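As a minimal sketch (the decoding function and its name are mine, purely illustrative), this reading is:

def interp(s11, s12, s21, s22, l1, l2, l3):
    # first reading: 'u' and 'o' mean 1, 'd' and 'x' mean 0; first symbol is the high bit
    bit = {'u': 1, 'd': 0, 'o': 1, 'x': 0}
    x1 = 2 * bit[s11] + bit[s12]
    x2 = 2 * bit[s21] + bit[s22]
    y = 4 * bit[l1] + 2 * bit[l2] + bit[l3]
    return x1, x2, y

# e.g. the table row  u u | u d || o x o  decodes to (3, 2, 5), and 3 + 2 = 5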
Now, keep the state identifications; but instead, read the binary numbers from left to right, with e.g. S11 being the least significant bit 2[sup]0[/sup], S12 being 2[sup]1[/sup], and likewise for S21 and S22. L1 then corresponds to 2[sup]0[/sup], L2 to 2[sup]1[/sup], and L3 to 2[sup]2[/sup]. Then, the above table becomes:
x1 | x2 || f'(x1, x2)
-----------------------
0 | 0 || 0
0 | 2 || 4
0 | 1 || 2
0 | 3 || 6
2 | 0 || 4
2 | 2 || 2
2 | 1 || 6
2 | 3 || 1
1 | 0 || 2
1 | 2 || 6
1 | 1 || 1
1 | 3 || 5
3 | 0 || 6
3 | 2 || 1
3 | 1 || 5
3 | 3 || 3
This is a perfectly sensible function, a perfectly sensible computation, but it’s not addition. Hence, the device can be seen to compute distinct functions on an exactly equivalent basis.
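Again as a sketch under the same caveats, the second reading changes only the decoding, not the box:

def interp_rev(s11, s12, s21, s22, l1, l2, l3):
    # second reading: same bit values, but the first symbol is now the LOW bit
    bit = {'u': 1, 'd': 0, 'o': 1, 'x': 0}
    x1 = bit[s11] + 2 * bit[s12]
    x2 = bit[s21] + 2 * bit[s22]
    y = bit[l1] + 2 * bit[l2] + 4 * bit[l3]
    return x1, x2, y

# the same row  u u | u d || o x o  now decodes to (3, 1, 5): f'(3, 1) = 5, not 3 + 1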
I should not need to point this out, but of course, I could’ve used any number of different mappings, obtaining a different computation for each. I could’ve considered ‘u’ to mean ‘0’ and ‘d’ to mean ‘1’; I could’ve swapped the meanings (independently) for the lamps; I could’ve considered the four switches to represent one four-bit number; and so on. Each of these yields a perfectly sensible, and perfectly different, computation.
The one thing I require for a system to implement a computation is that it can be used to perform that computation. Hence, my calculator implements arithmetic, because I can use it to do arithmetic. In the same way, I can use the device above to add binary numbers: suppose I have the numbers 2 and 3 and don’t already know their sum; the device could readily tell me—provided I use the correct interpretation!
But the latter is exactly the same for ordinary computers. If I try to compute the square root of 9, and my device displays 3, but I think that it’s the Cyrillic letter Ze, I won’t know the square root of 9.
Fine. So now’s your time to shine: put me in my place by demonstrating how the above internal functioning of the box singles out one among its many possible computations. It’s ‘obvious’, after all. Put up or shut up!
So, if I described the pain more completely, then it would be magically conjured into being? How complete does the description have to be in order to count? Who is it that’s experiencing the pain? Do you think my words suffice to bring experiencing entities into the world?
The table below corrects two mistakes in the above one.
S11 S12 | S21 S22 || L1 L2 L3
-----------------------------
d d | d d || x x x
d d | d u || x x o (A1 yields h --> L3 o)
d d | u d || x o x (A2 yields h, A3 yields h --> L2 o)
d d | u u || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
d u | d d || x x o (A1 yields h --> L3 o)
d u | d u || x o x (B1 yields h, A3 yields h --> L2 o)
d u | u d || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
d u | u u || o x x (B1 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
u d | d d || x o x (A2 yields h, A3 yields h --> L2 o)
u d | d u || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
u d | u d || o x x (B2 yields h, C1 yields h --> L1 o)
u d | u u || o x o (B2 yields h, C1 yields h --> L1 o, A1 yields h --> L3 o)
u u | d d || x o o (A1 yields h --> L3 o, A2 yields h, A3 yields h --> L2 o)
u u | d u || o x x (B1 yields h, A2 yields h, B3 yields h, C1 yields h --> L1 o)
u u | u d || o x o (A1 yields h --> L3 o, B2 yields h, C1 yields h --> L1 o)
u u | u u || o o x (B1 yields h, A3 yields h --> L2 o, B2 yields h, C1 yields h --> L1 o)
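Assuming I’ve traced the wiring diagram correctly, the table can also be checked mechanically. The following Python sketch (reusing the box_A/box_B/box_C functions from my earlier sketch; the helper names are mine) reproduces the corrected table row for row:

from itertools import product

def run_box(s11, s12, s21, s22):
    # a switch up ('u') emits 'h'; down ('d') emits 'l'
    v = {'u': 'h', 'd': 'l'}
    a1 = box_A(v[s12], v[s22])   # feeds L3
    b1 = box_B(v[s12], v[s22])
    a2 = box_A(v[s11], v[s21])
    b2 = box_B(v[s11], v[s21])
    a3 = box_A(a2, b1)           # feeds L2
    b3 = box_B(a2, b1)
    c1 = box_C(b3, b2)           # feeds L1
    lamp = lambda w: 'o' if w == 'h' else 'x'
    return lamp(c1), lamp(a3), lamp(a1)  # L1, L2, L3

for s11, s12, s21, s22 in product('du', repeat=4):
    print(s11, s12, '|', s21, s22, '||', *run_box(s11, s12, s21, s22))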
Perhaps my above tour-de-force helps to clarify the difference between a Turing machine and its physical instantiation. The difference is the same as between an AND-gate and its physical instantiation: an AND-gate’s output is ‘1’ if and only if both of its inputs are ‘1’. This can be realized, physically, by a device that, for example, outputs a high voltage ‘h’ if and only if both of its input voltages are ‘h’—if we choose to interpret ‘h’ to mean ‘1’. So the abstract AND-gate has a physical instantiation in my B-type boxes once the appropriate interpretation is made.
But of course, there’s nothing special about that interpretation. I could just as easily consider low voltage, ‘l’, to mean ‘1’. In that case, the B-type boxes implement the (abstract) OR-gate.
So, the nature of the AND- (or OR-) gate is defined by reference to the abstract binary values ‘0’ and ‘1’; their physical implementation is realized by an appropriate mapping of voltage levels to logical values. The AND-gate, as an abstract object, thus is perfectly definite, but its physical realization acquires an interpretation dependence—simply because physical objects don’t directly operate on logical values, but on things like voltage levels. It’s the same with abstract Turing machines and their physical realizations.
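To make the relabeling concrete, here is a sketch using the box_B function from above: the very same device prints the AND truth table under one reading and the OR truth table under the other.

read_h_as_1 = {'h': 1, 'l': 0}   # interpretation 1: high voltage means 1
read_l_as_1 = {'h': 0, 'l': 1}   # interpretation 2: low voltage means 1

for a in 'hl':
    for b in 'hl':
        out = box_B(a, b)
        # left columns: AND under interpretation 1; right columns: OR under interpretation 2
        print(read_h_as_1[a], read_h_as_1[b], '->', read_h_as_1[out], ' | ',
              read_l_as_1[a], read_l_as_1[b], '->', read_l_as_1[out])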
I trust my previous comment makes it clear why the algorithm doesn’t really come into play here. If not, let me try and spell it out.
The algorithm computing binary addition that my box uses is given by the identification of physical states, carried through its specific wiring as given above. So, a box like A1, if we take the identification ‘u’ –> ‘1’, ‘d’ –> ‘0’, ‘h’ –> ‘1’ and ‘l’ –> ‘0’, then corresponds to the pseudocode:
IF (S12 = 1 AND S22 = 0) OR (S12 = 0 AND S22 = 1)
    A1 = 1
ELSE
    A1 = 0
END IF
With replacements such as this one, you get the entire algorithm computing the sum of two two-bit numbers.
However, when you change the identification, then, also, the algorithm changes. Say, I interpret ‘u’ → ‘0’, ‘d’ → ‘1’, ‘h’ → ‘0’ and ‘l’ → ‘1’. Then, A1 instead corresponds to the pseudocode:
IF (S12 = 1 AND S22 = 1) OR (S12 = 0 AND S22 = 0)
    A1 = 1
ELSE
    A1 = 0
END IF
Consequently, a change of the interpretation changes the function being computed, as well as the algorithm by means of which it is computed.
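Rendered as runnable Python (a sketch of the two pseudocode fragments above; the function names are mine):

def a1_first(s12, s22):
    # identification u -> 1, d -> 0, h -> 1, l -> 0: this is XOR
    return 1 if (s12 == 1 and s22 == 0) or (s12 == 0 and s22 == 1) else 0

def a1_second(s12, s22):
    # identification u -> 0, d -> 1, h -> 0, l -> 1: this is XNOR
    return 1 if (s12 == 1 and s22 == 1) or (s12 == 0 and s22 == 0) else 0

# same wire, same voltages: two different logical operations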
Now, it’s of course also possible to change the algorithm, while computing the same function. To do so, one need merely change the wiring diagram, while leaving the input-output mapping the same. There are, of course, infinitely many wirings that lead to the lamps lighting up as the switches are pressed in the same way as they do in my example. Tracking the meaning of the symbols through these different wiring patterns in the manner explained above may then yield distinct algorithms computing the same function—just like you can compute the square root by the Babylonian or the digit-by-digit method.
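For instance, reusing the run_box and interp sketches from above (again, my illustrative helpers), the wired box and plain integer addition are two different algorithms computing one and the same function:

def add_via_box(x1, x2):
    sw = ['u' if b == '1' else 'd' for b in format(x1, '02b') + format(x2, '02b')]
    _, _, y = interp(*sw, *run_box(*sw))
    return y

# same input-output mapping, different algorithm:
assert all(add_via_box(a, b) == a + b for a in range(4) for b in range(4))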
Finally, having it all explicitly laid out like that should also put to rest any claims of ‘strong’, or ‘kinda strong’, emergence in computers. For everything that a computer does, a story such as the one told above can be told, reducing its behavior (in the example, the lamps lighting up in response to the switches being pressed) to that of its lower-level components, thus showing exactly how that lower level causes the high-level behavior. Consequently, only the lower-level description needs to be specified to fully specify everything about a computer, and to enable us to find the causes of every aspect of the high-level behavior within the lower level.
Ah, a fresh, shiny new morning.
Half Man Half Wit, this will probably piss you off, but I think that you’re getting way too hung up on your overcomplicated example. Your argument doesn’t depend on the specific function of the box, after all - it’s supposed to be a generically applicable argument - one that can be applied to any calculative-or-theoretically-calculative object (like, say, the brain). In fact I’m pretty sure that the following example is equally descriptive of your argument:
There is a box. It has a button and a light.
Alice pushes the button and the light comes on. She releases the button and it turns off. Alice shrugs and says, “I guess the button operates the light”.
Bob pushes the button and the light comes on. He jumps back, releasing the button - it turns off. “Ahhh! Box possessed by demons! All going to die!”
All the elements are there: a box, a deterministic functional mapping from inputs to outputs (pushed => on; not-pushed => off), and varying interpretations. I do not see how this example differs from your example in any meaningful way (other than seemingly-unnecessary complexity).
I will be honestly surprised if you disagree that this example is equivalent to your example.
You know, I’m not sure whether you ever did define what you think “computation” means, but when I ask Google to give me a definition for the term, it coughs up “the action of mathematical calculation”. And by your own statement only one ‘action’, one mapping from inputs to outputs, is taking place here. And also by your own statement the only difference between the “calculations” is in the eye of the beholder. Which means there’s really only one calculation, and two interpretations.
The scenario is exactly congruent to one where a person looks at a piece of paper lying on the table and sees “WOW”, and then another person comes up on the other side of the table and sees “MOM”.
So here’s a question for you: Can I take your argument, apply it to that piece of paper rather than your box with lights and switches, and prove something? Can I prove that print is self-contradictory?
If not, why not?
Have you ever played a video game? One of the new-fangled ones where you can shoot things - Space Invaders, Asteroids, that kind of thing. In those you can push buttons to move your little digital ship around and make it shoot pixels. And if one of those pixels you shot happens to run into the little cluster of pixels representing an asteroid or an alien, then the behavior of that thing changes - if memory serves it turns into a little drawing of lines coming out of a point and then disappears.
So what you are definitely seeing is that the game is reacting to your actions. The things on the screen are just pixels, but the reaction is really happening. Something happens, and a stimulus triggers a reaction in the active running process of the game. It is a real reaction.
I’ll stop here because I have to get ready for work, and you’re probably already shouting that there is absolutely nothing similar between how Asteroid.EXE receives an input, registers the input, and alters its behavior in response, and how I stub my toe, register the input, and howl in pain. So I’ll let you get right on typing your reaction now - but suffice to say, however different the example is, it does establish a digital entity that can experience and really truly react to stimuli.
(Whether you call that entity the asteroid/alien or the game as a whole, of course, is a matter of interpretation. )
Ok, so you’re going for the ‘babble on’ option, rather than actually putting your money where your mouth is and making good on your claims about how obvious it is what my box computes. Expected, but still disappointing. At least now I can say I’ve tried everything I could.
Wolfpup, re-posting so it doesn’t get lost in the noise, hoping to get a clearer understanding of your position.
Begbert, thoughts?
You haven’t defined what ‘computes’ means to you.
In real-world terminology, what the box “computes” is its output. See that laboriously-crafted mapping you made from inputs to outputs? That’s what it “computes”. It “computes” which lights to light up when the input switches are set in various ways. That’s what the box does. And golly gosh, it’s entirely consistent and non-self-contradictory about it.
I strongly suspect that when you say “computes” you are (deliberately?) conflating the actions of the box and the actions of the observer - and then blaming everything on the box. This is, of course, an invalid approach regarding a discussion of the actions, behaviors, and properties of the box.
One reason I think you’re doing this obviously invalid thing deliberately is because, well, you’re pretty clear about the fact that you’re doing it deliberately. One reason I think you’re at some level aware that it’s invalid is that if you weren’t aware it was invalid, you wouldn’t be averse to applying your argument to the WOW/MOM paper and seeing where the logic takes you.
The gravity example illustrates that there are some attributes of systems that can’t be reproduced by a representative simulation (representative meaning it’s just a bunch of numbers+symbols that transition in the same way as the original IF we interpret them according to some set of rules (e.g. these numbers represent the position, mass, etc. of each particle)).
We don’t know if consciousness is more like a physical attribute like gravity, or if it’s a logical attribute that can be created by performing the right transformations in the right sequence (and interpreting the results as external observers, (e.g. yep we just got sequence 101100101001, that represents consciousness when the input is X and the previous state is Y).
I have, actually, in a response to RaftPeople, I believe.
If that’s the case, then all a modern computer computes are pixel patterns on a screen. But that’s not what we typically believe: rather, we think it computes, for instance, planetary orbits. But I don’t think even you are gonna argue that planetary orbits are among the outputs of any computer. Not to mention the impossibility, in this case, of computing things like square roots, since they are formal objects, not physical properties. Likewise for truth values.
Besides, I have already refuted this possibility: if computation truly were nothing but the physical evolution of the system—which is what you’re saying boils down to—then computationalism is trivial, and collapses onto identity theory.
So if that’s why you made such a stink about how obviously stupid my arguments are, you’re really not bringing the bacon.
Don’t worry, I hadn’t forgotten you. There’s just a limit to how many things I can do at once.
Would you agree that no matter how painful a stubbed toe is to you, there is no pain produced in me, just a surprised reaction at the fact that you just yelled (and that I somehow heard it over the internet)? The body, and the simulation, are mostly-contained systems. Events that happen inside the system aren’t necessarily communicated to things outside the system in the same way they occur in-system. Even if the things outside the system are capable of recognizing and experiencing the events the same way that the in-system things do!
Black holes are a little dangerous, so let’s talk about cakes for a moment. (Mmm, cake.)
If there’s a delicious cake sitting on the table in front of you, you see it - the image of it appears in your mind. But the image of the cake didn’t appear directly in your mind - what happened was that light in the environment impacted the surface of the cake, energizing the molecules; the molecules then emitted altered light in an effort to get back to a lower energy state. The altered light is emitted out in all directions and impacts lots of things, including your eye. The various translucent physical parts of your eye bounce the light around in a controlled way until it hits the cells of your retina, which react photochemically. These cells are adjacent to other cells that are triggered electrochemically, which are adjacent to other cells which are triggered electrochemically, which are adjacent to other cells - okay, there are a lot of cells involved. Eventually this telephone game of electrochemical signals is handed off into your brain, where the tiny bug-eyed alien driving you sees the image of the cake on its display monitor.
So yeah, there’s a whole lot of photonic, physical, and chemical signals being sent every which way to send you the image of that cake. It’s not just as simple as seeing it.
Now consider a simulated cake. Let’s pretend for a moment that it’s a really good simulation, that’s simulating things at the level of physics. So anyway, some numbers are generated somewhere that simulate photons flying through the air. The simulator process decides that these photons should be impacting the simulated cake’s surface, removes the photon, and adjusts the energy value of the molecules accordingly. Then on a later pass the simulator decides that the energized molecules would rather not be energized, and reduces their energy variable while generating altered photons flying in various calculated directions. And then these altered photons are calculated as flying through the air, and then - don’t impact your eye because you’re not in the simulator, you’re in the real world.
So you can’t see the cake; you can only see that a bunch of numbers are produced that happen to map to the same set of transitions that the measured particles of light would go through if we could measure them.
But wait! You have a set of VR goggles! And the simulation is conveniently designed to output its numbers to those goggles, causing their surfaces to electrochemically emit altered light in the direction of your eyeballs, and, yay! You can see it! You can see the simulated cake! And you’re really seeing it, too - the chemical processes in your eyes were genuinely triggered in the same way that they were when they viewed the real cake! You are genuinely seeing the simulated cake - to exactly the same degree that you saw the real one. The physical processes you used were the same.
You then put on your super-sophisticated VR gloves and reach out and touch the simulated cake, feeling its simulated moistness the same way you would feel real moistness, with the same physical processes in your body being triggered. You are really feeling the simulated cake!
So then you lift it to your mouth, and - oh, boo. You don’t have a sophisticated VR tongue wrap. You can’t taste the cake! The cake is a lie!
Except that the only reason you can’t taste the simulated cake is because your sophisticated tongue wrap is in the shop. If it wasn’t then you really could taste the cake, through a series of complicated physical, chemical, and electrical interactions comparable to how you experience real cake.
So there’s your answer - the only reason you can’t feel the black hole is because nobody’s created a device to communicate the gravity numbers within the simulation system in a way that our biological systems can understand. (And you know that about ten minutes after they do design such a device, we’re all gonna die.)
TL;DR: The only reason that you think you experience reality directly is because your brain and body (and to some degree all of outside reality) aren’t bothering to inform you of all the complicated processes that exist to transmit that information to you. You can’t experience simulations directly because no such processes exist to transfer the information - unless they do.
I will readily concede that I haven’t been reading your discussion with RaftPeople, because it’s exhausting, technical, and (in my eyes) ultimately beside the point. Your argument collapses long before the technical details come into play, because your argument is erroneously trying to blame the box for the activity of the observer.
But that’s okay. If the definition is so secret (or complicated) that you can’t repeat it for me, I’ll just assume that it’s something to do with you screwing up and trying to claim that your observations of the black box somehow impact what’s happening inside it.
All that reality “computes” is particles moving around. The typical notion that there are such a thing as “planets” is mistaken; that’s just a coincidental arrangement of particles that may or may not be moving in the same direction (I haven’t checked).
Until you explain to me how your argument doesn’t work on the WOW/MOM paper, I’ve not only brought the bacon, but set it in front of you with a bit of parsley garnish on the side. You just haven’t decided to bite.
I appreciate that you provided a lot of detail in your answer, but you answered a different question, which makes it a little tougher to just move the conversation along incrementally.
If I interpret your answer correctly, it sounds like the following is accurate from your perspective, please confirm:
1 - The simulation does not create the same gravitational effects that a real black hole does in our world
2 - Within the simulation, you would argue that the attributes of the simulation (numbers, symbols, whatever) generate effects, relative to the simulation system, that are just as real as gravity in the real world
The simulation does create the same gravitational effects that a real black hole does in our world - it’s just that the only things that experience those effects are other things in the simulation. Which sounds a lot like what you said, but the discrepancy I’m seeing is that you’re giving reality some unjustified slack that you’re denying the simulation. Reality is nothing more than “numbers, symbols, whatever” when you break it down and examine it at the level of what the particles are doing to one another. Particles in reality affect each other; simulated particles in the simulation affect one another. The ‘action’ of affecting is equally real in either case - something is really and truly responding to something in both cases. The only difference is that as entities that are outside of the simulation and thus not subject to the rules (read: laws of physics) of the simulation, we’re not affected the same way as something in-system is.
ISTM that you’re just restating exactly the same question that I already answered in post #194. Again, if you regard computation as a process, then Turing provided a very good description of the fundamental nature and limits of that process. If you regard it solely in terms of the results it produces, then it can be viewed as the mapping of a set of input symbols to a set of output symbols, which can be used to characterize any computation. They are different questions. For purposes of this discussion, HMHW regards his hypothetical box as a computing device based on the latter criterion, specifically disclaiming the relevance of anything going on inside it, and I have no quarrel with that.
Come on, yourself. I made it very clear in posts #179, #187, and #196 that you either don’t understand the computational theory of mind or have misrepresented it in order to support your failed and unsupportable hypothesis about the subjectivity of computation. Specifically, you made it clear that your position is that the mind cannot be computational because if it were, it would itself require an interpreter and that would lead to an infinite regress (the homunculus fallacy). You even went so far as to claim that CTM theory had been “dismantled” – by none other than Putnam himself!
You then desperately started backpedaling when I showed that, far from having been “dismantled”, CTM plays a vital central role in cognitive science. You seemingly tried to obfuscate your position by trying to characterize it as some nebulous allusion to “computational modeling”; but as I showed in the quote in #196, CTM is absolutely not a metaphor and holds that mental processes are literally computations, and indeed Fodor laid out a detailed theory of exactly how those computations are carried out.
It’s very difficult to argue with someone who either cannot admit he was wrong, or who appears to have difficulty with comprehension of plain English.
You’re right, and I wanted to acknowledge this. I was nonetheless elucidating an important concept but my introduction of the term “algorithm” here was not useful. This sort of thing sometimes happens when I type more quickly than I think, and it popped into mind because algorithms are frequently thought of as specifications for solving problems. The real point I was getting at is that the only distinction between your functions f and f’ is a class-of-problem distinction and not a computational distinction, because the computation is invariant regardless of whether it is observed by someone interested in solving f class of problems, someone interested in solving f’ class of problems, or a squirrel.
Your “tour-de-force” was a tour-de-fizzle. It astounds me that you think this proves anything different than your original box with lights and switches; that is to say, it astounds me that you think it proves anything at all. It’s exactly the same, and as such, your latest version of the same fallacy is subject to a trivial deconstruction, as follows. There is a useful kind of logic gate that outputs a high voltage (say) if and only if both of its inputs are also high (otherwise different voltage inputs result in a low-voltage output). There is another useful kind that outputs a high voltage if either of its inputs is high (so therefore a low-voltage output if and only if both inputs are low). If you choose “high voltage” to mean “1” or “TRUE”, then you get to label the first kind an “AND” gate and the second kind an “OR” gate. If you choose the reverse interpretation, the labels are reversed, that’s all. For me to build a useful computer, I need both kinds, and all that is required is a consistent interpretation, but the interpretation itself is completely arbitrary. You have fallen prey to another class-of-problem fallacy. Engineers in some foreign country with a reversed interpretation of the Boolean values of voltage levels could still order exactly the same logic gates and build exactly equivalent computers; they would just be annoyed at having to change all the labels!
You did respond, but it had phrases like “if your…” which sounded like you were exploring hypothetical angles a person might consider, but I’m trying to clearly understand what you are considering a computation.
For example, I was thinking it would be possible to answer the following questions with just a yes or no regarding computation:
1 - A lookup table that maps 0110011 to 0100010 (from your example) - is this an example of a computation? Yes or No
2 - A simple circuit that can only perform the mapping it has been built to perform, and it happens to map 0110011 to 0100010 - is this an example of a computation? Yes or No
3 - My laptop computer that is running a program that maps 0110011 to 0100010 - is this an example of a computation? Yes or No
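For concreteness, here’s a hedged sketch of the three cases in Python; the AND-mask “circuit” is purely illustrative, just one of many devices consistent with the single input-output pair given:

# 1 - a lookup table
TABLE = {'0110011': '0100010'}

# 2 - a fixed "circuit": bitwise AND with a hard-wired mask (illustrative only)
MASK = '1101110'
def circuit(bits):
    return ''.join('1' if a == '1' and m == '1' else '0' for a, m in zip(bits, MASK))

# 3 - a program on a general-purpose machine computing the same mapping
def program(bits):
    return format(int(bits, 2) & int(MASK, 2), '07b')

assert TABLE['0110011'] == circuit('0110011') == program('0110011') == '0100010'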