A full understanding of quantum laws would do nothing to pierce the murky darkness beyond the uncertainty principle. It’s not that our abilities or understanding are limited, it’s that the “data” itself doesn’t exist.
You can’t truly “copy” anything, because its fundamental properties are shrouded in impenetrable mystery. At the most basic level, matter cannot be copied because its current state cannot be fully known, thanks to the uncertainty principle.
I’m an atheist, but I like to illustrate the point with “Not even God could know it.”
Specifically: “The present location, spin, energy level, etc. of all of the quarks that make up your body” is wholly and fundamentally unknowable, by any means or entity, even God.
We don’t know that yet. Potentially, the absolute states of quarks are unknowable in our reference frame, but clearly defined in a different one, like a higher dimension or whatever.
But more importantly, my argument presupposed that we had such a Godlike ability. The means by which this data was retrieved was explicitly hand-waved away. And further, I said that if that bothered you, I was fine to use statistical modeling and random number generation rather than trying to scan for and record specifics at such a low level. I’m just as happy to say that we scan for atoms and auto-generate probability matrices to represent probability clouds based on our knowledge of the underlying physical elements within an atom of that type.
Assuming that we had all the math to work out all the physical laws at the base level, a computer would be able to do that math and to store variables that represented either the entities themselves or, at least, the probability intervals for the attributes. Ergo, the world is representable as information and the world could be simulated in a computer, including life.
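To make that concrete, here’s a toy sketch (my own illustration, nothing rigorous) of what “store the probability cloud instead of an exact value” could look like as data; the AtomRecord class and the Gaussian stand-in for an orbital are hypothetical placeholders, not real physics:

[code]
import random

# Toy sketch only: an "atom" record stores what we can pin down as plain
# numbers, and represents the electron's position as a probability
# distribution to be sampled, rather than as a stored exact value.
class AtomRecord:
    def __init__(self, element, nucleus_xyz, cloud_radius):
        self.element = element            # e.g. "H"
        self.nucleus = nucleus_xyz        # definite-enough value: (x, y, z)
        self.cloud_radius = cloud_radius  # parameter of the probability cloud

    def sample_electron_offset(self):
        # Stand-in distribution: a Gaussian "cloud" around the nucleus.
        return tuple(random.gauss(0.0, self.cloud_radius) for _ in range(3))

a = AtomRecord("H", (0.0, 0.0, 0.0), cloud_radius=5.3e-11)  # ~Bohr radius, metres
print(a.sample_electron_offset())  # a different plausible position every call
[/code]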
While I agree completely with your (OP’s) assertions about physics and supposed gods, I’m not seeing a question here.
IMO …
I’d also question whether the UP, even as conceived by the OP, is in fact an absolute bar to copying anything. IOW, we could, in principle, build a Star Trek replicator. Why do I say that? Because the copy (probably) doesn’t need to be perfect in the classical sense.
Imagine a replicator whose fidelity was only to the level of 1) type of atom (C14, Be4, whatever), ionization level = electron count, and location of nucleus in 3-space +/- around half the electron shell diameter.
So we turn it on, the magic happens, and we pop all these various atoms into position on our 3D grid. When we switch off the magic, they all jump around a little to where the various atomic & electric forces push them. A few crystal lattice defects might form that weren’t present in the original, but a few that were in the original might also close up. And that’s about it. We’re done, we have a workable replica of whatever.
For any material (other than maybe nano machines and nano scale circuitry), that’ll be just fine. It sure wouldn’t harm the utility of bulk materials, nor even non-living biologics like meat or veggies. Whether a copy of a living thing would also be alive and undisturbed is an interesting philosophical question, but my bet is yes.
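To put a hypothetical shape on that “pattern”: something like one record per atom, carrying only the attributes listed above and nothing quantum. The AtomEntry structure, field names, and numbers here are my own placeholders:

[code]
# Hypothetical sketch of the low-fidelity "pattern" described above: one entry
# per atom, recording only isotope, net charge (ionization level), and nucleus
# position to within roughly half an electron-shell diameter.
from dataclasses import dataclass

@dataclass
class AtomEntry:
    isotope: str      # e.g. "C-14"
    charge: int       # ionization level, i.e. protons minus electrons
    x_nm: float       # nucleus position in nanometres, +/- ~0.05 nm tolerance
    y_nm: float
    z_nm: float

pattern = [
    AtomEntry("C-12", 0, 0.000, 0.000, 0.000),
    AtomEntry("C-12", 0, 0.154, 0.000, 0.000),  # ~one C-C bond length away
    # ... billions more entries for a macroscopic object
]
[/code]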
Bottom line: the spins and other QM attributes of the individual atoms just don’t matter.
All IMO of course, but not without some basis in logic and decidedly amateur physics / chemistry.
Keep in mind that QM is just the latest theory we have to represent reality. It works remarkably well, but it doesn’t fully explain the universe.
Do not confuse the representation with reality. And QM is almost certainly wrong. Or rather, incomplete. And it probably makes no sense, for instance, to talk about the “exact location” of an elementary particle as if it were a dot somewhere in space. On a fundamental level, we don’t know what it is.
There is no reason a computer cannot be programmed to simulate a system down to the quantum physics level. We actually do it already on a limited scale (for instance, computational chemists regularly use programs, like the wretched virus that is Gaussian, to simulate the quantum behaviour of all the outer electrons across a whole range of energy states of atoms, and to calculate the energy minima and configurations of molecules). This could be extrapolated to a perfectly good simulation of the entire world, given enough time and compute power. However:
There is no reason that the simulation will evolve with the same answers as any real system you are trying to represent. Even if (by some miracle) you start with the same initial observable states, it will diverge because of one of the following (a toy illustration follows the list):
[ol]
[li]Every quantum event requires that the simulation go down one path or another, and so your simulation only represents one of an astronomical number of possible worlds.[/li]
[li]Every quantum event is truly stochastic, and the best you can do is use a perfect random number generator to run your simulation, and clearly it will diverge from the real world.[/li]
[li]There is hidden state within the quantum system, your simulation can model this, but it doesn’t know what the initial conditions were, and thus the simulation will diverge from reality.[/li][/ol]
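Here is the toy illustration promised above, aimed at point 2: the same rules run twice with different random seeds diverge event by event from the very first step. The run_world function is a stand-in of my own, not a real physics simulation:

[code]
import random

# Toy illustration: two runs under identical rules (a 50/50 outcome per
# "quantum event"), differing only in the random seed. The event-by-event
# histories diverge immediately, even though the physics is the same.
def run_world(seed, n_events=10):
    rng = random.Random(seed)
    return [rng.choice("UD") for _ in range(n_events)]  # U/D = spin up/down, say

print(run_world(seed=1))   # one possible history
print(run_world(seed=2))   # same rules, different history
[/code]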
To anyone saying that the precise values actually exist, and it’s just impossible to determine what they are, you need to read up on Bell’s inequality. To summarize, if you start with a set of what seems like very reasonable assumptions, then in certain sorts of experiments, the results of your measurements will always obey a certain mathematical relationship. Quantum mechanics predicts results which violate that relationship, and experiments have shown that the quantum prediction is the correct one. Which means that at least one of those assumptions, despite looking reasonable, is wrong.
Now, there are multiple ways around this. If you insist on your theory having hidden variables, you can make that work, but it’s always at the cost of something else, like locality. But you definitely have to give up something about your preconceived notions.
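For anyone who wants to see the numbers, here’s a quick back-of-envelope check using the standard singlet correlation E(a, b) = −cos(a − b) and the usual angle choices (textbook values, nothing of my own devising):

[code]
import math

# CHSH back-of-envelope: a local hidden-variable theory obeys |S| <= 2,
# while QM for a singlet pair gives E(a, b) = -cos(a - b), which at the
# standard angle choices yields |S| = 2*sqrt(2).
def E(a, b):
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))        # ~2.828, i.e. 2*sqrt(2)
print(abs(S) > 2)    # True: violates the classical bound
[/code]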
Not sure if that was aimed (partially) at me, but yes, I fully accept Bell’s inequality and the rest. My point is simply that for ordinary macro scale matter we can ignore the unknowably uncertain factors and all that they imply.
It’s in effect an exotic sort of noise in the macro signal. E.g. which molecules next spontaneously decay in our original will be different from which do so in our QM-ignorant copy. But both samples will decay at the same macro rate and that’s all that matters.
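A toy version of that point: two samples decaying under the same 1%-per-step rule but with different random draws. Which atoms decay differs between the “original” and the “copy”; the macro curve doesn’t, beyond statistical noise. The numbers are arbitrary:

[code]
import random

# Two samples of 100,000 atoms, each atom having a 1% chance of decaying per
# time step. The individual decays differ run to run, but the surviving counts
# follow the same exponential curve to within statistical noise.
def decay(n_atoms, p, steps, seed):
    rng = random.Random(seed)
    alive = n_atoms
    counts = []
    for _ in range(steps):
        alive -= sum(1 for _ in range(alive) if rng.random() < p)
        counts.append(alive)
    return counts

original = decay(100_000, 0.01, 5, seed=1)
copy     = decay(100_000, 0.01, 5, seed=2)
print(original)  # differs atom-by-atom from the copy...
print(copy)      # ...but tracks the same macro decay rate
[/code]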
I’ll go one further and say that QM doesn’t explain anything at all; it models the behaviors we observe in the interactions of fundamental particles but without any explanation of “why” they operate that way or what that says about the fundamental nature of the universe even as it relates to specific conditions, e.g. global or local realism, causality, determinism, et cetera. There are interpretations of quantum mechanics such as the Copenhagen and other purely stochastic notions, many worlds/relative state, pilot wave and time symmetric interpretations, information-based interpretations, consistent histories, and the minimalist ensemble interpretation, among many others, but since none make testable postulates they don’t even rise to the status of a hypothesis.
Sage Rat appears to be expressing an information-based hypothesis, e.g. that the universe is composed of “information”, albeit information that isn’t accessible to us because of what we observe as fundamental indeterminacy of the complete quantum state as identified by Heisenberg. However, this may just mean that there are “hidden variables” that determine the state that we can’t observe, resulting in us observing an apparent randomness of states within a defined distribution. If such an interpretation is correct (and there are appealing reasons to believe this to be the case) then it has been demonstrated that these hidden variables could not be observed from a purely local standpoint, e.g. with strict causality; instead, we would need to be able to grasp an entire system of quantum interactions (potentially, the entire universe) as a gestalt. This is a perfectly valid interpretation, albeit not necessarily more accurate or useful than any other interpretation except from a conceptual standpoint.
The only thing we can say about quantum mechanics, aside from the fact that it is certainly incomplete, is that it definitely shows us that all of the preceding theories about the mechanics of the universe, including relativity, are wrong at a fundamental level, and that the “real physics” of the universe has to date been best interpreted as a quantum field theory. However, even QFTs are probably not really the complete description of fundamental behavior, and in fact may even be a trivial simplification of some more complex set of rules.
My Physics 101 prof used to say “I don’t know how my computer works, but I know how to work my computer. Physics is the same deal. We don’t know how the universe works, but we have discovered ways to think about how it might work that allow us to get things done.”
Just to be clear, I did not intend “explain” to mean “discover why”. I simply meant “discover how”. Science is not in the business of discovering why.
Basically we’re a bunch of apes that figured out we can use this thing called “math” to better understand how to crack open the nut and, in particular, make predictions about how to crack open other nuts in the future. It’s easy to forget that the mathematical model of reality is just a model, not the reality itself. We talk about “elementary particles”, but that does not mean they actually are elementary or particles.
This is only true if you assume one of the stochastic interpretations of QM. Otherwise, the indeterminacy is an artifact of our inability to inspect non-local hidden variables, or of other limitations in measurement.
I wouldn’t say that I’m asserting that. I lean towards it, but I don’t believe that it is necessary for one to represent the universe in data.
Let’s accept, for example, that a dice roll is random. I.e., no “turtles all the way down” to quantum physics. A dice roll just is random, full stop.
In this universe, where a dice roll is random, I wouldn’t say that there’s a hidden variable, nor would I state that there is a known result of a dice roll. But at the same time, I can still write information which gives the rules for how one could play out a battle in a D&D game, and I could also play out that battle on a computer. But whereas I can represent some things as hard values, when it comes time for the dice to be rolled I have to represent things as a range and a function acting on that range. For example, the range might be 1…20 and the function would be randIntInRange(). The name of the function (or the code for the function) is representable as information, and the range is representable as information.
There is no hidden variable and unknowable outcomes are still maintained. The computer will roll differently than the real world dice, but I am representing the same rules and the same information as the real world (in the context of D&D).
My assertion is that regardless of whether the universe incorporates truly random information and unpredictable outcomes, it can still be represented (in a computer’s RAM) as information. Just, instead of storing numbers, you would store ranges and formulas and so on.
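In rough Python terms (randIntInRange() was just a made-up name; random.randint does the same job here):

[code]
import random

# Some facts about the game state are stored as hard values; the dice roll is
# stored as a range plus a rule for producing an outcome from that range.
attack_bonus = 5                      # hard value: representable directly
armor_class = 16                      # hard value
d20 = (1, 20, lambda lo, hi: random.randint(lo, hi))  # range + function

lo, hi, roll = d20
result = roll(lo, hi) + attack_bonus
print("hit" if result >= armor_class else "miss")
[/code]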
The Star Trek replicator has three major problems stopping this.
1. The amount of energy needed is on the scale of a small star’s energy output.
2. The uncertainty principle problem.
3. Computing power today is way, way too crude.
LSLGuy, I spent the past week reading up on replicators, and it’s sad to say they will not work.
What may work is 3D nano printing.
A replicator is a notional device from science fiction (in particular, Star Trek.)
Well, 3D nano printing is a conceptual extension of the current fad of 3D printing - it makes the idea of “assembling large scale objects from globs/drops of smaller stuff” easier for the layman to understand – the layman has heard of a 3D printer, has probably seen products produced by one, and has seen enough media coverage to know what it is and have a sense of how it operates. Tacking “nano” onto the buzzword tells the layman that they’re trying to take the technology from the scale of drops of epoxy resin to “things that exist as macrostructures on the nanoscale” – they want a molecular lego set that assembles products layer-by-layer. [this is a big change from the old product advertising of molecular construction as “tinkertoys” used to create complex 3D structures - mainly as a result of people’s familiarity with the 3D stick models of molecules they’ve seen in college labs or on TV or in movies.]
so as far as the difference between the two notional construction/fabrication approaches:
3D printing implies that whatever you’re building is molecularly indistinct. e.g. inkjet printers create images using drops of ink. basic printers use 1 color (black). modern printers use 4 colors. but at the very baseline, it’s drops of ink on paper. if you can’t express your idea as drops of ink on paper, then it can’t be printed. likewise, 3D printing builds objects from layers of epoxy resin. if your idea can’t be expressed that way, it can’t be built (which is why there was a question whether or not working firearms (guns) could be manufactured using 3D printing techniques - it was unclear whether the solid resin could stand the pressures and temperatures involved with working firearms, and whether existing weapons designs could be modified to accommodate that). this is why you’ll see car body parts, like body panels and plastic bits, made from 3D printed parts, but never the car frame or windows or wheels/rotors - parts that are better expressed in metal and glass. so the limit of 3D nano printing would be the homogeneity of the media it uses to build things, and the basic layer-by-layer technique by which that substrate is used to realize 3D structures. as a result, “nano” simply implies that final products can exist at the nano-scale, the micro-scale or the macro-scale.
in contrast, the replicator is conceptualized to take a “pattern” and a huge amount of pure energy, then use E=mc^2 to convert that energy into matter. It uses the “pattern” to determine how to assemble the matter that condenses from that pure energy into whatever is desired as output.
so at least conceptually, you should be able to condense whatever atoms you need to build whatever molecules you need to build whatever compounds you need to assemble whatever macroscopic object you need provided you have the “pattern” (or “recipe”) for putting it together. Thus, the input homogeneity constraint is limited to energy, and the assembly technique is not layer-by-layer but atom-by-atom.
there are serious theoretical holes with the concept though. one issue is conservation of energy - in order to create something having mass M, you need to turn mass M+m into energy first. then there are the 2nd law issues. Obviously, you’re taking mass in one energy state and trying to convert it into mass having a more organized energy state, and that entropy has to be accounted for somewhere and paid for with an even greater amount of energy (putting the process well under 25% in terms of energy efficiency), otherwise the overall process simply cannot exist.
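just to size the energy objection, here is order-of-magnitude arithmetic only, ignoring the entropy overhead mentioned above:

[code]
# Creating 1 kg of matter from energy requires at least E = m * c^2,
# before any conversion inefficiency is accounted for.
c = 2.998e8                 # speed of light, m/s
m = 1.0                     # kg of replicated stuff
E = m * c ** 2
print(f"{E:.2e} J")         # ~9.0e16 J, roughly what a 1 GW power plant
                            # produces running flat-out for about three years
[/code]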