I’m a programmer, and all these seemingly philosophical questions – “does stuff exist when nobody looks at it?” and “do particles obtain real properties only when we measure them?” – really, I mean REALLY remind me of how 3D game engines work. In a multiplayer 3D game you have a server which stores the game world data and rules. These are essentially just numeric values for properties of game stuff. The server DOES NOT render anything, and it doesn’t even process physics rules in game regions where nobody is logged in. Still, it might run some game logic to ensure that certain character properties change even when nobody is looking at them, e.g. aging of game characters, earning money from deposits in game banks etc. When somebody logs in, the fun begins. On the player’s computer a client program gets launched. It downloads deterministic data from the server (“this house should be here because it was there last time, and we want a consistent game, not some crazy world, right?”) and resources (textures, 3D models etc.). But if we had powerful enough computers, the client could theoretically generate everything from some “game stuff particles” or from mathematical functions. Actually, there have been games with so-called procedural graphics, but they were a pain in the ass to create, so mainstream game engines prefer to do it the old way with pre-created 3D assets. So, I’ve logged in and I’m in the game world now. Does all the stuff in the world have to be created and rendered for me? Ohh, that would be such a heavy task for my computer to handle. It’s much easier to cheat and render only the stuff I’m looking at. Don’t get me wrong – the stuff I don’t see does not disappear; it still exists on the informational level on the server, it just isn’t rendered for the observer. Actually, in some games, if you move through the game world too fast, you might notice that some stuff just pops up right in front of your eyes – houses, trees.
That’s because of lazy-loading issues – the stuff gets loaded only at the last moment, when you request it, and sometimes that’s too late. Blame the game engine designers for that. Or maybe you need a faster computer. Now let’s get deeper into this. There is one more exciting thing. Game stuff has multiple levels – so-called levels-of-detail. It reminds me of the real world with its molecules, atoms and subatomic particles. If you are far from some guy in the game, a simplified model of the guy is rendered for you to conserve computer resources. For example, the model might be such that the guy has no nose and no lips, but you don’t care – you are far from him, and to you the guy seems real and OK. When you come closer, your application thinks: “Oh, he’s dangerously close. I should load the next level-of-detail now – the one with a nose and lips and brows and nice smooth skin.” Does this remind you of anything quantum? No? Then let’s pretend that you can somehow cheat the game and get really close to the very last level-of-detail, closer than the developers expected. You look at someone’s skin. It’s blurry and uncertain, like an interference pattern. Yeah, the difference from the quantum world is that in a 3D game the skin texture still has predefined pixels, even though they are blurred. But let’s pretend we have a game which creates stuff from particles instead of pre-created models, and you’ve reached the last level-of-detail through some cheating (a hacked in-game microscope, maybe?). The game would be very reluctant to render any real properties – it would still try to stay uncertain and blurry to conserve computing resources, and only if you did some tricky experiment which could potentially put the game in danger of inconsistency would the game engine give up and render the requested level-of-detail for you. Essentially, here’s how a “quantum game engine” would work with its levels-of-detail.
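By the way, the level-of-detail trick is simple enough to sketch in a few lines. This is just an illustrative toy (the distance thresholds and model descriptions are made up, not from any real engine): the engine pays for detail only when the observer is close enough to notice.

```python
# Toy sketch of distance-based level-of-detail (LOD) selection.
# Thresholds and model descriptions are illustrative only.

LOD_LEVELS = [
    (10.0, "full detail: nose, lips, brows, smooth skin"),
    (50.0, "medium detail: basic face, no fine features"),
    (float("inf"), "low detail: rough silhouette only"),
]

def select_lod(distance_to_observer: float) -> str:
    """Return the cheapest model that still looks fine at this distance."""
    for max_distance, model in LOD_LEVELS:
        if distance_to_observer <= max_distance:
            return model
    return LOD_LEVELS[-1][1]

print(select_lod(200.0))  # far away: rough silhouette is enough
print(select_lod(5.0))    # dangerously close: render the nose and lips
```

The point is that nothing “more detailed” exists for the distant guy at all – the detailed model is only ever loaded when somebody gets close enough to ask for it.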
“Oh, man, you were not supposed to see the atoms of this guy’s skin. I don’t even know where the electron you asked for is located exactly. Hmm, OK, I’ll launch my random generator and adjust some probabilities to make it seem more real, but I won’t do in-depth calculations… here you go, have an electron and shut up. What? You are tearing this electron out of the guy’s nose and trying to run it through slits? Fine. I said, I don’t know where this electron was or how exactly it should behave in this situation, so maybe it will go through that slit… or not. Here you go, an interference pattern. What now? Oh, you put a detector in there… now I have to calculate the exact position of this particle. But don’t even ask for its other properties, I don’t have time for that… Oh no. You’ve got a quantum eraser. But I’m still doing my lazy loading. I won’t calculate properties of a particle if I can do just fine without them. If you think you erased a property before looking at the results, you’re wrong – I never calculated the property, so you actually had nothing to “erase”. Oh, you think you did a measurement because the detector was turned on? Ha! The detector is made of game stuff too, so I don’t have to calculate its actions if you “erase” the results before acknowledging them, got it? And don’t even think about retrocausality – I don’t have time for time paradoxes, and as I said, I didn’t somehow magically change the property back in time; the property just didn’t have any value at that point, but I pretended that it did because you forced me to with your experiments. What about quantum entanglement? I call it lazy loading, or load-on-demand.
When Alice requests a property, I’ll render it, and also render it for Bob to keep the results as consistent as possible, but don’t be surprised if it doesn’t turn out as you’d expect – as I said, I’ll render only the property you asked for and no other properties. I’m trying to be efficient with my calculation resources here. Space? Information moving faster than the speed of light? C’mon, location is just a number; I can change a property of a particle no matter how far away it is. This is how I work at this level.” OK, enough from me, I guess you got the idea – I’m a fan of simulation theory 😀 I’ve heard about Nick Bostrom’s simulation argument and Ed Fredkin’s digital physics, but I’m not sure if they are talking about the same things as I am.
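The “load-on-demand” behaviour that imaginary engine keeps bragging about can be sketched in a few lines too. This is a toy model of the analogy, not of actual quantum mechanics – all names are illustrative: a particle whose properties simply have no value until someone measures them, and whose measured values are cached so that Alice and Bob see consistent results.

```python
import random

class LazyParticle:
    """Properties have no value until someone 'measures' them.
    Once measured, a value is cached so repeated observers agree."""

    def __init__(self):
        self._rendered = {}  # only measured properties live here

    def measure(self, prop: str) -> float:
        if prop not in self._rendered:
            # Nothing was precomputed; "render" a value on demand.
            self._rendered[prop] = random.random()
        return self._rendered[prop]

    def is_rendered(self, prop: str) -> bool:
        return prop in self._rendered

p = LazyParticle()
assert not p.is_rendered("position")   # nothing calculated yet
first = p.measure("position")          # Alice forces a render
assert p.measure("position") == first  # Bob gets a consistent answer
assert not p.is_rendered("spin")       # unasked properties stay uncomputed
```

The asymmetry is the whole joke: the expensive work happens at measurement time, and everything nobody asked about stays an empty slot in a dictionary.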
There is one more thing which is somewhat hard to grasp conceptually at first – and I admit it was hard for me too, until I learned to think about information as primary and everything physical as secondary. So, it is not that your hand disappears when nobody is watching it. You have to take into account not only watching but also any other perception and awareness of your hand or any other physical object. Thus, when you are not watching, it is only the visual part of the information that is no longer being rendered. You still receive all the other informational aspects of your hand – touch, weight, smell. In a “lazy rendering” reality, your hand does not consist of molecules as long as you don’t take a microscope to find a molecule – only at that moment are molecules rendered. If someone robs a bank, the robber will know it, hence the information about the robbery will get fixed into our physical world, and thus the money will be gone from the bank. But let’s say the robber managed to rob a bank and no one knew about it, and then on his way home his car exploded and everything burned down. From that moment, there is no longer any conscious observer in our reality who knew that there was a robbery. Now the simulation is free to do as it wishes – to restore the amount of money in the bank as if the robbery never happened, or to leave things as they are now. Another way to think about it is the way you think about dreams. Consider our reality to be a shared, highly stabilized, but still dream-like one. If you have experienced vivid dreams – or even lucid, controllable dreams – you will know that in a dream things seem to materialize from nowhere as soon as you think about them and go away as soon as you forget them. In a dream you might think about a library and suddenly it will “materialize” around you. You can take a book and open it, and you’ll see how the text in the book is being created on the fly.
As the dreaming consciousness is very confused, most probably the text in the book will change each time you look at it. It’s as if your subconscious “dream machine” is some kind of “probability computer” which computes the probability of you seeing some specific text, based on multiple probability variables – your current emotions, your past experience. And this dream book does not consist of atoms. It’s easy to accept this idea, because we intuitively know that dreams are just an illusion, so it’s logical that only what we perceive exists (and don’t forget the other sensory inputs – touch, hearing, smell, taste) and everything else does not. But imagine that in your dream you have a super-microscope (everything is possible in dreams) which is capable of showing you the atoms of the book. When you use this microscope, your subconscious “dream machine” will be forced to create and show you the atoms of the book, because you really expected to see them there, and that’s how dream mechanics work – it cannot avoid creating information which you expect to receive. So, now you have a bunch of dream atoms in front of your eyes. But does that mean the dream book always really consists of dream atoms? Of course not. It consists of atoms only when you – the only observer of your dream – expect to experience the atoms. The more stable your consciousness is during the dream, the more stable the world will be, but at some point your consciousness will take over, the dream will fade away, and you’ll wake up in the real world, where things are real and consist of real atoms even when nobody is looking at (touching, listening to, smelling) them. But is this world that real? Is it even possible to prove that it’s not real, if it tries to match your (and millions of other people’s) expectations every second, while everything beyond these expectations remains in some virtual probability state?
Still, there is this disturbing question – if our real world is capable of pretending to be so real, then why does it fail to do so at the atomic level? Maybe it’s like that on purpose, to give us a chance to think about such crazy ideas. Or maybe it’s some limitation of the “reality computer”, and it starts to give up at such a small scale. It’s like texture blur and pixelation in old games – everything looks nice from a distance, but if you happen to move really close – closer than the game developer intended – you’ll see the textures getting blurry and pixelated. Maybe atoms are the pixels of the physical part of our reality, but as they are being rendered on the fly (think of procedural content generation in games), they mostly remain in probability states unless explicitly asked to become more real.
The problem with the simulation theory is Pi. How would a finite simulator include an infinite irrational number in it? As long as we do not ask the simulation to retrieve X digits of Pi (where X is larger than the capabilities of the simulator itself), we are good. But to reach that limit we would have to create our own calculator with at least the power of the “reality calculator” itself. I guess it should be impossible to do that – the number of available digits of Pi in the “reality calculator” might be so large that we would need all the physical matter in the Universe to store the previously calculated digits, which means that we ourselves would also go extinct (because we’d have to use our own atoms to store the calculation results), in which case the “reality calculator” could say: “Finally, I can stop calculating that stupid Pi, because the conscious observers who requested it are gone for good. Request for Pi was aborted.”
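Funnily enough, the “lazy Pi” idea isn’t even hypothetical – there really are spigot algorithms that produce digits of Pi one at a time, on demand, without ever storing the infinite tail. A minimal sketch using Gibbons’ unbounded spigot algorithm (the simulator never holds more than a few big integers of state):

```python
from itertools import islice

def pi_digits():
    """Yield decimal digits of Pi one at a time, on demand
    (Gibbons' unbounded spigot algorithm). No digit exists
    until a consumer actually asks for it."""
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y  # render exactly one digit for the observer
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u,
                      j + 1)

print(list(islice(pi_digits(), 10)))  # starts 3, 1, 4, 1, 5, 9, ...
```

Stop iterating and the generator just sits there, frozen mid-calculation – which is pretty much the “Request for Pi was aborted” scenario above.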
Found this comment on YouTube by a user called camelCased really mind-bending. Thought I should share…