1
submitted 9 months ago* (last edited 9 months ago) by TypicalHog@lemm.ee to c/simulationtheory@lemmy.world

What if the universe is simulated and relativistic time dilation is caused by a drop in FPS/TPS in regions with high amounts of mass/energy (perhaps to save on computation)?

You know how time passes more slowly near a black hole? What if that's because the universe is updating/processing stuff more slowly in such regions compared to the emptier areas?

Let's imagine the universe has a framerate. What if that framerate drops significantly near the event horizon? For example, for each update/tick/frame there, many thousands or millions of frames happen in the rest of the universe. If you were near a black hole, the framerate would still feel normal to you, while the rest of the universe would seem to be running at a much, much higher framerate, with stuff there happening super fast from your perspective.

Maybe the framerate drops so much near the singularity/event horizon that stuff that falls in essentially stands still from the perspective of the rest of the universe: the framerate there asymptotically approaches zero and the whole thing grinds to a halt. In other words, the stuff never really reaches the singularity, since it's barely getting updated/processed anymore (I mean, it is, but so rarely that it would take something like an infinite amount of time for it to reach it).
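For reference, general relativity already gives a number for this "local framerate": a clock hovering at distance r from a black hole with Schwarzschild radius r_s ticks sqrt(1 - r_s/r) times for every tick of a far-away clock, and that factor goes to zero right at the horizon. A quick sketch (treating "ticks" loosely, units made up):

```python
import math

def local_tick_rate(r: float, r_s: float) -> float:
    """Ticks experienced locally per tick of a far-away observer, using the
    Schwarzschild time-dilation factor sqrt(1 - r_s / r) for a clock hovering
    at distance r from a black hole with Schwarzschild radius r_s."""
    return math.sqrt(1 - r_s / r)

r_s = 1.0  # horizon radius in arbitrary units
for r in (100.0, 10.0, 2.0, 1.1, 1.01, 1.001):
    print(f"r = {r:7.3f}  local ticks per far-away tick = {local_tick_rate(r, r_s):.4f}")
```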

This is obviously just my fun lil speculation that's probably wrong, but what do you guys think? Does it make sense and if it doesn't, why not?

2

It's worth pointing out that we're increasingly seeing video games render worlds from continuous seed functions that get converted into discrete units in order to track state changes from free agents, like the seed-based generation in Minecraft or No Man's Sky converting mountains into voxel building blocks that can be modified and tracked.
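To make that concrete, here's a minimal Python sketch of the pattern (not the actual Minecraft or No Man's Sky code; the seed function and block names are made up): the world is derived on demand from a continuous, deterministic seed function, quantized into voxels, and only the changes free agents make are stored as discrete overrides.

```python
import math

SEED = 42  # hypothetical world seed

def terrain_height(x: float, z: float) -> float:
    """Continuous, deterministic 'seed function' for terrain height
    (a stand-in for the layered noise a real engine would use)."""
    return 8 * math.sin(0.1 * x + SEED) + 5 * math.cos(0.13 * z - SEED)

# Only deviations from the generated world are stored.
modified_voxels: dict[tuple[int, int, int], str] = {}

def voxel_at(x: int, y: int, z: int) -> str:
    """Discrete view of the world: check overrides first, otherwise
    re-derive the block from the continuous function."""
    if (x, y, z) in modified_voxels:
        return modified_voxels[(x, y, z)]
    return "stone" if y <= terrain_height(x, z) else "air"

def set_voxel(x: int, y: int, z: int, block: str) -> None:
    """A free agent changes the world; only this delta is persisted."""
    modified_voxels[(x, y, z)] = block

# An agent digs a hole; the rest of the world stays purely procedural.
set_voxel(3, 2, 7, "air")
print(voxel_at(3, 2, 7), voxel_at(3, 1, 7))
```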

In theory, a world populated by NPCs whose decision-making is powered by separate generative AI would need to do the same, since the NPCs' behavior couldn't be tracked as something inherent to the procedural world generation.

Which is a good context within which to remember that our own universe, at the lowest level, is made up of parts that behave as if determined by a continuous function until we interact with them, at which point they switch to behaving like discrete units.

And even weirder, we know it isn't a side effect of the interaction itself, because if we erase the persistent information about the interaction with yet another reversing interaction, the behavior switches back from discrete to continuous (like we might expect if there were a memory optimization at work).
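Purely as an analogy for that "memory optimization" reading (a toy model, not a simulation of the actual eraser experiments; the names and the probability mapping are made up), the caching pattern would look something like this:

```python
import math
import random

def continuous_value(t: float) -> float:
    """Underlying continuous behavior (a stand-in wave-like function)."""
    return math.sin(t)

# 'Records' of interactions: a discrete, resolved value is only kept
# around while a persistent record of the interaction exists.
records: dict[float, int] = {}

def observe(t: float) -> int:
    """Interacting collapses the continuous value to a discrete one and
    caches it, keyed by the persistent record of the interaction."""
    if t not in records:
        amplitude = continuous_value(t)
        prob_one = (amplitude + 1) / 2          # toy mapping of [-1, 1] -> [0, 1]
        records[t] = 1 if random.random() < prob_one else 0
    return records[t]

def erase_record(t: float) -> None:
    """'Reversing' the interaction deletes the record, so behavior at t
    falls back to the continuous function -- the memory optimization."""
    records.pop(t, None)

print(observe(0.3), observe(0.3))   # same discrete value while the record exists
erase_record(0.3)
print(continuous_value(0.3))        # continuous behavior again
```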

3

I've been a big fan of Turok's theory since his first paper on a CPT-symmetric universe. The fact that this slight change to the standard model has since explained a number of the big problems in cosmology with such an elegant and straightforward solution (with testable predictions) is really neat. I even suspect that if he's around long enough there will end up being a Nobel in his future for the effort.

The reason it's being posted here is that the model also happens to call to mind the topic of this community, particularly when thinking about the combination of quantum mechanical interpretations with this cosmological picture.

There's only one mirror universe on a cosmological scale in Turok's theory.

But in a number of QM interpretations, such as Everett's many worlds, the transactional interpretation, and the two-state vector formalism, there may be more than one parallel "branch" of a quantized, formal reality in the fine details.

This kind of fits with what we might expect to see if the 'mirror' universe in Turok's model is in fact an original universe being backpropagated into multiple alternative and parallel copies of the original.

Each copy universe would only have one mirror (the original), but would have multiple parallel versions, varying based on fundamental probabilistic outcomes (resolving the wave function to multiple discrete results).

The original would instead have a massive number of approximate copies mirroring it, similar to the very large number of iterations of machine learning to predict an existing data series.
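To make the asymmetry in that analogy concrete (a toy sketch, with a made-up data series standing in for the original): many approximate copies all mirror the single original, while each copy only ever has the one original as its mirror.

```python
import random

# A hypothetical "original" data series playing the role of the original universe.
original = [0.0, 0.5, 0.9, 1.0, 0.7, 0.2, -0.4, -0.9]

def approximate_copy(series: list[float], noise: float = 0.1) -> list[float]:
    """One 'parallel copy': mirrors the original, but its fine details vary
    with probabilistic outcomes (a stand-in for one trained model's approximation)."""
    return [x + random.gauss(0, noise) for x in series]

# From the original's perspective: a massive number of approximate copies mirror it.
copies = [approximate_copy(original) for _ in range(10_000)]

# From any single copy's perspective: exactly one mirror exists -- the original.
print(len(copies), "copies, each mirroring 1 original; first copy:",
      [round(v, 2) for v in copies[0]])
```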

We might also expect that, if this is the case, the math will eventually work out better if our 'mirror' in Turok's model is either not quantized at all or is quantized at a higher fidelity (i.e. we're the blockier Minecraft world compared to it). The quantum side of the picture is one of the holdout aspects of Turok's model, so I'll personally be watching it carefully for any addition of something akin to throwing out quantization for the mirror.

In any case, even simulation implications aside, it should be an interesting read for anyone curious about cosmology.

4

While I'm doubtful that the testable prediction will be validated, it's promising that physicists are looking at spacetime and gravity as separate from quantum mechanics.

Hopefully at some point they'll entertain the idea that, much like how we currently convert continuous geometry into quantized units in order to track interactions with free agents in virtual worlds, the quantum effects we measure in our own world are perhaps secondary side effects of emulating continuous spacetime and matter, and not inherent properties of that foundation.
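As a loose sketch of that analogy (all names and numbers here are made up, and this isn't how any particular engine does it): keep the geometry continuous everywhere, and only quantize it into discrete cells where a free agent actually interacts.

```python
import math

def surface(x: float) -> float:
    """Continuous geometry: a smooth 1D surface height."""
    return math.sin(x) + 0.3 * math.sin(3.7 * x)

CELL = 0.25  # quantization step, used only where agents interact

# Quantized samples exist only where an agent has interacted.
quantized: dict[int, float] = {}

def interact(agent_x: float, radius: float = 1.0) -> None:
    """Quantize the surface into discrete cells around the agent;
    far from any agent the geometry remains purely continuous."""
    lo = int((agent_x - radius) / CELL)
    hi = int((agent_x + radius) / CELL)
    for i in range(lo, hi + 1):
        quantized.setdefault(i, round(surface(i * CELL) / CELL) * CELL)

def height_at(x: float) -> float:
    """Discrete value if the region has been interacted with, else continuous."""
    i = int(x / CELL)
    return quantized.get(i, surface(x))

interact(2.0)
print(height_at(2.1))   # quantized (an agent has been here)
print(height_at(40.0))  # still continuous (no agent nearby)
```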

5
submitted 1 year ago* (last edited 1 year ago) by kromem@lemmy.world to c/simulationtheory@lemmy.world

I'm not a big fan of Vopson or the whole "let's reinvent the laws of physics" approach, but the current direction of his work is certainly on point for this sub.

6

At a certain point, we're really going to have to take a serious look at the direction things are evolving year by year, and reevaluate the nature of our own existence...

7

An interesting bit of history on thinking related to simulation theory, even if it tries to define itself as something separate (ironically, a distinction about why rather than how, which physicists typically avoid).

It's a shame there's such reluctance toward the idea of intention as opposed to happenstance. In particular, the struggles mentioned in the article to reconcile gravitational effects with quantum effects might be aided a great deal by entertaining the notion that the former is a secondary side effect necessary for replicating a happenstance universe operating on the latter.

Perhaps we need more people like Fredkin thinking outside the box.

8

I find this variation of Wigner's friend really thought-provoking, as it's almost like a real-world experimental example of a sync conflict in multiplayer netcode.

Two 'observers' who are disconnected from each other and occasionally record incompatible measurements of the same thing almost seems like universal error correction in resolving quanta isn't being applied more than one layer deep (since something like Bell's paradox occurring within a single 'layer' doesn't end up with incompatible measurements, even though the observers are disconnected from each other).

What I'm currently curious about is how the disagreement would grow as more layers of observation are added. In theory, it should multiply and compound across additional layers, but if we really are in a simulated world, I could also see what would effectively be backpropagation of disconnected quanta observations not actually being resolved, so that we might unexpectedly find disagreement grows linearly with the total number of observers at the nth layer.
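As a purely illustrative toy (every number here is made up: p is an assumed chance that a single observation step introduces a disagreement, and each observer is observed by two more at the next layer), here's how the two growth patterns would diverge:

```python
# Toy comparison of the two growth patterns described above.
p = 0.01      # assumed per-layer chance of introducing a disagreement
branch = 2    # assumed number of observers watching each observer at the next layer

def compounding(layers: int) -> float:
    """Disagreement compounds independently at every layer of observation."""
    return 1 - (1 - p) ** layers

def linear_in_observers(layers: int) -> float:
    """Disagreement scales linearly with the observer count at the final layer."""
    observers_at_final_layer = branch ** layers
    return min(1.0, p * observers_at_final_layer)

for n in range(1, 11):
    print(f"layers={n:2d}  compounding={compounding(n):.4f}  "
          f"linear-in-final-observers={linear_in_observers(n):.4f}")
```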

In any case, even if it ultimately grows multiplicatively, disagreeing observations by independent free agents about dynamically resolved, low-fidelity details is exactly the sort of thing one might expect to find in a simulated world.

9

One of the first papers I've seen by a serious physicist that models their work on the premise of being in a simulation:

It has long been theorized since Euclid’s study on mirrors and optics that as the most fundamental law of physics, all nature does is to minimize certain actions. But how does nature do that? The machine learning and serving algorithms of discrete field theories proposed might provide a clue, when incorporating the basic concept of the simulation hypothesis by Bostrom. The simulation hypothesis states that the physical universe is a computer simulation, and it is being carefully examined by physicists as a possible reality. If the hypothesis is true, then the spacetime is necessarily discrete. So are the field theories in physics. It is then reasonable, at least from a theoretical point of view, to suggest that some machine learning and serving algorithms of discrete field theories are what the discrete universe, i.e., the computer simulation, runs to minimize the actions.

  • (relevant paragraph from the paper)

The central hypothesis of a discrete spacetime is a rather conservative leap, but the looser hypothesis hinted at here (and one that's very much been on my mind recently) is the notion that machine learning is behind aspects of natural processes.
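For a feel of the core idea (this is a minimal toy sketch, not the paper's actual algorithm, which learns a discrete field theory/Lagrangian rather than a raw recurrence), here's a discrete trajectory whose one-step update rule is learned from the observed data by least squares and then "served" to reproduce the dynamics:

```python
import numpy as np

# Generate an "observed" discrete trajectory from a hidden dynamics:
# a discretized harmonic oscillator, x_{n+1} = 2*x_n - x_{n-1} - h^2 * x_n.
h = 0.1
steps = 200
x = np.zeros(steps)
x[0], x[1] = 1.0, np.cos(h)  # arbitrary initial data
for n in range(1, steps - 1):
    x[n + 1] = 2 * x[n] - x[n - 1] - h**2 * x[n]

# "Learn" the discrete update rule x_{n+1} = a*x_n + b*x_{n-1} from the
# observed data by least squares (a stand-in for the machine-learning step).
A = np.column_stack([x[1:-1], x[:-2]])
coeffs, *_ = np.linalg.lstsq(A, x[2:], rcond=None)
a, b = coeffs

# "Serve" the learned theory: predict the trajectory forward from two points.
pred = [x[0], x[1]]
for _ in range(steps - 2):
    pred.append(a * pred[-1] + b * pred[-2])

print("learned a, b:", a, b)              # should be close to (2 - h^2, -1)
print("max prediction error:", np.max(np.abs(np.array(pred) - x)))
```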
