DEVS is one of my favorite sci-fi series in recent years. Spoilers ahead if you haven’t seen it yet.
The premise is that they’ve been able to create a perfect simulation of reality such that you can look into the past or future with crystal clarity. The drama comes from a seemingly inevitable event which cannot be changed despite foreknowledge of it.
One thing that has never sat well with me is the idea that, despite knowing your own future actions, you cannot change them.
The key characters, who have seen the future, are resolved to let events play out as ‘scripted’, according to what they’ve already seen.
The Problem with Causal Determinism Being Foretold
The idea that we live in a kind of clockwork universe is not new. DEVS suggests that every event from the Big Bang onwards is a systematic process: if you knew the configuration of atoms at the starting point, you could model every subsequent event up to the present day and beyond.
There are also many time-travel films, such as Tenet, where future events are immutable. The same goes all the way back to the story of Oedipus, where knowledge of the future does nothing to change the outcome.
What’s different about DEVS is the minute-to-minute precision in understanding the future. It leads to the following problem:
What if I set the machine to a minute in the future and focus it on myself? In front of me, I place a red pen and a green pen, knowing that I plan to pick one of them up. If I see myself pick up the red pen, why can’t I pick up the green pen?
In fact, breaking the premise of a deterministic future is what drives the climax of the show, and it also suggests that perhaps micro-events still have no impact on the greater macro-events.
Maybe, as thinkers like Sam Harris would argue, there is no such thing as free will in this story, in which case the outcome of my thought experiment above would be: despite wanting to pick up the green pen, I still pick up the red pen involuntarily.
But I find this unsatisfying, so given the premise of DEVS, here’s how I think the machine would actually work.
Fixing the Machine: Probabilistic Futures
If you believe in both Causal Determinism and Free Will, there’s only one way the machine could work.
Whenever the machine predicts an outcome involving actors who have foreknowledge of future events, the results can only be probabilistic.
In other words, we’ve gone a little Minority Report here in that the machine could only show you the highest probability outcome. Or it could show you various alternatives.
So maybe in my previous thought experiment, it shows me picking up the red pen with a 50% probability score.
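To make that concrete, here’s a minimal Python sketch of my own (nothing like this appears in the show): treat a self-referential forecast as a fixed-point problem, where a prediction is only coherent if showing it to the subject doesn’t change what they do. The `contrarian_agent` here is a hypothetical viewer who always defies whatever they’re shown.

```python
def contrarian_agent(shown_pen: str) -> str:
    """A hypothetical viewer who always defies the machine's forecast."""
    return "green" if shown_pen == "red" else "red"

def deterministic_forecast(agent) -> str | None:
    """Search for a prediction that survives being shown to the agent."""
    for prediction in ("red", "green"):
        if agent(prediction) == prediction:
            return prediction  # a stable, self-fulfilling forecast
    return None  # no deterministic forecast exists

print(deterministic_forecast(contrarian_agent))  # -> None
```

With no fixed point available, a 50/50 report is the only answer the machine can give without contradicting itself.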
This fixes a lot, but there are still some problems.
A Temporal Trolley Paradox
Let’s try a different thought experiment, using the Probabilistic Future Machine I’ve described above.
What if I focus the machine one hour into the future, on a street corner that’s a few minutes away from me? Without my intervention, an old lady will step off the curb and get hit by a car. What does the machine show me?
At first glance, you’d think I’d see the old lady getting hit by the car. So naturally, I’d head out, with plenty of time to spare, and prevent it from happening.
But then, given there’s a 0% chance I wouldn’t save the old lady, my intervention should be contained within the future simulation, in which case I’d see myself saving her from getting hit by a car.
Here’s the paradox: if all I’ve seen is myself tackling some old lady on a street corner, how would I know I was actually saving her from getting hit by a car?
To exaggerate the paradox, let’s say there’s a 0% chance I would leave the house if I think it will lead to me tackling some old lady, thus ensuring that she does actually get hit by the car.
So maybe the system should show me either result with a 50% probability. However, this is still wrong, because whatever I see, the very act of seeing it makes the opposite outcome 100% likely.
The perfect system would show me the unlikely outcome along with its low probability: in this case, the old lady being hit by the car. That would lead me to discover the higher-probability outcome, me saving the old lady.
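The same fixed-point framing shows why a certainty-machine is incoherent here, and why the low-probability display works. Again, this is just my own illustrative sketch: the `viewer_reaction` function, the ‘hit’/‘save’ labels, and the probability number are all made up for the example.

```python
def viewer_reaction(shown: str) -> str:
    """My reaction to each possible forecast, per the story above."""
    if shown == "hit":
        return "save"  # seeing the accident, I go out and prevent it
    return "hit"       # all I'd see is me tackling a stranger, so I stay home

# Neither outcome survives being shown: each forecast induces its opposite.
for forecast in ("hit", "save"):
    print(f"machine shows {forecast!r} -> reality becomes {viewer_reaction(forecast)!r}")

# The fix proposed above: show the *unlikely* branch, labelled as unlikely.
# The viewer infers the likelier branch and enacts it, so for once the
# display and reality agree.
shown, probability = "hit", 0.05  # hypothetical numbers for illustration
print(f"shown {shown!r} at {probability:.0%} -> viewer enacts {viewer_reaction(shown)!r}")
```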
Thanks for indulging me on this conceptual itch I had to scratch!