
Free Will vs. Prediction: What Centaur’s 64% Accuracy Says About Human Freedom

  • Writer: themindreviewmagaz
  • Aug 22
  • 7 min read

Free will has always been the kind of topic that philosophers love to argue about and psychologists love to test in the lab. For centuries, people have debated whether our choices are truly our own or just a fancy illusion engineered by neurons, hormones, and hidden environmental nudges. And now, in 2025, we’ve got a new player on the scene: an AI model called Centaur. Trained on over ten million decisions made by sixty thousand people across 160 experiments, Centaur can reportedly predict human choices with about 64% accuracy (Griffiths et al., 2025). That’s not mind-reading, but it’s also not nothing. Imagine a machine that, more often than not, knows whether you’ll pick fries over salad before you even know yourself. It feels a little bit like free will just got subtweeted by data science.


The number 64% is both comforting and unsettling. On the one hand, it means we’re still unpredictable 36% of the time, which suggests there’s wiggle room for spontaneity. On the other hand, it also means that a lot of our so-called “choices” are running on scripts we don’t consciously control. What does that say about freedom? Are we just playing out probabilistic patterns that can be graphed and gamed? Or is that 36% margin where real autonomy lives, the messy human spark that refuses to be reduced to code? Centaur’s success throws us into the heart of a timeless question with a very modern twist: if AI can guess our moves better than our best friends can, how much of our life is really ours to decide?



To understand why Centaur’s achievement is such a philosophical mic-drop, we need to rewind a bit. The idea that human behavior might be predictable isn’t new. Back in the 19th century, Pierre-Simon Laplace imagined a “demon” that, if it knew all the forces and positions of particles in the universe, could predict the future with absolute certainty. That was the OG determinism flex. In psychology, behaviorists like B.F. Skinner leaned hard into the idea that behavior is just stimulus and response: give a pigeon a treat, it pecks the button; no mysteries required (Skinner, 1953). But then along came cognitive psychology and neuroscience, insisting that the mind is not just a reflex machine but a complex, dynamic system full of emergent properties.

Fast-forward to the 21st century, and experiments in neuroscience started hinting that free will might be an illusion. Benjamin Libet’s famous 1980s experiments showed that our brains “decide” to act several hundred milliseconds before we become consciously aware of deciding (Libet et al., 1983). Basically, the body hits play before the mind writes the script.

Centaur doesn’t arrive in a vacuum. It’s joining a long tradition of science trying to poke holes in our sense of autonomy. What makes it different is scale. Unlike Libet’s small lab tests or Skinner’s pigeons, Centaur was trained on a staggering amount of data: ten million decisions spanning 160 experiments. That breadth gives it a robustness that earlier single-lab findings lacked. And instead of being limited to one task, it adapts to novel scenarios and predicts not just what we’ll decide but how fast we’ll decide it (Griffiths et al., 2025). Reaction times, which often reveal unconscious processing, can be forecast with decent accuracy too. The unsettling implication is that Centaur isn’t just mapping our conscious choices; it’s peeking into the hidden architecture of cognition, the part we don’t control but that drives so much of what we do.

Now, 64% is not 100%. But it’s also not random guessing. Flip a coin and you’ve got 50-50 odds; Centaur adds 14 percentage points above chance, and that edge compounds when you scale it. Imagine a therapist, educator, or policymaker who can predict what people will do nearly two-thirds of the time. That shifts interventions from “maybe this works” to “we’re pretty sure this works.” In clinical psychology, for instance, predictive models could anticipate relapse in depression or substance use, letting clinicians step in proactively (Eisenberg et al., 2021). In education, they could forecast which students are likely to disengage, allowing tailored support before failure sets in (Siemens & Baker, 2012). Even in experimental design, Centaur can help researchers by simulating human responses to new tasks, saving time and resources. That’s the optimistic spin.
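To make the “not random guessing” point concrete, here’s a minimal back-of-the-envelope sketch in Python. The 64% accuracy figure comes from the article; the 10,000-trial sample size is a hypothetical round number, and the test is a standard normal approximation to the binomial, not anything from the Centaur paper itself:

```python
# Back-of-the-envelope check: how far is 64% accuracy from coin-flip chance?
# Uses only the Python standard library. The 10,000-trial sample size is a
# hypothetical round number chosen for illustration.
import math

n = 10_000        # hypothetical number of predicted decisions
p_chance = 0.50   # coin-flip baseline
p_model = 0.64    # Centaur's reported accuracy

hits = p_model * n                             # expected correct predictions
se = math.sqrt(p_chance * (1 - p_chance) * n)  # std. error under the chance model
z = (hits - p_chance * n) / se                 # standard errors above chance

print(f"{hits:.0f} correct out of {n}")
print(f"z-score vs. chance: {z:.1f}")          # ~28: vanishingly unlikely by luck
```

At that scale, a 14-point edge over chance isn’t a statistical fluke; the interesting question is what it means, not whether it’s real.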



But here’s the part that makes philosophers and Gen Z Twitter alike nervous: if a machine can call our choices two-thirds of the time, are we really free? The tension between determinism and free will is suddenly not just armchair philosophy but actual machine learning. If Centaur knows what you’ll choose before you choose it, then is your “choice” authentic? Some scholars argue that predictability doesn’t negate freedom, because freedom is not about randomness but about acting in line with our values and reasons (Dennett, 2003). By that view, if Centaur says you’ll pick fries, and you do, it doesn’t mean you weren’t free—it just means your preference for fries is stable and intelligible. The model’s prediction reflects your character, not its override.

But let’s be real: having your brain second-guessed by an algorithm is… weird. Imagine scrolling Netflix, and the platform already knows which movie you’ll end up watching after you’ve rejected twelve others. At some point, the suggestion stops feeling like convenience and starts feeling like surveillance. Prediction, even when accurate, risks reshaping our sense of agency. Studies in psychology show that when people are told free will is an illusion, they sometimes act less responsibly, cheat more, and show less motivation (Vohs & Schooler, 2008). Believing we are predictable can change how we behave, which ironically makes us more predictable. It’s a bit of a feedback loop: Centaur says we’re predictable, we internalize it, and boom, we become more machine-like.

Of course, not everyone buys into the doom-and-gloom. There’s also the 36%. That unpredictable margin may be where human freedom thrives. Chaos theory in mathematics shows how even deterministic systems can yield outcomes that look random because of sensitivity to initial conditions. Likewise, cognitive neuroscience points to noise in neural firing and stochastic processes that make behavior variable (Rolls & Deco, 2010). So maybe Centaur’s 64% ceiling is actually revealing something fundamental: that human beings are partly rule-governed and partly chaos. We’re like jazz improvisers: structured by rhythm and key, but still capable of riffs that nobody could see coming.
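The chaos-theory point is easy to demonstrate. The logistic map below is a textbook example of a deterministic system with sensitive dependence on initial conditions; it’s a toy illustration of the general principle, not a model of anything in the Centaur paper. Two starting values that differ by one part in a billion end up nowhere near each other:

```python
# The logistic map x -> r*x*(1-x) is fully deterministic, yet at r = 4.0
# (its chaotic regime) a starting difference of 1e-9 blows up to order-one
# divergence within a few dozen iterations.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x_a, x_b = 0.400000000, 0.400000001   # arbitrary starts, differing by 1e-9

for step in range(1, 51):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.6f}")
```

Perfect determinism at the level of the rules, near-total unpredictability at the level of outcomes: that’s one plausible reading of where the 36% lives.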


There’s also the fact that prediction doesn’t equal control. Just because I can predict the sun will rise tomorrow doesn’t mean I control it. Similarly, Centaur forecasting your choice doesn’t mean it caused your choice. This distinction matters. Human freedom, many philosophers argue, isn’t about being totally unpredictable but about being self-determined: our actions flowing from who we are, even if that “who” can be modeled (Frankfurt, 1971). In that sense, the fact that Centaur can predict us might actually affirm the consistency of our identities. We’re not random; we’re coherent. And coherence is arguably what makes freedom meaningful.

Still, the creep factor lingers. If institutions start using models like Centaur to shape behavior (think governments nudging citizens, or corporations optimizing ads), the line between prediction and manipulation blurs. Behavioral economics already leans on “nudges” to steer decisions subtly (Thaler & Sunstein, 2008). Add AI with massive predictive power, and nudges become shoves. There’s a reason data privacy activists are sounding alarms: prediction plus power equals influence, and influence at scale risks eroding authentic choice. If every decision you make is anticipated, packaged, and targeted, then freedom risks becoming a curated illusion.


Interestingly, the cultural response to all this might split along generational lines. Older generations often cling to free will as a moral cornerstone: no freedom, no responsibility. Gen Z, by contrast, tends to align with probabilistic thinking: we grew up on algorithms curating feeds, Spotify predicting our tastes, TikTok guessing our humor. For many, the idea that choices are predictable doesn’t feel like the end of freedom; it feels like Tuesday. In some sense, Centaur isn’t introducing a new reality but exposing the one we’ve been living in: one where patterns, preferences, and probabilities guide us more than we admit. The trick is whether we embrace that knowledge as empowering (“I know my patterns, so I can hack them”) or disempowering (“I’m just code in a cosmic simulation”).


So where does this leave free will? Probably somewhere in the messy middle. Centaur’s 64% accuracy doesn’t kill freedom, but it does complicate it. We’re not the pure authors of our choices, but neither are we puppets on deterministic strings. Instead, freedom might live in that 36% unpredictability, in our ability to reflect, resist, and sometimes surprise even ourselves. Philosophers call this “compatibilism”: the idea that free will and determinism aren’t enemies but roommates who awkwardly coexist (Dennett, 2003). Centaur just gives compatibilism a new dataset to chew on.

The real question may not be whether we’re free, but how we live knowing our freedom is bounded. If machines can predict us, do we retreat into denial, or do we learn to navigate our patterns more consciously? Maybe the future of free will isn’t about proving we’re unpredictable but about leveraging predictions wisely, using them to grow, heal, and connect, rather than to manipulate. In that sense, Centaur is less a threat and more a mirror, reflecting back the structured chaos that makes us human. It’s unsettling to see ourselves as data, but maybe also liberating. Because if 64% of us is predictable, then 36% of us is the mystery we get to keep. And maybe that’s enough.




References


Dennett, D. C. (2003). Freedom evolves. Penguin.


Eisenberg, I. W., Bissett, P. G., Enkavi, A. Z., Li, J., MacKinnon, D. P., Marsch, L. A., & Poldrack, R. A. (2021). Uncovering the structure of self-regulation through data-driven ontology discovery. Nature Communications, 12(1), 1–15. https://doi.org/10.1038/s41467-021-21890-3


Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68(1), 5–20. https://doi.org/10.2307/2024717


Griffiths, T. L., et al. (2025). The Centaur model: Predicting human decisions across tasks. Nature Human Behaviour. https://doi.org/10.1038/s41562-025


Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). Brain, 106(3), 623–642. https://doi.org/10.1093/brain/106.3.623


Rolls, E. T., & Deco, G. (2010). The noisy brain: Stochastic dynamics as a principle of brain function. Oxford University Press.


Siemens, G., & Baker, R. S. (2012). Learning analytics and educational data mining: Towards communication and collaboration. Proceedings of the 2nd International Conference on Learning Analytics & Knowledge, 252–254. https://doi.org/10.1145/2330601.2330661


Skinner, B. F. (1953). Science and human behavior. Simon & Schuster.


Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.


Vohs, K. D., & Schooler, J. W. (2008). The value of believing in free will: Encouraging a belief in determinism increases cheating. Psychological Science, 19(1), 49–54. https://doi.org/10.1111/j.1467-9280.2008.02045.x
