by Igor Tulchinsky and Christopher E. Mason

The following excerpts are adapted from the authors’ recent book, The Age of Prediction, published by MIT Press.
Humanity has entered a new era. We are now living in a world that is increasingly wired by billions of predictive algorithms, a world in which almost everything can be predicted and risk and uncertainty appear to be diminishing in almost all areas of life. We are living longer, thanks to advances in health care and precision medicine. We have a greater mastery of the physical world that allows us to dream of, and build, new technologies that allow us to explore other planets and visualize billions of galaxies. We can model markets, disease, and traffic with increasingly greater precision, and we’re getting very close to handing over the keys to the car so it can drive itself. Even more striking, our tools may be revealing the genesis of some elusive and stubborn complexities of human behavior, and algorithms are even being used to alter people’s behavior. Predictive algorithms have changed the world, and all the worlds to come, and there is no going back.
Image: Magician by Anne Nygård (Unsplash)
To better understand this new era, this book is focused on two abstractions that are increasingly shaping our lives: prediction and risk. Prediction speaks of what is to come; risk, less visibly, calculates the probability that a model is wrong and, like a grim accountant, totals up the costs of that error. Where prediction serves as a light projecting into the likely future, risk is the shadow of what cannot be seen or predicted. Their relationship is often paradoxical, particularly as prediction changes age-old apprehensions about risk and reshapes the very type of society and world in which we live.
We call this new era the Age of Prediction. In this emerging age, prediction is growing more powerful, ubiquitous, and precise, both because the tools for making predictions are advancing quickly and because the fuel of prediction—data—is accumulating at such a rapid and exponential (sometimes super-exponential) rate. Yet, none of this is easy. When it comes to many natural processes, we have grown more and more quantitative—and thus more predictive. Even quantum physics, a field that has seen some of the most dramatic scientific advances in the past century, requires the ability to precisely calculate elements of indeterminism, randomness, chaos, and probabilistic functions. Biology and medicine now require the same ability, as they adapt to ever-changing genetic mutations and environmental variables, such as drug-resistant tumors or an invasive species disrupting an ecosystem.
However, the challenge of dealing with social processes is more difficult. Any trend fueled by the decisions of one or many human agents involves more risk—sometimes much more. As the economist John Kay and the former Bank of England governor Mervyn King note in their book Radical Uncertainty (2020), “The world of economics, business, and finance is non-stationary—it is not governed by scientific law.” People are often the hardest things to predict, and that makes prediction more difficult in these social areas.
Despite this difficulty, we now have the tools and the data to predict more things with greater accuracy and farther out in time. In the Age of Prediction, this trend will continue, expand, and accelerate, and risk will shrink but not disappear. Risk is a realm of the unknown, of the incalculable and uncertain. The relationship between prediction and risk is conventionally an inverse one: if our ability to accurately predict the weather a week ahead were ever to approach 100 percent, the risk that you would fail to bring your umbrella on a rainy day would, in theory, approach zero (you might forget the umbrella or lose it, but the likelihood of this too could be quantified). Prediction in natural, stationary systems can approach perfection once those processes are truly understood; large numbers drive certainty so close to 100 percent that we can declare a process a scientific law. The mathematics of this phenomenon, known as inverse probability, was set down as a now-ubiquitous equation in an essay by the then obscure eighteenth-century British nonconformist cleric Thomas Bayes. From him, we have Bayesian models, in which new data continually update probabilistic judgments about future events.
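To make the updating concrete, here is a minimal sketch of a Bayesian update applied to the umbrella example above. The prior, hit rate, and false-alarm rate are illustrative assumptions, not figures from the book:

```python
# A minimal sketch of Bayesian updating, using the umbrella example.
# All numbers below are illustrative assumptions.

def bayes_update(prior: float, hit_rate: float, false_alarm: float) -> float:
    """Posterior P(rain | forecast of rain) via Bayes' rule:
    P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|~H) * P(~H)]
    """
    evidence = hit_rate * prior + false_alarm * (1.0 - prior)
    return hit_rate * prior / evidence

p_rain = 0.30       # prior: base rate of rainy days (assumed)
hit_rate = 0.90     # P(forecast says rain | it rains) (assumed)
false_alarm = 0.20  # P(forecast says rain | it does not rain) (assumed)

# One forecast of rain raises the probability from 30% to about 66%.
posterior = bayes_update(p_rain, hit_rate, false_alarm)
print(f"P(rain | one forecast)  = {posterior:.2f}")

# A second, independent forecast updates the judgment again (to about 90%);
# this continual revision is the essence of a Bayesian model.
posterior = bayes_update(posterior, hit_rate, false_alarm)
print(f"P(rain | two forecasts) = {posterior:.2f}")
```

Each independent piece of evidence moves the judgment further, and with enough data the posterior converges toward certainty, which is precisely the sense in which more data shrinks risk.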
Yet, the relationship between prediction and risk is not as simple as getting more data, running them through an algorithm, and applying them broadly. For instance, whose risk are we talking about? In chapter 8, we discuss the rise of autonomous, predictive weaponry. A military that unleashes such weaponry obviously believes it will be effective in military terms; smart weapons may kill more of the enemy and fewer of that military’s own soldiers. However, given the competitive nature of armed conflict and the cross-fertilization of technologies such as artificial intelligence (AI), robotics, and modern munitions, any prediction about the evolution of warfare is profoundly uncertain and represents a rising risk.
There are other ways to look at prediction and risk. The convention that better predictions mean reduced risks may well reflect how the human condition has evolved since our emergence as self-aware, humanoid animals in Africa. The human brain has evolved into an instrument of forethought, planning, and prediction. One could argue this is a quintessential aspect of being human: projecting dreams, plans, and ideas into the future. For many millennia, prediction was more accurate under relatively simple conditions and over short periods of time—for planting, hunting, and fighting. Much of life remained uncertain, and much thought went into the role of what seemed like fate. Death appeared suddenly and arbitrarily. Disaster lurked at every turn. Risk was omnipresent.
The advent of modern science began to push back that darkness, at first very slowly. The seventeenth century brought us advances in physics and mathematics, including calculus, which enabled Isaac Newton to predict accurately the regularities of the heavens and the flight of bullets. That same period also saw the first breakthroughs in what we now know as probability and statistics. A better understanding of probability opened the door to a deeper understanding of random processes such as chance and luck, and statistics drove efforts to gather data and analyze them with sophistication. By the nineteenth century, data gathering and analysis had entered a capitalist world that focused increasingly on prediction; significantly, this period saw the development of robust futures markets, which consumed agricultural statistics and weather forecasts to set national and global prices. Later, but no less important, advances came from a deeper understanding of life itself, from evolution to genetics to genomics, unleashing the powers of modern medicine. All these advances have altered myriad perspectives on risk, from predictions of a single cell’s response to a drug to a single stock’s fluctuations to an individual’s likely next vote. As we describe in this book, a surprising common thread of prediction is that the methods used in one discipline can inform another: the Gini coefficient, for example, can measure economic inequality, and the same formula can map shifts in bacterial DNA to predict growth and resistance to antibiotics.
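As an illustration of that cross-disciplinary reuse, the sketch below applies one standard formulation of the Gini coefficient to two toy datasets, one of household incomes and one of sequencing read counts across bacterial genes. Both datasets are invented for this example:

```python
import numpy as np

def gini(values) -> float:
    """Gini coefficient of a set of non-negative values.
    0 = perfect equality; values near 1 = extreme concentration.
    Uses the closed form over sorted values:
    G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n, with i = 1..n.
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    index = np.arange(1, n + 1)
    return (2.0 * np.sum(index * x)) / (n * np.sum(x)) - (n + 1.0) / n

# Toy data (assumed, not from the book):
incomes = [20_000, 35_000, 50_000, 80_000, 400_000]  # household incomes
# Reads mapped per gene; one heavily amplified gene (e.g., a resistance
# gene under selection) skews the distribution.
read_counts = [3, 5, 4, 120, 6, 4, 5]

print(f"Income Gini:        {gini(incomes):.2f}")      # ~0.55
print(f"Gene-coverage Gini: {gini(read_counts):.2f}")  # ~0.69
```

The formula does not care what the numbers represent; in both cases it summarizes how unevenly a quantity is distributed, which is why the same metric travels so easily between economics and genomics.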
Predictive algorithms have fundamentally changed almost all aspects of our world, yet such a change has been brewing for several centuries. In Looking Forward (2017), her book describing the changing relationship between prediction and uncertainty in late nineteenth-century America, the historian Jamie Pietruska offers striking conclusions about that era. Americans quickly grew accustomed to predictions in many areas of life, but they also grew skeptical of those predictions’ accuracy and more willing and able to discriminate between scientific efforts and fortune-telling or phrenology, as well as to accommodate the potential corruption stemming from the rising value of data and prediction. Yet even as new predictive technologies surfaced, new risks appeared: economic instability, social dislocation, and inequality as well as, Pietruska notes, “industrial accidents, steamboat explosions, and railroad collisions.” Americans changed their behavior in nuanced ways as they struggled with what she calls “the spectre of Uncertainty.”
Igor Tulchinsky is founder, chairman, and CEO of WorldQuant, a quantitative investment firm based in Old Greenwich, Connecticut. He is the author of Finding Alphas: A Quantitative Approach to Building Trading Strategies and The UnRules: Man, Machines and the Quest to Master Markets.
Christopher E. Mason is a geneticist and computational biologist who has been a Principal Investigator and Co-investigator of 11 NASA missions and projects. Mason is Professor of Genomics, Physiology, and Biophysics at Weill Cornell Medicine and the Director of the WorldQuant Initiative for Quantitative Prediction. He also holds affiliate appointments at the New York Genome Center, Yale Law School, and the Consortium for Space Genetics at Harvard Medical School. Dr. Mason is the author of The Next 500 Years: Engineering Life to Reach New Worlds.