
Power of Ideas
Fear of the Machine?

Existence in the 21st century is quantifiably and significantly better for the majority of humankind than it was a century ago. Billions of people have seen their living standards improve dramatically; diseases that once killed millions have been eradicated or controlled; education, technology, and sanitation—all have become commonplace in countries where they were once the preserve of the very rich. Obviously, there are still inequities and systemic social and economic issues that need addressing, and the threat of climate change is ever-present, but the underlying trend when it comes to quality of life has been positive—continuously—for generations. You wouldn’t know this from our cultural output, though: literary and cinematic visions of the future are almost exclusively dystopian, whether it’s environmental apocalypse in The Road or the clash of human and computer in Ex Machina.

It seems, then, that we imagine negative futures for ourselves even when the evidence suggests otherwise. At the moment, that is particularly the case when it comes to artificial intelligence (AI). Just as we are witnessing the initial stages of what is likely to be a transformational period in our history, with machine learning augmenting and expanding human potential in every sphere, and the latest versions of ChatGPT and Midjourney offering dazzling glimpses of the future, it feels like many of us are trapped watching re-runs of The Terminator. Perhaps in our atavistic past, pessimism surrounding new technologies protected us from harm; now, though, there’s a risk that excessive caution surrounding the implications of an AI-driven world may seriously impede the kind of progress that has prevented our dystopian nightmares from coming true.

We have written before about the need to think of ourselves as guardians of the planet—positioning humans as a new category of species, one with the unique ability to monitor and prevent or mitigate the extinction of other species. Guardians have a duty to serve as protectors of the life within an ecosystem, and to manage threats to that life. What if, rather than being the malevolent figures of the popular imagination, future incarnations of sophisticated AI actually ended up being better than humans? Not only better at chess and calculus, but also better in a deeper, moral sense. What if machines were good, and better at being good than we could be?

What do we mean by good? We tend to think of AI-driven decision-making as coldly rational. But rational according to what code? What if the machines had as their guiding principles not profit or power, but a code that prompted them to use their vast reserves of data and computational power to be better stewards of life, better at deploying resources and organizing distribution, better at administering medicines? If machines were unshackled from some of the more excessive checks and balances we seek to impose on them and permitted instead to develop intelligence as complex, broad, and multifaceted as our own, why do we assume that this would lead to an inevitable clash rather than a collaborative effort, one in which we work together to build a future that is more just and peaceful, drawing on the best combination of human ingenuity and machine learning?

In our respective worlds of finance and genetics, we are already seeing astonishing developments as we enter the first stages of the AI revolution. We are using neural networks to help us clarify and refine investment strategies; we are achieving extraordinary efficiencies in the analysis and prognosis of diseases from imaging and DNA-sequencing data; we are using the data-processing power of machines to drive and shape the future iterations of our industries. Nothing we have seen so far persuades us that technology, twinned with benevolent and progressive human intelligence, will deliver anything other than a brighter, better future. We need, of course, to ensure that technological progress proceeds in a manner that is responsible and considered, but we must not allow ourselves to be so paralyzed by fear that we are unable to move forward. We have created machines that are immortal, self-improving, and unburdened by the petty cravings that prevent humans from achieving the best outcomes for themselves and their species. Now we need to have the courage to allow them to achieve their full potential.