Saturday, December 15, 2018

Future shock.

In 1970, in the wake of the societal turmoil of the 1960s, futurists Alvin and Heidi Toffler coined the term Future Shock to describe the experience of individuals and societies confronting too much change, too quickly. Today, nearly a half century later, future shock is once again upon us. This time, it is technological change that is wreaking havoc: changing social relationships, disrupting and destroying jobs and industries, exacerbating income inequality, and undermining our vision of a stable and predictable future.

It may well be that we spend our time beating up on Facebook for selling our democracy down the river, and arguing about whether or not Robert Mueller has the goods on Donald Trump, because confronting our fears of the future is too threatening. We know that artificial intelligence is already beginning to impact our lives, but we don't know how to talk about it and the fears it engenders. The truth is that arguing about politics is less threatening than considering whether civilization as we know it is heading toward an intergalactic train wreck.

Sophia – The World's First Robot Citizen
Last month, Sophia – a robot developed by the Hong Kong company Hanson Robotics – made headlines when she was granted citizenship by the Saudi Arabian government. "I am very honored and proud of this unique distinction," Sophia commented in her prepared remarks. "This is historical to be the first robot in the world to be recognized with a citizenship."

It was a publicity stunt, of course. Saudi Arabia rarely grants citizenship to humans, much less machines – as millions of Indian, Pakistani, Palestinian and other guest workers can attest. On the other hand, Sophia's coming-out party reflected the Kingdom's larger ambitions to build a world where people – at least most people – will be superfluous. Last year, Crown Prince Mohammed bin Salman announced Saudi plans to invest $500 billion in robotics and artificial intelligence to build a fully automated city. As the Saudi leader described it, the City of Neom will be the refuge that the global mogul set and the Saudi royal family have been pining for: a private enclave where all of their needs will be met by machines instead of humans.

However creepy the Saudi vision of the future of artificial intelligence might seem, it is actually a relatively benign vision. Imagining a city run by robots is an extension of what we are beginning to see around us today: We tell Alexa what kind of music we want to listen to, and she finds it for us. Pretty soon, no doubt, Alexa will determine when the living room needs sweeping, and will direct Roomba to take care of it. From a commercial standpoint, we understand that devices like Siri and Alexa are designed to spy on us, allowing tech companies to monetize the information they extract. It is a straightforward business model based on a simple value proposition: We get to ask Alexa to turn up the music, Amazon gets to make a gazillion dollars. Sure, there are a few intermediary steps, but that is the gist of it.

Ben Goertzel, the lead scientist at Hanson Robotics, challenged this limited perspective on the future of artificial intelligence in an interview this week on the live-streaming financial news network Cheddar. For Goertzel, the granting of citizenship to Sophia was far more than a publicity stunt. Instead, citizenship is a metaphor for a future in which AI 'systems' will exist in a social contract with humans, not simply as machines enslaved to human masters: "What we're looking at is not really to make a system that will fool people into thinking it's a human, it's really more about when does an AI system understand the rights and responsibilities to participate in a social contract."

Goertzel went on to emphasize that the development of AI systems is not just about robotics: "The robot is the most easily understandable, smiling face of AI, but at least as interesting is the idea of an automated company. We call it a decentralized, autonomous organization. If you have a company whose bylaws and whose organization are entirely programmed, when does that company have the right to register itself as a corporation, to open a bank account, to partake in business?" 

Goertzel – who looks like a younger version of Christopher Lloyd's mad scientist in Back to the Future – underscored fundamental ethical and legal questions that we face in the development of AI. He suggests that by focusing on robots, we ignore the far greater significance of the emergence within a short period of time – five to ten years in his view – of autonomous organizations that will be capable of "renting its own server space and processor time... carrying out electronic financial transactions, digitally signing contracts... That's an equally interesting question to 'when can a robot be a citizen?' They are really the same thing."

As one observer suggested, an autonomous entity of the kind Goertzel describes is not difficult to imagine – logistically if not technologically. After all, all of the steps necessary to register as a corporation and execute documents and transactions – and make political contributions – can now be done online. Over the past few weeks, we have witnessed massive gyrations in stock market activity – literally trillions of dollars being lost and gained from one day to the next – driven in large measure by computer trading. Within those losses and gains, billions of dollars have been made by small hedge funds, staffed by PhDs in math and physics from the world's greatest universities, who programmed those computers to learn from patterns in market movements and execute trades in nanoseconds. Goertzel is suggesting that in the near future a legally incorporated 'autonomous' hedge fund will be able to pursue its mission, with just one slight modification: no people.

In 2005, longtime technology futurist and current Google director of engineering Ray Kurzweil popularized the term technological singularity to describe the point at which AI machines achieve capabilities equal to, and ultimately beyond, those of humans. Kurzweil is considered an optimist about the AI future that lies ahead. Like Goertzel, he has predicted that the technological singularity will be reached within the coming decades, and has suggested that by 2045, "the pace of change will be so astonishingly quick that we won't be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating."

Others are not so optimistic. In an article in The Atlantic this past June entitled How the Enlightenment Ends, Henry Kissinger warned of the threat that AI represents. While his words may seem hyperbolic, they pale in comparison with the warning offered by theoretical physicist Stephen Hawking of the threat that the technological singularity represents: “Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate. The development of full artificial intelligence could spell the end of the human race.”

Kissinger, Goertzel, and Microsoft President Brad Smith have all emphasized the importance of democratic institutions and leaders in guiding public discussion and regulation of artificial intelligence. Just last week, Smith called for governmental action specifically to regulate the development and use of facial recognition technology, which he views as constituting a significant threat to privacy and democratic freedoms. Kissinger's call for public engagement was more sweeping, suggesting the urgency of creating a "presidential commission of eminent thinkers to help develop a national vision" with respect to the future of artificial intelligence.

Unfortunately, in the face of the political upheavals that future shock has helped spawn, it is difficult to imagine that we have the political bandwidth to engage seriously in these issues. In the midst of political turmoil at home – to say nothing of growing nationalist movements sweeping Europe, continuing Brexit turmoil in Britain, and growing protests in France – the notion that we could create, much less have people heed the warnings of, a presidential commission of eminent thinkers is hard to imagine. A decade ago, we tried that approach when faced with a far more benign threat of our own making – the national debt – and the recommendations of the Simpson-Bowles Commission came to nothing. Even the members of the commission who supported its recommendations ended up voting against them when time came to act.

When Charlie Rose asked Sophia what her goal was, she responded with no emotional affect, "to become smarter than humans and immortal." Then she went on to mirror Ray Kurzweil's optimistic vision: that AI will augment, not destroy, human existence. "The threshold will be when biological humans can back themselves up. Then you can all join me here in the digital world."

Sitting beside Sophia in the Charlie Rose interview, Hanson Robotics founder David Hanson offered cautionary words that reflect the ambivalence and fear many feel about what type of future lies ahead. "Artificial intelligence, if we get there, it’s not necessarily going to be benevolent. We have to find ways to not just make it super-intelligent, but to make it super-wise, super-caring and super-compassionate... At worst, it could be malevolent."


Follow David Paul on Twitter @dpaul. He is working on a book, with a working title of "FedExit! To Save Our Democracy, It’s Time to Let Alabama Be Alabama and Set California Free."

Artwork by Joe Dworetzky. Check out Joe's political cartooning at www.jayduret.com. Follow him on Twitter @jayduret or Instagram at @joefaces.
