Humans are intrinsically wired to resist technology. As author Calestous Juma explained, technology has for hundreds of years introduced tension between the need for innovation and the pressure to maintain continuity and social order.
The very debates and concerns surrounding artificial intelligence, robotics, gene editing and other emerging technologies mirror the interpersonal and economic rationale that guided resistance to the printing press, farm mechanization, electricity and automobiles.
Despite centuries of protest, history has shown that most technologies that were initially resisted evolve into some of the world’s most consequential innovations. Those once feared to eliminate jobs or dull human intellect have often produced the exact opposite. As Juma explained, technologists and consumers alike scoffed at the introduction of the cellphone, fairly noting that early models did little to augment our humanity. As we now know, cellphones are not only a tool for communication, but a global conduit for banking, education, medicine, transportation, social engagement and more.
Rethinking our association with AI
Public perception of AI is driven mainly by Hollywood, thanks to decades of films about robots with mixed intentions: Metropolis (1927), Star Wars (1977), The Terminator (1984), Short Circuit (1986), I, Robot (2004), WALL-E (2008) and Ex Machina (2014).
However, billions of people around the world interact with AI on a daily basis through their phones and computers. AI powers the technology behind maps and search engines (Google), voice controls (Apple, Tencent), social networks (Facebook, Twitter), e-commerce (Amazon) and financial services companies (Visa, Stripe). These companies build products used by millions of people that rely heavily on machine learning, natural language processing, computer vision and other components of AI.
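To make this concrete, the snippet below is a minimal sketch of the kind of machine-learning building block such products rely on: a toy text classifier built with scikit-learn. The training phrases and labels are invented for illustration; real systems use vastly larger data and models.

```python
# A toy text classifier: the kind of machine-learning component behind
# search ranking, spam filtering and content recommendation (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example data: short phrases labelled as "shopping" or "navigation".
texts = [
    "buy running shoes online", "discount laptop deals",
    "order groceries for delivery", "cheap flights to Berlin",
    "directions to the nearest pharmacy", "traffic on the highway right now",
    "fastest route to the airport", "walking distance to central station",
]
labels = ["shopping", "shopping", "shopping", "shopping",
          "navigation", "navigation", "navigation", "navigation"]

# Turn text into numeric features (TF-IDF) and fit a simple linear model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can now label queries it has never seen before.
print(model.predict(["best price for wireless headphones",
                     "how do I get to the museum"]))
```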
As with the evolution of the cellphone, we need to widen our lens to see the true, beneficial potential of AI.
This means using technology to lessen poverty, improve nutrition, eradicate diseases like cancer, stop or reverse climate change and, very importantly but most often forgotten, distribute resources and human rights more equally and fairly.
The foundation for the Singularity
Individuals are understandably concerned given that experts can’t agree on whether AI will benefit or harm society. Some predict the Singularity in 20 years based on Moore’s Law, while others predict it in 1,000 years. The latter argue that Moore’s Law faces physical constraints (e.g. silicon instability and limits on transistor size and layout). Still others claim that the post-silicon era will begin in about 10 to 15 years, with silicon replaced by optical, quantum, DNA, protein or other technologies that will deliver the next level of computing power underpinning the infrastructure of AI.
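Part of this disagreement comes down to simple exponential arithmetic. The sketch below assumes the classic Moore’s Law doubling period of roughly two years (both the period and the comparison value of six years are illustrative assumptions, not measurements) and shows how quickly compute grows if the trend holds, and how far behind it falls if doubling slows near physical limits.

```python
# Illustrative Moore's Law arithmetic: compute growth factor after N years,
# assuming capacity doubles every `period` years (both numbers are assumptions).
def growth_factor(years: float, period: float = 2.0) -> float:
    return 2 ** (years / period)

for years in (10, 20, 50):
    fast = growth_factor(years, period=2.0)   # the classic trend continues
    slow = growth_factor(years, period=6.0)   # doubling slows near physical limits
    print(f"{years:>2} years: x{fast:,.0f} if doubling every 2y, "
          f"x{slow:,.0f} if every 6y")
```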
Regardless, we must be actively involved now, because we are laying the foundation for how AI will act later. As with a child, if we fail to educate and train AI properly today, teaching it respect for humans, our environment and other living beings, inclusion and diversity, moral agency and free will, and the consequences of any action it takes, we will struggle to compensate for that as time progresses. On the one hand, efforts like Teaching AI Systems to Behave Themselves are an initial attempt, as are initiatives for Democratizing AI Research and Development.
On the other hand, the two decades since IBM Research’s Deep Blue beat humans at chess in 1997 have been a series of such challenges: Google’s DeepMind beating humans at Go, other systems beating humans at poker, then at Civilization, and now at Dota 2. I can certainly understand the scientific challenge behind this, but I am sure that if I constantly taught, and more specifically trained, my own children how to beat other humans, I could guess the long-term outcome.
Every current attempt feels as if the AI is trained specifically to understand human weaknesses and find ways to beat us, not to help us understand a problem better or to augment our own abilities.
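This is not mysterious: a learning system optimizes whatever objective we hand it. The sketch below is a deliberately simplified, hypothetical illustration (a two-action agent with made-up rewards) showing that the same learning loop settles on an “exploit the opponent” policy or an “assist the human” policy purely depending on how the reward is defined.

```python
import random

# A hypothetical agent with two actions: "exploit" a human opponent's weakness,
# or "assist" the human. The learning loop is identical in both cases;
# only the (made-up) reward definition differs.
def learn(reward_fn, steps=5000):
    value = {"exploit": 0.0, "assist": 0.0}    # running reward estimate per action
    counts = {"exploit": 0, "assist": 0}
    for _ in range(steps):
        action = random.choice(list(value))    # explore uniformly for simplicity
        r = reward_fn(action)
        counts[action] += 1
        value[action] += (r - value[action]) / counts[action]
    return max(value, key=value.get)           # the policy the agent settles on

# Reward scheme 1: points only for beating the human.
beat_humans = lambda a: 1.0 if a == "exploit" else 0.0
# Reward scheme 2: points only for the human's success.
augment_humans = lambda a: 1.0 if a == "assist" else 0.0

print("trained to win:   ", learn(beat_humans))     # settles on "exploit"
print("trained to assist:", learn(augment_humans))  # settles on "assist"
```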
Two key steps to applying ethical principles and moral values to AI:
Involve and educate all sectors of society
Define a unified set of values and principles that will guide the further development of AI
Active Involvement & Education
As Jaan Tallinn (co-founder of Skype and Kazaa, DeepMind investor and co-founder of the Future of Life Institute) explained, “The time to plot a global trajectory is now, and crucially, that trajectory planning must be globally transparent and fair for everyone to engage in.”
This is important given the discrepancies in expectations and methods that result from the varied interests of developers and of the public and private sectors. Transparency is key to addressing such discrepancies.
I strongly believe that we must educate and enable as many people as possible in AI technologies. This is the vision of City.AI, an organisation I co-founded to bring together not only industry peers but, even more, practitioners from many backgrounds (tech, science, product, business, investment and more). They come from regions and cultures around the world with the goal of sharing lessons learned in applying AI, collaborating on the technical and ethical challenges of putting AI into production, and contributing many different practical and ethical viewpoints on how to develop AI further.
Early efforts to inform and educate these constituencies are promising. These include performing and supporting research studies, publishing materials on AI and providing open-source information and technology. The organizations behind such efforts create awareness and help manage the challenge by increasing transparency in how they develop the technology.
Unified Set of Principles & Values
An initial step toward defining a unified set of principles for the further development of AI was achieved at the 2017 Asilomar Conference. A group convened by the Future of Life Institute came up with 23 AI principles covering research issues, ethics and values, and longer-term issues. All of them are relevant and provide a framework to act upon, depending on the area in which AI is developed and applied.
In essence, it is about two things:
democratizing AI by educating as many people as possible about the impact and reality of AI technologies
sharing the prosperity created with AI. This means prioritizing work on the world’s most pressing challenges (see the UN’s 17 Sustainable Development Goals), including poverty, hunger and health
This also means defining “Good AI” as AI built with virtue: machines with morality. To that end, the biggest names in AI technology, including Amazon, Apple, DeepMind, Google, Facebook, IBM and Microsoft, have come together to form the Partnership on AI to Benefit People and Society. The organization’s eight tenets aim to guide the development of AI towards virtue and to ensure that AI benefits and empowers as many people as possible, without ulterior motives and not simply for profit, giving us reason to believe that AI is good.
We need experts like Elon Musk, Stephen Hawking and others to help reinforce this notion and make sure we maintain zero-tolerance for AI’s use as a weapon.
Conclusion
Machines will reflect the values and principles of their creators and trainers. They will act based on the goals they have been given.
I therefore agree with Harvard’s Beth Altringer, who calls for an ethical design principle for all who develop and apply AI. It won’t guarantee concrete ethical values, but it would limit the harm undirected intelligence could cause and push the field towards beneficial AI. As practitioners applying and developing AI, we should therefore always answer the following:
What is really desirable about it?
For whom is it desirable?
For whom is it not desirable?
An AI-led future is as inevitable as those of electricity and farm mechanization. Stoking fear of and reluctance towards AI is the opposite of what we should do; determining the fundamental cornerstones of our future society must be our key priority. I encourage everyone to study the impact of AI and contribute to its ethical development and implementation. In the meantime, we must adapt to and embrace the new world of opportunities that technology enables.
PS: Let’s discuss further at the applied #AI conference WorldSummit.AI in Amsterdam this October 2017!