Artificial Intelligence (AI) is already impacting our lives. From self-driving cars to intelligent home security systems to chess engines that can beat the world champion, AI-powered devices are changing how we live, work, and play. But are we ignoring their potential dangers?
How can AI threaten humanity? You may have seen movies like the Terminator series, in which AI-powered robots have a mean streak. Today’s AI is unlikely to hop in a truck and try to run you off the road. That does not, however, minimize the potential threat of intelligent systems that can purposely or inadvertently cause harm. For example, it is now possible to design an AI system to discover the security protocols and procedures used to protect a company’s web services, identify the easiest vulnerabilities to exploit, and then use those vulnerabilities to access the company’s intellectual property.
With AI quickly becoming mainstream, it’s time to begin thoughtfully regulating the use of AI in production systems. We need to treat AI just as we do other technologies that impact people’s lives. ICANN and NIST, for example, are two of the bodies responsible for developing standards and regulations for the internet. There should be an equivalent set of global regulatory bodies that provide the rules and guidelines for AI developers.
Unfortunately, developing a set of rules for AI is not an easy task. First, there is no clear definition of what AI is, and it’s hard to regulate something we can’t yet define. To start, then, we need a clear definition of AI. Second, we lack the experience to fully understand all of the issues that could arise, simply because AI is still so new to us. Without this knowledge, it becomes even more difficult to develop a clear, concise set of rules to regulate this rapidly growing field.
To address this problem, tech leaders have repeatedly come together to pool their knowledge, studying the effects of AI on human lives and the problems most likely to arise as it reaches ubiquity. This collaboration focuses on addressing the ethical and moral issues of AI research, and on creating a set of rules (if needed) to regulate it. These leaders include representatives from the industry’s top companies and universities, including Facebook, Alphabet, Amazon, Microsoft, IBM, UT Austin, Harvard, and Stanford, among others.
Bill Gates and Stephen Hawking have notably raised concerns over the impact of AI on our daily lives, and Tesla founder Elon Musk has supported their bold claim that AI could be a threat to humanity’s existence. While it may be possible to create a machine that surpasses human intelligence, for now it remains extremely difficult. Many AI researchers hold the opposing view that AI research should not be limited, and that any regulations will only slow the field’s growth. The hope is that these experts’ discussions will result in agreement on an initial set of rules and regulations that protect both humanity and innovation.
There may not yet be a clear answer, but it’s a question worth discussing. What’s your opinion? Tweet us at @unifiedinbox and let us know what you think!