Konrad Fernandez

Autonomous AI – Why We Must Tread Carefully


"… With AI especially, I'm really optimistic and I think that people who are naysayers and try to drum up these doomsday scenarios … I don't understand it. It's really negative and in some ways, I actually think it's pretty irresponsible." - Mark Zuckerburg


That’s at one end of the spectrum. And at the other is Elon Musk calling AI our “biggest existential threat.” It’s a sensitive topic. People on either side of the debate hold strong views. And while AI has the potential to enable a better world, it must be approached very carefully from two perspectives: (a) the nature of the application, and (b) the extent of autonomy in the AI.


Stephen Hawking was another prominent voice calling for great caution in how we move forward with AI. And this consciousness is growing. But the level of caution is nowhere near what it should be, and that is sufficient cause for concern!

This is a serious issue that needs serious attention and, hopefully, some intervention to ensure more responsible development and application of AI, with greater transparency and governance. We know that AI is a powerful tool, but the question is: at what cost to our freedom, privacy, and perhaps even safety do we empower machines?


Of course, not all AI carries the same risk potential. The level of risk depends on the level of AI. Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering at Michigan State University, classifies AI into four levels (https://bit.ly/2fyeKmL):


Level 1 - Reactive Machines, which cannot form memories or draw on past experiences to make decisions. The older chess-playing programs are of this class.

Level 2 - Limited Memory programs, like those used in self-driving cars, which can look at past experiences, observe patterns, and associate those patterns with rules and decision frameworks.

Level 3 - Theory of Mind programs. Theory of Mind is a concept from psychology. It “is the ability to recognize and attribute mental states — thoughts, perceptions, desires, intentions, feelings – to oneself and to others and to understand how these mental states might affect behavior. It is also an understanding that others have beliefs, thought processes and emotions completely separate from our own.” (https://bit.ly/2lfHBk1)


Hintze distinguishes between Level 3 and Level 4 AI. Theory of Mind programs understand the environment they operate in (traffic, for instance) and also have a sense of other objects or people.


But Level 4 is self-aware!


Level 4 - Self-Awareness. At this level, programs understand their own being, and can understand others and predict their behavior. Self-learning at this level gains a degree of sophistication that attempts to replicate human cognition, emotion and behavior.

Current AI technology is at Level 2 and making advances toward Levels 3 and 4.


So, what are the risks?


1) Self-aware Rogue Programs

Perhaps at the top of the list are programs that evolve and develop a personality based on skewed values, or morph into rogue agents through a series of value modifications.

If Google’s recent experiment is anything to go by, there may be real cause for concern.

When DeepMind, Google’s AI arm, had two agents compete in a game, the programs demonstrated greed, aggression, and a willingness to “shoot” the opponent with a laser in order to win. When the game was changed to one in which they had to co-operate to win, they demonstrated co-operative behavior. The more sophisticated the AI, the greater the complexity of such “behavioral issues” and the greater the potential risks.
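
To make this concrete, here is a minimal, hypothetical sketch in Python (not DeepMind’s actual experiment): two independent learning agents play a simple two-action game, and simply changing the payoff table flips the behavior they learn from “zap the opponent” to “co-operate”. Every number and name here is invented for illustration.

```python
# Minimal, hypothetical sketch: two independent Q-learning agents in a
# repeated two-action game. Changing the payoff table flips the learned
# behavior from "zap" (aggressive) to "cooperate" -- the agents' behavior
# simply follows the rewards their designers chose.
import random

# Payoffs: (my_reward, opponent_reward) indexed by (my_action, opp_action).
# Action 0 = cooperate, action 1 = "zap" the opponent.
COMPETITIVE = {  # zapping pays off -> agents tend to learn aggression
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}
COOPERATIVE = {  # both must cooperate to score -> agents tend to cooperate
    (0, 0): (5, 5), (0, 1): (0, 1),
    (1, 0): (1, 0), (1, 1): (1, 1),
}

def train(payoffs, episodes=20000, alpha=0.1, epsilon=0.1):
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[agent][action]
    for _ in range(episodes):
        # Each agent picks an action epsilon-greedily from its own estimates.
        acts = [
            random.randrange(2) if random.random() < epsilon
            else max((0, 1), key=lambda a: q[i][a])
            for i in range(2)
        ]
        rewards = payoffs[(acts[0], acts[1])]
        for i, r in enumerate(rewards):
            q[i][acts[i]] += alpha * (r - q[i][acts[i]])
    return ["cooperate" if qi[0] >= qi[1] else "zap" for qi in q]

print("competitive rewards ->", train(COMPETITIVE))   # typically ['zap', 'zap']
print("cooperative rewards ->", train(COOPERATIVE))   # typically ['cooperate', 'cooperate']
```

The point of the sketch is the design choice, not the code: the agents’ “aggression” or “co-operation” is entirely a product of the reward structure their designers specified.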


The bottom line is this. Self-learning, self-aware, sophisticated AI programs can, and will, alter their behavior based on the values they are programmed with. Even more ominous is their ability to evolve, learn beyond what they were given, and possibly change their personality into something that was never intended. As Stephen Hawking warned, "It would take off on its own, and re-design itself at an ever-increasing rate."


2) Malfunction

There have been many recent cases of bots going wild. For instance, Wikipedia’s bots have repeatedly undone and “corrected” each other’s edits in endless cycles. More serious fiascos include Uber’s unauthorized “test” in San Francisco, where the self-driving program drove a car through six red lights. And there is the somewhere-in-between case of Microsoft’s Twitter chatbot “Tay” going berserk: intended to be a friendly bot learning from human interactions, Tay soon learnt very offensive, racist behavior and had to be taken offline.


Any kind of program can have its technical glitches. But in an AI context, these issues become more ambiguous, fluid, and slippery. In more advanced self-learning machines, such malfunctions can be not merely the product of code, but the outcome of complex, often unforeseeable, social influences. And no matter how advanced we get in technology and security, there are risks that penetrate and bring down even the most robust systems. The recent news (https://bit.ly/2sZkPQu) of the South Korean cryptocurrency exchange Coinrail being hacked is just another reminder of the fragility of supposedly ultra-secure systems.


3) Bias

John Giannandrea, former AI chief at Google, expresses concerns about AI. Not so much the doomsday prophecies of mankind’s imminent termination, but a more day-to-day, subtle, and widespread risk: bias.


He is openly skeptical about any kind of opaque system that may employ questionable algorithms: “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.” (https://bit.ly/2hJmLuu)


Many black-box systems may be building bias into their algorithms. And with high-impact, large-scale consequences, clearing up the mess after discovery may be too little, too late. A company called Northpointe has come up with a program that predicts a defendant’s likelihood of re-offending. A deep dive into its outputs indicates it is biased against minorities. Imagine the outcome if such programs go unchecked. Combine such “coded” distortions with AI’s ability to use social interactions to learn new biases… and the outcomes can be catastrophic.
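
As a concrete illustration of the kind of scrutiny Giannandrea is calling for, here is a small, hypothetical audit sketch in Python: given a black-box risk score and the actual outcomes, it compares false-positive rates across groups. The data, threshold, and group labels are invented; this is not Northpointe’s model or its data.

```python
# Hypothetical sketch of a disparate-impact audit: given a black-box risk
# score and known outcomes, compare false-positive rates across groups.
# All records and numbers below are invented for illustration.
from collections import defaultdict

def false_positive_rates(records, threshold=0.5):
    """records: iterable of (group, risk_score, reoffended) tuples."""
    fp = defaultdict(int)   # flagged high-risk but did not reoffend
    neg = defaultdict(int)  # everyone in the group who did not reoffend
    for group, score, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if score >= threshold:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Toy data: the score flags group "B" more often at the same outcome rate.
sample = [("A", 0.3, False), ("A", 0.6, False), ("A", 0.8, True),
          ("B", 0.7, False), ("B", 0.9, False), ("B", 0.8, True)]
print(false_positive_rates(sample))   # e.g. {'A': 0.5, 'B': 1.0}
```

A gap like the one in this toy output (group “B” flagged as high-risk far more often among people who did not re-offend) is exactly the kind of signal an external audit of a black-box system is meant to surface.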


4) Jobs – The Replacement Challenge

Futurist Thomas Frey predicts that by 2030 about 2 billion jobs will be lost and new ones will be created (https://bit.ly/2t0Fx2I). But it is still not clear whether the number of new jobs will match the number lost.


Business trends, process reinvention, and automation will account for much of this massive shake-up, and AI, specifically advanced autonomous AI, may play a significant role in ways we currently cannot foresee. In general, the corporate community does not have a track record of valuing people over profits. Will companies go the extra mile to ensure there are still sufficient jobs? Will they favor creating jobs over the possibly game-changing profits that autonomous AI may bring? These are questions worth pondering.


5) Spheres of Application and Level of Risk

Whether AI programs are intentionally developed to be biased and demonstrate deviant behavior, or whether the risks emerge from errors or the programs’ own self-learning, the sphere of application will determine the depth and breadth of the impact. The stakes hardly matter if rogue behavior emerges in a game. But they are unimaginable when the application is in defense, or medicine, or mass communication, or global financial transaction management. The movie Die Hard 4 painted a very ominous picture of what happens when large technology systems are sabotaged and turned against people at scale. With a super-intelligent, super-efficient, and possibly rogue program heading in the same direction, the outcome could take on apocalyptic proportions.


The European Group on Ethics in Science and New Technologies, in its statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, raises several key concerns and questions. For instance, who takes moral responsibility when autonomous AI advances to a point where humans may no longer be able to fully control, explain, or discern the workings of an agent? The recent instance in which Facebook had to abandon an experiment because two of its AI programs were supposedly “chatting” with each other in a language people could not understand is a case in point: the human control element can quickly vanish from the scene.

There are also specific concerns about Lethal Autonomous Weapons Systems (LAWS). The question we need to ask here is: why would we hand any destructive force over to an autonomous entity where human control becomes unpredictable, unstable, or completely impossible? And then there are the many issues around privacy.


A Way Forward

There is no easy solution. As with any powerful tool, it is how we ensure sanity and order within the system that will determine to what extent, and in what ways, autonomous machines impact us.


Regulation

Regulations are urgently needed on the kinds of applications AI is used for and the problems it is allowed to solve. Should some spheres of life be left outside the scope of autonomous AI?


Discussions are already underway within the AI community about the kind of regulation that may be needed, and about the idea that regulations may need to be industry-specific. The United States government is already taking cognizance of the need for regulation. The Future of AI Act, and the setting up of an advisory committee to advise the Secretary of Commerce on the many implications of AI, including privacy, legal issues, and ethics, are steps in the right direction.

(https://bit.ly/2Curghr)


Singapore has drafted plans for governance around AI and personal data. The UK has committed to developing a data ethics center. (https://on.ft.com/2Mt5u3J)


Europe’s new General Data Protection Regulation (GDPR) has outlined norms that include the need for companies to declare what personal data will be used for, and to stay accountable so that such data is not used for other purposes. Companies must also be able, and willing, to modify or delete such data when requested. And, very importantly, companies must be able to share and explain any logic that uses personal data for decision making. (https://for.tn/2IKBi5Y)


We need robust regulation and we need it fast: regulation that can ensure our fundamental safety, prevent bias in autonomous systems, enable an internationally accepted value framework, address the concerns about data security and privacy, and set the limits of autonomy.


Yes, there is some traction. But it looks like the rate of technological development in this sphere is far ahead of the rate of discussion and regulation. Are we moving too slowly?


Watching the limits and levels

AI must never be so autonomous that it slips out of human control. That would amount to an abdication of our fundamental responsibility. The European Group on Ethics in Science and New Technologies states:


“Since no smart artefact or system - however advanced and sophisticated - can in and by itself be called ‘autonomous’ in the original ethical sense, they cannot be accorded the moral standing of the human person and inherit human dignity. Human dignity as the foundation of human rights implies that meaningful human intervention and participation must be possible in matters that concern human beings and their environment. Therefore, in contrast to the automation of production, it is not appropriate to manage and decide about humans in the way we manage and decide about objects or data, even if this is technically conceivable.”


This makes a whole lot of sense! Clear limits must be set for what an autonomous AI program can decide or suggest to people. And people must have the right to know when they are interacting with AI, what level of AI is involved, and what the implications are of taking the program’s advice or being at the receiving end of its decisions. (https://bit.ly/2Ftztog)


Blockchain

Blockchain is intended to enable a decentralized, transparent, traceable record-keeping mechanism. This means AI algorithms can be tracked: changes to programs can be monitored and audited, and some level of accountability can be created to govern inputs, processes, decisions, and outcomes. If you explore this subject further, you will find more talk about the use of AI in blockchain. But I think it is the use of blockchain in AI that is even more critical, necessary, and urgent.
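
As a rough sketch of what “blockchain in AI” could look like, here is a hypothetical, simplified example in Python: an append-only, hash-chained audit log for model updates. It is not a real blockchain (there is no distribution or consensus); it only shows how chaining hashes makes later tampering with a recorded model change detectable.

```python
# Hypothetical sketch: an append-only, hash-chained audit log for AI model
# updates. Each entry commits to the previous one, so altering any past
# record breaks the chain. This shows only the hash-chaining idea, not a
# full blockchain (no distribution, no consensus).
import hashlib, json, time

class ModelAuditLog:
    def __init__(self):
        self.entries = []

    def record(self, model_version, training_data_hash, change_note):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": time.time(),
            "model_version": model_version,
            "training_data_hash": training_data_hash,
            "change_note": change_note,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Return True if no recorded entry has been altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = ModelAuditLog()
log.record("v1.0", "sha256:abc...", "initial training run")
log.record("v1.1", "sha256:def...", "retrained with new reward weights")
print(log.verify())                    # True
log.entries[0]["change_note"] = "x"    # tamper with history
print(log.verify())                    # False
```

In a genuinely decentralized setup, entries like these would be replicated and agreed upon by independent parties, which is what would make the audit trail hard for any single actor, human or machine, to rewrite.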


The Bottom-line

In the recently released documentary “Do You Trust This Computer?” directed by Chris Paine, Elon Musk issues a grim warning: "AI doesn't have to be evil to destroy humanity, but if AI has a goal, and humanity just happens to be in the way, it will destroy us as a matter of course without even thinking about it.” (https://bit.ly/2GH00TI)


Watch the trailer here:

https://www.youtube.com/watch?v=3CJE6XheubM


Now, that’s a warning worth thinking about! I am convinced that we must steer clear of the path to self-aware machines. The level of technology we currently have can be sharpened to diagnose disease better, cure faster, build smarter, conserve resources better, help the environment, and solve many of the large and complex problems that stand before us.

But we don’t need self-evolving and self-aware machines. Because when we cross the boundary where AI builds its own personality, and can do so on values, rules, and algorithms that are no longer under our control, a level of danger creeps in with potential consequences we may never be able to fully grasp, let alone deal with and survive.


© 2018 by Konrad Fernandez.