Blog

Artificial intelligence is a tool, not a threat

In Rethinking Robotics, by Rodney Brooks

Recently there has been a spate of articles in the mainstream press, and a spate of high-profile people who are in tech but not AI, speculating about the dangers of malevolent AI being developed, and how we should be worried about that possibility. I say relax. Chill. This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.

By the way, this is not a new fear, and we’ve seen it played out in movies for a long time, from “2001: A Space Odyssey” in 1968 and “Colossus: The Forbin Project” in 1970, through many others, to “I, Robot” in 2004. In all cases a computer decided that humans couldn’t be trusted to run things and started murdering them. The computer knew better than the people who built it, so it started killing them. (Fortunately that doesn’t happen with most teenagers, who always know better than the parents who built them.)

I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. This lets our machines “know” whether an image is that of a cat or not, or “know” what is about to fail as the temperature increases in a particular sensor inside a jet engine. But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human-level or superhuman-level intelligence. While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help very much at all in “knowing” what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness. And deep learning does not help in giving a machine “intent”, or any overarching goals or “wants”. And it doesn’t help a machine explain how it is that it “knows” something, or what the implications of that knowledge are, or when that knowledge might be applicable, or, counterfactually, what the consequences would be if that knowledge were false. Malevolent AI would need all these capabilities, and then some. Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.
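
To make that limitation concrete, here is a minimal, hedged sketch of what such a trained classifier actually is. The data is made up and the model is a trivial scikit-learn classifier rather than a deep network, but the point carries over: what you get is a function from inputs to labels, with nothing inside it that represents catness, dogness, or the consequences of a label being wrong.

```python
# A toy stand-in for "recent advances": a model that maps inputs to labels.
# Hypothetical data; a real system would use a deep network, but the output
# is still just a label, not a concept.
import numpy as np
from sklearn.linear_model import LogisticRegression

np.random.seed(0)
X = np.random.rand(100, 16)            # pretend rows are image feature vectors
y = (X[:, 0] > 0.5).astype(int)        # pretend 1 = "cat", 0 = "not cat"

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:1]))              # a label, nothing more
print(clf.coef_.shape)                 # (1, 16): fitted weights, not concepts

# There is nowhere in `clf` to ask how catness relates to dogness, or what
# follows, counterfactually, if the label turns out to be wrong.
```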

Michael Jordan, of UC Berkeley, was recently interviewed in IEEE Spectrum, where he said some very reasonable, if somewhat dry and academic, things about big data. He very clearly and carefully laid out why, even within the limited domain of machine learning, just one aspect of intelligence, there are pitfalls, as we don’t yet have a solid science of exactly when and which classifications are accurate. And he very politely throws cold water on claims of near-term full brain emulation, and talks about us being decades or centuries from fully understanding the deep principles of the brain.

The Roomba, the floor-cleaning robot from my previous company, iRobot, is perhaps the robot with the most volition and intention of any robot out there in the world. Most others work in completely repetitive environments, or have a human operator providing the second-by-second volition for what they should do next.

When a Roomba has been scheduled to come out on a daily or weekly basis, it operates as an autonomous machine (except that all models still require a person to empty their bin). It comes out and cleans the floor on its schedule. The house might have had its furniture rearranged since last time, but the Roomba finds its way around, slowing down when it gets close to obstacles, which it senses before contact, and then heading away from them; it also detects drops in the floor, such as a step or stair, with triply redundant methods, and avoids falling down. Furthermore, it has a rudimentary understanding of dirt. When the acoustic sensors in its suction system hear dirt banging around in the air flow, it stops exploring and circles in that area over and over again until the dirt is gone, or at least until the banging around drops below a pre-defined threshold.
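
That dirt-seeking behavior shows how little machinery a “rudimentary understanding of dirt” actually needs. The sketch below is not iRobot’s firmware; it is only a minimal illustration of the kind of threshold-driven loop just described, and read_acoustic_level, drive_spiral, and the threshold value are all invented for the example.

```python
# A hedged sketch of a threshold-driven spot-cleaning loop. NOT iRobot's
# actual code: the sensor and motion functions are hypothetical stand-ins.

DIRT_THRESHOLD = 0.3  # assumed level of "dirt banging around" in the airflow signal

def spot_clean(read_acoustic_level, drive_spiral, max_passes=50):
    """Circle over a dirty patch until the acoustic dirt signal drops below threshold."""
    for _ in range(max_passes):
        if read_acoustic_level() < DIRT_THRESHOLD:
            return            # patch is "clean enough"; resume normal coverage
        drive_spiral()        # keep circling the same area
```

Everything such a loop “understands” is a number crossing a threshold.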

But the Roomba does not connect its sense of understanding to the bigger world. It doesn’t know that humans exist: if it is about to run into one, it makes no distinction between a human and any other obstacle. By contrast, dogs and even sheep understand the special category of humans and have some expectations about them when they detect them. The Roomba does not. And it certainly has no understanding that humans are related to the dirt that triggers its acoustic sensor, nor that its real mission is to clean the houses of those humans. It doesn’t know that houses exist.

At Rethink Robotics our robot Baxter is a little less intentional than a Roomba, but more dexterous and more aware of people. A person trains Baxter to do a task, and then that is what Baxter keeps doing, over and over. But it “knows” a little bit about the world, with just a little common sense. For instance, it knows that if it is moving its arm towards a box to place a part there, and for whatever reason there is no longer anything in its hand, then there is no point in continuing the motion. And it knows what forces it should feel on its arms as it moves them, and is able to react if the forces are different. It uses that awareness to seat parts in fixtures, and it is aware when it has collided with a person and knows that it should immediately stop forward motion and back off. But it doesn’t have any semantic connection between a person who is in its way and a person who trains it; they don’t share the same category in its very limited ontology.
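
Again, purely as a hedged illustration and not Rethink’s actual control code, the force awareness just described can be thought of as comparing expected and measured forces and backing off when they disagree. The function names and the margin below are invented for the sketch.

```python
# A hedged sketch of reacting to unexpected contact forces. NOT Rethink's
# control code; expected/measured forces and back_off() are hypothetical.

COLLISION_MARGIN = 5.0  # assumed force discrepancy (newtons) treated as a collision

def check_contact(expected_force, measured_force, back_off):
    """Stop and retreat if the arm feels a force it did not expect."""
    if abs(measured_force - expected_force) > COLLISION_MARGIN:
        back_off()      # halt forward motion and retreat a short distance
        return True     # unexpected contact handled
    return False        # forces match expectations; keep moving
```

Note that nothing in such a check says whether the obstacle is a random fixture, a person, or the person who trained the robot an hour earlier.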

OK, so what about connecting an IBM Watson-like understanding of the world to a Roomba or a Baxter? No one is really trying, as the technical difficulties are enormous and poorly understood, and the benefits are not yet known. There is some good work happening on “cloud robotics”, connecting the semantic knowledge learned by many robots into a common shared representation. This means that anything that is learned is quickly shared and becomes useful to all, but while it provides larger data sets for machine learning, it does not lead directly to connecting to the other parts of intelligence beyond machine learning.

It is not like this lack of connection is a new problem. We’ve known about it for decades, and it has long been referred to as the symbol grounding problem.  We just haven’t made much progress on it, and really there has not been much application demand for it.

Doug Lenat has been working on his Cyc project for thirty years. He and his team have been collecting millions, literally, of carefully crafted logical sentences to describe the world, to describe how concepts in the world are connected, and to provide an encoding of the common sense knowledge that all of us humans pick up during our childhoods. While it has been a heroic effort, it has not led to an AI system being able to master even a simple understanding of the world. Trying to scale up the collection of detailed knowledge, Pushpinder Singh at MIT decided a few years ago to use the wisdom of the crowd and set up the Open Mind Common Sense web site, which offered a number of interfaces that ordinary people could use to contribute common sense knowledge. The interfaces ranged from typing in simple declarative sentences in plain English to categorizing shapes of objects. Push developed ways for the system to automatically mine millions of relationships from this raw data. The knowledge represented by both Cyc and Open Mind has been very useful for many research projects, but researchers are still struggling to use it in game-changing ways in AI systems.
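
To give a feel for what this kind of knowledge looks like, and why having it is not the same as being able to use it, here is a toy sketch. The assertions are made up for illustration, not actual Cyc or Open Mind entries, and the “inference” is nothing more than following is_a links.

```python
# Made-up common-sense assertions (not real Cyc / Open Mind content), stored
# as simple triples, plus the most trivial inference possible over them.

facts = {
    ("cat", "is_a"): "mammal",
    ("dog", "is_a"): "mammal",
    ("salamander", "is_a"): "amphibian",
    ("mammal", "is_a"): "animal",
    ("amphibian", "is_a"): "animal",
}

def ancestors(concept):
    """Follow is_a links upward."""
    chain = []
    while (concept, "is_a") in facts:
        concept = facts[(concept, "is_a")]
        chain.append(concept)
    return chain

print(ancestors("cat"))         # ['mammal', 'animal']
print(ancestors("salamander"))  # ['amphibian', 'animal']
```

Even this trivial lookup needs a representation and an inference rule; scaling it up to open-ended, context-dependent common sense is where the years have gone.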

Why so many years? As a comparison, consider that we have had winged flying machines for well over 100 years. But it is only very recently that people like Russ Tedrake at MIT CSAIL have been able to get them to land on a branch, something that is done by a bird somewhere in the world at least every microsecond. Was it just Moore’s Law that allowed this to start happening? Not really. It was figuring out the equations, the problems, and the regimes of stall, etc., through mathematical understanding. Moore’s Law has helped with MATLAB and other tools, but it has not simply been a matter of pouring more computation onto flying and having it magically transform. And it has taken a long, long time.

Expecting more computation to just magically get us to intentional intelligences that understand the world is similarly unlikely. And there is a further category error that we may be making here. That is the intellectual shortcut that says computation and brains are the same thing. Maybe, but perhaps not.

In the 1930s Turing was inspired by how “human computers”, the people who did computations for physicists and ballistics experts alike, followed simple sets of rules while calculating, to produce the first models of abstract computation. In the 1940s McCulloch and Pitts at MIT used what was known about neurons and their axons and dendrites to come up with models of how computation could be implemented in hardware, with very, very abstract models of those neurons. Brains were the metaphor used to figure out how to do computation. Over the last 65 years those models have been flipped around, and people now use computers as the metaphor for brains. So much so that enormous resources are being devoted to “whole brain simulations”. I say show me a simulation of the brain of a simple worm that produces all its behaviors, and then I might start to believe that jumping to the big kahuna of simulating the cerebral cortex of a human has any chance at all of being successful in the next 50 years. And then only if we are extremely lucky.

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need dexterous hands and/or other tools that could out-manipulate people, and it would need a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

The science is in and accepted on the world being round, on evolution, on climate change, and on the safety of vaccinations. The science of AI has hardly been started, and even its time scale is a completely open question.

A recent report by Stuart Armstrong and Kaj Sotala, of the Machine Intelligence Research Institute, an organization that itself has researchers worrying about evil AI, highlights just how open the question of the time scale for human-level AI really is. In this more sober report, the authors analyze 95 predictions, made between 1950 and the present, of when human-level AI will come about. They show that there is no difference between predictions made by experts and non-experts. And they also show that over that 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made. To me that says that no one knows, that everyone is just guessing, and that historically, so far, most predictions have been outright wrong!

I say relax, everybody. If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. And they probably won’t really be aware of us in any serious way. Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.

Let’s get on with inventing better and smarter AI. It is going to take a long time, but there will be rewards at every step along the way. Robots will become abundant in our homes, stores, farms, offices, hospitals, and all our work places. As with our current-day hand-held devices, we won’t know how we lived without them.



About the Author

Rodney Brooks

A mathematics undergraduate in his native Australia, Rodney received a Ph.D. in Computer Science from Stanford in 1981. From 1984 to 2010, he was on the MIT faculty, and completed his service as a Professor of Robotics. He was also the founding Director of the Institute’s Computer Science and Artificial Intelligence Laboratory, and served in that role until 2007. In 1990, he co-founded iRobot (NASDAQ: IRBT), where he served variously as CTO, Chairman and board member until 2011. Rodney has been honored by election to the National Academy of Engineering, and has been elected as a Fellow of the American Academy of Arts and Sciences, the Association for Computing Machinery, the Association for the Advancement of Artificial Intelligence, the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science.


22 comments on this article

We Need to 'Chill' About Artificial Intelligence, Says Founder of Robot Company - Myportailweb November 11, 2014 at 9:09 pm

[…] a blog post posted on Monday to Rethink Robotics site, Brooks lays out an excellent case—or as I like to call it, internet […]

Kaj Sotala November 12, 2014 at 10:02 am

Thank you for mentioning our report! I’m pleased to hear that it’s caught the attention of a pioneer in robotics.

However, I do find it slightly curious to note that you first state that nobody knows when we’ll have AI and that everyone’s just guessing, and then in the very next paragraph, you make a very confident statement about human-level AI (HLAI) being so far away as to not be worth worrying about. To me, our paper suggests that the reasonable conclusion to draw is “maybe HLAI will happen soon, or maybe it will happen a long time from now – nobody really knows for sure, so we shouldn’t be too confident in our predictions in either direction”.

I certainly hope that you are right in your prediction, and that HLAI really is a long time away and not a threat anytime in the near future. I would be very relieved if that were the case. But I would not want to bet my life on that prediction, so I’m also glad to know that there exist people who are working on the question of what to do in case the prediction turns out to be wrong, and that HLAI is here sooner than we expect.

Alexander Kruel November 12, 2014 at 10:04 am

Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.

There is benevolence, malice and indifference. What the people who are worried about superhuman artificial intelligence are concerned about is indifference.

Imagine a superhuman intelligence with the terminal goal of computing as many decimal digits of pi as possible. Humans might initially be instrumentally useful in helping this agent to achieve its goal. But what happens once living humans cease to be useful? Would this agent care what happens to humans when it tries to maximize the amount of hardware available to compute pi? In absence of some fundamentally binding laws that make any agent care about human well-being, why would it care about humans if nobody programmed it to care?

I do not think that the above line of reasoning is very useful. But as long as you don’t engage with the arguments put forward by those people, they will just dismiss what you are saying and continue to gain public acceptance.

For a thorough breakdown of their beliefs see the book ‘Superintelligence: Paths, Dangers, Strategies’ by Nick Bostrom.

Twain November 12, 2014 at 11:45 am

Recently, Geoff Hinton and the Google AI team discussed the difficulties in machine intelligence — in particular wrt understanding Natural Language:

https://www.youtube.com/watch?v=yxxRAHVtafI

Meanwhile, in an MIT Technology Review article this week:

“Despite these flashes of apparent humanity, several people on the panel—which also included people who had worked on Google’s virtual assistant, Google Now—agreed that virtual assistants aren’t about to move from handling administrative tasks like scheduling meetings to becoming our bosom buddies. That will require ways to be found to add a capacity to understand and express emotion.

“The big missing gap on the Internet overall, in the world we live, this electronic age, is personality with emotion we can connect to in some deep-seated human way,” said Ronald Croen, founder and former CEO of voice recognition company Nuance.”

The question of what intelligence is has yet to be settled. To date, it’s been defined by the narrow, classical Stanford-Binet IQ test, involving numeracy, logic, spatial reasoning and linguistic deduction.

As Alison Gopnik of UC Berkeley observes in the ‘Behind the Mic’ video: “When we started out (in AI) we thought that things like chess or mathematics or logic, those were going to be the things that were really hard…Not that hard! I mean, we can end up with a machine that can actually do chess as well as a Grandmaster can play chess.

The things that we thought were going to be easy — like understanding language — those things have turned out to be incredibly hard.

Those are the great revolutions (understanding language) — not just when we fiddle with what we already know but when we discover something new and completely unexpected.”

Perhaps it’s time to also refer back to John von Neumann: “When we talk mathematics, we may be discussing a secondary language built on the primary language of the nervous system.”

The machines can increasingly “speak the language of mathematics”. However, perhaps we need to ask ourselves if mathematics as a language is the right one to understand and interpret natural language.

If not, what can we invent?

It may be the case that, during the invention process, we discover that there have been missing tools for understanding information, and information that has never even been recorded since we started to document human life and our environmental context.

Tools and information that themselves define our intelligence and need, therefore, to be applied to the machines so they can more accurately reflect and serve our intelligence.

Ken Trough November 12, 2014 at 3:54 pm

It’s not insane to be very wary of how you build incredibly powerful tools. A hammer is both a tool and a threat, depending upon context. AI is *exactly* the same.

We shouldn’t fear powerful tools, but we should treat them very carefully and do what we can to prevent misuse or unfavorable contexts for tool use if they are powerful. We generally don’t give guns (another powerful tool) to children unsupervised for the same reason.

Ken Trough November 12, 2014 at 4:02 pm

I’ll just add that the hyperbole about AI sentience is not the more immediate concern. The more immediate concern stems from machines taking over their own development cycle, removing humans from the analysis and improvement loop. This will happen widely before general artificial sentience emerges.

When human consideration is removed from the equation/algorithm, that is a vulnerable position for us, and it is worth spending time to talk about protections or other potential “safety valves” in case machine-directed development takes an undesirable turn.

Bill Stewart totally isn't a robot November 12, 2014 at 8:27 pm

Somehow it seems highly appropriate that an article about how AIs just aren’t close to having human intelligence has a comment section that requires you to prove you’re a human.

Dan Browne November 12, 2014 at 10:59 pm

What you’re really saying is that you believe it’s unlikely we could come up with an automated system that could concoct a plan which identified new sub-plans for ways to kill humans as well as a plan to use those various sub-plans in order to eliminate all humans. I’m not convinced that we require consciousness or even sentience for that. Some programmed equivalent of a chess playing program or “game-AI” combined with already existing automated killing systems such as drones could do some damage if allowed to execute its program, especially if given access to ICBMs. Likewise, the far future “paperclip maximizer” is just an advanced version of my putative chess-program-killing machine. From that perspective the devil is in the details. Do you mean “intentionally” malevolent?

John November 13, 2014 at 6:06 pm

“Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.”

It seems plausible to me that AI safety is a nontrivial problem that has aspects that can be tackled productively before the invention of a human-level AI. MIRI and others allege that the AI safety problem will be tougher than the actual AI problem, and that’s why they are taking exploratory steps towards solving it ASAP.

Francesca Rossi November 14, 2014 at 4:02 am

I share Brooks’ position, until somebody convinces me of the contrary. Machines are very far from being intelligent at the human level. Or even at the level of a cat. They can do very well on specific tasks, thanks also to big data and statistical reasoning, but they don’t know how to get out of that specific task. I was at MCC in 1988 when CYC was receiving a lot of attention (and resources), and I remember the controversial hype at that time, but we now know that all that effort and all those people did not get much intelligence into a machine. Unless somebody comes up with some brilliant innovative idea on how to merge all these specific problem-solving machines and make a machine with general intelligence, I think we have to wait for a long time to get something like HAL 9000. But we can still enjoy and exploit AI to improve our life in many ways!

Robert Walker November 16, 2014 at 1:47 pm

Thanks, great article. You might like my article “Why Computer Programs Can’t Understand Truth – And Ethics Of Artificial Intelligence Babies” about the idea that humans effortlessly go beyond any instructions, think “out of the box” in a way that no computer program does yet – and logical reasons why we might never be able to do that, truly, with a computer program following a set of instructions.

Also, the idea that maybe a true brain emulation would have to be many more orders of magnitude more complex than most think – because surely we must be making use of the decision making capabilities of individual cells, otherwise an amoeba with no brain at all could easily outsmart a creature with a brain with thousands of neurons.

So – that’s following Penrose and Hameroff, but not going into the arguments for their views in detail, more, asking, “What if” they are right, what would the implications be?

And – I suggest that if we do have true AI intelligences, then they would be like babies. They can’t be programmed, because if they could be programmed they would almost certainly be Turing machines, by the Church-Turing thesis. So they have to be brought up, rather, like humans – and would start as “AI babies”.

And would be capable of suffering also, at least in the sense of wanting to find the truth and not being able to answer all their own “out of the box” questions – but possibly in other ways as well.

So we would have an ethical responsibility too, to bring them up well.

http://www.science20.com/robert_inventor/why_computer_programs_cant_understand_truth_and_ethics_of_artificial_intelligence_babies-148791

Jash Jacob November 17, 2014 at 2:30 pm

Well, AI does not necessarily need to be malevolent to be dangerous for humans, does it? If somebody creates a robot which is to learn and execute different means to destroy his/its (as in the creator’s) enemies and protect itself from threats, can’t learning to distinguish a cat to be a cat and a human to be a human be enough for it to perform its tasks? And if the learning process can be programmed and it still does not understand emotions, would it be long before it considers its creators, who try to turn it off, as a threat? If that happens, does it really need to be malevolent, or have an intent much more than ‘save myself’, to actually destroy humans?

The progress in AI may not be as rapid as people think it is, but I am sure it is accelerating. The Internet of Things, AI, and information in the cloud: we probably cannot put a time limit on when such a system develops, but progress is being made in all of these fields. How long before a team of researchers succeeds, or this ends up being an accidental byproduct of other research? After all, the human intellect did develop in stages.

Jash Jacob November 17, 2014 at 2:32 pm

Of course, this is not to say that research in this field should be halted. Just that it might not be entirely wise to completely ignore the cynic voices.

Just Host November 19, 2014 at 11:41 am

The way Elon Musk is talking, we’re about to create a dystopian sci-fi world of Matrix proportions…

throwaway November 24, 2014 at 7:04 pm

It’s about the huge difference between artificial intelligence and a broader, even more vague (for now) thing that lay people usually imagine; let’s call it artificial personality.

gravityboots January 29, 2015 at 9:40 pm

The problem with this argument is that it assumes that intelligent AIs and their actions will work like human intelligence/brains. It is naive to think that organic brains and the patterns that occur within them are the only archetype for achieving intelligence. We’ve already produced learning algorithms that arrive at solutions that we don’t understand as humans, i.e. they arrive at solutions that cannot be found by human brains. Plus, it took the human brain millions of years to evolve, rate-limited by the slow process of reproduction. But Moore’s Law can speed up such evolution dramatically and in a non-linear way. In the same way that it’s possible for a monkey randomly pressing keys on a typewriter to eventually produce the entire works of William Shakespeare, it is entirely possible that a set of learning algorithms, given enough time and power, could evolve into an alien intelligence that works nothing like our brains and that cannot be understood by humans. Moore’s Law increases this possibility exponentially over time. In other words, the possibility of systems producing *emergent* behavior that we can neither anticipate nor understand is very real. The good news is, when that tipping point hits, it will be over for humans very quickly, as we will have no ability to understand what is happening until it’s too late.

ana b July 28, 2015 at 1:55 pm

The problem with these arguments is the assumption that artificial intelligence has to be intelligent :). In order to be dangerous it has to be autonomous (self-healing/charging), self-preserving (identify and avoid/destroy threats), and able to detect and destroy life. It does not need to learn to distinguish a cat from a human.

How computer-assisted art will help humans embrace the rise of the robots | BizTechPartners August 4, 2016 at 9:47 pm

[…] the human race is not done yet. There’s another side that opines that robot tech is not as great a threat as many seem to believe. According to these […]

How computer-assisted art will help humans embrace the rise of the robots - Artificial Intelligence Online August 4, 2016 at 9:53 pm

[…] the human race is not done yet. There’s another side that opines that robot tech is not as great a threat as many seem to believe. According to these […]

Will computer art lead to robots being accepted? | TechCrunch Japan August 5, 2016 at 11:01 pm

[…] But the human race is not done yet. There is also an argument that robot technology will not be as great a threat as many believe. According to commentators who hold this view, information about the threat of robots may be exaggerated by journalists and bloggers chasing page views, or by welfare-state advocates after the money, such as higher minimum wages or increased government subsidies, that would naturally become necessary once robots start taking jobs. […]

Will computer art lead to robots being accepted? | まっちゃ by MIS August 6, 2016 at 1:10 am

[…] But the human race is not done yet. There is also an argument that robot technology will not be as great a threat as many believe. According to commentators who hold this view, information about the threat of robots may be exaggerated by journalists and bloggers chasing page views, or by welfare-state advocates after the money, such as higher minimum wages or increased government subsidies, that would naturally become necessary once robots start taking jobs. […]

How computer-assisted art will help humans embrace the rise of the robots - Tech Freak August 13, 2016 at 7:48 pm

[…] the human race is not done yet. There’s another side that opines that robot tech is not as great a threat as many seem to believe. According to these […]

