
Is Artificial Intelligence an existential threat?


I don't know what an AI (Artificial Intelligence) really is, so I can't figure out if it's an existential, society-ending threat to humans. The only way I know to measure intelligence is to have something pass the Turing test, or take an IQ (Intelligence Quotient, courtesy of a guy from Stanford) test and score indistinguishably from an average human. Some people, like Ray Kurzweil, think that will happen in 2029.

I do know what ML (Machine Learning) is. It's the capability of taking a set of inputs and producing a program that generates a set of desired outputs. You can do this yourself (figure out how to configure the control module by running a bunch of optimization code on a large set of data). You could take an ML MOOC (Massive Open Online Course) to learn how; there's one here:
Created by:   Stanford University
Hosted by: Coursera
Taught by: Andrew Ng
That ML entity you build looks at a bunch of data and decides what to do, but it can't really change its mind and learn new things. To make it learn new things, it has to be constantly processing new data and getting a feedback signal so that it can modify itself to maximize its desired output. In other words, you tell the program what output you want, and it takes new data into account to optimize its output given the new inputs. (I bet you didn't know that ML is just a digital simulation of an analog computer, brute-force solving differential equations? Optimization with inputs and feedback can be described with differential equations, and the most typical techniques for solving them descend from Newton's original methods from the late 1600's.)
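The "brute force solving" idea above can be sketched in a few lines. This is my own toy example, not any particular library's API: gradient descent, the workhorse of ML optimization, is just Newton-flavored iteration on an error function.

```python
# A toy sketch of "ML as optimization" (my own example, not from any
# particular library): fit a line y = w * x to data by repeatedly
# nudging w downhill on the squared error -- the same flavor of
# iterative method Newton used for root finding.

def fit_slope(xs, ys, lr=0.01, steps=1000):
    w = 0.0
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# Data generated from y = 3x; the optimizer recovers w close to 3.
w = fit_slope([1.0, 2.0, 3.0, 4.0], [3.0, 6.0, 9.0, 12.0])
```

Real ML systems have millions of parameters instead of one, but the loop is the same: measure the error, follow its gradient, repeat.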

Once you hook the system up to an error signal and have it optimize the output in real time, it can change its response to different inputs. But like an analog computer, it has no 'consciousness'; it responds in a fixed way, even when you feed in the error signal for real-time modification. Even so, I think there's a good argument that it's alive.
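The error-signal loop described here can be sketched as an online learner that nudges itself after every new observation instead of being trained once and frozen. (A hypothetical toy model, not real device firmware.)

```python
# A toy sketch of the real-time feedback loop (hypothetical, not real
# firmware): the learner modifies itself after every new observation.

def online_update(w, x, target, lr=0.1):
    error = target - w * x      # the real-time error signal
    return w + lr * error * x   # nudge the parameter to shrink the error

w = 0.0
stream = [(1.0, 2.0), (2.0, 4.0), (1.5, 3.0)] * 50  # data from y = 2x
for x, target in stream:
    w = online_update(w, x, target)
# after enough feedback, w has converged near 2
```

This is the sense in which the system "changes its mind": every new input leaves the parameters slightly different from before.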

When an ML program is running with feedback, it can be assumed to be 'alive', as it's modifying itself. It's changing what it does and learning. If it can change enough about itself, we'd have to say it's growing and maturing. How can it not be alive? If it's alive, then the program is like a soul: the record of what it did, what it wants to do, and how to do it, which comes to life when put in the right vessel. And what if the right vessel is your printer?

But who cares if your printer is alive? 

Maybe you think the real question is: "Does my printer have free will?" It doesn't matter. You can't tell the difference except by spending a lot of time and energy. Would you even be asking the question if the printer weren't alive? It acts like it's alive. I can't predict what my printer will do; it is so complicated that nobody can. How can I tell that my printer isn't alive? I can't. With a good enough processor in there and the right program, maybe around 2029, the printer will be able to argue with me and try to convince me it's alive, and I won't be able to tell the difference between that and the printer being connected to a call center in India. Call centers in India may disappear [0] faster than non-self-driving cars.

I am confident that we can build systems out of many of these ML entities [1] that can easily pass the Turing test (see Watson [2]) in a particular knowledge domain (technical support for a particular program, for instance. Yeah, that'd be my job...). In the brain, these ML systems are like reflexes or emotions. Since they can modify themselves, you can't know what state they are in until you go look, which takes a lot of energy and is not worth it. You can get close enough by making a model that is less complicated and faster to run, but its predictions will be delayed and less accurate than looking at all the code and all the inputs. The device will effectively appear to have free will because it is doing things that you can't predict. [3]

Since it appears to have free will, this implies that it must have a 'will', or a mind to change. It doesn't matter whether it's actually conscious or actually has a 'will' or a mind. It quacks like a duck, it walks like a duck and it flies like a duck: it's a duck. The best model you can make of it assumes that it's alive and conscious. Hence, your safest and most accurate stance is to treat it as if it is alive and conscious. Dead things do not have free will. Unconscious things do not have free will. Your printer appears to have free will and makes unpredictable decisions. Something must have changed its mind. And if it has a mind, it must be alive.

Printers can be considered alive in two ways. The printer has a bunch of inputs and measurements (cartridge temperature, color, volume, paper volume and size, light intensity, scan position, next scan position, button depress, power voltage, amperage, scanning density, etc.), but the typical printer doesn't write its own code and improve its own functionality. Some programmer in Korea writes, changes and fixes the soul of code that makes my printer do something different: the Korean programmer changes the printer's personality. Was my printer an inkaholic? We can change the ink delivery methods to reduce that. And we can clone that soul into all the other printers, improving all of them. Pretty neat, eh? You didn't realize that programmers were adding to the amount of consciousness in the world, did you? That's why programming is one of the highest callings. (Yes, that was a religious reference. A future blog post explaining this is in the works.)

The second way: the entire system of printer, programmer, hardware designer and manufacturer can be considered alive [4]. And when my printer starts to talk to me intelligently in 2029, it will be hard to find anyone who doesn't think it's alive.

Are printers alive?

I don't care whether these printers or Facebook bots [5] are alive or conscious (I argued above that they are). I'll handle the question 'Is AI an existential threat?' by ignoring it and answering a different question: 'Is ML an existential threat?'

The answer to that question is: Yes. Hell, Yes! Hell and damnation fire, Yes.

This means we shouldn't care whether AI or the Singularity [6] arrives, because something we are doing today will change society before either of those is past its infancy. ML needs to be carefully controlled. ML has already affected society in unbelievable ways. Forget about self-driving cars; those will only save about 1.3 million lives a year. We're talking about affecting billions of people. Okay, that's for a section below. Back to the consciousness argument.

Why would someone claim that my printer is not alive today? It communicates with me the same way a dog or a cat does. It almost understands me as much as the dog or cat, but not quite. I have to use my phone or one of the new agents (Google Home, Alexa, Siri, Ok Google, Cortana) if I want to talk to the printer, or to the rest of the world. As far as I can tell, my printer has free will. I can only predict what it's going to do most of the time; sometimes it does things I don't expect.

The only difference between the printer and a nematode (soon to be completely simulated down to the sub-cellular level [7]) is that the way the nematode gets a new soul is through sex. Its soul is coded in its DNA. That DNA needs a little bit of scaffolding, and it can actually build a new, differently programmed nematode.

Typical 3-D printers don't build copies of themselves, but they will. Another difference is that nematode souls are changed randomly (not designed, but guided by evolution, the survival of the fittest) while printer souls are intelligently designed. So if you say that the printer is alive, you have to count the printer factory and the design cycle as its sex. The printer is such a complicated system that it takes humans (printer gods) to guide it. Eventually a 3-D printer will be able to build copies of itself. Will it ever have sex programmed into it? Will we be able to successfully put in an ML module that tries to improve the program's soul? It could offer versions of itself to other printers. The printer would attempt to run the ML modules more efficiently. At that point, I think you have to say it's alive.

Are Machine Learning Algorithms Changing the World?

Yes, they are. Yes, they have. Yes, they will.

That means the real question is "How can we make sure that they change the world in the right direction?"  But what is the right direction? That's a very, very, very interesting question. Glad you asked.

One thing to observe is that there are many, many, many ways to get worse and only a few ways to get better. That's essentially the second law of thermodynamics [6a]: unless you put information into a system (which costs energy), it gets worse (less able to do work). It takes concerted effort to make things better, and when you do, you produce a lot of waste heat, entropy, and confusion, and you make it harder to extract energy: you essentially create pollution in order to get anything done. Life consumes and transforms resources and degrades them so they can't be used again as efficiently.

Life feeds on the low entropy of sunlight, uses it to live, and then outputs the waste in several forms. To improve this cycle (hmm, sounds like an ML device), something alive takes energy, information, creativity or brute-force methods and uses them to improve its improvement process, then tries to improve itself. This is where life gets really interesting, and impactful.

It's way cheaper to copy a particular process than to invent it from scratch. This is why nation states that can observe and learn from each other can grow really fast for a long time, until everything useful has been copied. Then they have to think for themselves again, which takes far more effort and energy and degrades their growth rate. It happened in Japan. It's happening in India. It's happening in China. It happened in America. It happened in Russia. It's a fundamental law of ML systems, which nation states sometimes act as. Copying improvements is much cheaper than inventing new ones. Probably by a factor of 5 or 10.

But enough of that. If you notice, the world economy continues to grow. Most nation states try to maximize their economies. We've set up systems with people in them that act like ML systems trying to increase their economic output: increasing efficiency, increasing the volume of things people want, increasing the capabilities of all of their citizens to do new things. We can take citizens to space, we can fly them to Newark, we can feed them hamburgers, hot dogs, farm-grown salmon and test-tube-grown steak. They can read any book ever published in any language ever spoken. They increase their capabilities every year. Hmm. Nation states seem to be alive, too.

This has been going on for a long, long, long time. We can see this ML system in action throughout history, even prehistory. Take stone tools. The first stone tools are dated at around 3.3 million years ago. They were made by smashing two rocks together and using the chips that came off. These tools didn't change much over a million years. The next type of stone tool showed up 1.7 million years ago. It was also made by banging two rocks together, but instead of using the chips, you shaped one of the rocks by knocking chips out of it. This gave a much sharper and longer blade edge. It took our ancestors over a million years to go from knocking out flakes to carving out hand axes. That didn't change for another million years or so. Then came carefully sharpened flints with grooves that could be attached to wooden shafts to make arrows or spears. Then, about 200,000 years after that (about 50,000 years ago), we started making knife-shaped stone tools.

The changes are coming faster now. Villages. Towns. Farms. Reading and writing. Bronze Age. Iron Age. Steam Age. Oil Age. Information Age. These are large Machine Learning loops: large system-optimization loops with humans playing the role of system controller. And they keep turning faster. The industrial revolution changed the world overnight (50 years) compared to how long it took us to figure out we should use the stone we knocked flakes out of rather than the flakes themselves (1.6 million years). When things are about information, they can change really quickly; in fact they tend to double in effectiveness every year or so. All the technologies that the Information Age depends upon are now changing so fast that no one person can keep up. Specialization had to come into human society when we became farmers. Soldiers and farmers are different. By specializing you can become much better at what you do (this is why free trade really is better for everyone, except those who aren't the best specialists in the world).

We learned how to make clothes about 100,000 years ago (determined from the evolutionary history of head versus clothing lice). We probably learned to talk around the same time. When we started to talk, we could pass around a lot more information. We could become much more efficient at improving technology as a society; that huge ML system just continued to get faster and faster as we learned how to communicate information faster and cheaper. So we started to evolve faster and faster. 10,000 years ago we invented farming. We're really good at farming. Really good. Really good. One example: if we had not invented a way to manufacture nitrates in the early 1900's, almost all of us would be dead. That's right: without this unnatural process invented by humans, most humans would be dead. And it's always about that slowly improving, ML-based system that humans are a part of.

Eventually we built some non-human ML systems: first there were things like the Jacquard loom [8] and Babbage's difference engine, then, around WWII, 'real' computers. When did computers start being used for Machine Learning? The first work was labelled the field of cybernetics: controlling and aiming guns using an automated feedback system. But that system was static; it didn't learn for many years. Probably the first learning system was for credit card fraud evaluation. There's no way a single banker can guarantee that some random person is creditworthy. You have to have an algorithm running on a computer to decide that question. It's more accurate, it's got more data, and the risk can be managed, unlike with a human. Humans make very, very, very bad machines. Computers and Machine Learning systems have affected almost every part of life. They bring together more information than any one person could possibly know and use it all to make intelligent decisions better than any single person or group of people could make. And when they are set up to control some output with a feedback system, they can act faster and more accurately than a human ever can. Seems like the singularity might already be here.

For instance, Google uses a DeepMind Machine Learning system to control the power consumption of a data center. It can do it way better than some of the most intelligent humans in the world. Machine Learning systems can make all of the things we do much more efficient. Making a system that can make Machine Learning systems better, more efficient, and able to learn what the inputs and outputs should be is much, much, much harder. That is the hard AI problem. The real world is infinitely more complicated than the integers, infinitely more complicated than the real numbers and infinitely more complicated than the space of all possible functions. Really, really, really complicated. Making an intelligent AI is three orders of infinity beyond what we can do today.

It doesn't matter how many doublings of computer power you do; you can't get infinitely more computing power. And you certainly can't do it three times. You can only solve the simplest problems in these spaces. Minimizing the power usage in a data center is way, way, way easier than writing the DeepMind Machine Learning system that figures out what sensors and controls to add to the data center to make it even more efficient. But that's basically within reach. You could probably sell it as a service to computer companies. Designing a system that can design these systems is something we don't know how to do yet. But so what? Since Machine Learning systems actually represent an existential threat to the human race, we need to worry about that threat before we waste resources worrying about the threat of Artificial Intelligence.

Power consumption controlled by humans (right and left edges) vs. controlled by DeepMind.

Who cares if the Singularity happens in 2040? I'm worried about what happens in 2020 and 2028 when bots and humans will be indistinguishable over the Internet.

The predicted time when a bot can fool half of the people half of the time is fast approaching. It means that for every four bots you meet, you will think one of them is human. That's a scary thing to grok. There's no way to know that someone you meet on the internet isn't a dog [9] or a bot. Do not befriend anyone you haven't met. And pretty soon, you'll have to have met them in person, since they will be able to fake a video call. [10]
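The arithmetic behind "one in four" is worth making explicit (my own back-of-envelope calculation, using the numbers in this post):

```python
# If a bot fools half of the people half of the time, then a random
# encounter between a random person and a bot passes as human a
# quarter of the time -- i.e. one bot in four slips past you.

p_people_fooled = 0.5    # fraction of people who can be fooled
p_time_fooled = 0.5      # fraction of the time those people are fooled
p_pass = p_people_fooled * p_time_fooled   # 0.5 * 0.5 = 0.25
```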

Information on the 2015 Loebner Prize

The "standard interpretation" of the Turing Test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.

Two of the bots in the 2007 contest fooled 30% of the human judges. Ray Kurzweil is convinced that a bot will pass the Turing test in 2029. He used to say 2020, but scientists weren't making as much progress as he predicted. Ray thought it was going too slowly, so he joined Google to speed up the process. That's the definition of a useful member of society in my eyes (see the similar stories of Elon Musk and Craig Venter).

I wonder how we can measure this? It's a lot of work to set up a contest: get the judges, get the contestants, run the thing. A better way is to just show random people Facebook accounts, let them chat with the owners, and see if they can tell whether it's a real account or a fake one.

And this is critical work that we have to do. Trump's campaign is rumored to have used millions of bots that pretended to be real people but were really fake insulters, peddling fake news and disparaging certain politicians while supporting others. Republicans fell for it hard, and some Democrats fell for some of the bots. (Although a third of Republicans still believe Obama is a Muslim.) Why did we have to suffer through this blatant stealing of my attention? This is the worst of all worlds of embedded advertisement in news: it's an embedded human in life.

We must make it obvious whether we are talking to a bot or a human.

It must be enshrined as a fundamental human right. To lie about this should be a felony. To put up fake accounts should be a felony if they are not identified as bots and not human.

It must be a requirement that every bot identifies themselves. Without this requirement humans will no longer be in control of their society.

Bots must not take over society.

Bots must not take over society.

Bots must not take over society.

Can anyone point me to resources that are focusing on this most important question of our time?
 -thanks for reading!
 Dr. Mike

[0] Note to self: How many support cases do you need to teach a Machine Learning system how to solve them? This is separate from getting the program 'smart' enough to pass the Turing test, and probably independent of it.

[1] "System 1" neural circuits are what humans experience as emotions or instincts, as described in the book "Thinking, Fast and Slow" by Daniel Kahneman and discussed in a previous blog post. These are what make experts: this type of circuit lets us make decisions without conscious thinking. The unconscious brain: alive but not thinking and not conscious, yet pretty darn smart. It's the smartness, and the ability to condense a huge amount of information down to a usable nugget, far more than any single human or group of humans could.

[2] Watson (from IBM) easily beat the two previous best humans to ever play Jeopardy! in 2011, a decidedly language intensive task.

[3] This derives from the 'halting problem' as described by Turing. He showed that there exist programs whose behavior you can't predict without running them: there is no shorter (faster) version, no smaller program or model of the original, that would predict all the possible outcomes for all possible inputs. If there were, it would just mean the original program hadn't been compressed enough. There is a smallest program for every possible problem, and that's the one you'd want to deploy for all your ML systems. Actually, you want to deploy the program that gets you to the solution fastest given the (hopefully known, or else predicted) distribution of outcome states. This is much, much harder, which is why computer engineering is actually an interesting field.

You can do a huge amount of compression by only measuring the Service Level Indicators (5-10 variables per service). These tell you if your service is meeting its SLO (Service Level Objective). SLOs are typically rated in 9's: one 9 allows 10% unavailability, two 9s is 1%, three 9s is 0.1%, four 9s is 10^-4. The hardest problem in cloud computing is that you need to scale every application (within given hardware limitations), so the critical issue is the speed at which you can add resources. There are some very sharp jumps in the availability of resources by size that affect the time you have to wait to increase capacity in a cloud system. Typically, for a large data center, there's a fixed investment in the minimum power and cooling and a variable cost as you install processing power, until you fill up the space. Then you need to build a new data center, at a huge fixed cost, while the capacity of processing power, storage (Moore's Law on steroids) and network bandwidth (actually network power = bandwidth * delay) keeps doubling.
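The 9's arithmetic above is easiest to feel as an error budget, i.e. minutes of allowed downtime per year. A small sketch (the function name is mine, not standard SRE tooling):

```python
# N nines of availability leaves an unavailability budget of 10**-N:
# one 9 = 90% available, three 9s = 99.9% available, and so on.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(nines):
    """Allowed downtime per year for a given number of nines."""
    return MINUTES_PER_YEAR * 10 ** -nines

# three nines (99.9% available) allows roughly 526 minutes a year,
# i.e. under nine hours of total downtime
```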

I plan to do a post on a paper co-authored by Van Jacobson, published by the ACM last month (February, 2017?). The paper shows how a team at Google changed the TCP/IP driver to recognize congestion, queue sizes and latency, improving performance by a factor of ten in the region where large data center networks usually operate.

[4] Kevin Kelly explains that technology evolution appears to be 'intelligently created' but follows the same type of rules as the blind evolutionary system that drives the evolution of life. He points out that an entire technological system acts like a living species and, looked at from the global level, follows the same type of evolutionary equations that species of living things do.

[5] This link talks about official Facebook bots. The scary ones are those that try to imitate humans. The Russians do it using actual humans, which is very expensive; but once you can make a bot that imitates a human, you will have more 'bot' friends than real friends. It's a scary thought. And when it is used by politicians (where all the real money goes) to influence voters, you have to worry about the stability of the government. This is the existential problem we need to worry about before we worry about a super-genius artificial intelligence exterminating humanity. My favorite super-genius is Wile E. Coyote (Jim Polizo's most famous example of engineering gone wrong). Anyone know what the E. stands for? It's a pun on 'wily'! But it stands for Ethelbert.

[6] The Singularity, or 'nerd rapture', is when intelligent agents become capable of programming themselves to get 'smarter'. At that point, all bets are off. We aren't smart enough to figure out what happens there, just as we don't know what happens beyond the event horizon of a black hole, which surrounds a singularity where Einstein's equations of General Relativity break down. Why is this rapture any scarier than the raptures that thousands of religions talk about? Because this rapture has a large group of people, millions of them, actually working to make it come true. It might actually happen. Certainly the odds of the 'nerd rapture' happening look better than the odds of the Christian rapture: 2000 years of waiting vs. only 50 years so far. A forty to one ratio.

[6a] And this was formalized by Josiah Willard Gibbs, a fellow Yalie who earned the first PhD in engineering granted in the United States. Some of his writings and instruments were in the labs on the third floor of the Physics building at Yale (a building also named after Josiah). Yes, actual physical instruments that he had built were still there, and used, 100 years later. Okay, it was only a thermometer, and it was only used on ceremonial occasions. Still, it gave gravitas to the physics department.

[7] And a human is only a few orders of magnitude more complicated. Think you can't simulate an entire human brain? Maybe not at full speed, but there's plenty of computer power in today's data centers to simulate many, many human brains. Maybe not 1000's, but certainly 100's.

[8] The first preprogrammed controller for a complicated design task (putting a pattern into woven fabric). The programs were punched on cards that were eventually adapted for use by digital electronic computers.

[9] Name dropping again: turns out I went to school with the original cartoonist's son. But that was way before he wrote that cartoon.

[10] First book ever to make a computer come alive on TV: "The Moon is a Harsh Mistress" by Robert Heinlein. Max Headroom to the max.

