Is Artificial Intelligence an existential threat?
I do know what ML (Machine Learning) is. It's the capability of taking a set of inputs and making a program produce a set of desired outputs. You can do this yourself (figure out how to configure the control module by running a bunch of optimization code on a large set of data). You could take an ML MOOC (Massive Open Online Course) to learn how to do this; there's one here:
Once you hook the system up to an error signal and have it optimize its output in real time, it can change its response to different inputs. But it's like an analog computer: it has no 'consciousness'; it responds in a fixed way, even when you feed in the error signal for real-time modification. Even so, I think there's a good argument to say that it's alive.
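That error-signal feedback loop can be sketched in a few lines. Everything here is an invented illustration (the linear model, the data, the learning rate), not any particular product's code, but it shows the shape of the idea: the program responds in a fixed way at any instant, yet the feedback signal continuously rewrites its parameters.

```python
# A minimal sketch of the feedback loop described above: the program
# takes inputs, produces outputs, measures an error signal, and nudges
# its own parameters in real time. All numbers are illustrative.

def online_learner(stream, lr=0.1):
    w, b = 0.0, 0.0                  # the tunable "soul": two parameters
    for x, target in stream:
        y = w * x + b                # fixed-form response to the input
        error = y - target           # the error signal fed back in
        w -= lr * error * x          # self-modification step
        b -= lr * error
    return w, b

# Feed it a stream where the desired rule is y = 2x + 1.
data = [(i / 100, 2 * (i / 100) + 1) for i in range(100)]
w, b = online_learner(data * 200)
print(round(w, 2), round(b, 2))   # converges toward 2 and 1
```

At any single instant this is just fixed arithmetic, but over time the system's response to the same input changes, which is the sense in which the post calls it "modifying itself."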
When a ML program is running with feedback, it can be assumed to be 'alive' as it's modifying itself. It's changing what it does and learning. If it can change enough about itself, we'd have to say it's growing and maturing. How can it not be alive? If it's alive, then the program is like a soul: the record of what it did, wants to do, and how to do it, which comes to life if it's put in the right vessel. And what if the right vessel is your printer?
But who cares if your printer is alive?
I am confident that we can build systems out of many of these ML entities that can easily pass the Turing test (see Watson) in a particular knowledge domain (technical support for a particular program, for instance. Yeah, that'd be my job...). In the brain, these ML systems are like reflexes or emotions. Since they can modify themselves, you can't know what state they are in until you go look, which takes a lot of energy and is rarely worth it. You can get close enough by building a simpler, faster model of the device, but that model's predictions will be delayed and less accurate than inspecting all the code and all the inputs. The device will effectively appear to have free will, because it is doing things that you can't predict.
Since it appears as if it has free will, this implies that it must have a 'will' or a mind to change. It doesn't matter whether it's actually conscious or has a 'will' or a mind. It quacks like a duck, it walks like a duck, and it flies like a duck: it's a duck. The best model you can make of it assumes that it's alive and conscious. Hence, your safest and most accurate stance is to treat it as if it is alive and conscious. Dead things do not have free will. Unconscious things do not have free will. Your printer appears to have free will: it makes unpredictable decisions. Something must have changed its mind. And if it has a mind, it must be alive.
Printers can be considered alive in two ways: The printer has a bunch of inputs and measurements (cartridge temperature, ink color and volume, paper volume and size, light intensity, scan position, next scan position, button presses, power voltage and amperage, scanning density, etc.), but the typical printer doesn't write its own code or improve its own functionality. Some programmer in Korea writes, changes, and fixes the soul of code that makes my printer do something different: the Korean programmer changes the printer's personality. Was my printer an inkaholic? We can change the ink delivery methods to reduce that. And we can clone that soul into all the other printers, improving all of them. Pretty neat, eh? You didn't realize that programmers were adding to the amount of consciousness in the world, did you? That's why programming is one of the highest callings. (Yes, that was a religious reference. A future blog post explaining this is in the works.)
The entire system of printer, programmer, hardware designer, and manufacturer can be considered alive. And when my printer starts to talk to me intelligently in 2029, it's hard to imagine that anyone won't think it's alive.
Are printers alive?
The answer to that question is: Yes. Hell, Yes! Hell and damnation fire, Yes.
This means we shouldn't care whether AI or the Singularity occurs, because something we are doing today will change society before either of those is past its infancy. ML needs to be carefully controlled. ML has already affected society in unbelievable ways. Forget about self-driving cars; those will only save about 1.3 million lives a year. We're talking about affecting billions of people. Okay, that's for a section below. Back to the consciousness argument.
The only difference between the printer and a nematode (soon to be completely simulated down to the sub-cell level) is that the way the nematode gets a new soul is through sex. Its soul is coded up in its DNA. That DNA needs a little bit of scaffolding, and it can actually build a new, differently programmed nematode.
Typical 3-D printers don't build copies of themselves, but they will. Another difference is that nematode souls are randomly changed (not designed, but guided by evolution, the survival of the fittest) while printer souls are intelligently designed. So if you say that the printer is alive, you have to count the printer factory and the design cycle as sex. The printer is such a complicated system that it takes humans (printer gods) to guide it. Eventually that 3-D printer will be able to build copies of itself. Will it ever be able to have sex programmed into it? Will we be able to successfully put in an ML module that tries to improve the program's soul? It could offer versions of itself to other printers, and the printers would attempt to run the improved ML modules more efficiently. At that point, I think you have to say it's alive.
Are Machine Learning Algorithms Changing the World?
Yes, they are. Yes, they have. Yes, they will. That means the real question is "How can we make sure that they change the world in the right direction?" But what is the right direction? That's a very, very, very interesting question. Glad you asked.
One thing to observe is that there are many, many, many ways to get worse and only a few ways to get better. That's essentially the second law of thermodynamics [6a]: unless you put information into a system (which costs energy), it gets worse (less able to do work). It takes concerted effort to make things better, and when you do, you produce a lot of waste heat, entropy, or confusion, and make it harder to extract energy: you essentially create pollution in order to get anything done. Life consumes and transforms resources and degrades them so they can't be used again as efficiently.
Life steals low entropy from the sun, uses it to live, and then outputs the waste in several forms. To improve this cycle (hmm, sounds like an ML device), something alive takes energy, information, creativity, or brute force and uses that energy to improve its improvement process, then tries to improve itself. This is where life gets really interesting, and impactful.
It's way cheaper to copy a particular process than to invent it from scratch. This is why nation states that can observe and learn from each other can grow really fast for a long time, until everything useful has been copied. Then they have to think for themselves again, which takes far more effort and energy and degrades their growth rate. It happened in Japan. It's happening in India. It's happening in China. It happened in America. It happened in Russia. It's a fundamental law of ML systems, which nation states sometimes act as. Copying improvements is much cheaper than inventing new ones. Probably by a factor of 5 or 10.
This has been going on for a long, long, long time. We can see this ML system in action throughout history, even prehistory. Take stone tools. The first stone tools are dated to around 3.3 million years ago. They were made by smashing two rocks together and using the chips that came out. These tools didn't change much over a million years. The next type of stone tool showed up 1.7 million years ago. It was also made by banging two rocks together, but instead of using the chips, one of the rocks was shaped by knocking chips out of it. This gave a much sharper and longer blade edge. It took humans' ancestors over a million years to go from knocking out flakes to carving out hand axes. This didn't change for another million years or so. Then they made carefully sharpened flints with grooves that could be attached to wooden shafts to make arrows or spears. Then, about 50,000 years ago, we started making knife-shaped stone tools.

The changes are coming faster now. Villages. Towns. Farms. Reading and writing. Bronze Age. Iron Age. Steam Age. Oil Age. Information Age. These are large Machine Learning loops: large system-optimization loops with humans playing the role of system controller. And they keep turning faster. The industrial revolution changed the world overnight (50 years) compared to how long it took us to figure out we should use the stone we knocked chips out of rather than the chips themselves (1.6 million years). When things are about information, they can change really quickly; in fact, they tend to double in effectiveness every year or so. All the technologies that the Information Age depends upon are now changing so fast that no one person can keep up. Specialization had to come into human society when we became farmers. Soldiers and farmers are different. By specializing you can become much better at what you do (this is why free trade is really better for everyone, except those who aren't the best specialists in the world).
We learned how to make clothes about 100,000 years ago (determined from the evolutionary history of head vs. clothing lice). We probably learned to talk around the same time. When we started to talk, we could pass around a lot more information. We could become much more efficient at improving technology as a society, and that huge ML system just continued to get faster and faster as we learned how to communicate information faster and cheaper. So we started to evolve faster and faster. 10,000 years ago we invented farming. We're really good at farming. Really good. One example: if we had not invented a way to manufacture nitrates in the early 1900s, almost all of us would be dead. That's right, without this unnatural process invented by humans, most humans would be dead. And it's all down to that slowly improving, ML-based system that humans are a part of.
No matter how many doublings of computer power you get, you can't get infinitely more computing power, and in these huge search spaces you can only solve the simplest problems. Minimizing the power usage in a data center is way, way, way easier than writing the DeepMind Machine Learning system that figures out what sensors and controls it needs to add to the data center to make it even more efficient. But the first is basically within reach. You could probably sell it as a service to computer companies. Designing a system that can design these systems is something we don't know how to do yet. But so what. Since Machine Learning systems already represent an existential threat to the human race, we need to worry about that threat before we waste resources worrying about the threat of Artificial Intelligence.
Power consumption controlled by humans (right and left edges) vs. controlled by Deep Mind.
Who cares if the Singularity happens in 2040? I'm worried about what happens in 2020 and 2028 when bots and humans will be indistinguishable over the Internet.
The "standard interpretation" of the Turing Test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination.
I wonder how we can measure this. It's a lot of work to set up a contest: get the judges, get the contestants, run the contest. A better way is to just show random people Facebook accounts, let them chat with the owners, and see if they can tell whether it's a real account or a fake one.
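Scoring that informal test is straightforward. Here's a hedged sketch of how it might work; the trial data below is invented for illustration, not from any real study. Each judge labels accounts as bot or human, and we compare their accuracy against the 50% coin-flip baseline: if judges can't beat a coin flip, the bots pass.

```python
# Sketch of scoring the bot-vs-human guessing game described above.
# Judges label accounts "bot" or "human"; if their accuracy isn't
# better than coin-flipping, the bots pass this informal Turing test.
# The trials below are invented illustrative data.

def accuracy(trials):
    """trials: list of (truth, guess) pairs, each 'bot' or 'human'."""
    correct = sum(1 for truth, guess in trials if truth == guess)
    return correct / len(trials)

trials = [
    ("bot", "bot"), ("bot", "human"), ("human", "human"),
    ("human", "bot"), ("bot", "bot"), ("human", "human"),
    ("bot", "human"), ("human", "human"),
]
acc = accuracy(trials)
print(f"judge accuracy: {acc:.0%}")   # 5/8 correct, barely above chance
```

With enough judges and accounts, a simple significance test against 50% would tell you whether humans can still spot the fakes at all.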
And this is critical work that we have to do. Trump's campaign has been rumored to have used millions of bots that pretended to be real people but were really fake insulters, peddling fake news or disparaging certain politicians while supporting others. Republicans fell for it hard. Some Democrats fell for some of the bots. (Although a third of Republicans still believe Obama is a Muslim.) Why did we have to suffer through this blatant stealing of our attention? This is the worst of all worlds of embedded advertising in news: it's an embedded fake human in life.
We must make it obvious whether we are talking to a bot or a human.
It must be enshrined as a fundamental human right. To lie about this should be a felony. To put up fake accounts that are not identified as bots should be a felony.
It must be a requirement that every bot identify itself. Without this requirement, humans will no longer be in control of their society.
Bots must not take over society.
Bots must not take over society.
Bots must not take over society.
 Note to self: How many support cases do you need to teach a Machine Learning system how to solve them? This is probably independent of getting the program 'smart' enough to pass the Turing test.
 Watson (from IBM) easily beat the two best human Jeopardy! champions in 2011, a decidedly language-intensive task.
You can do a huge amount of compression by measuring only the Service Level Indicators (5-10 variables per service). These tell you whether your service is meeting its SLO (Service Level Objective). SLOs are typically rated in nines: one 9 allows a 10% failure budget, two 9s 1%, three 9s 0.1%, four 9s 10^-4. The hardest problem in cloud is that you need to scale every application (within given hardware limitations), so the critical issue is how fast you can add resources. There are some very sharp jumps in the availability of resources by size that affect the time you have to wait to increase capacity in a cloud system. Typically, for a large data center, there's a fixed investment in the minimum power and cooling, plus a variable cost as we install processing power, until we fill up the space. Then we need to install a new data center, at a huge fixed cost. Meanwhile demand keeps doubling: a factor of 2 in processing power and storage (Moore's Law on steroids) and in network bandwidth (actually network power = bandwidth * delay).
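The nines arithmetic above is simple enough to mechanize. A minimal sketch (the 30-day month used for the downtime budget is my assumption, not a standard):

```python
# Convert "number of nines" into the allowed failure budget, as
# described above: one 9 allows 10% downtime, two 9s 1%, and so on.
# The 30-day month used for the budget is an assumption.

def error_budget(nines: int) -> float:
    """Fraction of time the service is allowed to be unavailable."""
    return 10 ** (-nines)

def downtime_minutes_per_month(nines: int, month_minutes=30 * 24 * 60):
    return month_minutes * error_budget(nines)

for n in range(1, 5):
    print(f"{n} nine(s): {error_budget(n):.4%} budget, "
          f"{downtime_minutes_per_month(n):.1f} min/month")
```

Four nines leaves only a few minutes of downtime a month, which is why SLO-driven services track those 5-10 indicators so obsessively.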
I plan to do a post on a paper that Van Jacobson co-authored, published in the ACM last month (February 2017?). The paper shows how a team at Google changed the TCP/IP driver to recognize congestion, queue sizes, and latency, improving performance by a factor of ten in the region where large data centers' networks usually operate.
 Kevin Kelly explains that technology evolution appears to be 'intelligently created' but follows the same type of rules as the blind evolutionary system that drives the evolution of life. He points out that an entire technological system acts like a living species and, looked at from the global level, follows the same type of evolutionary equations that species of living things do.
[6a] And this was formalized by Josiah Willard Gibbs, a fellow Yalie and the first PhD in physics in the United States. Some of his writings and instruments were in the labs on the third floor of the Physics building at Yale (the building was also named after Josiah). Yes, actual physical instruments that he had built were still there, and used, 100 years later. Okay, it was only a thermometer, and it was only used on ceremonial occasions. Still, it gave gravitas to the physics department.
 And a human is only a few orders of magnitude more complicated. Think you can't simulate an entire human brain? Maybe not at full speed, but there's plenty of computing power in today's data centers to simulate many, many human brains. Maybe not thousands, but certainly hundreds.
 Name dropping again: turns out I went to school with the original cartoonist's son. But that was way before he wrote that cartoon.
 First book ever to make a computer come alive on TV: "The Moon is a Harsh Mistress" by Robert Heinlein. Max Headroom to the max.