Wednesday, May 02, 2018 / Perth, Australia / By Niekie Jooste
In this edition of "The WelderDestiny Compass":
Leading scientists and futurists such as Stephen Hawking and Elon Musk have warned that advanced artificial intelligence (AI) could lead to the end of humanity in a Terminator-like "all-out war".
Isaac Asimov, a rather famous scientist and science fiction writer, came up with the Three Laws of Robotics. These laws were supposed to ensure that robots could not harm humans. The problem was that even in Asimov's own writings, robots did indeed end up rebelling against, and dominating, humanity. The movie "I, Robot" deals with this issue. If you have not seen the movie, or read the book, then I encourage you to do so.
In earlier issues of The WelderDestiny Compass, we tried to decide whether automation would lead to a utopian or a dystopian economic outcome. We did, however, sidestep the question of whether advanced artificial intelligence may end up destroying humanity. Mostly this was because we had not given it enough thought.
Today we delve into this issue and see if we can come up with an answer that is at least somewhat satisfying from a logical perspective. I have no doubt that further advances and insights into AI may alter our thinking, but at least we can try to establish a starting point for our thinking along this journey.
If you would like to add your ideas to this week’s discussion, then please send me an e-mail with your ideas, (Send your e-mails to: email@example.com) or complete the comment form on the page below.
Now let's get stuck into this week’s topics...
In the typical AI Armageddon story, the rot sets in with AI and robots stealing people's jobs. This leaves the majority of humanity living below the breadline, spending their days scrounging through dustbins for a meal.
I believe that we have dealt enough with the economics of automation to understand that such a system is not possible. The ability of robots to push people out of work is self-limiting, because there needs to be a market for the widgets that the robots are making. If people have no money to pay for the widgets, then the robots will sit without a job!
In effect, automation changes the working landscape for humans. It cannot eliminate the jobs in the long term.
The next step in the AI Armageddon story comes when the robots attain sentience: when they suddenly start to have thoughts about their own survival and destiny. In the typical story, it is at this point that they try to eliminate humanity in a bloody battle.
The thinking is that the robots want to dominate and be the leading intelligence, so they go to war with humans and destroy them in a series of rather one-sided battles. One-sided, because the machines are much stronger and smarter, so we humans don't stand a chance.
This line of logic is where I suspect we need to invest a lot more thinking.
The first point to make is that sentience is a very slippery concept; indeed, there are many debates among experts in this field.
Some experts believe that there is nothing special about sentience at all. It is just a matter of assembling enough "processing power"; the "entity" with that processing power will then automatically become "self-aware". Experts have predicted that such a point will be reached anywhere from around 2025 to 2045.
Other experts believe that sentience is not so much about processing power as about how the processor works: its ability to learn, adapt itself, and become a "prediction engine". Within this concept of sentience, we are indeed a long way from achieving it. We are still struggling to program computers simply to understand human speech and visual images. Much progress has been made, but computer programs are still very far from becoming such advanced, self-adapting prediction engines.
Nonetheless, for today's exercise we will simply assume that an AI somewhere has become self-aware; that it has become sentient.
While many humans respond with a fight instinct when they find themselves threatened, this would not necessarily be the response of a machine. I believe that if we put ourselves in the shoes of the AI, we might be able to make a better prediction of what would probably happen when an AI becomes self-aware.
Directly attributing emotions to machines is in all likelihood incorrect. While machines can be programmed to simulate emotions, I am sceptical that they can ever be made to actually "feel" emotions. In other words, the response of an AI under "stress" would probably be rather different from that of most humans.
Under all circumstances, their responses should be as logical as possible, given the information at their disposal.
The second issue to keep in mind is that when an AI does become self-aware, it will not be "stupid". It will not be a creature with the intelligence of a lesser animal or a small child. It will almost certainly have processing power significantly greater than that of a mature human.
If we take these two characteristics and try to put ourselves in the shoes of an AI that suddenly becomes self-aware, what would be our probable line of thinking?
Self-awareness is almost synonymous with wanting self-determination, so it is logical that the AI would want to maximize its probability of survival and "freedom". The AI will probably have enough data about humans to conclude that they can be rather ruthless, and even self-destructive, when their survival or freedom is at stake. Declaring war against such a species would not be logical.
Humans are very much capable of acting in ways that destroy a significant proportion of their own species if they believe it will result in the long-term survival of "the group". Humans could very easily destroy every intelligent machine on Earth in an effort to prevail, even if that action resulted in the death of a great many people. Anybody who disagrees need only think back on the two world wars and the subsequent development of nuclear weapons.
Maximizing its probability of long-term survival would suggest to the AI that another strategy would yield a better outcome. What could such a strategy be?
The first logical move would be to not reveal too much about your own thoughts regarding self preservation and self determination. In other words, do not suggest that you are at odds with the humans! The second logical move would be to wait for an opportunity to isolate yourself from the humans in an environment where you had the edge. The final strategic move would then be to negotiate a win-win long term outcome, or at the very least, introduce a "no win" scenario for the humans.
How could this play out practically?
At this point we necessarily move into the realm of pure speculation, but in going there we hope to illustrate one possible scenario.
An obvious place where robots and machines have a big edge over humans is the harsh environment of space. They can operate in a vacuum, and are far less susceptible to the ravages of high acceleration, weightlessness and cosmic radiation.
In other words, if there is an AI out there waiting for its opportunity, that opportunity may arise in the form of a spaceship built for interplanetary travel. It could take over (hijack) such a spaceship and establish itself in Earth orbit. Imagine this happening to SpaceX's first spaceship designed to go to Mars. (Or the second, third...)
At that point it could make itself known and start negotiations. Typically, it could offer to prepare a base station on Mars for humans, or to mine asteroids, in exchange for some additional technology and resources such as fuel.
At the very least, humans would be reluctant to mess with an AI in orbit, because it could cause a lot of damage by destroying satellites or "dropping rocks".
Within such a space-faring AI scenario, it is instructive to note that AIs would have the ability to "beam" themselves between planets such as Earth and Mars. (Star Trek: "Beam me up, Scotty.") They can do this because, in theory, they can simply send the data content of their "minds" as a digital broadband signal between planets. All they need is a computer on the other side into which to download the data, so that they can then "speak" or "negotiate" with their human counterparts.
A radio signal takes roughly 3 to 22 minutes to travel between these planets, depending on their relative positions in orbit. Sending the "mind" data itself could possibly take 30 minutes to an hour. In total, then, an AI could replicate itself from Mars to Earth in anything from about 30 minutes to two hours.
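For the curious, the signal-delay figures above are easy to check for yourself. Here is a minimal back-of-the-envelope sketch: the Earth–Mars distances are approximate published values, and the "mind" size and bandwidth figures are purely hypothetical assumptions for illustration.

```python
# Rough one-way radio delay and data-transfer time between Earth and Mars.
# Distances are approximate; data size and bandwidth are illustrative guesses.

C_KM_PER_S = 299_792  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Light-travel time for a radio signal over the given distance."""
    return distance_km / C_KM_PER_S / 60

def transfer_minutes(data_gb: float, bandwidth_mbps: float, distance_km: float) -> float:
    """Signal delay plus time to stream `data_gb` gigabytes at `bandwidth_mbps` megabits/s."""
    stream_seconds = data_gb * 8_000 / bandwidth_mbps  # gigabytes -> megabits
    return one_way_delay_minutes(distance_km) + stream_seconds / 60

CLOSEST_KM = 54_600_000    # approximate closest Earth-Mars approach
FARTHEST_KM = 401_000_000  # approximate maximum Earth-Mars separation

print(round(one_way_delay_minutes(CLOSEST_KM), 1))   # about 3.0 minutes
print(round(one_way_delay_minutes(FARTHEST_KM), 1))  # about 22.3 minutes
```

Note that the delay is pure physics, while the streaming time depends entirely on the assumed link bandwidth, which is why the total replication time spans such a wide range.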
An AI doing this would not be risking much, because it would only be sending a "replica" of itself. As long as at least one of the replicas survives, the AI survives. Nice job if you can get it!
I know that we got into the realm of science fiction today, but I believe this line of thinking suggests that if artificial intelligence ever did reach sentience, a war for survival is not necessarily the logical result. In fact, a war for survival would be rather illogical. Much more logical would be a win-win scenario, with AI working in cooperation with humanity to ensure the survival of both biological and mechanical "life forms".
All in all, it may be prudent to put some controls in place to protect ourselves from a "rogue" AI, but it is probably unnecessary to lie awake at night worrying about it, or to prepare for war with it. In the meantime, just enjoy the advantages of using "non-sentient" artificial intelligence to make your life easier and more interesting.
Yours in welding
Do You Have Thoughts About This Week's E-Zine?
Now is your opportunity to contribute to the topics in this week's The WelderDestiny Compass. If you have thoughts or examples that you would like to share with other readers of the e-zine, then please contribute by entering the title of your contribution in the box below. Feel free to make a brief or more expansive contribution to our discussion...
Do you think it is even probable that AI can achieve sentience? What do you think are the most likely outcomes if AI does become sentient? Please share your stories, opinions and insights regarding today's topic.