Artificial Intelligence – Threat or Possibility?

As my last blog post ended, I am going to dig a bit deeper into the incredible speed at which the world is learning right now, and into how it is learning.

Every day, a smaller fraction of the world’s combined intelligence is in the brain of an animal (e.g. a human, a monkey or a fish), and more of it is captured in dead objects. Yes, I am talking about Artificial Intelligence, or AI, which simply means a system capable of learning: Google search, which recognizes your patterns and improves; voice recognition software, improving itself through usage; or Siri on your iPhone. It’s handy, and so far it is limited to being best at a specific task, such as chess or voice recognition. It is also only a threat to us in a limited context. If Google Maps fails, the consequence is that a person doesn’t find her way, but if the AI is in a nuclear reactor or a transport system, the consequence is far worse. While the consequence of the latter is truly devastating, it is not a threat to humanity. But imagine for a second that we had artificial intelligence capable of thinking and learning at a human level across the board (AGI) – what would the consequence of a failure in that system be?

Is this even possible?

Well, the best answer is certainly “I don’t know”, because I am not a psychic. But I am going to expand a bit further on why we can’t possibly imagine much about the future, and why you are probably underestimating the development that is to come (I assume you reason roughly like people in general reason – and I know, you think you are cleverer than society at large, but that is just another sign that you reason like people in general reason). Let me tell you why:

1) Most people don’t realize how limited their own imagination is. In the 1950s, self-declared imaginative people made predictions about how the transport industry would revolutionize itself, with ideas such as flying cars or personal helicopters. Few people foresaw that, while we would still drive cars that look quite similar to those of the 50s, our lives would be completely changed by the internet revolution. Similarly, our future lives might not change dramatically in either transportation or communication tools, but in some other way that we are simply incapable of imagining.

2) Most people look at the development of the past decades in order to make predictions, ignoring the fact that development is not linear – it’s exponential (see the small toy calculation after this list). The development of the 20th century far exceeds that of the 19th, simply because all available knowledge forms the base on which current innovations can be built. Never has the sum of humanity’s knowledge been larger, and never have more people had access to it. Standing on the shoulders of giants is today free and accessible, more people do so, and humanity sees further.

3) Real innovation happens when great minds with great ideas meet, and today this can happen anywhere. Back in the day, it was on a university campus, within the military or at a company with large R&D resources that innovative people could meet and build upon each other’s ideas. That excluded all people (most of humanity) who, for whatever social, economic or other reason, did not get access to those closed rooms. Today, anyone with an Internet connection and a drive to discuss an idea will find the possibility to do so. Most discussions will certainly not lead to groundbreaking innovations, but increasing the number of discussions increases the chance that some of them lead to something big.
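To make point 2 a bit more concrete, here is a minimal toy calculation of how linear and exponential growth diverge over a century. The starting value and rates are made-up illustration numbers, not measured data about actual technological progress:

```python
# Toy comparison of linear vs exponential growth (all numbers are illustrative).

def linear(start, increment, years):
    """Add a fixed amount of 'knowledge' every year."""
    return start + increment * years

def exponential(start, rate, years):
    """Grow by a fixed percentage every year, compounding on everything built so far."""
    return start * (1 + rate) ** years

start = 100.0
for years in (10, 50, 100):
    print(f"after {years:3d} years: "
          f"linear = {linear(start, 5, years):6.0f}, "
          f"exponential = {exponential(start, 0.05, years):6.0f}")

# after  10 years: linear =    150, exponential =    163
# after  50 years: linear =    350, exponential =   1147
# after 100 years: linear =    600, exponential =  13150
```

The two curves look almost identical at first, which is exactly why extrapolating from the recent past feels so reasonable – and why it fails so badly further out.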

So, information is free, people can meet and connect, and your imagination is a poor tool for making predictions about the future. That puts us in a context where the seemingly impossible is possible, and suddenly AGI doesn’t seem that far-fetched. Artificial intelligence on a general human level should not be confused with software being like a human. There are tons of things that humans do that an AI system doesn’t, including eating, sleeping, getting bored or asking for a raise – because dead objects are not conscious. Even though some people will evidently lose their jobs, the productivity increase caused by using AI is very beneficial for society at large.

There are also major possible pitfalls that come with not having a conscience, including no consideration of the consequences of one’s actions. An AGI programmed to perform a task (say, build houses) will learn the most efficient way to do so (maybe using wood) regardless of the consequences (cutting down an entire rainforest), in a situation where humans would have decided not to.
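As a toy sketch of that house-building example (the plans, numbers and the penalty factor are all invented for illustration), here is how an optimizer that only sees the stated goal picks a different plan than one that also counts the side effects:

```python
# Toy example of a misspecified objective: the agent maximizes the stated goal,
# blind to any cost that was never written into the objective. All numbers invented.

plans = {
    "wood from rainforest": {"houses_built": 120, "rainforest_lost_km2": 500},
    "recycled materials":   {"houses_built": 90,  "rainforest_lost_km2": 0},
    "brick and concrete":   {"houses_built": 100, "rainforest_lost_km2": 5},
}

# Objective as literally specified: "build as many houses as possible".
naive_choice = max(plans, key=lambda p: plans[p]["houses_built"])

# Objective with the side effect priced in (a crude penalty per km2 of forest lost).
def with_side_effects(p):
    return plans[p]["houses_built"] - 0.5 * plans[p]["rainforest_lost_km2"]

careful_choice = max(plans, key=with_side_effects)

print("naive objective picks:    ", naive_choice)    # wood from rainforest
print("penalized objective picks:", careful_choice)  # brick and concrete
```

The point is not the arithmetic but the blindness: whatever you forget to put into the objective simply does not exist for the system optimizing it.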

Then think of a system far more intelligent than humans: it could be both more productive than us and more capable of solving complex problems. The grand challenges humanity faces today, including fulfilling people’s need for food, water and electricity, may be solved through ways of using resources effectively that the sum of human intelligence is simply not capable of coming up with.

But if things go wrong, the consequences are unimaginable, as we humans simply do not possess the intelligence to grasp what an intelligence above our own can cause. In 1993, Vernor Vinge wrote that when artificial intelligence exceeds our own, the normal rules will no longer apply. That thought is at the same time breathtaking, thinking of all the possible solutions, and intimidating, thinking of all the ways things could go wrong. Bill Gates, Nick Bostrom and Elon Musk all view AI as the single greatest threat to humanity.

Artificial General Intelligence could become reality, and we have to think about ways to use its benefits without taking too many risks. Important ways to minimize the risks include keeping research transparent to avoid human errors, building in precautionary systems such as a safe off switch, and keeping AGI technology far from brutal dictatorships (or, more likely, preventing them from using it through a terror balance of some sort).

So long, blog!