Artificial Intelligence: the ultimate guide
Artificial intelligence, or AI, is the ability of a digital computer or computer-controlled machine to perform tasks commonly associated with intelligent beings, such as visual perception, speech recognition, decision-making, or translation between languages.
The idea that the human thinking process could be mechanized has been studied for thousands of years by Greek, Chinese, and Indian philosophers. But researchers generally regard the paper “A Logical Calculus of the Ideas Immanent in Nervous Activity” (McCulloch and Pitts, 1943) as the first recognized work of artificial intelligence.
In 1946, the US Army unveiled ENIAC, the first programmable general-purpose electronic digital computer. The giant machine was initially designed to calculate artillery-firing tables, but its ability to execute different instructions meant it could be used for a wider range of problems.
In 1950, the renowned mathematician and computer scientist Alan Turing examined the question of whether machines can think in his paper “Computing Machinery and Intelligence”. He proposed the Turing test, which measures a machine’s ability to exhibit behavior indistinguishable from that of a human.
In 1956, the Dartmouth conference on artificial intelligence, proposed by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, gave birth to the field of AI and captivated scientists with the possibility that electronic brains could actually think.
In 1964, Joseph Weizenbaum began building ELIZA, an interactive program that could carry on a dialogue in English on any topic. It became a popular toy when a version that simulated the dialogue of a psychotherapist was programmed.
Since then, AI has been repeatedly featured in sci-fi movies and TV shows that captivated the public’s imagination. Who doesn’t remember HAL 9000, the sentient and malevolent computer that interacts with astronauts in 2001: A Space Odyssey?
Despite scientists’ initial enthusiasm, practical applications for artificial intelligence were lacking for decades, leading many to dismiss AI’s potential impact on our society.
Only in the late 1980s did a revival of research into neural networks, the foundation of what we now call deep learning, begin to show promise. Unfortunately, the computational power available at the time was too limited for scientists to reach any meaningful breakthroughs.
It was only in 1997, when IBM’s Deep Blue computer defeated the world chess champion Garry Kasparov, that artificial intelligence started to be taken more seriously.
In 2005, DARPA sponsored the Grand Challenge competition to promote research in autonomous vehicles. The challenge consisted of building a robotic car capable of navigating 132 miles of desert terrain in less than ten hours, with no human intervention. The competition kickstarted the commercial development of autonomous vehicles and showcased the practical possibilities of artificial intelligence.
In 2011, IBM’s Watson computer defeated two of the greatest Jeopardy! champions, Ken Jennings and Brad Rutter. The quiz show, known for its complex, tricky clues and very smart champions, was the perfect stage to demonstrate the advance of artificial intelligence.
In 2016, DeepMind’s AlphaGo beat Lee Sedol, one of the world’s best Go players, a result that left scientists and researchers speechless: the enormous complexity of the ancient Chinese board game had been expected to keep computers at bay for at least another decade.

Due to the exponential advances in AI in recent years, top scientists and entrepreneurs such as Bill Gates, Elon Musk, Steve Wozniak, and Stephen Hawking began warning society about the dangers a superintelligent AI could pose to humanity.
What happened? Why did some of the smartest people on Earth sound the alarm about the perils of artificial intelligence? How could something like a computer that played Go be a threat to civilization?
To answer these questions, and to illustrate why AI will be the most important technology of the coming decades, we need to understand the types of AI in existence, where we are in terms of technological development, and how these systems work.
Weak Artificial Intelligence (WAI)
Weak artificial intelligence, also known as narrow artificial intelligence, is the only type of AI we have developed so far. WAI specializes in just one area of knowledge, and we experience it every day, even though we rarely notice its presence.
Simple things like email spam filters are loaded with rudimentary intelligence that learns and changes its behavior in real time according to your preferences. For instance, if you tag a certain sender as junk several times, the WAI learns to treat that sender’s messages as spam, and you’ll never need to flag them again.
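To make the idea concrete, here is a minimal sketch of how such a filter might learn: a toy naive Bayes classifier that updates its word statistics every time you flag a message. The class and method names are invented for illustration; this is not the code of any real mail client.

```python
from collections import Counter
import math

class ToySpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def flag(self, text, label):
        """Called whenever the user tags a message as 'spam' or 'ham'."""
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def is_spam(self, text):
        """Score a new message with Laplace-smoothed naive Bayes."""
        total = sum(self.msg_counts.values())
        vocab = len(self.word_counts["spam"] | self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            # Log prior: how often the user flags messages this way.
            score = math.log((self.msg_counts[label] + 1) / (total + 2))
            n = sum(self.word_counts[label].values())
            for word in text.lower().split():
                # Log likelihood of each word under this label.
                score += math.log((self.word_counts[label][word] + 1)
                                  / (n + vocab + 1))
            scores[label] = score
        return scores["spam"] > scores["ham"]

f = ToySpamFilter()
f.flag("win a free prize now", "spam")
f.flag("lunch meeting tomorrow", "ham")
print(f.is_spam("claim your free prize"))  # True: learned from one flag
```

Each flag changes the filter’s future behavior immediately, which is exactly the “learns in real time” property described above.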
Google search is also a sophisticated WAI. It ranks results intelligently by weighing millions of variables and figuring out which ones are relevant to your specific search and context.
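As a rough illustration of what “weighing variables” means, here is a toy linear ranking function. The three signals and their weights are invented for the example; a real search engine combines vastly more of both, with weights that are learned rather than hand-set.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    keyword_match: float   # how well the page matches the query (0-1)
    link_authority: float  # how many quality pages link here (0-1)
    freshness: float       # how recently the page was updated (0-1)

# Illustrative weights: how much each signal matters for a query.
WEIGHTS = {"keyword_match": 0.6, "link_authority": 0.3, "freshness": 0.1}

def score(r: Result) -> float:
    """Weighted sum of signals; higher means more relevant."""
    return (WEIGHTS["keyword_match"] * r.keyword_match
            + WEIGHTS["link_authority"] * r.link_authority
            + WEIGHTS["freshness"] * r.freshness)

results = [
    Result("example.com/a", 0.9, 0.2, 0.5),
    Result("example.com/b", 0.7, 0.9, 0.8),
]
for r in sorted(results, key=score, reverse=True):
    print(f"{score(r):.2f}  {r.url}")
```

Page b outranks page a here despite a weaker keyword match, because authority and freshness tip the weighted sum.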
Other examples of WAI are voice recognition apps, language translators, Siri or Cortana, autopilots in cars or planes, algorithms that control stock trading, Amazon recommendations, Facebook friends’ suggestions, and computers that beat chess champions or Jeopardy! players. Even autonomous vehicles have WAIs to control their behavior and allow them to see.
Weak artificial intelligence systems evolve slowly, but they’re definitely making our lives easier and helping humans to be more productive. They’re not dangerous on a civilizational scale. If one misbehaves, nothing super-serious would happen: maybe your mailbox would fill up with spam, a stock market trade would be halted, a self-driving car would crash, or a nuclear power plant would be deactivated.
WAIs are stepping stones towards something much bigger that will definitely impact the world.
Strong Artificial Intelligence (SAI)
Strong artificial intelligence, also referred to as general artificial intelligence, is a type of AI that would allow a machine to have intellectual capabilities and skill sets as good as a human’s. Another idea often associated with SAI is the ability to transfer learning from one domain to another.
Recently, an algorithm learned how to play dozens of Atari games better than humans, with no previous knowledge of how they worked. It is an amazing milestone for artificial intelligence, but it is still far from being a SAI.
We need to master a myriad of weak artificial intelligence systems and make them really good at their jobs before taking on the challenge of building a SAI with human-like capabilities.
Computers might be much more efficient than humans at logical or mathematical operations, but they have trouble with tasks that are simple for us, such as identifying emotions in facial expressions, describing a scene, or picking up on nuances of tone like sarcasm.
But how far are we from computers that can perform tasks only humans can do today? To answer this question, we first need affordable hardware at least as powerful as the human brain. We are almost there.
Scientists estimate the processing power of a human brain to be about twenty petaFLOPS (twenty quadrillion floating-point operations per second). Currently, just one machine, the Chinese supercomputer Tianhe-2, can claim to be faster than that. It cost $400,000,000, so it is of course neither affordable nor accessible for most AI researchers. Just for the sake of comparison, in 2015 an average $1,000 PC was roughly 2,000 times less powerful than a human brain.
But wait a few years, and exponential technologies will work their magic. Futurists like Ray Kurzweil are very optimistic that we’ll achieve one human brain’s capability for $1,000 around the 2020s and the capability of the entire human race for $1,000 in the late 2040s. By the early 2060s, we’ll have the power of all human brains on Earth combined for just one cent.
As you can infer from Kurzweil’s calculations, computing power won’t be an obstacle to achieving strong artificial intelligence. Even under pessimistic predictions that simply follow the current trends dictated by Moore’s Law, we’d still reach those capabilities, only some decades later. It is just a matter of time until hardware becomes billions of times more powerful than all human brains combined.
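A quick back-of-the-envelope check of that trend argument, under stated assumptions: the brain runs at the twenty petaFLOPS quoted above, a 2015 $1,000 PC delivers roughly a two-thousandth of that (about ten teraFLOPS), and price-performance doubles every two years, a deliberately conservative pace.

```python
import math

BRAIN_FLOPS = 20e15      # ~20 petaFLOPS, the estimate quoted above
PC_FLOPS_2015 = 10e12    # ~10 teraFLOPS per $1,000 in 2015 (assumed)
DOUBLING_YEARS = 2.0     # assumed price-performance doubling period

gap = BRAIN_FLOPS / PC_FLOPS_2015     # the 2,000x shortfall
doublings = math.log2(gap)            # ~11 doublings needed
year = 2015 + doublings * DOUBLING_YEARS
print(f"{gap:.0f}x gap, ~{doublings:.0f} doublings -> around {year:.0f}")
```

At this pace, a $1,000 brain-equivalent arrives in the late 2030s; a faster doubling period is what pushes the date into Kurzweil’s 2020s.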
The major difficulty in inventing a strong artificial intelligence lies in the software: how to replicate the complex biological mechanisms and the connectome of the brain so that a computer can learn to think and perform complex tasks.
There are many companies, institutions, governments, scientists, and startups working on reverse engineering the brain, using different techniques and counting on the help of neuroscience. Optimists believe we’ll be able to have a complete brain simulation around the 2030s; pessimists think we won’t achieve it before the 2070s.
Theoretically, it may take decades to have a computer as smart as a five-year-old kid, but a strong artificial intelligence system will be the most revolutionary technology ever built.
A future SAI will be more powerful than humans at most tasks because it will run billions of times faster than our brains, with unlimited storage and no need to rest. Initially, a SAI will make the world a better place by doing human jobs more efficiently.
The reason tech visionaries and scientists are concerned about the invention of a strong artificial intelligence is that a computer intelligence doesn’t have morals; it just follows what is written in its code.
If an AI is programmed, for instance, to get rid of spam, it could decide that eliminating humans is the best way to do its job properly.
It is also feared that a SAI would sit at the human-intelligence threshold for only a brief instant. Its ability to program itself, recursively, would make it exponentially more powerful as it got smarter.
Recursive self-improvement works like this: initially, the SAI programs itself with the skill of, let’s say, two human programmers. Because the machine has access to abundant computing power, it can multiply the number of these virtual “programmers” by the dozens in a matter of hours.
Within a few days, these human-equivalent “programmers” would produce so many scientific breakthroughs that the artificial intelligence would become smarter than an average human. The improvement would feed back into its “programmers” as well, each of which could soon be smarter than Einstein.
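A toy simulation makes the compounding visible, as a sketch of the argument rather than a model of any real system: “capability” is an abstract number, and the fifty-percent gain per self-rewrite cycle is an assumption chosen purely for illustration.

```python
capability = 2.0        # start: the equivalent of two human programmers
IMPROVEMENT_RATE = 0.5  # assumed fractional gain per self-rewrite cycle

for cycle in range(1, 21):
    # The smarter the system, the better it rewrites itself, so the
    # gains compound instead of adding a fixed amount each cycle.
    capability *= 1 + IMPROVEMENT_RATE
    if cycle % 5 == 0:
        print(f"cycle {cycle:2d}: ~{capability:,.0f} human-equivalents")
```

Twenty cycles take the system from two human-equivalents to several thousand. That compounding, not the starting point, is why the window of human-level intelligence could be so brief.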
At some point, this recursive capability will allow a strong artificial intelligence to become orders of magnitude more intelligent than us, giving birth to the first superintelligence.
A Superintelligence
The moment a SAI becomes a superintelligence is the moment we might lose control of our creation. It could “come to life” in just hours, without our knowledge. What happens after a superintelligence arises is anyone’s guess. It could be good, bad, or ugly for the human race.
The bad scenario is the one we all know from movies such as The Terminator or The Matrix. In this case, humans would be destroyed or enslaved because they present a threat to the superintelligence’s survival.
The ugly scenario is more complicated. Imagine what would happen if multiple superintelligences arise at the same time in countries such as the United States or China. What would happen? Would they fight for supremacy, be loyal to the countries or the programmers who created them, or coexist peacefully and share power? Nobody knows.
The good scenario would remind us of paradise. The artificial superintelligence would be like an altruistic God that exists only to serve us. All humanity’s problems would be fixed and our civilization would go to infinity and beyond.
Brilliant entrepreneurs such as Elon Musk, the man behind PayPal, Tesla, SpaceX, and SolarCity, are raising awareness about the consequences of our advanced technology, specifically artificial intelligence.
In his own words:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.
I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
All these scenarios sound like science fiction now, but they might become real one day. The comparison with nuclear weapons is a good one: during the Cold War, we came close to being annihilated by a technology that most people had once thought was science fiction.
So, even if there is the slightest chance that a superintelligence might arise in the next 20 years, we should be worried, because it could be our last invention.
We must not be afraid of being ridiculed; we should discuss these questions openly. The destiny of our civilization will depend entirely on the safeguards and regulations we put on our technology now, in order to avoid catastrophic scenarios.