This article is purely my opinion about the dangers present in Artificial Intelligence, based on the argument between two prominent figures in the computer field, Mark Zuckerberg and Elon Musk. In short, Elon says there is danger in unregulated A.I. development, while Mark says there is not (one example article).
I don’t want to discuss or guess who is right or wrong, and considering how wide the applications and fields of Artificial Intelligence are, we cannot decide anything based on just a few of them. Furthermore, my understanding of A.I. is far more limited than theirs, so declaring that one is right and the other isn’t would be nonsense. But let’s say I agree with Elon that in some cases A.I. can be devastating, because of our lack of control and because of security concerns.
One goal of A.I. design is to mimic us
The above video is MarI/O, a machine-learning program (machine learning is a part of Artificial Intelligence) playing the classic SNES game Super Mario World (oh, how nostalgic). It shows how the program learns to beat a stage using the experience gained from losing, run after run. It seems like a very simple application of A.I., though at our current level of technology it’s kind of amazing.
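The learn-from-losing loop can be sketched as a toy hill-climber. This is a deliberately simplified stand-in for what MarI/O actually uses (the NEAT neuroevolution algorithm); the level, buttons, and scoring below are invented purely for illustration:

```python
import random

# Toy stand-in for a game level: the "player" advances one step for each
# correct button press and stops at the first mistake. Invented for
# illustration only -- this is not how MarI/O represents Super Mario World.
SOLUTION = ["right", "jump", "right", "right", "jump"]
BUTTONS = ["left", "right", "jump"]

def score(sequence):
    """Distance reached before the first wrong button press."""
    distance = 0
    for pressed, needed in zip(sequence, SOLUTION):
        if pressed != needed:
            break
        distance += 1
    return distance

def learn(attempts=2000, seed=0):
    """Mutate one button at a time, keeping any attempt that does no worse."""
    rng = random.Random(seed)
    best = [rng.choice(BUTTONS) for _ in SOLUTION]
    for _ in range(attempts):
        candidate = best[:]
        candidate[rng.randrange(len(candidate))] = rng.choice(BUTTONS)
        if score(candidate) >= score(best):
            best = candidate
    return best

# After enough failed runs, the agent reaches the end of the toy level.
print(score(learn()), "of", len(SOLUTION))
```

Each "death" costs nothing here, which is exactly why this style of learning suits games: the program can fail thousands of times per minute and keep only what worked.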
Now what if someone developed the same kind of system with the purpose of bypassing captchas? Much like virus vs. antivirus, captcha vs. bot is a game of cat and mouse: captchas must evolve to keep detecting bots, and bots must evolve to keep fooling captchas. It’s a constant battle.
Assuming a bot can be evolved with supercomputers and machine learning until it is so advanced that it mimics us humans flawlessly, then the captcha is fighting a losing battle. At worst, captchas will end up blocking humans from access.
Now what would happen if those resources were focused on the development of hacking tools powered by supercomputers? Tools armed with the skills of clever hackers and improved over time through experience and trial and error. One existing application of programmed hacking, the brute-force attack, is an example. Fortunately, brute-force attacks are greatly mitigated by account locking and captchas.
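The account-locking mitigation mentioned above fits in a few lines. This is a minimal sketch; the thresholds, function names, and return values are illustrative assumptions, not taken from any real system:

```python
import time
from collections import defaultdict

# After MAX_ATTEMPTS consecutive failures the account locks for
# LOCKOUT_SECONDS, turning an online brute-force search from millions of
# guesses per hour into a handful. Values are illustrative.
MAX_ATTEMPTS = 3
LOCKOUT_SECONDS = 900  # 15 minutes

failures = defaultdict(int)   # consecutive failed attempts per user
locked_until = {}             # user -> timestamp when the lock expires

def check_login(user, password, correct_password, now=None):
    now = time.time() if now is None else now
    if locked_until.get(user, 0) > now:
        return "locked"
    if password == correct_password:
        failures[user] = 0    # success resets the failure counter
        return "ok"
    failures[user] += 1
    if failures[user] >= MAX_ATTEMPTS:
        locked_until[user] = now + LOCKOUT_SECONDS
        failures[user] = 0
    return "denied"

# An attacker guessing in a loop hits the lock after MAX_ATTEMPTS tries:
for guess in ["aaaa", "aaab", "aaac", "aaad"]:
    result = check_login("alice", guess, "s3cret", now=1000.0)
print(result)  # "locked"
```

Note that comparing passwords in plain text is itself bad practice; a real system would compare salted hashes. The point here is only the lockout logic.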
Now, once the tools are ready, they are mounted on hundreds of supercomputers to exploit the vulnerabilities existing on the internet. How much outrage would happen then? Many sites would be hacked at the same time, and many others would go down. Considering that our credit card and banking information lives there, it would also be at risk of being blown wide open. Combine that with ransomware like WannaCry, and it gets even worse. What would happen if GitHub fell over because of that?
Lack of Control
Soon, we will have autonomous cars on the streets. They’ll become common. Now, what happens if one day there is a system failure? Maybe due to a short circuit, deteriorating hardware, or even a cosmic ray. Usually this is handled by switching control to manual mode, but what if, because of how advanced the A.I. has become, the A.I. decides not to give up control? Or, more realistically, what if the passenger or driver is too unaware or distracted to act in time? It would be bad.
A current, simpler case is a gas pedal stuck in the pressed position. In a manual car, pressing the clutch will definitely cancel the acceleration; put the gear in neutral for further safety, and no danger remains. In an automatic car, however, you need to shift the gear to neutral, still with a chance of error. It’s not much, but the more control we give up over something, the more dangerous it becomes.
Even in today’s lifestyle, companies are trying to use A.I. to sell you more goods. You may not realize it, and those who don’t notice it can be powerless before those companies, only burning more bucks for them.
So let’s say terrorists have already developed an evolving, tightly encrypted, anti-surveillance messaging service over the internet. They could use that platform to communicate with each other freely, without being trackable by governments or officials. Worse, if they could actually hack into official (police or army) communication lines, they would have a security hole through which to act.
An AI-robot scenario like Terminator or I, Robot may be possible in the distant future, but not the near future. Limits on energy supply and processing power are among the big reasons it cannot be realized soon. Furthermore, the lack of human-shaped robots keeps them confined to specific places and areas. As xkcd has explained, it’s very unlikely to happen.
The way A.I. can be dangerous is not in the physical form popular in movies, like a robot apocalypse. It is more related to our daily lives: our interaction with computers, and security.