Robots Vs. AI: Is Artificial Intelligence a Threat?
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This is the First Law of Robotics, according to author Isaac Asimov, who jotted down his three rules in 1942. Even then, at the dawn of the age of robotics, humans were asking, “Is artificial intelligence a threat?”
These fears were not allayed in 1984, when the blockbuster Terminator hit the big screen. Nor are they today, with rumors of killer robots at a Japanese factory. Both proved to be nothing more than fiction, but they captured public attention nonetheless and illustrated how people are both fascinated and terrified by the role of technology and the supposed dangers of artificial intelligence and robotics to humankind.
Yet, at the same time, as technology advances, we find ourselves interacting more and more with the digital world. Some studies even predict that up to 40% of the world’s workers could be replaced by automation. So, what is the real deal with robots and AI, and should we be worried about the negative effects of artificial intelligence? Let’s explore.
The 3 Laws of Robotics
Before we dig into the problem, let’s travel back to its roots, the three laws of robotics, and break them down to reveal the inherent (and supposed) risks of artificial intelligence.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Just as ordinary tools must be designed to be safe to use, so too must robots be fit for purpose. Essentially, Asimov saw this rule as the only logical way to ensure that robots, as tools, are safe. Good news for those who fear the negative effects of artificial intelligence.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
But safety isn’t the only requirement. A robot should be effective too, hence obeying orders given by a human, and have a failsafe in case of emergency. This is why the second law reinforces the first: it keeps safety above all else while adding functionality.
3. A robot must protect its existence as long as such protection does not conflict with the First or Second Law.
The final law instructs the robot to protect itself from harm, as long as doing so doesn’t breach the first two laws. Much like well-designed technical equipment, this seeks to ensure the robot keeps functioning as expected and remains up to the job. If it cannot, it should flag the failure and shut itself down.
Although these laws may be thought of as science fiction, they not only draw upon the fears of the dangers of artificial intelligence but also on how we expect technology to act.
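The three laws form a strict priority ordering: each law only applies if the laws above it are satisfied. As a purely illustrative toy sketch (not anything from Asimov or a real robotics system), the ordering could be modeled as a sequence of checks on a hypothetical action:

```python
# Toy model of Asimov's Three Laws as a priority-ordered filter.
# The `action` dict and its keys are invented for illustration only.

def allowed(action):
    """Return True if a hypothetical action passes all three laws, in order."""
    # First Law: never harm a human, whether by action or by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders (checked only after the First Law holds).
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, unless overridden by the first two laws.
    if action.get("self_destructive") and not action.get("required_by_order"):
        return False
    return True


# Harming a human is always rejected, regardless of orders.
print(allowed({"harms_human": True, "required_by_order": True}))  # False
# A harmless, obedient action passes all three checks.
print(allowed({"harms_human": False}))  # True
```

The key point the sketch captures is that the laws are not independent rules but a hierarchy: a lower law can never override a higher one.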
But what does this mean in context? And how do we get science fiction into reality?
Examples of Artificial Intelligence And How Robots Are Used Today
Back when Asimov penned these laws, robotics as we know it today was a mere dream. Now, robots and AI are very real technologies that have been put into action, although not always in the way people imagine. There are many different types of AI and robots, for example:
- Robotic operators in factories – these optimize processes and perform simple procedures;
- Transport robots — for example, Loomo, the Segway-like transport device;
- Robot vacuum cleaners — smart cleaning devices that clean your floor with pre-set programming;
- Defense robots — drones and anti-missile devices whose job it is to protect;
- Medical robots — aid in surgeries to increase accuracy;
- Educational robots — building interaction with children to improve learning outcomes.
Now let’s apply the three laws. Looking at these examples, we can see that they follow the first law: they exist for human benefit*. Second, they follow instructions, whether coded or spoken. And third, once they fail to meet their purpose, they should (ideally) stop working.
*The one exception is defense bots, which are a gray area in terms of the laws of robotics.
AI, meanwhile, powers a range of everyday solutions, including:
- AI-powered assistants — helping to fulfill repetitive tasks;
- Autonomous vehicles — self-driving machines to make the human experience easier;
- Personalized apps — tailored to suit the needs of the user;
- Smart financial technology — including fraud detection software, risk prediction tools (GiniMachine), smart trading, etc.;
- Automated processes — from form-filling to processing to data storage, AI is up to the task.
Depending on the need, the type of AI (and there are seven) may differ. When speaking about these types, it’s important to remember that the technologies behind them differ, and so, too, do the use cases. The seven types of AI are:
- Artificial Narrow Intelligence (ANI)
- Artificial General Intelligence (AGI)
- Artificial Super Intelligence (ASI)
- Reactive Machines
- Limited Memory
- Theory of Mind
- Self-Aware AI
Just for Fun! Robots Vs. AI: Which Is More Likely to Take Over Humanity?
Now that we’ve uncovered a little more about AI and robotics, the next challenge is to understand where the technology is right now and what examples of artificial intelligence actually work.
When it comes to AI and robots, the reality is often a whole lot more boring than the fiction. The technology, while impressive, has not yet reached the standards a blockbuster might lead you to expect. But does that mean it’s worthless? Absolutely not.
Instead, current robot and AI solutions are proving vital to many businesses: from fintech to relaxation to sports to work and everything in between. Developers often seek to solve human problems using innovative tech solutions, resulting in major strides forward both from AI and robot technology and humanity itself.
But, when it all boils down to it — robots vs. AI: which one is more likely to take over humanity? — we believe that both will continue to grow and impact humanity. Yet, we hope ‘control’ remains out of the picture for some time to come.
Robot And AI Wrap Up
Are robots going to take over the world? Not likely, at least not any time soon. Are robots and AI going to take over our jobs? There may be some truth to this one. With automation estimates at around 30-40%, some roles are likely to become more automated, freeing people to dive deeper into other spheres.
For example, in the financial world, time spent on form-filling could be cut, allowing staff to train or take on more important tasks that require genuinely human traits. So, when it comes to the question, “Is artificial intelligence a threat?”: a threat to life, no, but some livelihoods are likely to be affected by the emergence of new technology.
Curious about AI and how it works? Sign in and start a free trial to build your first model with GiniMachine.