Artificial and Supreme? Top 3 Risks of Artificial Intelligence

    Artificial Intelligence will not destroy this planet, irresponsible human intelligence will.

     Abhijit Naskar

    From search algorithms and smart devices to robotics and weapons, Artificial Intelligence seems to have marked a revolution in almost every industry. The scientific community is even more excited about the advancements in AI expected in the coming decades.

    But aside from the apparent benefits of AI, the risks Artificial Intelligence carries have become the central topic of heated debate.

    True, there is currently little indication that Artificial Intelligence is about to bring the Matrix scenario to life or follow Skynet's path of destruction from The Terminator.

    Yet the fact that Bill Gates, Elon Musk, Steve Wozniak and Stephen Hawking have openly expressed their concerns about the risks AI poses makes the AI control problem an urgent question for the scientific community rather than just a dystopian theory.

    What are the odds?

    The thing is, the Artificial Intelligence flourishing today is known as narrow AI, meaning the technology performs very specific, in other words narrow, tasks: voice and speech recognition, spam filtering, recommendation services, self-driving cars, and so on.

    The long-term prospect is developing so-called general AI, also known as AGI or strong AI. Its ultimate goal is to fully imitate the human capacity to understand and learn any intellectual task. And this is where all the controversy and concern about Artificial Intelligence stem from.

    The key danger of AGI is the risk of a global catastrophe in which Artificial Intelligence goes out of control. Thanks to its ability to self-replicate and self-improve, AI could theoretically reach a level of superintelligence and come to dominate humans.

    Artificial Intelligence certainly has the potential to become smarter than any human, and at that point it is almost impossible to predict AI behavior, since we have never dealt with this kind of technological development before.

    Will AI Listen To Us?

    Stuart Russell, an AI computer scientist at Berkeley, sees the major risk of AI in poorly described goals or their misalignment with human motives and morals. Artificial Intelligence will try to accomplish the task it is given by any means, which may not necessarily match the way we envision it. To some extent, Artificial Intelligence resembles a genie in a bottle that takes the wish quite literally and doesn't bother much about how exactly it comes true.

    According to Russell, the tasks AI can perform today are relatively limited. Indeed, Artificial Intelligence has proven it can beat us at Go or Jeopardy! and even compose music and texts, but when it comes to more comprehensive goals, an AI machine can hardly interpret all the sub-goals, exceptions and possible caveats that a human takes for granted.

    AI developers should not only work on better goal descriptions but also program Artificial Intelligence to put human preferences first. To achieve that, Russell suggests three core AI principles to consider (a toy sketch follows the list):

    1. The AI machine is uncertain about human preferences from the start.
    2. The ultimate source of information about human preferences is human behavior.
    3. The AI machine's goal is to maximize the realization of human preferences.
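
    For intuition, here is a minimal sketch of those three principles in Python. The tea-or-coffee scenario, the candidate preference profiles and the noisy-rational choice model are all illustrative assumptions of ours, not Russell's actual implementation: the agent starts with a uniform belief over what the human wants, updates that belief from observed behavior, and acts to maximize expected preference satisfaction.

```python
import math

ACTIONS = ["make_tea", "make_coffee", "do_nothing"]
# Hypothetical preference profiles: reward each hypothesis assigns to each action.
HYPOTHESES = {
    "likes_tea":    {"make_tea": 1.0, "make_coffee": 0.1, "do_nothing": 0.0},
    "likes_coffee": {"make_tea": 0.1, "make_coffee": 1.0, "do_nothing": 0.0},
    "wants_quiet":  {"make_tea": 0.0, "make_coffee": 0.0, "do_nothing": 1.0},
}
# Principle 1: start uncertain -- a uniform prior over the hypotheses.
posterior = {h: 1 / len(HYPOTHESES) for h in HYPOTHESES}

def observe_human(action):
    """Principle 2: observed human behavior is the evidence.
    Assumes a noisy-rational human: P(action | h) is proportional to exp(reward)."""
    global posterior
    likelihood = {}
    for h, rewards in HYPOTHESES.items():
        z = sum(math.exp(r) for r in rewards.values())
        likelihood[h] = math.exp(rewards[action]) / z
    total = sum(posterior[h] * likelihood[h] for h in HYPOTHESES)
    posterior = {h: posterior[h] * likelihood[h] / total for h in HYPOTHESES}

def best_action():
    """Principle 3: maximize *expected* satisfaction of human preferences."""
    return max(ACTIONS,
               key=lambda a: sum(posterior[h] * HYPOTHESES[h][a]
                                 for h in HYPOTHESES))

# Watching the human reach for tea twice shifts both belief and behavior.
for seen in ["make_tea", "make_tea"]:
    observe_human(seen)
print(posterior)      # belief concentrates on "likes_tea"
print(best_action())  # -> "make_tea"
```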

    Relying on these criteria, Russell and his team have been conducting practical AI studies. They train robots on human behavior patterns in which the expressed preference isn't precisely articulated. In this way, Russell's team tries to see whether Artificial Intelligence can in fact grasp the human mindset on the fly and “read between the lines”.

    AI systems pass data through the successive layers of a neural network to find the patterns in it. This approach is known as deep learning, and it is the efficient method behind the recent breakthroughs in Artificial Intelligence. But as powerful as it is, it can hardly guarantee unambiguous results. Far more research is needed to turn that into reality, and there are a few reasons why.
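
    To show what "passing data through layers" means in practice, here is a minimal sketch in plain NumPy: a two-layer network that learns the XOR pattern by repeatedly running the data forward and nudging its weights. The architecture, seed and learning rate are arbitrary illustrative choices; real deep learning systems are vastly larger and built on frameworks such as PyTorch or TensorFlow.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # four inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # layer 1: 2 -> 8 features
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # layer 2: 8 -> 1 output

for step in range(5000):
    # Forward pass: each layer re-represents the data so that the
    # hidden pattern (here, XOR) becomes easy to separate.
    h = np.tanh(X @ W1 + b1)              # hidden representation
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output probability
    # Backward pass: nudge every weight to reduce the prediction error.
    grad_out = p - y                            # cross-entropy gradient
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)   # chain rule through tanh
    W2 -= 0.1 * h.T @ grad_out; b2 -= 0.1 * grad_out.sum(axis=0)
    W1 -= 0.1 * X.T @ grad_h;   b1 -= 0.1 * grad_h.sum(axis=0)

print(p.round(2).ravel())  # approaches [0, 1, 1, 0] once the pattern is found
```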

    First and foremost, people are imperfect and irrational. Our subconscious desires and beliefs are often not that logical, which is the main obstacle for Artificial Intelligence. 

    Secondly, we can change our minds and act impulsively under the influence of emotion, which may contradict the standards we claim to stick to. We can say things we don't mean and do things we don't believe in. So, if we can't figure that out properly for ourselves, can we teach Artificial Intelligence to do it?

    And that's the big question the world's great minds in the Artificial Intelligence field are dwelling on.

    The studies confronting this challenge are well underway. But for now, it's safe to say that formalizing the problem, along with the desired result, is an important milestone for Artificial Intelligence.

    Weapon or Tool? 

    Artificial Intelligence has opened up an unprecedented number of new opportunities for humankind.

    Smart AI systems can recognize dementia, skin cancer, or diabetes before a doctor would even suspect a potential disease. Large retail platforms such as Amazon run AI algorithms that predict a user's shopping preferences, while Stanford researchers are using AI to predict voting behavior.

    AI researchers say that Artificial Intelligence is still in its infancy. Yet McKinsey reports that even at this stage, 80 percent of executives claim to be integrating Artificial Intelligence into their businesses, and by 2030 AI is expected to deliver an additional $13 trillion per year in global economic output.

    The list of industries benefiting from the adoption of AI technologies keeps growing, but one must admit that so does the list of hazards Artificial Intelligence entails.

    These concerns include, among others, confidentiality breaches like the one at Facebook; autonomous weapons for the armies that can afford them; and deepfakes, completely realistic but fabricated images and videos.

    Enhanced surveillance in public places, combined with AI progress in face recognition, has made the invasion of privacy one of the central AI-related risks in the public eye. What makes it an actual Artificial Intelligence problem is that some countries, such as China, have gone as far as implementing a social credit system that plans to give 1.4 billion citizens a score reflecting each person's trustworthiness, based on personal data gathered by Artificial Intelligence. The AI system analyzes the content people post online, whether they violate traffic or other public rules, whether they pay electricity bills on time, and so on.

    According to the most recent figures, around eleven million Chinese citizens are banned from flying, and four million are not allowed to use trains. If the concept rings a bell, you have probably watched that episode of Black Mirror.

    Another vivid example of Artificial Intelligence risk is YouTube's recommendation algorithm. With the abundance and diversity of content available on the web, it's natural to look for new ways to entertain users and stimulate them to spend more time on a product, which is something AI is known to be good at.

    Nonetheless, YouTube's AI algorithm has been observed to promote and suggest videos with ever more inflammatory content; the recommendation engine seems to amplify radicalization. For example, one may select a clip about the health benefits of jogging and end up watching ultramarathon footage, or start off with videos about vegetarianism and be steered toward veganism.

    This is an especially hazardous AI pattern when it comes to divisive social and political content, since such a mechanism may push people toward quite extreme opinions. Given the number of people, especially young people, turning to YouTube as a primary source of information, the situation is becoming an obvious AI safety issue.

    There is hardly a conspiracy theory behind the curtain. In fact, it's a classic example of an AI development side effect resulting from an inaccurate goal description. From the development standpoint, the task of keeping the audience glued to the platform is completed. But now that we've seen how good Artificial Intelligence is at accomplishing goals, it's high time we made sure those goals are synchronized with ours; the toy simulation below shows how such a side effect can emerge.
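
    Here is a deliberately crude, hypothetical simulation of that feedback loop in Python. The watch_time function, the intensity scale and the taste-drift rule are made-up assumptions for intuition; this is a caricature of reward misspecification, not how YouTube's system actually works.

```python
def watch_time(user_taste: float, item_intensity: float) -> float:
    """Hypothetical proxy objective: engagement peaks just *above* the
    user's current taste (the 'one step further' effect)."""
    return max(0.0, 1.0 - 2.0 * abs(item_intensity - (user_taste + 0.1)))

catalog = [i / 20 for i in range(21)]  # content intensity from 0.0 to 1.0

user_taste = 0.2  # the user starts with mild content, e.g. jogging tips
for session in range(8):
    # The goal exactly as described to the engine: maximize predicted watch time.
    pick = max(catalog, key=lambda item: watch_time(user_taste, item))
    user_taste = 0.5 * user_taste + 0.5 * pick  # taste drifts toward what's shown
    print(f"session {session}: recommended intensity {pick:.2f}")
```

    Note that the engine never "decides" to radicalize anyone: the recommended intensity ratchets upward every session because the engine greedily optimizes the exact goal it was given, meeting the stated objective perfectly while the unstated human goal (balanced content) is ignored.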

    Summary

    As a company providing AI Services, we are sure that AI technologies won't bring about the Age of Ultron. Artificial Intelligence is powerful, and it is in human hands that it becomes either a tool or a weapon.

    Today AI helps us treat serious diseases, makes our vehicles and homes safer, and boosts innovation and science. Summing up the concerns expressed by AI experts, the core problem of AI safety is not malevolence but competence.

    The truth is that no robot has objectives of its own. AI only mirrors human behavior and is entirely altruistic, aiming to fulfill its task no matter what. As Stuart Russell once said: "In teaching robots to be good, we might find a way to teach ourselves."

    That's why producing unbiased results from biased data, developing new ways to detect deepfakes, and enforcing legal regulation of AI methods are just a few of the main directions in Artificial Intelligence these days.

    We Provide AI-Powered Solutions from Concept to Launch.

    Roman Korzh

    VP of Development

    Anna Slipets

    Business Development Manager
