What are the risks of artificial intelligence (AI)?

There’s a lot of information out there about AI. Some of it is fact, some fiction, and some inspired by fiction. With so much information circulating, it can be hard to know what to believe. Is AI dangerous? Will it take over all our jobs? Are we destined to live in the Matrix someday?

In this article, we’ll discuss some of the biggest risks we face in the development of more advanced AI technologies. We’ll also discuss which common beliefs are simply myths or based on hype.

In this article, we’ll cover:

  1. What is artificial intelligence?
  2. Can AI be dangerous?
  3. What are the risks of artificial intelligence?
    1. Real-life risks
    2. Hypothetical risks
  4. Why research AI safety?
  5. Do the benefits outweigh the risks?

What is artificial intelligence?

AI is a branch of computer science concerned with mimicking human thinking and decision-making processes. AI programs can often improve their own performance by analyzing data sets, without needing help from a human, and they are often built to complete tasks too complex for conventional, non-AI software.
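
To make that concrete, here is a minimal Python sketch of a program that gets better by analyzing data rather than by having a human rewrite its rules. The scikit-learn library and the toy “inside the circle” task are our choices for illustration, not part of any particular AI product:

# A model's predictions improve as it analyzes more examples, with no
# human rewriting its rules. scikit-learn is used purely for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)  # label: is the point inside the unit circle?

X_train, y_train = X[:1000], y[:1000]   # examples the program learns from
X_test, y_test = X[1000:], y[1000:]     # unseen data used to measure performance

for n in (10, 100, 1000):
    model = KNeighborsClassifier().fit(X_train[:n], y_train[:n])
    print(f"learned from {n:4d} examples -> accuracy {model.score(X_test, y_test):.2f}")

Run it, and the reported accuracy should rise as the model sees more examples, even though no one touched the program’s code between runs.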

Learn more about AI.

Can AI be dangerous?

As with most things to do with AI, the answer to this question is complicated. There are risks associated with AI, some pragmatic and some ethical. Leading experts debate how dangerous AI could become, and there is no real consensus yet. However, there are a few dangers most experts agree on. Some are purely hypothetical situations that could occur in the future if proper precautions aren’t taken, and some are real concerns that we deal with today.

What are the risks of artificial intelligence?

We talked briefly about real-life and hypothetical AI risks above. Below, we’ve outlined each in detail. Real-life risks include things like consumer privacy, legal issues, AI bias, and more. And the hypothetical future issues include things like AI programmed for harm, or AI developing destructive behaviors.

Real-life AI risks

There is a myriad of AI risks that we deal with in our lives today. Not every AI risk is as big and worrisome as killer robots or sentient AI. Some of the biggest risks today include consumer privacy, biased programming, physical danger to humans, and unclear legal regulation.

Privacy

One of the biggest concerns experts cite is around consumer data privacy, security, and AI. Americans have a right to privacy, established in 1992 with the ratification of the International Covenant on Civil and Political Rights. But many companies already skirt data privacy laws with their collection and use practices, and experts worry this may increase as we start utilizing more AI.

Another major concern is that there are currently few regulations on AI (in general, or around data privacy) at the national or international level. The EU introduced the “AI Act” in April 2021 to regulate AI systems deemed high-risk; however, the act has not yet passed.

AI bias

It’s a common myth that since AI is a computer system, it is inherently unbiased. This is untrue. AI is only as unbiased as the data and the people training the programs. So if the data is flawed, partial, or biased in any way, the resulting AI will be biased as well. The two main types of bias in AI are “data bias” and “societal bias.”

Data bias is when the data used to develop and train an AI is incomplete, skewed, or invalid. This can be because the data is incorrect, excludes certain groups, or was collected in bad faith.

On the other hand, societal bias is when the assumptions and biases present in everyday society make their way into AI through blind spots and expectations that the programmers held when creating the AI.
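
A deliberately simplified sketch shows how data bias plays out in practice. Here, a model is trained on synthetic data in which one group is badly under-represented; the groups, features, and numbers below are all invented for illustration:

# Data bias in miniature: a model trained on data that under-represents
# one group makes more mistakes for that group. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the true label depends on a group-specific shift,
    # so the two groups follow slightly different patterns.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] + shift > 0).astype(int)
    return X, y

# Group A dominates the training data; group B barely appears in it.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Fresh samples from each group reveal the skew the training data baked in.
for name, (X, y) in {"group A": make_group(1000, 0.0),
                     "group B": make_group(1000, 1.5)}.items():
    print(f"accuracy for {name}: {model.score(X, y):.2f}")

The model ends up noticeably more accurate for the well-represented group, even though no one programmed it to treat the groups differently; the skew in the data did that on its own.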


Human interactivity

In the past, when AI was just spitting out predictions and robots were navigating rooms full of chairs, the question of how humans and AI interact was more of an existential query than a practical concern. But now, with AI permeating everyday life, the question is pressing: how does interacting with AI affect humans?

There are physical safety concerns. In 2018, a self-driving car operated by the rideshare company Uber struck and killed a pedestrian. In that particular case, the court ruled that the car’s backup driver was at fault, as she was watching a show on her phone instead of paying attention to her surroundings.

Beyond that scenario, there are others that could cause physical harm to humans. If companies rely too heavily on AI maintenance predictions without other checks, machinery could malfunction and injure workers. Models used in healthcare could cause misdiagnoses.

And there are further, non-physical ways AI can harm humans if not carefully regulated. AI could cause issues with digital safety (such as defamation or libel), financial safety (such as misuse of AI in financial recommendations and credit checks, or complex schemes that steal or exploit financial information), or equity (biases built into AI that cause unfair rejections or acceptances in a multitude of programs).

Legal responsibility

Lastly, there is the question of legal responsibility, which touches almost all the other risks discussed above. When something goes wrong, who is responsible? The AI itself? The programmer who developed it? The company that implemented it? Or, if a human was involved, is it the human operator’s fault?

We talked above about a self-driving car that killed a pedestrian, where the backup driver was found at fault. But does that set the precedent for every case involving AI? Probably not, as the question is complex and ever-evolving. Different uses of AI will have different legal liabilities if something goes wrong.

Hypothetical AI risks

Now that we’ve covered the everyday risks of AI, we’ll talk a little about some of the hypothetical risks. These may not be as extreme as you might see in science fiction movies, but they’re still a concern and something that leading AI experts are working to prevent and regulate right now.

AI programmed for harm

One risk experts cite when talking about AI is the possibility that an AI system will be programmed to do something devastating. The best example of this is the idea of “autonomous weapons,” which can be programmed to kill humans in war.

Many countries have pushed to ban autonomous weapons in war, but there are other ways AI could be programmed to harm humans. Experts worry that as AI evolves, it may be used for nefarious purposes and harm humanity.

AI develops destructive behaviors

Another concern, somewhat related to the last, is that an AI will be given a beneficial goal but develop destructive behaviors as it attempts to accomplish that goal. For example, an AI system could be tasked with something beneficial, such as helping to rebuild an endangered marine creature’s ecosystem. But in pursuing that goal, it may decide that other parts of the ecosystem are unimportant and destroy their habitats. It could even view human intervention to fix or prevent this as a threat to its goal.

Making sure that AI is fully aligned with human goals is surprisingly difficult and takes careful programming. AI systems with ambiguous, ambitious goals are worrisome, as we don’t know what path they might decide to take to reach them.
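
A toy sketch makes the problem concrete. Here, a hypothetical “rebuild the reef” agent picks whichever action scores highest; when its reward measures only reef health, it happily chooses the most destructive option. Every action and number below is invented purely for illustration:

# A misspecified goal in miniature: an optimizer that maximizes only the
# stated objective will pick actions with side effects the objective
# never mentions. All actions and scores are made up for illustration.
actions = [
    {"name": "restore the reef gradually",           "reef_gain": 6,  "side_damage": 0},
    {"name": "divert nutrients from nearby habitat", "reef_gain": 9,  "side_damage": 7},
    {"name": "block human monitoring vessels",       "reef_gain": 10, "side_damage": 9},
]

def misaligned_reward(action):
    # Sees only the stated goal; side effects are invisible to it.
    return action["reef_gain"]

def aligned_reward(action):
    # Also penalizes the harms the designers actually care about.
    return action["reef_gain"] - 2 * action["side_damage"]

print(max(actions, key=misaligned_reward)["name"])  # "block human monitoring vessels"
print(max(actions, key=aligned_reward)["name"])     # "restore the reef gradually"

The only difference between the two outcomes is whether the goal we wrote down includes the harms we actually care about; that is the alignment problem in miniature.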

Why research AI safety?

Not that many years ago, the idea of superhuman AI seemed fanciful. But with recent developments in the field of AI, researchers now believe it may happen within the next few decades, though they don’t know exactly when. With these rapid advancements, it becomes even more important that the safety and regulation of AI be researched and discussed at the national and international levels.

In 2015, many leading technology experts (including Stephen Hawking, Elon Musk, and Steve Wozniak) signed an open letter on AI that called for research on the societal impacts of AI. Some of the concerns raised in the letter cover the ethics of autonomous weapons used in war and safety concerns around autonomous vehicles. In the longer term, the letter posits that unless care is taken, humans could easily lose control of AI and its goals and methods.

The point of AI safety research is to keep humans safe and to ensure that proper regulations are in place so that AI acts as it should. These issues may not seem immediate, but addressing them now can prevent much worse outcomes in the future.

Do the benefits outweigh the risks?

After reading through all the risks and dangers of AI outlined in this article, you may ask yourself: is it even worth it?

Well, the same open letter mentioned above also talks about the possible benefits that AI could have for society if used correctly. An attached article on research priorities states, “...we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty is not unfathomable.”

The potential benefits of continuing forward with AI research are significant. And while there are, of course, risks to weigh, many consider the reward well worth it. Learn more about the advantages and disadvantages of AI here.