The Ultimate Guide To Artificial Intelligence (AI): Definition, How It Works, Examples, History, & More

Humans have been fascinated by the possibility of artificial intelligence for hundreds of years, long before Alan Turing asked the crucial question: “Can machines think?” The concept of “nonhuman intelligence” traces back to the ancient Greek philosophers, and the concept of robots goes back to the Renaissance.

In our modern age, AI is everywhere. It shapes how we get around, how we communicate, and how we’re entertained, to name just a few examples. And AI continues to grow, becoming a more integral part of modern societies by the day. Companies invest billions of dollars into developing ever more advanced AI.

For businesses, navigating the ever-evolving world of AI can be confusing. You want to be on the cutting edge but may not know exactly where to start. That’s why we created this complete guide. This article covers a range of topics, including:

  1. What is Artificial Intelligence?
  2. How does AI work?
  3. Types of AI
  4. Artificial intelligence, machine learning, and predictive analytics - What’s the difference?
  5. Algorithms
  6. Examples 
  7. Advantages and disadvantages 
  8. Risks
  9. Ethical questions
  10. History
  11. Current state
  12. Future possibilities
  13. Business and AI

What is Artificial Intelligence?

The original definition of AI, coined in 1955 by John McCarthy, one of the founders of the field, was broad and all-encompassing: “The science and engineering of making intelligent machines.”

A slightly more modern definition of AI is: a broad branch of computer science concerned with creating machines that can learn, make decisions, and perform tasks at a human-like level. Advanced AI systems can learn and grow on their own, independent of human intervention. Even basic AI can handle complex tasks that would normally require a human touch, though it may need a programmer’s help to learn from its mistakes and improve.

How Does AI Work?

At the most basic level, AI works by taking in data, using iterative processing and different algorithms to learn from patterns found in that data, and then reacting to it in a specific way. Advanced AI can also measure its own performance each time this sequence runs and use that feedback to keep improving.

Many AI systems use propensity models to make predictions based on the data they process, then use those predictions to respond to events or initiate actions.
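To make this concrete, here is a minimal sketch of a propensity-style model in Python using the scikit-learn library. The data, features, and decision threshold are all hypothetical, purely for illustration; a real system would train on far larger historical datasets.

```python
# Minimal propensity-model sketch (hypothetical data and features).
# Given past examples of customer behavior, the model estimates the
# probability ("propensity") that a new visitor will make a purchase,
# and the program acts on that prediction.
from sklearn.linear_model import LogisticRegression

# Each row: [pages_viewed, minutes_on_site]; label: 1 = purchased, 0 = did not
X_train = [[2, 1], [15, 12], [4, 3], [22, 18], [1, 1], [18, 9]]
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new visitor, then respond to or initiate an action.
new_visitor = [[10, 7]]
propensity = model.predict_proba(new_visitor)[0][1]
if propensity > 0.5:  # hypothetical threshold
    print(f"Propensity {propensity:.2f}: show a discount offer")
else:
    print(f"Propensity {propensity:.2f}: take no action")
```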

Different types of AI run on different baseline algorithms, which make them react and learn in different ways. Some perform simple tasks, such as categorizing data or making predictions. Others handle much more complex tasks, such as driving a car without a human at the wheel.

Types of AI

There are four main types of AI, and each is defined by how much data it can store and how it uses that data. Some cannot store data at all and can only react to the stimulus directly in front of them. Some can store a limited amount of data. Others can store large amounts of data and use them to improve.

Of the four established types of AI, the last two are, at present, purely theoretical. Researchers and programmers are still working toward achieving those levels.

The four types of AI are: 

  1. Reactive
  2. Limited Memory
  3. Theory of Mind
  4. Self-Awareness

Reactive

The most basic level of AI functions as a “reactive” system. These machines cannot store data in their memory and (as the name suggests) can only react to the data in front of them. They can’t learn or form memories of any kind, and they always respond to the same input with the same output.
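As a loose illustration, a reactive system behaves like a pure function: it keeps no state, so identical input always produces identical output. The toy “spam filter” below (with a hypothetical word list, far simpler than a real filter) shows the idea.

```python
# Toy reactive "spam filter": no memory, no learning.
# The same message always gets the same verdict because the decision
# depends only on the input directly in front of the machine.
SPAM_WORDS = {"winner", "free", "prize"}  # hypothetical rule set

def classify(message: str) -> str:
    words = set(message.lower().split())
    return "spam" if words & SPAM_WORDS else "not spam"

print(classify("Claim your FREE prize now"))  # spam
print(classify("Meeting moved to 3pm"))       # not spam
```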

Examples of reactive machines include:

  • Game-playing AI machines (such as AlphaGo, or Deep Blue)
  • Spam filters on email websites
  • Recommendation functions on e-commerce websites

Limited Memory

A step up from reactive machines, limited memory AI can temporarily store input data and use it to decide its next course of action. A limited memory machine takes in data, predicts how that data will affect a given outcome, and then uses the prediction to determine how to react.

An important distinction between this type of AI and the more advanced types is that once a limited memory system is programmed and trained to act, it will not improve on its own. Its data input and memory functions exist only to decide between actions, not to help the AI improve.
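As a rough sketch of the idea, the snippet below keeps a short rolling buffer of recent sensor readings and uses it to pick the next action, loosely in the spirit of a self-driving car watching the vehicle ahead. The readings, window size, and braking rule are all made up for illustration, and note that nothing here makes the system improve over time.

```python
# Limited-memory sketch: recent observations inform the next action,
# but the memory is only used to choose between actions, not to learn.
from collections import deque

recent_gaps = deque(maxlen=5)  # rolling memory: distance (m) to the car ahead

def next_action(gap_m: float) -> str:
    recent_gaps.append(gap_m)
    # If the gap has shrunk across the remembered window, slow down.
    if len(recent_gaps) >= 2 and recent_gaps[-1] < recent_gaps[0]:
        return "brake"
    return "maintain speed"

for gap in [30.0, 25.0, 21.0, 18.0]:  # hypothetical sensor stream
    print(f"{gap:5.1f} m -> {next_action(gap)}")
```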

Examples of limited memory AI include:

  • Self-driving cars, which take in data (such as driving conditions, traffic, and nearby pedestrians) to make decisions and avoid accidents.
  • Autonomous robots, which take in limited data about their surroundings in order to make decisions.

Theory of Mind

This level of AI is, at present, purely theoretical. The idea behind these systems is that AI must be programmed and trained to understand that humans (and animals) have thoughts and feelings that affect their mental states and decisions.

Theory of mind AI would be better equipped to interact with humans because it could adjust its responses and decisions based on “non-objective” data. That means AI could have a two-way relationship with humans and handle more complex interactions.

Self-Awareness

Another theoretical level of AI, which would come after theory of mind is established, is self-awareness. This is exactly what it sounds like: the AI program becomes aware of itself and its place in the world, as well as the function it serves and the role of the humans around it. It would possess human-level consciousness and be able to think and make its own decisions.

Artificial Intelligence, Machine Learning, and Predictive Analytics - What’s the Difference?

There are a lot of terms and acronyms thrown around when talking about AI, and many of them are used somewhat interchangeably even though they don’t necessarily mean the same thing. So we’ll walk you through some quick definitions of common terms in AI and business:

  • Artificial intelligence: As we’ve already defined, AI is the branch of computer science that aims to create machines that can mimic human intelligence. It’s a broad field with other areas of study folded into it.
  • Machine learning (ML): ML is a subset of AI that uses statistical methods and data analysis to build models that learn from data. ML systems can often learn from their own experiences or from historical data.
  • Predictive analytics: Narrower still than ML, predictive analytics is a data analytics tool that uses an AI engine and historical data to predict future outcomes and trends.

Key differences

There’s a lot of overlap between these three terms, so here is a simple guide to the differences between each:

  • AI versus ML: There’s much overlap, but AI is broadly concerned with creating thinking programs, whereas ML is concerned with training machines to do specific tasks.
  • AI versus predictive analytics: AI is broadly autonomous and learns on its own, whereas predictive analytics needs human interaction to help vet data and test assumptions to ensure results are accurate. 
  • ML versus predictive analytics: Much like AI, ML is mostly autonomous (though it may need occasional refining from programmers), whereas predictive analytics needs human support to work accurately.

Learn more about the differences between AI, ML, and predictive analytics.

Artificial Intelligence Algorithms

All AI programs run on algorithms of varying complexity that determine how they react in certain situations. Some simple algorithms let AIs categorize data, some let them make a series of decisions based on stimulus data, and some let them learn and grow.

An algorithm is a series of rules that a calculation or operation must follow in order to be completed correctly. This applies to both math and computer programs, though math algorithms are usually simpler than those in an AI program. AI algorithms work by taking in training data, learning from it, and then completing their tasks. As previously mentioned, some can also measure their own progress and improve independently.
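To make the “learn, then measure progress and improve” loop concrete, here is a minimal sketch of one of the simplest learning algorithms: fitting a line through data with gradient descent. The data points and learning rate are made up for illustration.

```python
# Minimal learning-algorithm sketch: gradient descent on y = w * x.
# On each pass the program measures its own error on the training data
# and nudges w to reduce it; this is the "measure and improve" loop above.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (x, y) pairs
w = 0.0    # initial guess for the slope
lr = 0.05  # learning rate

for step in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(f"learned slope: {w:.2f}")  # converges near 2.0
```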

Types of AI algorithms

Just as AI has many applications, there are many different algorithms that allow it to do its myriad tasks. However, most fall into three major categories, each grouping algorithms that function in similar ways.

A note: these are not the only types of AI algorithms, and a given algorithm doesn’t always fit neatly into one category; sometimes it fits several, depending on the goal of the program.

The three major kinds of AI algorithms are:

  • Supervised learning: Algorithms trained on clearly labeled data, so the program learns from the labels (see the sketch after this list).
  • Unsupervised learning: Algorithms trained on unlabeled data, forcing the program to find patterns on its own.
  • Reinforcement learning: Algorithms that learn from feedback on their previous actions.
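As a quick, hypothetical illustration of the first two categories using the scikit-learn library, the same four data points are classified with labels (supervised) and then grouped without them (unsupervised). Reinforcement learning needs an environment to interact with, so it is left out of this sketch.

```python
# Supervised vs. unsupervised learning on tiny, made-up data.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

points = [[1, 1], [1, 2], [8, 8], [9, 8]]

# Supervised: labels are provided, and the model learns from them.
labels = [0, 0, 1, 1]
clf = LogisticRegression().fit(points, labels)
print("supervised prediction:", clf.predict([[2, 1]]))   # -> [0]

# Unsupervised: no labels, so the algorithm finds the groups on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
print("unsupervised clusters:", km.fit_predict(points))  # two groups, e.g. [1 1 0 0]
```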

Learn more about AI algorithms.

Artificial Intelligence Examples and How It’s Used

AI is everywhere in modern life. Businesses use it in production lines, analytics, reporting, and more. Consumers use it to navigate, search, and make their lives easier. But many people may not even realize they’re using AI.

A few common examples of AI include:

  • Digital assistants (Siri, Alexa, etc.)
  • Self-driving cars
  • Navigation apps
  • Social media algorithms
  • Advertisements 

Learn more about AI examples.

Advantages and Disadvantages of Artificial Intelligence

As AI becomes more broadly used and applicable, it’s logical to weigh the risks and rewards of using it. After all, the use of AI can help to save time, free up labor, and save workers from dangerous tasks. But it can also put workers at risk if not carefully maintained, and cause people to lose their jobs to automation. 

Advantages

There are undeniable advantages to using AI as it becomes more advanced. When properly maintained, it doesn’t make careless mistakes or get tired the way human workers can. Some of the key advantages of AI include:

  • Eliminating human error
  • Saving workers from risky tasks
  • Cost reduction
  • Unbiased decision making

Disadvantages

With all advantages come possible disadvantages. With AI (and many other types of technology), these include things like degradation over time, the cost of implementation, and what you lose when handing a task to a machine instead of a human. Some of the common disadvantages include:

  • Cost of implementation
  • Lack of creativity or emotion
  • Doesn’t improve with experience (depending on the program)
  • Job automation

Learn more about the advantages and disadvantages of AI.

Risks of Artificial Intelligence

We’ve already talked about the disadvantages of AI. But what about the actual risks of using it? There are always dangers in adopting new technology, particularly something as advanced as AI. How it interacts with humans could cause harm or death if not carefully monitored, not to mention the potential use of AI in weapons of war, or the possibility of superhuman AI.

Some risks are hypothetical, and some are very real things we deal with today. Three risks of AI include:

  • Human interactivity: Interactions between humans and AI grow more frequent by the day. If not carefully monitored, a malfunctioning AI could hurt or kill the humans nearby.
  • Autonomous weaponry: Though many AI experts have signed an open letter asking governments not to use autonomous weapons in war, their development continues. These weapons can cause catastrophic harm to civilians, and if they malfunction, could cause even more destruction.
  • Superhuman AI: The hypothetical risk here is not necessarily that AI will become sentient and take over the world. The more realistic concern is that a highly capable AI could pursue a goal in a way that causes extreme harm to the environment or to humans.

Learn more about the risks of AI.

Ethics of AI

The risks above are practical issues that researchers discuss to this day. But there are also ethical questions about the use of AI, such as how it impacts humans, perception versus reality, and what the rampant collection of data means for the average consumer.

The top ethical issues around AI include:

  • Privacy: With data collection and storage at an all-time high, consumers may find their privacy rights violated by companies using their data to train AI. And because data can outlive its source, it may be passed from company to company long after the original consumer is gone.
  • Bias: There’s a misconception that AI is inherently unbiased because machines make the decisions. In reality, much modern AI is quite biased because the data used to train it was biased. The misconception means people use these programs expecting unbiased results, so no one catches the biases in the programs.
  • Regulations: Currently, there is little regulation of AI at the national or international level, even though AI is prevalent in consumers’ lives. This leaves room for AI to take advantage of consumers, and leaves them little recourse if it happens.

Learn more about the ethics of AI.

History of Artificial Intelligence

So how did we get here, with AI touching our daily lives? How did robots and self-driving cars become part of everyday life rather than science fiction?

As mentioned in the introduction, the idea of artificial intelligence isn’t new; it dates back hundreds of years. But it wasn’t until the 1900s that the idea started to seem plausible to scientists. As early as 1921, playwrights were writing about robots, and scientists were contemplating the idea of computers that could think on their own. But the work that became modern AI didn’t begin until 1950.

A tremendous amount of work went into AI between 1950 and the present day (2022), but here is a high-level overview:

  • 1950-1979: Important AI questions were asked. The first programming language for AI was created. Basic AI programs that could answer questions or sort data were written, and the first autonomous vehicle was built.
  • 1980-1987: The first AI boom. Breakthroughs in AI research led to increased funding, which led to more breakthroughs. The first conferences on AI were held, and commercial AI came onto the market.
  • 1987-1993: A stagnation in AI research led to the first AI winter. Private and public investors halted their investments, convinced AI was a fad that wouldn’t go anywhere. AI researchers continued their work regardless, just with fewer resources.
  • 1993-2011: New breakthroughs led to more AI funding. The first AI system beat a human world champion at chess. The first Roomba hit the market, NASA landed rovers on Mars, and the first digital assistant (Siri) was released by Apple.
  • 2012-present: The modern age of AI. AI has become commonplace, billions of dollars are still being poured into research, and AI programs can play games, generate their own languages, and recognize images.

Learn more about the history of AI.

Current State of AI

As of this article’s publication (2022), AI is a part of everyday life for businesses and consumers. People of all kinds interact with AI every day, whether through social media, online shopping, search engines, or elsewhere.

Businesses of all sizes use AI to streamline operations and gain a competitive edge. And businesses that don’t use it yet need to start thinking about it, or they risk being left behind as the technology continues to evolve.

Future of Artificial Intelligence

Of course, now that we’ve covered the history and the present state, it raises the question: what does the future hold for AI?

The answer is a bit disappointing: we don’t fully know yet. AI is a complex field that advances through research breakthroughs arriving at unpredictable times. Researchers have tried to predict when certain breakthroughs would happen, only to be proven dramatically wrong; truthfully, we’ve hit most AI milestones decades before researchers thought we would.

There are some things we do know. We know that as businesses adopt AI more and more heavily, it will impact the workforce, for better or worse. And while studies show that AI will likely create more jobs than it automates, people will still need to be trained to do those new jobs, or they risk being left behind.

We also know that companies will continue to research and refine their AI technology. The future will likely bring more robots into the workforce, replacing “low-skilled” jobs in more and more places. We’ll likely see more AI-powered vehicles and continued refinement of the self-driving cars already on the road, along with deeper integration between existing technologies and smart AI.

But if (or when, depending on which expert you ask) we’ll see superhuman AI? That’s still a mystery.

Businesses and AI

So where does all of that leave us? As consumers, we know that AI touches our lives. Business owners can see the importance of investing in AI however they can. But what does that look like?

An easy and effective way to integrate AI into a business is through AI-powered data analytics. It’s a great way to give your business a leg up, combining real-time analytics with AI-powered predictions of future trends. Learn more about Tableau’s AI analytics.