Machine learning vs AI: Everything you need to know
Machine learning. Artificial intelligence. Big data. Deep learning. This is the vocabulary that has already started to define our present—and future.
But these terms are often used interchangeably, to the point that some questions have to be asked: are they the same thing? What are the key differences? How are they entwined? And why are they important for the advancement of the human race?
If we’ve whetted your curiosity so far—whether it’s for market intelligence at the workplace, or just as a conversation driver at your next university reunion—then sit back and take note as we delve into the ever-evolving world of machine learning and artificial intelligence.
What is artificial intelligence?
To understand the relationship between AI and machine learning, let’s begin with a simplified definition of both.
Artificial intelligence refers to computers and robots that are capable of mimicking human capabilities, and possibly surpassing them, although the latter is still subject to scrutiny from both researchers and the wider sci-fi community.
In other words, AI-enabled programs are being developed with the ability to reason, discover meaning, generalise or learn from past experience. You’ll recognise these in everyday technology, from voice assistants like Siri to smart home devices.
What is machine learning?
Machine learning is actually a subcategory of AI—using algorithms to automatically recognise data patterns, enabling the AI to make faster, better decisions.
Deep learning takes this concept even further, using large neural networks (similar to the human brain) to decipher even more complex patterns and bring us ever closer to the sentient robots we’ve only thus far encountered in films and novels.
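To make that concrete, here’s a minimal sketch in Python of what “learning patterns from data” looks like. The library (scikit-learn) and the pet measurements are our own choices for illustration; the point is that the rule is inferred from examples rather than hand-coded:

```python
# A toy illustration: the model infers the "rule" from labelled examples
# instead of a programmer hard-coding it.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: [height_cm, weight_kg] of pets, labelled by species
X = [[25, 4], [30, 5], [60, 25], [70, 30], [28, 4], [65, 28]]
y = ["cat", "cat", "dog", "dog", "cat", "dog"]

model = DecisionTreeClassifier().fit(X, y)   # "learn" the pattern
print(model.predict([[27, 5], [68, 27]]))    # -> ['cat' 'dog']
```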
How AI and machine learning work together
To make sense of how artificial intelligence and machine learning work together, let’s take the following fictitious example:
There’s an alien threat from outer space and scientists have been tasked with building a machine to protect Earth, free from human influence. The instructions are clear; the machine must be able to recognise the extra-terrestrial species — and seek to destroy them.
First, the scientists develop a system that’s able to attack when fired upon, by recognising patterns of aggression and intimidation. But these machines still can’t tell the difference between extra-terrestrials and humans, and as such don’t respond fast enough to take on the enemy.
Next, the scientists feed vast amounts of data—pictures of the aliens, videos showing their attack methods and formations, sounds they make and so on—in order to train algorithms to distinguish between aliens and humans and build models to attack when they recognise the former.
But the aliens are still faster and more intuitive than these machines. So the scientists build large neural networks, which allow the machines to predict where and how the aliens will attack, so they can launch pre-emptive strikes of their own.
In other words, AI is the broader goal (build a defence robot), whereas machine learning (and deep learning) are the specific scientific methods used to reach it (training a machine to distinguish aliens from humans, and then to predict alien threats).
All machine learning is artificial intelligence. But not all artificial intelligence is machine learning.
The key differences between machine learning and AI
Still can’t quite grasp the nuances between AI and machine learning? We’ve summarised the key differences in the table below:
| Artificial intelligence (AI) | Machine learning |
|---|---|
| Technology that enables machines to simulate human behaviour, with the aim of developing a smart computer system to help solve complex problems | Subset of AI, allowing machines to learn from past data without needing to be explicitly programmed, with the aim of improving the accuracy and speed of decision-making |
| Broader in scope, with a wider goal, e.g. self-driving cars | More specific in scope, with a narrower goal, e.g. using historic data to establish a pattern for cars to drive themselves |
| Deals with structured, semi-structured and unstructured data | Deals only with structured and semi-structured data |
| Examples: Smart assistants, chatbots, AI opponents in games such as chess | Examples: Online recommendations, search algorithms, auto-tagging people in images shared on social media |
| Classified into: Weak AI, general AI and strong AI | Classified into: Supervised, unsupervised and reinforcement learning |
Machine learning and neural networks
Retired figure skater Michelle Kwan once said, “I skated, fell down and learned to pick myself up a million times.” To err is human, and the human brain is designed to learn from past mistakes with a view to not repeating them.
Similarly, a neural network gives computers a way to learn from past errors and continuously improve. It’s the technique behind the type of machine learning known as deep learning, and it uses interconnected nodes arranged in layers, loosely resembling the human brain.
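To see “learning from past errors” in action, here’s a minimal two-layer network trained in plain NumPy. This is a sketch of the principle, not production deep learning; the XOR task and all the numbers are our own toy choices:

```python
import numpy as np

# XOR: a pattern a single layer can't capture, but two layers can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer of 8 nodes
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: signals flow through the layered nodes
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the error is used to nudge every weight
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2))  # should approach [[0.] [1.] [1.] [0.]]
```

Each pass through the loop measures how wrong the network was and adjusts every connection slightly, which is the “fell down and learned to pick myself up” loop in miniature.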
You may have recognised instances of neural networks in the following applications:
Image recognition
There’s a reason why an image is worth a thousand words. Humans are able to deduce vast amounts of information from a single still frame: important landmarks, people, and even situations.
With neural networks, computers can also recognise and distinguish between images, with features such as auto-tagging faces, or image labelling to identify brands, among the most common applications.
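As a hedged sketch of how that looks in practice (assuming PyTorch and torchvision 0.13 or later, which the article doesn’t name, and a hypothetical input file), a pretrained network can label an image in a few lines:

```python
import torch
from torchvision import models
from PIL import Image

# Load a network already trained on millions of labelled photos
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()          # matching resize/normalisation

img = Image.open("photo.jpg")              # hypothetical input image
batch = preprocess(img).unsqueeze(0)       # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top])     # human-readable label
```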
Speech recognition
Presently, one of the roadblocks preventing AI from realising the goal of fully sentient machines that can interact seamlessly with human beings is speech recognition.
AI researchers are using neural networks to analyse speech across varying patterns, pitch, tone and even accent, so machines can provide appropriate responses. You’ll have seen instances of this in automatic subtitling, for example, which converts on-screen speech into accurate text.
Natural Language Processing (NLP)
Not to be mistaken for Neuro Linguistic Programming, which is widely used in therapy to improve the way people communicate with other people, Natural Language Processing actually helps humans communicate better with machines.
In a nutshell, NLP enables computers to understand text through syntactic and semantic analysis. A good example of this is proactive chatbots that can hold genuine text conversations with humans, rather than serving up pre-scripted responses based on a person’s menu selections.
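Here’s a toy sketch of the idea using scikit-learn’s text tools (our assumption; real chatbots use far more sophisticated models): a classifier learns to map free text to an intent, rather than relying on a fixed menu of options. The example phrases and intents are invented:

```python
# Toy intent classifier: turns text into word counts (a crude
# syntactic/semantic signal) and learns which intent each maps to.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["where is my order", "track my parcel",
         "cancel my subscription", "stop my plan"]
intents = ["track", "track", "cancel", "cancel"]

bot = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, intents)
print(bot.predict(["can you track my order"]))  # -> ['track']
```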
Types of machine learning
Now that we’ve identified the differences between AI and machine learning, let’s take a closer look at the latter and the different types that fall under it.
Before we do that, it’s important to understand how machine learning works. Essentially, two kinds of data—labelled and unlabelled—are fed into a machine to determine algorithmic patterns, which can then be used to build models to accomplish future tasks.
When it comes to labelled data, scientists define input and output parameters (labels) and assign them to raw data (images, text, videos) at the pre-processing stage, so that when the data is fed into a machine, it’s completely machine-readable.
Conversely, unlabelled data only has one (or none) of the parameters predefined, which effectively means the machine will figure the labels out on its own.
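In code terms, the difference is simply whether each example arrives with its answer attached. These sample records are hypothetical:

```python
# Labelled: every input is paired with the output we want predicted.
labelled = [
    {"image": "img_001.jpg", "label": "alien"},
    {"image": "img_002.jpg", "label": "human"},
]

# Unlabelled: inputs only; the machine must find structure on its own.
unlabelled = ["img_003.jpg", "img_004.jpg", "img_005.jpg"]
```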
Labelled data is what drives supervised learning, the first of three types of machine learning, while unlabelled data drives unsupervised learning. Let’s take a closer look at these:
Supervised learning
Let’s say a data company wants to forecast the future market share of a product in ten years’ time. What they’ll have to do is feed raw historic sales data from reliable sources—we’ll call this the training data—into machines to establish a predictive model for future datasets.
At the pre-processing stage, data scientists will assign input and output labels to this raw sales data—and use this labelled data to train and test the model. When the model has gauged the relationship between input and output, it can then reliably classify new and unseen datasets to predict outcomes: forecasting.
It’s therefore called “supervised learning” because humans are supervising at least part of the learning process, the labelling of inputs and outputs. It’s often labour-intensive and expensive, but still offers a high degree of accuracy.
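A minimal sketch of that forecasting workflow, with invented sales figures and scikit-learn standing in for whatever tooling a real data company would use:

```python
# Supervised learning: labelled history in, a predictive model out.
from sklearn.linear_model import LinearRegression

# Input label: year; output label: market share % (hypothetical data)
years = [[2015], [2016], [2017], [2018], [2019], [2020]]
share = [4.1, 4.8, 5.9, 6.5, 7.4, 8.2]

model = LinearRegression().fit(years, share)   # training on labelled data
print(model.predict([[2030]]))                 # forecast ten years on
```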
Unsupervised learning
As the name suggests, this method of learning is more “hands-off” and is used to identify patterns and trends in raw, unlabelled training data, or to cluster similar data into groups.
People are still involved in the learning process, but not to the same degree as in supervised learning. They’ll mostly set hyperparameters for the models themselves, such as the number of cluster points, but they won’t be involved in the data processing itself.
While this form of learning is less labour-intensive than its supervised counterpart, the outputs still need to be scrutinised for accuracy. Still, there’s a place for unsupervised learning in better understanding datasets, as it’s able to answer questions about unseen trends and relationships within the data itself.
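Here’s what setting that “number of cluster points” hyperparameter looks like in practice, sketched with scikit-learn’s KMeans on made-up customer data:

```python
# Unsupervised learning: no labels, only a hyperparameter (n_clusters).
from sklearn.cluster import KMeans

# Hypothetical customers: [age, monthly_spend]
customers = [[22, 30], [25, 35], [47, 310], [52, 290], [23, 40], [50, 305]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # e.g. [0 0 1 1 0 1]: two groups emerge from the data
```

Note that no labels were supplied: the groupings emerge from the data itself, and it’s up to a human to decide whether they’re meaningful.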
Reinforcement learning
This form of learning trains models to make a sequence of decisions, with the aim of achieving a certain goal in an uncertain, complex environment.
It’s sort of like an AI playing the game Prince of Persia, using trial and error to try and finish the game by beating all the levels. Every time the machine overcomes a challenge in this “game,” it’s rewarded. And every time it fails to overcome one, it’s penalised. The end goal is to maximise the total reward, which essentially means completing the game.
In this interaction, the designer sets the parameters for reward and penalty. They do not, however, give the model any hints as to how to actually solve the game—the onus is entirely on the machine to perform the task successfully, and to progress.
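A minimal tabular Q-learning sketch makes that reward-and-penalty loop concrete. The one-dimensional “corridor” below stands in for the game, and the reward values are our own choices; everything else the agent discovers by trial and error:

```python
import numpy as np

# States 0..4 in a corridor; reaching state 4 "completes the game".
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # learned value of each move
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration

rng = np.random.default_rng(0)
for episode in range(200):
    s = 0
    while s != 4:
        # Explore sometimes, otherwise exploit what's been learned
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else -0.01        # reward the goal, penalise dawdling
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # states 0-3 learn "move right"; state 4 is the goal
```

No hints are given about how to solve the corridor; the agent converges on “always move right” purely by chasing reward and avoiding penalty.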
A good example of reinforcement learning can be found in the film “Her”, starring Joaquin Phoenix. Phoenix’s character receives an AI-powered smart assistant that uses reinforcement learning to navigate his likes and dislikes, until it’s effectively able to date him like a normal person, with similar real-life complications.
Real-life applications of machine learning and AI
Having covered all the basics of machine learning, let’s now take a look at some of its real-life applications:
Chef Watson
Sector: Culinary/creative
If you’re wondering whether or not a computer can actually be creative, well, it’s definitely up for debate.
But IBM’s Chef Watson certainly makes a good case for it. This master of the culinary arts can automatically design and discover recipes that are healthy, tasty and, not to put too fine a point on it, entirely new.
Basically, Chef Watson sifts through thousands of digital recipes to develop models that help it understand food at the molecular level, including cultural nuances and what people like and dislike. It’s then able to use these insights to develop new recipes that are sure to delight foodies.
Chef Watson is also a published author, with its book “Cognitive Cooking with Chef Watson” available to purchase on Amazon.
AI: a machine that can come up with recipes on its own
Machine learning: training an algorithm to read and decipher information from existing recipes to come up with original ones
American Express
Sector: Finance/banking
When it comes to banking technology, American Express has always been ahead of the curve. It was among the first companies to actually purchase a computer back in 1961 and was a pioneer in online banking.
Nowadays, Amex has taken it a step further: using AI to detect and prevent financial fraud, which was responsible for over £700 million in losses back in 2020. Using recurrent neural networks with long short-term memory (LSTM), Amex has developed a deep learning model that’s able to identify and flag anomalies in millions of daily transactions, all in real time.
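This is not Amex’s actual system, but a toy PyTorch sketch of the general shape of such a model: an LSTM reads a sequence of transaction amounts, predicts the next one, and a large prediction error flags a possible anomaly. All the data here is invented:

```python
import torch
import torch.nn as nn

class NextAmount(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # predict the next amount

# Hypothetical "normal" card activity: amounts between 10 and 30
seqs = torch.rand(64, 8, 1) * 20 + 10
inputs, targets = seqs[:, :-1], seqs[:, -1]   # predict the 8th from the first 7

model = NextAmount()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):
    loss = nn.functional.mse_loss(model(inputs), targets)
    opt.zero_grad(); loss.backward(); opt.step()

# Score new sequences: an out-of-pattern spike produces a huge error
normal = torch.full((1, 8, 1), 20.0)
spiked = normal.clone(); spiked[0, -1, 0] = 900.0
for name, seq in [("normal", normal), ("spiked", spiked)]:
    err = (model(seq[:, :-1]) - seq[:, -1]).abs().item()
    print(name, "prediction error:", round(err, 2))  # big error -> flag it
```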
If only they were accepted at more independent retailers in the UK.
AI: a system to tackle cyberthreats
Machine learning: training algorithms to detect fraudulent activity online and immediately flag it to cybersecurity teams
Infervision
Sector: Healthcare
Radiologists around the world must review hundreds of scans daily to look for early signs of cancer, which is a tall order. Tediousness aside, fatigue can lead to fatal errors of negligence.
Infervision is a leading AI medi-tech company focused on developing products that detect and prevent disease, manage patients and contribute meaningfully to medical research.
To that end, they’ve trained algorithms to mimic the work of radiologists, allowing them to diagnose cancer more accurately and efficiently.
AI: a machine that can detect cancer faster and more efficiently
Machine learning: training algorithms to find cancerous patterns from previously existing data and instantly flag them to medical practitioners
EV Volumes
Sector: Automotive
Internal Combustion Engine vehicles have been around for long enough for us to have a firm grasp of how they operate, including the various engine and fuel types. The same can’t really be said for electric vehicles, or EVs for short.
From poor charging infrastructure to deplorable battery performance, there are still plenty of unknowns in this automotive niche...
EV Volumes is attempting to close this knowledge gap by using big data to produce market intelligence insights and forecasts for sales volumes, market penetration, battery shipment and demand, charging infrastructure, and current and future offerings, working towards a better understanding of EV adoption.
AI: an accurate forecasting tool for electric vehicles
Machine learning: training algorithms to differentiate between fuel types and classify electric vehicles, thereby producing accurate forecasts
Global Fishing Watch
Sector: Service/Geolocation
As the population continues to grow and the threat of depleting resources looms ever larger, a three-way partnership between Oceana, SkyTruth and Google resulted in the formation of Global Fishing Watch, the first open-access digital platform for visualising and analysing vessel-based activity at sea.
Combining big data with satellite data, they’ve created 22 million data points to show where ships around the world are at any given point in time. The machine learning algorithm was trained to identify why a vessel was out at sea in the first place, helping to determine illegal fishing patterns and protect the ocean.
AI: a system that can detect illegal fishing activity
Machine learning: training algorithms to differentiate reasons why vessels are out at sea, thereby flagging illegal fishing activity
Although terms like AI, machine learning and neural networks are sometimes used interchangeably, knowing the differences between them is crucial to understanding future developments in these technologies and how we, as a population, are adapting to them. In a nutshell:
AI is the wider, top-level goal, whether it’s creating a system to protect the oceans, or a machine that can save lives.
Machine learning is the specific scientific method put in place to accomplish this goal, by training algorithms to produce models that can then mimic human activity.
We hope that now, the next time the conversation comes up at the dinner table, you’ll know exactly what to say.