How Data Mining Works: A Guide

Data mining is the process of understanding data by cleaning raw data, finding patterns, creating models, and testing those models. It draws on statistics, machine learning, and database systems. Because data mining often spans multiple data projects, it is easy to confuse with analytics, data governance, and other data processes. This guide will define data mining, share its benefits and challenges, and review how data mining works.

Data mining has a long history; it emerged alongside computing from the 1960s through the 1980s. Historically, data mining was an intensive manual coding process, and it still requires coding ability and knowledgeable specialists to clean, process, and interpret results today. Data specialists need statistical knowledge and some programming language knowledge to apply data mining techniques accurately. For instance, here are some examples of how companies have used R to answer their data questions. However, some of the manual processes can now be automated with repeatable workflows, machine learning (ML), and artificial intelligence (AI) systems.

Data mining isn’t precisely data analytics

As discussed, data mining may be confused with other data projects. The data mining process includes projects such as data cleaning and exploratory analysis, but it is not just those practices. Data mining specialists clean and prepare the data, create models, test those models against hypotheses, and publish those models for analytics or business intelligence projects. In other words, analytics and data cleaning are parts of data mining, but they are only parts of the whole.

Benefits of data mining

Data mining is most effective when deployed strategically to serve a business goal, answer business or research questions, or form part of a solution to a problem. It helps organizations make accurate predictions, recognize patterns and outliers, and inform forecasting. Further, data mining helps organizations identify gaps and errors in processes, such as bottlenecks in supply chains or improper data entry.

How data mining works

The first step in data mining is almost always data collection. Today’s organizations can collect records, logs, website visitors’ data, application data, sales data, and more every day. Collecting and mapping data is a good first step in understanding the limits of what can be done with and asked of the data in question. The Cross-Industry Standard Process for Data Mining (CRISP-DM) is an excellent guideline for starting the data mining process. This standard was created in the late 1990s and is still a popular framework for organizations that are just getting started.

The 6 CRISP-DM phases

CRISP-DM comprises a six-phase workflow. It was designed to be flexible: data teams are allowed and encouraged to move back to a previous phase when needed. The model also leaves room for software platforms that help perform or augment some of these tasks.

1. Business understanding

Comprehensive data mining projects start by identifying project objectives and scope. The business stakeholders will ask a question or state a problem that data mining can answer or solve.

2. Data understanding

Once the business problem is understood, it is time to collect the data relevant to the question and get a feel for the data set. This data often comes from multiple sources, including structured data and unstructured data. This stage may include some exploratory analysis to uncover some preliminary patterns. At the end of this phase, the data mining team has selected the subset of data for analysis and modeling.
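
To make this concrete, here is a minimal exploratory sketch in R. It assumes a hypothetical CSV export of customer records; the file name and columns are illustrative, not from any specific project.

    # Load a hypothetical extract of customer records and get a feel for it
    customers <- read.csv("customers.csv")

    str(customers)               # column types and sample values
    summary(customers)           # basic distributions for each column
    colSums(is.na(customers))    # where values are missing

Even a quick pass like this surfaces missing values, unexpected types, and ranges that shape which subset of data is worth carrying into modeling.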

3. Data preparation

This phase begins the more intensive work. Data preparation involves building the final data set, which includes all the relevant data needed to answer the business question. Stakeholders identify the dimensions and variables to explore, and the team prepares the final data set for model creation.
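
Continuing the hypothetical example above, a data preparation step in R might look like the sketch below. The column names (age, education, occupation_level) are assumptions used only for illustration.

    library(dplyr)

    # Keep only complete records and the variables chosen with stakeholders
    model_data <- customers %>%
      filter(!is.na(age), !is.na(education), !is.na(occupation_level)) %>%
      mutate(education = factor(education),
             occupation_level = factor(occupation_level)) %>%
      select(age, education, occupation_level)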

4. Modeling

In this phase, you’ll select the appropriate modeling techniques for the given data. These techniques can include clustering, predictive models, classification, estimation, or a combination. Front Health used statistical modeling and predictive analytics to decide whether to expand healthcare programs to other populations. You may have to return to the data preparation phase if the modeling technique you select requires different variables or differently prepared sources.
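
As a rough sketch of this phase, the example below fits a simple decision tree with the rpart package on the hypothetical model_data prepared earlier, holding out part of the data for the evaluation phase. Both the package and the variables are illustrative choices, not the only options.

    library(rpart)

    # Hold out 20% of the prepared data for evaluation later
    set.seed(42)
    train_idx <- sample(nrow(model_data), size = floor(0.8 * nrow(model_data)))
    train <- model_data[train_idx, ]
    test  <- model_data[-train_idx, ]

    # Fit a classification tree predicting occupation level from age and education
    fit <- rpart(occupation_level ~ age + education, data = train, method = "class")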

5. Evaluation

After creating the models, you need to test them and measure how well they answer the question identified in the first phase. The model may surface facets of the problem that were not accounted for, and you may need to revise the model or the question itself. This phase is designed to let you review progress so far and ensure the project is on track to meet the business goals. If it isn’t, you may need to move back to previous phases before the project is ready for the deployment phase.
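
Continuing the hypothetical decision-tree example, evaluation could be as simple as checking the model’s predictions against the held-out test set.

    # Predict on the held-out data and compare against the true labels
    pred <- predict(fit, newdata = test, type = "class")

    table(predicted = pred, actual = test$occupation_level)   # confusion matrix
    mean(pred == test$occupation_level)                       # overall accuracy

If the accuracy or the confusion matrix shows the model is missing the business question, that is the signal to loop back to data preparation or modeling.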

6. Deployment

Finally, once the model is accurate and reliable, it is time to deploy it in the real world. Deployment can take place within the organization, the model can be shared with customers, or the results can be turned into a report that demonstrates reliability to stakeholders. The work doesn’t end when the last line of code is complete; deployment requires careful thought, a roll-out plan, and a way to make sure the right people are appropriately informed. The data mining team is responsible for making sure the audience understands the project.

Types of data mining techniques

Data mining includes multiple techniques for answering a business question or helping solve a problem. This section introduces two common data mining techniques and is not comprehensive.

Classification

The most common technique is classification. To use it, identify a target variable and divide that variable into categories at an appropriate level of detail. For example, the variable ‘occupation level’ might be split into ‘entry-level’, ‘associate’, and ‘senior’. With other fields such as age and education level, you can train your model to predict which occupation level a person is most likely to have. If you add an entry for a recent 22-year-old graduate, the model could automatically classify that person as ‘entry-level’, as sketched below. Insurance and financial institutions such as PEMCO Insurance have used classification to train their algorithms to flag fraud and monitor claims.
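
Using the hypothetical occupation-level model fitted in the modeling phase, scoring that new graduate is a single predict() call. The education value here is assumed to match one of the levels seen in training; all values are illustrative.

    # A new record: a 22-year-old recent graduate (values are illustrative)
    new_grad <- data.frame(age = 22, education = "bachelor")
    new_grad$education <- factor(new_grad$education, levels = levels(train$education))

    # Classifies the new record, e.g. as 'entry-level' if the tree learned that pattern
    predict(fit, newdata = new_grad, type = "class")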

Clustering

Clustering is another common technique: grouping records, observations, or cases by similarity. Unlike classification, there is no target variable; clustering simply separates the data set into subgroups. This method can include grouping records of users by geographic area or age group. Typically, clustering the data into subgroups is preparation for analysis, and the subgroups become inputs for a different technique.
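
A minimal clustering sketch in R, assuming the hypothetical customers data has numeric age and annual_income columns, might use k-means to split the records into three subgroups.

    # Standardize the numeric features, then group records into three clusters
    features <- scale(na.omit(customers[, c("age", "annual_income")]))

    set.seed(42)
    clusters <- kmeans(features, centers = 3)

    table(clusters$cluster)    # how many records fall into each subgroup

Each subgroup can then be profiled or passed along as an input to another technique, such as a classification model.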

How to avoid data mining mistakes

Data mining is a powerful and useful process for exploring data to predict patterns or outcomes. Unfortunately, it’s easy to do data mining incorrectly. You shouldn’t use data mining if your leaders lack the analytical or statistical knowledge to oversee the software. Poorly applied mining techniques can produce incorrect models, which lead to inaccurate predictions and decisions. Further, if the team uses personally identifiable information in data mining activities, it must ensure it is following compliance regulations and governance standards.

Who does data mining in an organization?

Data mining specialization is most often a function or capability of data scientist or data analyst roles. Data mining projects tend to be large, with far-reaching, cross-functional project management, and they often ladder up to analytics or business analysis teams. Some organizations look to data mining specialists to build machine learning or artificial intelligence scripts, so proficiency in these areas is often a core competency. Within research organizations or academia, data mining specialists are likely to be called data scientists or analysts, and they can work within a single lab or as part of a service center or center-of-excellence team serving many labs.

Data mining and R

Our customers, partners, and researchers have used data mining and R to innovate and maximize productivity. For example, Wells Fargo needed to clean up user data from 70 million customers to gain clear insights. Their data team was able to use Tableau and R to maximize their computing power and complete major projects much faster than with traditional tools. Modern platforms empower users to get deep into data mining without overwhelming data teams. Learn more about using R in your data mining projects.