
AI 101

AI, Machine Learning, Neural Networks, and Deep Learning

Artificial intelligence (AI), machine learning, neural networks, and deep learning are now ubiquitous. Unfortunately, the relationships among and definitions of these disciplines remain unclear to many. To help alleviate this confusion, this explainer lays out how these fields interrelate and gives a broad overview of each discipline and its relevance to policymakers.

Relationship Between AI, Machine Learning, Neural Networks, and Deep Learning

AI is the foundation from which the other subdisciplines originated. Machine learning (ML) is a subset of AI, neural networks are a subset of machine learning, and deep learning is a subset of neural networks. As such, machine learning is a type of AI, but not all AI is machine learning. Understanding this difference is important because it allows one to understand if a policy will impact a specific discipline, like ML, or if it will impact all the disciplines and industries that use them.

Figure 1 – Graphical representation of the relationship between AI and select subfields.

Examples of AI Subdiscipline Applications

AI Subfield | Example Use Case
Machine Learning | Ad targeting programs
Neural Network | Speech-to-text transcription
Deep Learning | Self-driving cars
AI | All of the above

Note: Neither the subfields nor the use cases listed here are exhaustive.

Figure 2 – Examples of AI applications in the real world.

Key Takeaways

Understanding the differences between machine learning, neural networks, and deep learning on a detailed level quickly becomes challenging due to the complexity of these technologies. However, understanding a few high-level takeaways can help policymakers craft more effective and nuanced legislative frameworks. Key takeaways include:

  • AI is the broadest, foundational category that the other technologies fit into, like a series of nesting dolls.
    • Deep learning is a subset of neural networks, which are a subset of machine learning, which is a subset of AI.
  • Machine learning is a process by which a program, in an automated fashion, learns which variables are most important in completing a task and creates an algorithm to reflect that importance.
    • This learning can create novel solutions, or unintended behaviors/outputs.
  • Neural networks are a form of machine learning in which the program creates a secondary level of variables referred to as a hidden layer.
    • Some neural networks have a tradeoff between explainability and accuracy.
  • Deep learning is a subset of neural networks with three or more hidden layers.

Artificial Intelligence

Understanding AI is tricky, in part, because there is no universally accepted definition. In the absence of one, many governments and multilateral organizations use similar language to describe AI.

The OECD defines AI as:
“…a machine-based system that can, for a given set of human defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

Citing the National Artificial Intelligence Act of 2020, the State Department defines AI as:
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”

House Resolution 1250, which established the principles that should guide the national artificial intelligence strategy of the United States, defines AI as:
“…the ability of a computer system to solve problems and to perform tasks that would otherwise require human intelligence.”

Lastly, the paper “What is Artificial Intelligence?” authored by John McCarthy, a founding father of AI, describes AI as:

“…[T]he science and engineering of making intelligent machines, especially intelligent computer programs…. Intelligence is the computational part of the ability to achieve goals in the world.”

The expansive nature of these definitions is important because differentiating between AI use cases can help policymakers narrowly tailor legislation and mitigate potential unintended negative impacts.

The field of AI has existed for decades. In fact, the earliest AI program was arguably invented in 1951. Today, there is some form of AI in many popular websites, apps, and programs, some examples of which are identified in Figure 3.

Popular Applications of AI

Application Type | AI Use Case
Web Browser | Returning pertinent search results given idiomatic queries
Social Media | Showing a user content that the user is more likely to engage with
Music/Movie Streaming | Categorizing content by sub-genre

Figure 3 – How some popular applications leverage AI.

AI is generally categorized as either “narrow/weak” or “general/strong.” While there is no universally agreed-upon definition, narrow or weak AI solves specific problems but has trouble functioning outside the areas for which it was designed. General or strong AI is considered a step above narrow AI, performs well outside of the areas for which it was designed, and can learn how to fulfill new objectives. There is still debate as to how close we are to achieving general or strong AI, but in a recent survey of experts, half predicted it would arrive before 2061, and 90% predicted it would arrive within the next 100 years.

Machine Learning (ML)

ML is a discipline where the program or machine analyzes datasets, identifies patterns, and “learns” how to identify an optimal solution. This branch of AI is frequently leveraged in content recommendation and helps many technology platforms decide which posts to show, in which order, and to which users.

To understand how machine learning functions, it can be helpful to consider how people make predictions. If, for instance, at 6:15 pm, your roommate says, “I want a burrito,” and their favorite Mexican restaurant has free deliveries, you could predict that your roommate will order a burrito. This prediction may seem trivial, but the abstract process is surprisingly complicated. Having access to a vast amount of data, you instantaneously assess which variables are important and discard the rest. For example, you understand that “time of day” is important, and “color of your roommate’s shirt” is not. This process, weighing some variables more and others less, is central to machine learning.

Like a person, an ML model needs plenty of observations (“data”) to make an accurate prediction. The first step in creating an ML prediction is to feed the program observations about the scene in question. The program will then divide the data into two groups – training data and testing data. After separating the data, the program will create or train a model using the training data. To create this model (“algorithm”), the program will generate countless simulations where it changes the importance (“weight”) of different variables within the training dataset. For example, the program may initially make predictions in which shirt color is weighted most heavily. It will then see how accurate this model is by comparing its predictions to the actual outcomes seen in the testing data (i.e., how many times is the model correct when shirt color is the most important variable? What about when verbal interest is most important?). This process of changing the importance of variables and testing the results repeats over and over until the machine “learns” which variables are essential (e.g., verbal interest) and which can be minimized (e.g., shirt color). Having learned what matters most, the program can proceed more accurately with its task, in this case, prediction.

One reason for the popularity of ML models is that a human does not need to manage the machine through each step. The machine can follow a pre-established ML process, and the program will automatically generate the final algorithm. Being at least partially self-directed, an ML model can, in many cases, process much more data than a team of humans.

To summarize, machine learning tends to work as follows:

  1. The program is given lots of data (e.g., roommate food ordering observations);
  2. It is given an objective (e.g., predict roommate burrito orders);
  3. Using an iterative process, it figures out the most critical variables and creates a model accordingly;
  4. This model, or algorithm, is then used to fulfill the objective (e.g., predict roommate burrito orders) in the real world.
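The four steps above can be sketched in a few lines of Python. The dataset, variable names, and threshold rule below are invented purely for illustration; real ML libraries use far more sophisticated optimization than this brute-force weight search, but the core loop – adjust weights, re-score, repeat – is the same idea.

```python
# Toy, invented dataset illustrating the four steps above. Each row is one
# observation: (verbal_interest, free_delivery, red_shirt, ordered_burrito).
data = [
    (1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, 0), (0, 0, 0, 0),
    (1, 1, 1, 1), (0, 1, 0, 0), (1, 0, 0, 1), (0, 0, 1, 0),
]

# Steps 1-2: split the observations into training and testing data.
train, test = data[:6], data[6:]

def accuracy(weights, rows, threshold=1):
    """Score a weighting: predict 'orders a burrito' when the
    weighted sum of the variables clears the threshold."""
    correct = 0
    for verbal, delivery, shirt, label in rows:
        score = verbal * weights[0] + delivery * weights[1] + shirt * weights[2]
        correct += (1 if score >= threshold else 0) == label
    return correct / len(rows)

# Step 3: the "learning" - try every candidate weighting and keep
# whichever one predicts the training data best.
candidates = [(a, b, c) for a in range(3) for b in range(3) for c in range(3)]
best = max(candidates, key=lambda w: accuracy(w, train))

# Step 4: apply the learned model to unseen (testing) data.
print("learned weights:", best)   # verbal interest matters; shirt color does not
print("test accuracy:", accuracy(best, test))
```

Note how the program, not a human, decides that verbal interest deserves all the weight and shirt color none – this automated weighting is the “learning” in machine learning.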

While ML predictions can improve performance in some applications, it is vital to remember that ML models reflect the data they are “trained” on. This means the model may suffer if the data is biased or incomplete. Additionally, ML models can learn behaviors their programmers never intended. The ability to learn allows ML to create novel solutions, but it also can lead to biased or otherwise malformed systems, hence the need for proper deployment and regulation. Understanding how machine learning works, we can now review neural networks and deep learning, both subsets of the ML field.

Neural Networks

Neural networks are an advanced subset of machine learning. Like the standard ML discussed in the prior section, neural networks analyze datasets, identify patterns, and “learn” how to identify optimal solutions. What sets neural networks apart from basic machine learning is that they take the list of variables from the dataset and create new variables to increase the accuracy of the end prediction.

Neural networks are organized into a series of nodes and connections, which are analogous to the human brain’s neurons and synapses (see Figure 4).

Figure 4 – Illustration of a neural network’s nodes and connections.

In machine learning, we had a series of inputs (e.g., the observations on a roommate’s burrito ordering behavior) and an output (e.g., the prediction of the roommate’s ordering behavior). With neural networks, we are adding a new type of variable called a “hidden layer”. The hidden layer is another set of variables created by the program based on the first set of inputs. This hidden layer allows neural networks to, in some instances, model more complex behavior than the standard machine learning model described in the prior section.
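As a rough sketch of this idea, the tiny network below turns three inputs into two new hidden-layer variables before making a final prediction. The weights are hand-picked placeholders for illustration only; a real network would learn them from data.

```python
import math

def sigmoid(x):
    """Squash any number into the 0-to-1 range."""
    return 1 / (1 + math.exp(-x))

# Illustrative, hand-picked weights (a real network learns these from data).
hidden_weights = [[0.8, 0.2, -0.5],   # weights feeding hidden node 1
                  [-0.3, 0.9, 0.4]]   # weights feeding hidden node 2
output_weights = [1.2, -0.7]

def predict(inputs):
    # Hidden layer: each hidden node is a NEW variable derived from the inputs.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in hidden_weights]
    # Output layer: combine the hidden-layer variables into one prediction.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

print(predict([1.0, 0.0, 1.0]))
```

The hidden nodes are the “hidden layer” from Figure 4: intermediate variables the network invents on the way from inputs to output.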

One tradeoff that occurs with neural networks is that as the model tries to predict complex behavior more accurately, it can lose some explainability (i.e., the ability of a human user to understand why an algorithm produced a particular output). Sometimes, the most accurate model will leverage variables that are difficult to interpret, making it hard to understand why they were used. In situations like this, whoever is employing the model may have to choose between a “black box” model with limited explainability and a less accurate model that is more intelligible. While this tradeoff does not always exist, in some instances requiring a system to be explainable can reduce the model’s accuracy, and regulations along these lines may negatively impact commercial activity and stifle innovation. Appreciating this, we can move on to the final discipline – deep learning.

Deep Learning

While the implementation and processing requirements for deep learning models are the most complex of the disciplines covered in this piece, their description is the simplest. Deep learning is simply a neural network with three or more hidden layers. Deep learning models are responsible for many of the most exciting innovations in the AI space such as self-driving cars, advanced robotics, and cutting-edge chatbots.
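Sticking with the same simplified sketch, “adding depth” just means stacking hidden layers so that each layer’s outputs become the next layer’s inputs. All weights and layer sizes below are illustrative placeholders, not a real trained model.

```python
import math

def sigmoid(x):
    """Squash any number into the 0-to-1 range."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    """One layer: each node combines all of its inputs with its own weights."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Three hidden layers plus an output layer - by the three-or-more rule,
# this qualifies as a (very small) deep learning model.
network = [
    [[0.5, -0.2], [0.1, 0.8]],   # hidden layer 1 (2 inputs -> 2 nodes)
    [[0.3, 0.3], [-0.6, 0.9]],   # hidden layer 2
    [[0.7, -0.1], [0.2, 0.4]],   # hidden layer 3
    [[1.0, -1.0]],               # output layer (single prediction)
]

def predict(inputs):
    for weights in network:
        inputs = layer(inputs, weights)  # each layer feeds the next
    return inputs[0]

print(predict([1.0, 0.5]))
```

Structurally, nothing new happens here compared with the single-hidden-layer network – there is simply more of it, which is what lets deep models capture more complex patterns (at the cost of far more computation and data).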

Figure 5 – Visualization of a simple deep learning model. Like neural networks, real-world deep learning models can have substantially more layers and variables than are shown here.


Navigating the intricacies of AI, machine learning, neural networks, and deep learning is daunting. However, one can learn important nuances that can be applied to policymaking without digging into the math. Understanding the nesting doll relationship between the subdisciplines, how the “learning” in machine learning happens, and the explainability implications of hidden layers are important concepts that policymakers can learn and apply to their work. AI is a complex field, but with a little effort, all policymakers can learn enough to engage effectively in this area.

For more on AI, please visit the Bipartisan Policy Center’s page on Artificial Intelligence, which includes additional information on this important technology.
