
Naive Bayes is a machine learning algorithm used to solve classification problems. It is based on Bayes’ Theorem. It is one of the simplest yet most effective ML algorithms in use and finds applications in many industries. 

Suppose you have to solve a classification problem: you have created the features and generated the hypothesis, but your superiors want to see the model quickly. You have lakhs of data points and many variables in the training dataset. The best solution for this situation would be the Naive Bayes classifier, which is much faster to train than most other classification algorithms. 

In this article, we’ll discuss this algorithm in detail and find out how it works. We’ll also discuss its advantages and disadvantages, along with its real-world applications, to understand how essential this algorithm is.

Let’s get started:

Naive Bayes Explained

Naive Bayes uses Bayes’ Theorem and assumes that all predictors are independent. In other words, this classifier assumes that the presence of one particular feature in a class doesn’t affect the presence of another. 

Here’s an example: you’d consider a fruit to be an orange if it is round, orange in colour, and around 3.5 inches in diameter. Even if these features depend on each other in reality, the classifier treats each of them as contributing independently to the conclusion that this particular fruit is an orange. That’s why this algorithm has ‘Naive’ in its name. 

Building a Naive Bayes model is quite simple, and it scales well to vast datasets. Moreover, despite its simplicity, this model is known to outperform even highly sophisticated classification methods in some cases. 

Here’s the equation for Naive Bayes:

P(c | x) = [P(x | c) P(c)] / P(x)

P(c | x1, …, xn) = [P(x1 | c) × P(x2 | c) × … × P(xn | c) × P(c)] / P(x)

Here, P(c | x) is the posterior probability of the class (c) given the predictor (x). P(c) is the prior probability of the class, P(x) is the prior probability of the predictor, and P(x | c) is the likelihood, i.e. the probability of the predictor given the class (c). 

Apart from considering the independence of every feature, Naive Bayes also assumes that they contribute equally. This is an important point to remember. 
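To make this concrete, here is a minimal Python sketch (an addition, not from the original article) of the scoring rule above: multiply the class prior by each feature’s likelihood and pick the class with the highest score. The priors and likelihoods below are made-up numbers chosen purely to illustrate the fruit example.

```python
# Minimal sketch of the Naive Bayes scoring rule: P(c) * P(x1|c) * ... * P(xn|c).
# The priors and likelihoods below are made-up numbers for illustration only.

def nb_score(prior, likelihoods):
    """Unnormalised posterior for one class: prior times the product of likelihoods."""
    score = prior
    for p in likelihoods:
        score *= p
    return score

priors = {"orange": 0.5, "not_orange": 0.5}
likelihoods = {
    "orange":     [0.8, 0.7, 0.6],   # P(round|c), P(orange colour|c), P(~3.5 in|c)
    "not_orange": [0.3, 0.1, 0.2],
}

scores = {c: nb_score(priors[c], likelihoods[c]) for c in priors}
prediction = max(scores, key=scores.get)
print(scores)       # {'orange': 0.168, 'not_orange': 0.003}
print(prediction)   # 'orange'
```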

How does Naive Bayes Work?

To understand how Naive Bayes works, we should discuss an example. 

Suppose we want to find stolen cars and have the following dataset:

Serial No. | Color | Type | Origin | Was it Stolen?
1 | Red | Sports | Domestic | Yes
2 | Red | Sports | Domestic | No
3 | Red | Sports | Domestic | Yes
4 | Yellow | Sports | Domestic | No
5 | Yellow | Sports | Imported | Yes
6 | Yellow | SUV | Imported | No
7 | Yellow | SUV | Imported | Yes
8 | Yellow | SUV | Domestic | No
9 | Red | SUV | Imported | No
10 | Red | Sports | Imported | Yes

Based on our dataset, the algorithm makes the following assumptions:

  • It assumes that every feature is independent. For example, the colour ‘Yellow’ of a car has nothing to do with its Origin or Type. 
  • It gives every feature the same level of importance. For example, Color is not weighted more heavily than Type or Origin; every feature contributes equally to the result.

Now, using our dataset, we have to classify whether a car gets stolen based on its features. Each row is an individual example, and the columns represent the features of every car. In the first row, we have a stolen Red Sports car of Domestic origin. We’ll find out whether a Red Domestic SUV would be stolen or not (our dataset doesn’t have an entry for a Red Domestic SUV).

We can rewrite the Bayes Theorem for our example as:

P(y | X) = [P(X | y) P(y)] / P(X)

Here, y stands for the class variable (Was it Stolen?), which tells us whether the car was stolen or not given the conditions. X stands for the features. 

X = (x1, x2, x3, …, xn)

Here, x1, x2, …, xn stand for the features; in our case, they map to Color, Type, and Origin. Now, we’ll substitute X and expand using the chain rule together with the independence assumption to get the following:

P(y | x1, …, xn) = [P(x1 | y) P(x2 | y) … P(xn | y) P(y)] / [P(x1) P(x2) … P(xn)]

You can obtain the values for each term from the dataset and plug them into the equation. The denominator stays the same for every entry in the dataset, so we can drop it and work with proportionality instead:

P(y | x1, …, xn) ∝ P(y) × P(x1 | y) × P(x2 | y) × … × P(xn | y)

In our example, y only has two outcomes, yes or no. 

y = argmax_y P(y) × P(x1 | y) × P(x2 | y) × … × P(xn | y)

We can create a Frequency Table for each feature to count how often each value occurs with each class. Then, we’ll convert the frequency tables into Likelihood Tables and use the Naive Bayes equation to find each class’s posterior probability. Our prediction is the class with the highest posterior probability. Here are the Frequency and Likelihood Tables:

Frequency Table of Color:

Color | Was it Stolen (Yes) | Was it Stolen (No)
Red | 3 | 2
Yellow | 2 | 3

Likelihood Table of Color:

Color | Was it Stolen [P(Yes)] | Was it Stolen [P(No)]
Red | 3/5 | 2/5
Yellow | 2/5 | 3/5

Frequency Table of Type:

Type | Was it Stolen (Yes) | Was it Stolen (No)
Sports | 4 | 2
SUV | 1 | 3

Likelihood Table of Type:

Type | Was it Stolen [P(Yes)] | Was it Stolen [P(No)]
Sports | 4/5 | 2/5
SUV | 1/5 | 3/5

Frequency Table of Origin:

Origin | Was it Stolen (Yes) | Was it Stolen (No)
Domestic | 2 | 3
Imported | 3 | 2

Likelihood Table of Origin:

Origin | Was it Stolen [P(Yes)] | Was it Stolen [P(No)]
Domestic | 2/5 | 3/5
Imported | 3/5 | 2/5
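As a sanity check, here is a short Python sketch (an addition, not from the original article) that derives these frequency and likelihood tables directly from the ten-row dataset above, using only the standard library.

```python
from collections import Counter, defaultdict

# The ten-row dataset from the example: (Color, Type, Origin, Was it Stolen?)
rows = [
    ("Red", "Sports", "Domestic", "Yes"), ("Red", "Sports", "Domestic", "No"),
    ("Red", "Sports", "Domestic", "Yes"), ("Yellow", "Sports", "Domestic", "No"),
    ("Yellow", "Sports", "Imported", "Yes"), ("Yellow", "SUV", "Imported", "No"),
    ("Yellow", "SUV", "Imported", "Yes"), ("Yellow", "SUV", "Domestic", "No"),
    ("Red", "SUV", "Imported", "No"), ("Red", "Sports", "Imported", "Yes"),
]

features = ["Color", "Type", "Origin"]
class_counts = Counter(r[-1] for r in rows)              # {'Yes': 5, 'No': 5}

# Frequency tables: counts of (feature value, class), one table per feature.
freq = {f: defaultdict(Counter) for f in features}
for color, typ, origin, label in rows:
    for f, value in zip(features, (color, typ, origin)):
        freq[f][value][label] += 1

# Likelihood tables: P(value | class) = count(value, class) / count(class).
likelihood = {
    f: {v: {c: freq[f][v][c] / class_counts[c] for c in class_counts}
        for v in freq[f]}
    for f in features
}

print(freq["Color"]["Red"])          # Counter({'Yes': 3, 'No': 2})
print(likelihood["Type"]["SUV"])     # {'Yes': 0.2, 'No': 0.6}
```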

Our problem has 3 predictors in X, so according to the equations above, the posterior probability P(Yes | X) is proportional to:

P(Yes | X) ∝ P(Red | Yes) × P(SUV | Yes) × P(Domestic | Yes) × P(Yes)

= 3/5 × 1/5 × 2/5 × 5/10

= 0.024

(Here, P(Yes) = 5/10 because 5 of the 10 cars in the dataset were stolen.)

Similarly, P(No | X) is proportional to:

P(No | X) ∝ P(Red | No) × P(SUV | No) × P(Domestic | No) × P(No)

= 2/5 × 3/5 × 3/5 × 5/10

= 0.072

So, as P(No | X) is higher than P(Yes | X), we classify our Red Domestic SUV as ‘No’ under ‘Was it Stolen?’. 
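For comparison, here is a hedged sketch of the same prediction using scikit-learn’s CategoricalNB, assuming scikit-learn is installed. Smoothing is turned down to a tiny value so the learned likelihoods match the hand-computed tables; with the default alpha=1.0 the numbers shift slightly, but the predicted class stays ‘No’.

```python
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# The same ten cars as in the worked example.
X = [["Red", "Sports", "Domestic"], ["Red", "Sports", "Domestic"],
     ["Red", "Sports", "Domestic"], ["Yellow", "Sports", "Domestic"],
     ["Yellow", "Sports", "Imported"], ["Yellow", "SUV", "Imported"],
     ["Yellow", "SUV", "Imported"], ["Yellow", "SUV", "Domestic"],
     ["Red", "SUV", "Imported"], ["Red", "Sports", "Imported"]]
y = ["Yes", "No", "Yes", "No", "Yes", "No", "Yes", "No", "No", "Yes"]

# CategoricalNB expects integer-coded categories, so encode the strings first.
encoder = OrdinalEncoder()
X_encoded = encoder.fit_transform(X)

# alpha is set near zero so the likelihoods match the hand-built tables.
model = CategoricalNB(alpha=1e-9)
model.fit(X_encoded, y)

query = encoder.transform([["Red", "SUV", "Domestic"]])
print(model.predict(query))        # ['No']
print(model.predict_proba(query))  # roughly [[0.75, 0.25]] for classes ['No', 'Yes']
```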

This example should have shown you how the Naive Bayes classifier works. To get a fuller picture of Naive Bayes, let’s now discuss its advantages and disadvantages:

Advantages and Disadvantages of Naive Bayes

Advantages

  • This algorithm works quickly and can save a lot of time. 
  • Naive Bayes is suitable for solving multi-class prediction problems. 
  • If its assumption of the independence of features holds true, it can perform better than other models and requires much less training data. 
  • Naive Bayes is better suited for categorical input variables than numerical variables.

Disadvantages

  • Naive Bayes assumes that all predictors (or features) are independent, which rarely holds in real life. This limits the applicability of the algorithm in some real-world use cases.
  • This algorithm faces the ‘zero-frequency problem’: it assigns zero probability to a categorical value that appears in the test set but was never seen in the training data. You can use a smoothing technique, such as Laplace smoothing, to overcome this issue (see the sketch after this list).
  • Its probability estimates can be poorly calibrated, so you shouldn’t take the predicted probabilities themselves too seriously; the ranking of classes is usually more reliable than the exact values. 
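To illustrate the smoothing fix mentioned above, here is a small, hypothetical Python sketch of Laplace (add-one) smoothing: every count is increased by alpha, and the denominator grows by alpha times the number of known categories, so unseen values get a small non-zero probability instead of zero. The counts are illustrative only.

```python
# Laplace (add-one) smoothing for a categorical likelihood P(value | class).
# Counts below are illustrative; 'Green' was never seen for this class.
counts_given_yes = {"Red": 3, "Yellow": 2, "Green": 0}

def smoothed_likelihoods(counts, alpha=1.0):
    """Return P(value | class) with add-alpha smoothing over all known values."""
    total = sum(counts.values()) + alpha * len(counts)
    return {value: (c + alpha) / total for value, c in counts.items()}

print(smoothed_likelihoods(counts_given_yes))
# {'Red': 0.5, 'Yellow': 0.375, 'Green': 0.125} -- no zero probabilities left
```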


Applications of Naive Bayes Explained

Here are some areas where this algorithm finds applications:

Text Classification

Most of the time, Naive Bayes finds use in text classification due to its independence assumption and high performance on multi-class problems. It enjoys a higher rate of success than many other algorithms thanks to its speed and efficiency. 
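As a rough illustration (not taken from the original article), here is a minimal text-classification sketch using scikit-learn’s CountVectorizer and MultinomialNB on a tiny made-up corpus; the documents and labels are purely illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, made-up corpus for illustration only.
docs = [
    "win a free prize now", "limited offer claim your prize",
    "meeting agenda for monday", "project status and budget review",
]
labels = ["spam", "spam", "work", "work"]

# Bag-of-words counts feed naturally into a multinomial Naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(docs, labels)

print(model.predict(["claim your free offer"]))   # likely ['spam']
print(model.predict(["monday budget meeting"]))   # likely ['work']
```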

Sentiment Analysis

One of the most prominent areas of machine learning is sentiment analysis, and this algorithm is quite useful there as well. Sentiment analysis focuses on identifying whether the customers think positively or negatively about a certain topic (product or service).

Recommender Systems

Combined with collaborative filtering, the Naive Bayes classifier can power a recommender system that predicts whether a user would like a particular product (or resource) or not. Amazon, Netflix, and Flipkart are prominent companies that use recommender systems to suggest products to their customers. 

Learn More Machine Learning Algorithms

Naive Bayes is a simple and effective machine learning algorithm for solving multi-class problems. It finds use in many prominent areas of machine learning, such as sentiment analysis and text classification. 

If you’re interested in learning more about AI and machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B alumni status, 5+ practical hands-on capstone projects, and job assistance with top firms.
