Natural Language Processing 101: The Basics Simplified!
Every one of us uses a computer almost daily. We use it for many things – making engaging PowerPoint presentations, checking emails, working with spreadsheets, etc. One such use is data analytics. We have been using computers to derive actionable business insights from data. The important point to note here, however, is that most of this data is typically in a structured format, i.e., in the form of numbers, tables, etc. Computers can easily understand such ‘structured’ data and can provide valuable analysis from it. But, as you might notice, most of the ‘data’ in the world is in an ‘unstructured’ format. Finding it difficult to understand what that means? Let me explain!
Unstructured data, in simple language, is data that cannot be expressed using the rows and columns of a spreadsheet. Examples of unstructured data include text in the form of blogs, social media posts, website content, emails, voice recordings, etc. You might ask – why analyze such unstructured data? What can we derive out of it? As it turns out, there is a lot of information available in unstructured data.
One example – a company has launched a new product and wants to know what customers are thinking. The traditional way of doing this is through feedback forms/surveys. However, with such forms/surveys, the sample size is limited. Let us look at other options. With the rise of digitization, customers of the product are most probably ‘talking’ about the product on the web. For example, customers might be writing product reviews on e-commerce websites, or maybe writing an entire blog about it. Now, going through such a huge amount of data manually is daunting and almost impossible. And this is where what we call ‘Natural Language Processing’ comes in!
Natural Language Processing, or NLP, is the approach used to understand and analyze textual data. It is a subfield of Artificial Intelligence. To explain it in layman’s terms: with the help of NLP, machines can also ‘directly’ understand what humans write in text form. Hence the name – Natural Language Processing!
For example, let’s continue with our previous example of a company launching a new product. Using NLP, machines can ‘read’ a customer review on an e-commerce website and classify it as positive or negative. Further, with advanced NLP, a machine can understand which features are most important to customers.
“Ok, Google!”, “Hey Alexa, play my playlist!”, “Siri, set an alarm for 7 o’clock!”. We all know these. They all work on Natural Language Processing. Another example is Google Translate, which translates text from one language to another using NLP. Chatbots, which use NLP, are also rapidly becoming popular.
Businesses are also increasingly relying on NLP for valuable ‘data-driven’ insights. ‘Sentiment Analysis’ is one popular technique that captures the sentiment of customers about a product, brand, or service by analyzing text data available on social media, blogs, and other websites. Companies are also using NLP for ‘Market Intelligence’ – understanding best practices and future trends in a market.
NLP algorithms ‘learn’ as more and more data becomes available. Thus, as more unstructured data becomes available, the accuracy of NLP will increase, and NLP will continue to grow more powerful in the coming years.
Let us now understand how NLP works.
To understand a step-by-step approach to dealing with textual data with the help of NLP, let’s consider the following paragraph of text:
“Assume Jack writes blogs, and on his platform he has given his audience an option to review his blogs. He gets around 100–200 reviews a day on his popular blogs. One day he decides to judge how many people are happy with his thoughts on the blogs and how many dislike them. He opens the review section and manually judges the responses of his readers. He stays there for some time before exhausting his mind after reading only some of the reviews. He gives up the manual approach and hires a firm to give him the overall liking percentage, and also asks them to build some infrastructure so that incoming new reviews will also be classified as positive or negative.”
Our objective here is to apply NLP to the above text and extract valuable information from it using Sentiment Analysis.
Now, there are 7 steps to do the same:
1) Data Collection: This is done by using an API of the website and ‘scraping’ all the reviews. Alternatively, had there been no API, we would have built a web crawler for scraping.
2) Data Preprocessing:
a) Lowercase: The reviews on Jack’s blog are not consistent in case. Thus, all the reviews are converted to lowercase.
b) Removal of punctuation and URLs: Punctuation does not add any meaning to the analysis and thus needs to be removed. URLs are also not needed for performing sentiment analysis, so they are removed from the reviews.
c) Removal of ‘stop-words’: These are words that humans use a lot but that are not useful for performing Sentiment Analysis. Some common stop words are: ‘the’, ‘is’, ‘a’, ‘an’, ‘in’, ‘of’.
d) Stemming: Humans use different forms of the same word for grammatical reasons. However, all those words still carry the same meaning. Stemming reduces all the different forms of a word to a single root form. For example, ‘imagining’, ‘imagination’, ‘imagined’, and ‘imagine’ can all be stemmed to the same root.
e) Tokenizing sentences: splitting reviews into separate sentence tokens. A single review can contain long paragraphs, so it needs to be broken down. If this were a news website, we would filter by POS (part-of-speech) tags, keeping just the proper nouns, as they are the most important in that context.
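As a rough illustration of substeps a) to d), here is a minimal sketch in plain Python. The function name clean_review, the example sentence, and the tiny stop-word set are all illustrative only – in practice you would use NLTK’s full stopwords corpus and its PorterStemmer for the stemming step:

```python
import re
import string

# illustrative subset only; in practice use nltk.corpus.stopwords
STOP_WORDS = {"i", "me", "my", "the", "a", "an", "is", "this", "it", "and"}

def clean_review(text):
    # a) lowercase
    text = text.lower()
    # b) remove URLs, then strip punctuation
    text = re.sub(r'https?://\S+', '', text)
    text = text.translate(str.maketrans('', '', string.punctuation))
    # c) drop stop words; d) stemming (e.g. NLTK's PorterStemmer) would follow here
    return [w for w in text.split() if w not in STOP_WORDS]

print(clean_review("This blog is AMAZING! https://example.com :)"))
# -> ['blog', 'amazing']
```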
3) Annotation: This is the step in which words are classified as either positive or negative. Word frequency tables are also created in this step.
NOTE: The ‘NLTK’ package provides such an annotated corpus.
4) Train-Test Split: This helps in creating an unbiased environment and also helps in measuring the accuracy of different models.
5) Model Building: By now, around 70% of the work has been completed. The classification model(s) are now built on the training data.
6) Model Evaluation: The test-set data is now run through the different models built. A confusion matrix is used to judge the results of the models.
7) Deployment: Once a classification model has been selected, all the reviews are run through the model, and the firm tells Jack the percentage of people liking and disliking his blog!
The above-mentioned are the broad steps that are usually adopted.
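To make the evaluation step concrete, a confusion matrix can be tallied directly from the true labels and the model’s predictions. The labels below are made up purely for illustration:

```python
# hypothetical true labels and model predictions (1 = positive, 0 = negative)
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]

# tally the four cells of the confusion matrix
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(y_true)
print(tp, fp, fn, tn, round(accuracy, 2))  # -> 2 1 1 2 0.67
```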
Now, before applying logistic regression to an NLP problem, let’s understand what a ‘word frequency table’ is.
Based on the annotated corpus, a word frequency table is created, which counts the occurrences of specific words in positive and negative contexts.
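A minimal sketch of such a table, using a tiny made-up labeled corpus (1 = positive, 0 = negative):

```python
from collections import defaultdict

# tiny made-up labeled corpus: (tokens, label) with 1 = positive, 0 = negative
corpus = [
    (["great", "post"], 1),
    (["love", "this", "post"], 1),
    (["boring", "post"], 0),
]

# freqs[(word, label)] counts how often `word` occurs with that label
freqs = defaultdict(int)
for tokens, label in corpus:
    for word in tokens:
        freqs[(word, label)] += 1

print(freqs[("post", 1)], freqs[("post", 0)])  # -> 2 1
```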
Now, let’s get down to it and build a small project to classify sentences as positive or negative.
We will use the NLTK library and, for data, the twitter_samples corpus included in the library.
A) Loading the necessary libraries in Python

import nltk
from nltk.corpus import twitter_samples
import matplotlib.pyplot as plt
import random

B) Saving positive and negative tweets

nltk.download('twitter_samples')
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')

C) Preprocessing the raw text: downloading the ‘stop-words’ data

nltk.download('stopwords')
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer

D) Removing URLs and hashtags

tweet = all_positive_tweets[0]  # take one example tweet
# remove old-style retweet text "RT"
tweet2 = re.sub(r'^RT[\s]+', '', tweet)
# remove hyperlinks
tweet2 = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet2)
# remove hashtags (only removing the # sign from the word)
tweet2 = re.sub(r'#', '', tweet2)

E) Tokenizing the tweets: tweets get broken into individual words

# instantiate the tokenizer class
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True, reduce_len=True)
# tokenize tweets
tweet_tokens = tokenizer.tokenize(tweet2)

F) Removing stop words and punctuation, then stemming: all the stop words mentioned above are removed along with the punctuation, and each remaining word is reduced to its root form.

stopwords_english = stopwords.words('english')
tweets_clean = []
for word in tweet_tokens:
    if (word not in stopwords_english and word not in string.punctuation):
        tweets_clean.append(word)
# stem each remaining word
stemmer = PorterStemmer()
tweets_stem = [stemmer.stem(word) for word in tweets_clean]

Now, preprocessing is done!
NOTE: A helper function process_tweet() (for example, from a utils module) can be used to perform all of the above steps.
Splitting the data into train and test sets
import numpy as np

# labels: 1 for positive tweets, 0 for negative tweets
train_y = np.append(np.ones((len(train_pos), 1)), np.zeros((len(train_neg), 1)), axis=0)
test_y = np.append(np.ones((len(test_pos), 1)), np.zeros((len(test_neg), 1)), axis=0)
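The model trained below also expects a feature vector per tweet. One common choice, assumed here since the original post does not show this step, is a 3-dimensional vector: a bias term plus the tweet’s total positive-class and negative-class word frequencies, which matches the np.zeros((3, 1)) initialization of theta used later. The helper name extract_features and the frequency values are illustrative:

```python
import numpy as np

def extract_features(tokens, freqs):
    # x = [bias, total positive-label counts, total negative-label counts]
    x = np.zeros(3)
    x[0] = 1.0  # bias term
    for word in tokens:
        x[1] += freqs.get((word, 1), 0)
        x[2] += freqs.get((word, 0), 0)
    return x

# made-up frequency table: (word, label) -> count
freqs = {("happy", 1): 12, ("happy", 0): 1, ("sad", 0): 9}
print(extract_features(["happy", "sad"], freqs))
```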
“In statistics, the logistic model (or logit model) is used to model the probability of a certain class or event existing such as pass/fail, win/lose, alive/dead, or healthy/sick.” (Source: Wikipedia)
Now, a logistic model is built on the training data:

def sigmoid(z):
    h = 1 / (1 + np.exp(-z))
    return h

def gradientDescent(x, y, theta, alpha, num_iters):
    m = len(x)
    for i in range(0, num_iters):
        z = np.dot(x, theta)
        h = sigmoid(z)
        # cross-entropy cost
        J = (-1.0 / m) * (np.dot(np.transpose(y), np.log(h)) + np.dot(np.transpose(1 - y), np.log(1 - h)))
        # gradient step
        theta -= alpha / float(m) * (np.dot(np.transpose(x), (h - y)))
    J = float(J)
    return J, theta

# X: feature matrix built from the word frequency table
Y = train_y
J, theta = gradientDescent(X, Y, np.zeros((3, 1)), 1e-9, 1500)

Testing a new tweet on the model built:

my_tweet = 'Hi!, I am happy to tell you that I have built my first model using Logistic Regression :)'
# model output: [[0.83739526]] -> Positive sentiment

Well, that’s it. The tweets will now be classified. Had there been no NLP, this would have seemed impossible!
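Once theta is learned, classifying a new tweet is just the sigmoid of the dot product of its feature vector with theta. A self-contained sketch, with made-up weights and a made-up feature vector of the form [bias, pos_count, neg_count]:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def predict(x, theta):
    # probability that the tweet is positive
    return sigmoid(np.dot(x, theta))

# made-up learned weights and feature vector [bias, pos_count, neg_count]
theta = np.array([0.0, 0.05, -0.05])
x = np.array([1.0, 40.0, 6.0])
p = predict(x, theta)
print("Positive" if p > 0.5 else "Negative")  # -> Positive
```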
Data Scientist - NeenOpal Analytics @Rohan Bali