Stemming and Lemmatization with nltk

Read on to learn where stemmers and lemmatizers are used. Lemmatization may be more accurate than stemming, but is it worth your time?

Welcome, data science enthusiasts and budding coders! Today, we’re embarking on an exciting journey through the realms of text normalization, specifically focusing on stemming and lemmatization using the Natural Language Toolkit (NLTK) in Python. These techniques are fundamental in the preprocessing steps for natural language processing (NLP) and machine learning tasks, helping to transform text data into a more manageable and analyzable form. Let’s demystify these concepts with simple explanations and rich examples.

Understanding Text Normalization

Text normalization is the process of transforming text into a uniform format. This is crucial for tasks like sentiment analysis, topic modeling, and text classification, where consistency in word forms can significantly impact the performance of your models. Two primary techniques for achieving this are stemming and lemmatization.

Two processes are used in such cases:

  1. Stemming
  2. Lemmatization

Stemming

Stemming is a heuristic process that chops off the ends of words, getting the base form right most of the time. It’s like cutting the branches off a tree down to its stem. The idea is to remove common prefixes and suffixes from a word, bringing it down to its base or root form, known as the “stem”. However, this process is relatively crude and can sometimes produce inaccurate results or non-existent words.
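
Here is a minimal sketch of both behaviors using NLTK’s PorterStemmer: related forms collapse to one stem, but the stem itself is often not a real word, and unrelated words can collide.

Python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
# "studies" and "studying" collapse to the same stem, but "studi"
# is not a real English word; "university" and "universal" collide
for word in ['studies', 'studying', 'university', 'universal']:
    print(word, '->', stemmer.stem(word))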

While a stemmer works only with word stems, a lemmatizer reduces a word to a root while preserving the sentence context, so “unable” wouldn’t become “able” under a lemmatizer. Both stemmers and lemmatizers are used to understand queries in chatbots and search engines. Stemmers tend to be faster, but their accuracy is not on par with a lemmatizer’s.

Lemmatization

Lemmatization, on the other hand, is a more sophisticated approach. It involves morphological analysis of words, aiming to remove inflectional endings to return the base or dictionary form of a word, known as the “lemma”. Unlike stemming, lemmatization understands the context and part of speech of a word, leading to more accurate results.
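
As a small illustration of that context-awareness, NLTK’s WordNetLemmatizer changes its answer depending on the part of speech you pass in:

Python
import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer

lemma = WordNetLemmatizer()
# the lemma depends on the part of speech of the word
print(lemma.lemmatize('better', pos='a'))   # good (adjective)
print(lemma.lemmatize('running', pos='v'))  # run (verb)
print(lemma.lemmatize('running', pos='n'))  # running (a noun in its own right)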

Problem Statement

We are building a chatbot that greets newcomers to the site and gives some basic answers about Python. Even if we train the chatbot on a raw corpus, it won’t be very effective, so we must first reduce the words to their core forms. This part of the article therefore produces a preprocessed, tokenized list of words that can be efficiently used to create the chatbot.

In the following code, we import the libraries and download the NLTK resources necessary for processing a corpus in our runtime session.

Import libraries and corpus

Python
import nltk
# download the stopword list for cleaning documents
nltk.download('stopwords')
# punkt models help with tokenization
nltk.download('punkt')
# WordNet data for the lemmatizer
nltk.download('wordnet')
from nltk.corpus import stopwords
stop = stopwords.words('english')
# part-of-speech tagger (used later for POS-aware lemmatization)
nltk.download('averaged_perceptron_tagger')

from google.colab import drive
drive.mount('/content/drive')
with open('/content/drive/MyDrive/Python Course/NLP/TFIDF/corpustfidf.txt','r', encoding='utf8',errors='ignore') as file:
    study = file.read().lower()
print(study)
Output
python is a high-level, general-purpose programming language.
its design philosophy emphasizes code readability with the use of significant indentation.
python features is dynamically-typed and garbage-collected.

The actual corpus is much bigger. You can create your own corpus with your own information.
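
If you are not working in Colab, you can skip the Drive mount and read a local file instead; 'corpus.txt' below is just a placeholder for your own file name.

Python
# read a local corpus file instead of mounting Google Drive
with open('corpus.txt', 'r', encoding='utf8', errors='ignore') as file:
    study = file.read().lower()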

Tokenization

Tokenization is the process of breaking a paragraph into a list of sentences and each sentence into a list of words. The overall result is a doubly nested list: a list of sentences, each of which is a list of words.

Python
from nltk.tokenize import RegexpTokenizer
# match runs of word characters, dropping punctuation
tokenizer = RegexpTokenizer(r'\w+')
# split the corpus into sentences on blank lines, then into words
tokens = [tokenizer.tokenize(x) for x in study.split('\n\n')]
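
To see the nested structure on something small, here is a quick sanity check on a made-up two-block string (for illustration only):

Python
from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r'\w+')
sample = "python is fun.\n\nit is easy to read."
print([tokenizer.tokenize(x) for x in sample.split('\n\n')])
# [['python', 'is', 'fun'], ['it', 'is', 'easy', 'to', 'read']]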

Stemmers

Stemming is the process of reducing a word to its basic root. A stemming algorithm removes letters from the beginning or end of a word, and stemming is widely used in information retrieval. The first stemmer was published in 1968, and one of the most widely used stemming algorithms today is Snowball.

The following is an example of stemming with Python.

Python
from nltk.stem.porter import PorterStemmer
# create a PorterStemmer object
stemmer = PorterStemmer()
for i in tokens:
    # stem every word in the sentence, skipping stopwords
    print([stemmer.stem(x) for x in i if x not in stop])
Output
['python', 'high', 'level', 'gener', 'purpos', 'program', 'languag']
['design', 'philosophi', 'emphas', 'code', 'readabl', 'use', 'signific', 'indent']
['python', 'featur', 'dynam', 'type', 'garbag', 'collect']
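
Since Snowball was mentioned above, here is how you could swap it in for Porter; this sketch reuses the tokens and stop variables from the earlier blocks. On many words the two stemmers agree, but Snowball handles several edge cases more gracefully.

Python
from nltk.stem.snowball import SnowballStemmer

# Snowball (also known as "Porter2") for English
snowball = SnowballStemmer('english')
for i in tokens:
    print([snowball.stem(x) for x in i if x not in stop])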

Lemmatization

The lemmatization process takes a more sophisticated approach to word reduction: it groups together the inflected forms of a word and maps them to the dictionary root. Lemmatizers are used wherever both context and the words themselves hold important value, and they provide more accuracy. If the application is simple, a stemmer can be used instead, since it is faster.
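
For a rough, machine-dependent sense of that speed difference, you can time both on a small word list (the list below is made up for illustration):

Python
import timeit
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer

stemmer = PorterStemmer()
lemma = WordNetLemmatizer()
words = ['running', 'readability', 'emphasizes', 'collected'] * 250

# crude timing; the lemmatizer also pays a one-time cost
# to load WordNet on its first call
print('stemmer   :', timeit.timeit(lambda: [stemmer.stem(w) for w in words], number=10))
print('lemmatizer:', timeit.timeit(lambda: [lemma.lemmatize(w) for w in words], number=10))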

Take a look at the following code; we will be using WordNetLemmatizer from nltk for the lemmatization process.

Python
from nltk.stem import WordNetLemmatizer
stop = stopwords.words('english')
# create a lemmatizer object
lemma = WordNetLemmatizer()
# access the corpus sentence by sentence
for i in tokens:
    # break each sentence into words and lemmatize the non-stopwords
    print([lemma.lemmatize(x) for x in i if x not in stop])
Output
['python', 'high', 'level', 'general', 'purpose', 'programming', 'language']
['design', 'philosophy', 'emphasizes', 'code', 'readability', 'use', 'significant', 'indentation']
['python', 'feature', 'dynamically', 'typed', 'garbage', 'collected']

Notice the difference between the outputs below. Take “general-purpose” as an example: the stemmer chopped the ‘e’ off “purpose”, while the lemmatizer kept the word intact without deforming it.

Compare stemmer and lemmatizer

Stemmer Output

Output
['python', 'high', 'level', 'gener', 'purpos', 'program', 'languag']
['design', 'philosophi', 'emphas', 'code', 'readabl', 'use', 'signific', 'indent']
['python', 'featur', 'dynam', 'type', 'garbag', 'collect']

Lemmatizer output

Output
['python', 'high', 'level', 'general', 'purpose', 'programming', 'language']
['design', 'philosophy', 'emphasizes', 'code', 'readability', 'use', 'significant', 'indentation']
['python', 'feature', 'dynamically', 'typed', 'garbage', 'collected']
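
To produce this comparison yourself, you can print each word next to its stem and lemma; the sketch below assumes the tokens, stop, stemmer, and lemma objects created in the earlier blocks.

Python
# word / stem / lemma side by side for the first sentence
for word in tokens[0]:
    if word not in stop:
        print(f'{word:15} stem: {stemmer.stem(word):10} lemma: {lemma.lemmatize(word)}')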

Lemmatization with POS tag

You might’ve also noticed that in the second sentence, “emphasizes” didn’t change to “emphasize” in the plain lemmatizer output. It is hard for a lemmatizer to reduce a word correctly when it doesn’t know which part of speech the word occupies. To fix this, we can use pos_tag from the NLTK library, which pairs each word with a part-of-speech tag.
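
As a quick illustration, here is what pos_tag returns on a short, made-up word list (exact tags can vary slightly between NLTK versions):

Python
import nltk
nltk.download('averaged_perceptron_tagger')

# pos_tag pairs each word with a Penn Treebank tag,
# e.g. NN = noun, VBZ = 3rd-person verb, JJ = adjective
print(nltk.pos_tag(['python', 'emphasizes', 'significant', 'indentation']))
# e.g. [('python', 'NN'), ('emphasizes', 'VBZ'), ('significant', 'JJ'), ('indentation', 'NN')]

In the following code, the lemmatizer is given each word’s tag: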

Python
from nltk.corpus import wordnet
nltk.download('averaged_perceptron_tagger')
lemmatizer = WordNetLemmatizer()
# map the first letter of a Penn Treebank tag to a WordNet POS:
# J = adjective, N = noun, V = verb, R = adverb
pos_conv = {'j': wordnet.ADJ, 'n': wordnet.NOUN, 'v': wordnet.VERB, 'r': wordnet.ADV}
def lemmer(wor, tag):
    # the lemmatizer takes two arguments: a word and its WordNet POS
    # (lowercase the Penn tag's first letter; default to verb)
    return lemmatizer.lemmatize(wor, pos_conv.get(tag[0].lower(), 'v'))
lem_doc = [[lemmer(x[0], x[1]) for x in nltk.pos_tag(i) if x[0] not in stop] for i in tokens]
for i in lem_doc:
    print(i)
Output
['python', 'high', 'level', 'general', 'purpose', 'program', 'language']
['design', 'philosophy', 'emphasize', 'code', 'readability', 'use', 'significant', 'indentation']
['python', 'feature', 'dynamically', 'type', 'garbage', 'collect']

As you can see in the output above, supplying POS tags makes a significant difference (“emphasize”, “program”, and “type” are now proper dictionary forms).

Part of speech tagging was briefly demonstrated here; you may expect further articles on that topic.

Questions about stemmers and lemmatization

  1. Is lemmatization better than stemming?
    • It depends on the use case: if you need accuracy, go for lemmatization; if you need speed, use a stemmer.
  2. How useful is stemming?
    • Stemming removes unnecessary parts of words, which makes retrieving information by term frequency much more effective.

Conclusion:

We can conclude that:

  • We learned how to remove affixes from words efficiently and accurately.
  • We learned when to use a stemmer and lemmatizer.
  • We learned to use POS tags with a lemmatizer for greater accuracy.

TF-IDF from scratch

The TF-IDF method belongs to the domain of information retrieval, where statistical methods are used to convert text into quantitative feature vectors.
