Read on to learn where to use stemmers and where to use lemmatization. Lemmatization may be better than stemming, but is it worth your time?
Welcome, data science enthusiasts and budding coders! Today, we’re embarking on an exciting journey through the realms of text normalization, specifically focusing on stemming and lemmatization using the Natural Language Toolkit (NLTK) in Python. These techniques are fundamental in the preprocessing steps for natural language processing (NLP) and machine learning tasks, helping to transform text data into a more manageable and analyzable form. Let’s demystify these concepts with simple explanations and rich examples.
Text normalization is the process of transforming text into a uniform format. This is crucial for tasks like sentiment analysis, topic modeling, and text classification, where consistency in word forms can significantly impact the performance of your models. Two primary techniques for achieving this are stemming and lemmatization.
Let's look at each of these two processes in turn.
Stemming is a heuristic process that chops off the ends of words in the hope of achieving the goal correctly most of the time. It’s like cutting branches off a tree to its stem. The idea is to remove common prefixes and suffixes from a word, bringing it down to its base or root form, known as the “stem”. However, this process is relatively simple and can sometimes lead to inaccuracies or the creation of non-existent words.
While stemming looks only at the surface form of a word, lemmatization reduces a word to its root while preserving the sentence's context, so "unable" wouldn't become "able" in a lemmatizer. Stemmers and lemmatizers are both used to understand queries in chatbots and search engines. Stemmers may be faster, but their accuracy is not on par with a lemmatizer's.
Lemmatization, on the other hand, is a more sophisticated approach. It involves morphological analysis of words, aiming to remove inflectional endings to return the base or dictionary form of a word, known as the “lemma”. Unlike stemming, lemmatization understands the context and part of speech of a word, leading to more accurate results.
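As a quick, minimal sketch of the difference (using NLTK's PorterStemmer and WordNetLemmatizer, which we set up properly in the next section; the printed results are what these classes typically produce):
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer
nltk.download('wordnet')  # the lemmatizer needs the WordNet data
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
for word in ["studies", "emphasizes", "caring"]:
    # pos='v' tells the lemmatizer to treat each word as a verb
    print(word, "->", stemmer.stem(word), "|", lemmatizer.lemmatize(word, pos='v'))
The stemmer returns truncated stems such as "studi" and "emphas", which are not real words, while the lemmatizer returns dictionary forms such as "study" and "emphasize".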
Suppose we are building a chatbot that greets newcomers to the site and answers some basic questions about Python. Even if we train the chatbot on a raw corpus, it won't be very effective, so we must first reduce the words to their core forms. This part of the article therefore produces a preprocessed, tokenized list of words that can later be used to build such a chatbot.
In the following code, we import the libraries and download the NLTK resources needed to load a corpus into our runtime session.
import nltk
# download stopwords list for cleaning documents
nltk.download('stopwords')
# download for help in tokenizing
nltk.download('punkt')
# WordNet data, used by the lemmatizer
nltk.download('wordnet')
from nltk.corpus import stopwords
stop = stopwords.words('english')
# part of speech tagger
nltk.download('averaged_perceptron_tagger')
from google.colab import drive
drive.mount('/content/drive')
with open('/content/drive/MyDrive/Python Course/NLP/TFIDF/corpustfidf.txt','r', encoding='utf8',errors='ignore') as file:
study = file.read().lower()
print(study)
python is a high-level, general-purpose programming language.
its design philosophy emphasizes code readability with the use of significant indentation.
python features is dynamically-typed and garbage-collected.
The actual corpus is much bigger. You can create your own corpus with your own information.
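If you are not running in Colab or don't have a corpus file yet, a minimal stand-in is to define the same text directly as a string (a hypothetical substitute for the file above; note the blank lines between sentences, since the tokenization step below splits on them):
study = (
    "python is a high-level, general-purpose programming language.\n\n"
    "its design philosophy emphasizes code readability with the use of significant indentation.\n\n"
    "python features is dynamically-typed and garbage-collected."
)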
Tokenization is the process of breaking a text into sentences and each sentence into words. The result here is a nested list: the corpus becomes a list of sentences, and each sentence becomes a list of words.
from nltk.tokenize import RegexpTokenizer
# keep only word characters, dropping punctuation such as hyphens and commas
tokenizer = RegexpTokenizer(r'\w+')
# split the corpus into blank-line-separated sentences, then break each sentence into words
tokens = [tokenizer.tokenize(x) for x in study.split('\n\n')]
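As a quick sanity check, the first entry of tokens should be the first sentence broken into words (assuming the three-sentence corpus shown above, with sentences separated by blank lines):
print(tokens[0])
# expected: ['python', 'is', 'a', 'high', 'level', 'general', 'purpose', 'programming', 'language']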
Stemming is the process of reducing a word to its base root, or stem. A stemming algorithm does this by stripping affixes, most often suffixes, from words. Stemming is widely used in information retrieval. The first stemmer was published in 1968, and one of the most widely used algorithms today is Snowball, an improved successor to the Porter stemmer.
Following is an example of stemming in Python:
from nltk.stem.porter import PorterStemmer
# create a PorterStemmer object
stemmer = PorterStemmer()
# access the corpus sentence by sentence
for i in tokens:
    # stem every word in the sentence, skipping English stopwords
    print([stemmer.stem(x) for x in i if x not in stop])
['python', 'high', 'level', 'gener', 'purpos', 'program', 'languag']
['design', 'philosophi', 'emphas', 'code', 'readabl', 'use', 'signific', 'indent']
['python', 'featur', 'dynam', 'type', 'garbag', 'collect']
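This collapsing behaviour is why stemming is so common in information retrieval: different inflections of a word reduce to the same stem, so a search query can match all of them. A small sketch, reusing the stemmer above and the classic example from Porter's original paper:
for w in ["connect", "connected", "connecting", "connection", "connections"]:
    print(stemmer.stem(w))
# every form reduces to the single stem 'connect'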
The lemmatization process takes a more sophisticated approach to word reduction. It groups together the inflected forms of a word and maps them to their dictionary root, the lemma. Lemmatizers are used where both context and the exact word form matter, and they provide more accuracy. If the application is simple, a stemmer can be used instead, since it is faster.
Take a look at the following code; we will be using WordNetLemmatizer from nltk for the lemmatization process.
from nltk.stem import WordNetLemmatizer
stop = stopwords.words('english')
# create a lemmatizer object
lemma = WordNetLemmatizer()
# access the corpus sentence by sentence
for i in tokens:
    # this one-line loop breaks each sentence into words, drops stopwords, and lemmatizes the rest
    print([lemma.lemmatize(x) for x in i if x not in stop])
['python', 'high', 'level', 'general', 'purpose', 'programming', 'language']
['design', 'philosophy', 'emphasizes', 'code', 'readability', 'use', 'significant', 'indentation']
['python', 'feature', 'dynamically', 'typed', 'garbage', 'collected']
Notice the difference between the two outputs below: the first block comes from the stemmer and the second from the lemmatizer. Take "general-purpose" as an example: the stemmer truncated it to "gener" and "purpos", while the lemmatizer kept the original words intact.
['python', 'high', 'level', 'gener', 'purpos', 'program', 'languag']
['design', 'philosophi', 'emphas', 'code', 'readabl', 'use', 'signific', 'indent']
['python', 'featur', 'dynam', 'type', 'garbag', 'collect']
['python', 'high', 'level', 'general', 'purpose', 'programming', 'language']
['design', 'philosophy', 'emphasizes', 'code', 'readability', 'use', 'significant', 'indentation']
['python', 'feature', 'dynamically', 'typed', 'garbage', 'collected']
You might've also noticed that in the second sentence, "emphasizes" didn't change to "emphasize" in the lemmatizer output. A lemmatizer struggles to reduce a word when it doesn't know which part of speech the word occupies (by default, NLTK's WordNetLemmatizer treats every word as a noun). So we can use pos_tag from the NLTK library, which turns each word into a (word, part-of-speech tag) tuple. In the following code, the lemmatizer is given each word's POS tag:
from nltk.corpus import wordnet
nltk.download('averaged_perceptron_tagger')
lemmatizer = WordNetLemmatizer()
# j = adjective, n = noun, v = verb, r = adverb
pos_conv = {"j": 'a', "n": 'n', "v": 'v', "r": 'r'}
def lemmer(wor, tag):
    # the lemmatizer takes two arguments: the word and its POS tag
    return lemmatizer.lemmatize(wor, pos_conv.get(tag[0], 'v'))
# tag every sentence, then lemmatize each word with its tag, skipping stopwords
lem_doc = [[lemmer(x[0], x[1]) for x in nltk.pos_tag(i) if x[0] not in stop] for i in tokens]
for i in lem_doc:
    print(i)
['python', 'high', 'level', 'general', 'purpose', 'program', 'language']
['design', 'philosophy', 'emphasize', 'code', 'readability', 'use', 'significant', 'indentation']
['python', 'feature', 'dynamically', 'type', 'garbage', 'collect']
As you can see in the output above, we now get a noticeably different result: "emphasize", "program", and "type".
Part of speech tagging was briefly demonstrated here; you may expect further articles on that topic.
We can conclude that stemming is fast and simple but often produces truncated, non-existent words, while lemmatization is slower but returns real dictionary forms, and it becomes even more accurate once the lemmatizer is given part-of-speech tags. For a simple application where speed matters most, a stemmer is enough; when context and the exact word form matter, as in our chatbot preprocessing, a lemmatizer is worth the extra time.