How To Build and Deploy an NLP Model with FastAPI: Part 2

Written by davisdavid | Published 2021/06/15
Tech Story Tags: natural-language-processing | machine-learning | data-science | fastapi | nlp | python | hackernoon-top-story | blogging-fellowship

TLDR This is the second and final part of the series on how to build and deploy an NLP model with FastAPI. In this article, we will use some of the features FastAPI provides to serve our model. FastAPI is a fast, modern Python web framework for building APIs: it delivers high performance, is easy to code with, and generates automatic, interactive documentation. We will reuse the text_cleaning() function to clean the review data by removing stopwords, numbers, and punctuation, and finally convert each word into its base form.

This is the second and final part of the series on How to build and deploy an NLP model with FastAPI. In the first part, we looked at how to build an NLP model that can classify movie reviews into different sentiments. 
In this second and final part, you will learn:
  • What is FastAPI and how to install it.
  • How to deploy your model with FastAPI.
  • How to use your deployed NLP model in any Python application.
So let’s get started.🚀

What is FastAPI?

FastAPI is a fast, modern Python web framework for building APIs. It delivers high performance, is easy to code with, and generates automatic, interactive documentation.

FastAPI is built on two major Python libraries – Starlette (for web handling) and Pydantic (for data handling and validation). It is very fast compared to Flask because it supports asynchronous request handlers.
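To give a feel for what an asynchronous handler looks like, here is a minimal, standalone sketch; it is not part of our project, and the /ping path and its response are purely illustrative.
from fastapi import FastAPI

app = FastAPI()


@app.get("/ping")
async def ping():
    # Declared with "async def", so FastAPI runs it directly on the event loop.
    return {"message": "pong"}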
If you want to know more about FastAPI, I recommend you read this article by Sebastián Ramírez.
In this article, we will use some of the features FastAPI provides to serve our NLP model.

How To Install FastAPI

First, make sure you install the latest version of FastAPI (with pip):
pip install fastapi
You will also need an ASGI server for production, such as Uvicorn:
pip install uvicorn

Deploy NLP Model with FastAPI

In this section, we are going to deploy our trained NLP model as a REST API with FastAPI. The code for our API will live in a Python file called main.py, which is responsible for running the FastAPI app.

Import packages

The first step is to import the packages that will help us build the FastAPI app and run the NLP model.
# text preprocessing modules
from string import punctuation

# text preprocessing modules
from nltk.tokenize import word_tokenize

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import re  # regular expression

import os
from os.path import dirname, join, realpath
import joblib
import uvicorn
from fastapi import FastAPI
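Note that the stopword list and the WordNet lemmatizer used later in this file rely on NLTK corpora. If they were not already downloaded in Part 1, a one-time download along these lines may be needed:
import nltk

# One-time downloads used by text_cleaning(); skip if already installed in Part 1.
nltk.download("stopwords")  # for nltk.corpus.stopwords
nltk.download("wordnet")  # for WordNetLemmatizer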

Initializing a FastAPI App Instance

We can use the following code to initialize the FastAPI app.
app = FastAPI(
    title="Sentiment Model API",
    description="A simple API that use NLP model to predict the sentiment of the movie's reviews",
    version="0.1",
)
As you can see, we have customized the configuration of our FastAPI application by setting:
  • The title of the API.
  • The description of the API.
  • The version of the API.

Load the NLP model

To load the model, we use the joblib.load() method and pass in the path to the model file. The NLP model file is named sentiment_model_pipeline.pkl.
# load the sentiment model

with open(
    join(dirname(realpath(__file__)), "models/sentiment_model_pipeline.pkl"), "rb"
) as f:
    model = joblib.load(f)
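As a side note, joblib.load() also accepts a file path directly, so an equivalent and slightly shorter form would be the following (the model_path variable is introduced here only for readability):
# Equivalent alternative: pass the path straight to joblib.load().
model_path = join(dirname(realpath(__file__)), "models/sentiment_model_pipeline.pkl")
model = joblib.load(model_path)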

Define a Function to Clean the Data

We will use the same text_cleaning() function from Part 1, which cleans the review data by removing stopwords, numbers, and punctuation, and finally converts each word into its base form using the lemmatization process from the NLTK package.
def text_cleaning(text, remove_stop_words=True, lemmatize_words=True):
    # Clean the text, with options to remove stop words and to lemmatize words

    # Clean the text
    text = re.sub(r"[^A-Za-z0-9]", " ", text)
    text = re.sub(r"\'s", " ", text)
    text = re.sub(r"http\S+", " link ", text)
    text = re.sub(r"\b\d+(?:\.\d+)?\s+", "", text)  # remove numbers

    # Remove punctuation from text
    text = "".join([c for c in text if c not in punctuation])

    # Optionally, remove stop words
    if remove_stop_words:

        # load stopwords
        stop_words = stopwords.words("english")
        text = text.split()
        text = [w for w in text if w not in stop_words]
        text = " ".join(text)

    # Optionally, reduce words to their base form (lemmatization)
    if lemmatize_words:
        text = text.split()
        lemmatizer = WordNetLemmatizer()
        lemmatized_words = [lemmatizer.lemmatize(word) for word in text]
        text = " ".join(lemmatized_words)

    # Return the cleaned text
    return text
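To sanity-check the function, you can run it on a throwaway sentence. This snippet is only for experimentation and is not part of main.py:
# Quick, standalone check of the cleaning function.
sample = "I loved this movie!!! The plot twists were amazing."
print(text_cleaning(sample))
# Stop words such as "this" and "were" are dropped, and "twists" is lemmatized to "twist".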

Create Prediction Endpoint

The next step is to add our prediction endpoint, "/predict-review", with the GET request method.
@app.get("/predict-review")
“An API endpoint is the point of entry in a communication channel when two systems are interacting. It refers to touchpoints of the communication between an API and a server.”
Then we define a prediction function for this endpoint. The function is called predict_sentiment() and takes a single review parameter.
The predict_sentiment() function will do the following tasks.
  • Receive the movie review.
  • Clean the movie review with the text_cleaning() function.
  • Make a prediction with our NLP model.
  • Save the prediction result in the output variable (either 0 or 1).
  • Save the probability of the prediction in the probas variable and format it to 2 decimal places.
  • Finally, return the prediction and its probability.
@app.get("/predict-review")
def predict_sentiment(review: str):
    """
    A simple function that receives review content and predicts its sentiment.
    :param review:
    :return: prediction, probabilities
    """
    # clean the review
    cleaned_review = text_cleaning(review)

    # perform prediction
    prediction = model.predict([cleaned_review])
    output = int(prediction[0])
    probas = model.predict_proba([cleaned_review])
    output_probability = "{:.2f}".format(float(probas[:, output]))

    # output dictionary
    sentiments = {0: "Negative", 1: "Positive"}

    # show results
    result = {"prediction": sentiments[output], "Probability": output_probability}

    return result
Here is the complete code of the main.py file.
# text preprocessing modules
from string import punctuation

# text preprocessing modules
from nltk.tokenize import word_tokenize

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
import re  # regular expression

import os
from os.path import dirname, join, realpath
import joblib
import uvicorn
from fastapi import FastAPI

app = FastAPI(
    title="Sentiment Model API",
    description="A simple API that use NLP model to predict the sentiment of the movie's reviews",
    version="0.1",
)

# load the sentiment model

with open(
    join(dirname(realpath(__file__)), "models/sentiment_model_pipeline.pkl"), "rb"
) as f:
    model = joblib.load(f)


# cleaning the data


def text_cleaning(text, remove_stop_words=True, lemmatize_words=True):
    # Clean the text, with options to remove stop words and to lemmatize words

    # Clean the text
    text = re.sub(r"[^A-Za-z0-9]", " ", text)
    text = re.sub(r"\'s", " ", text)
    text = re.sub(r"http\S+", " link ", text)
    text = re.sub(r"\b\d+(?:\.\d+)?\s+", "", text)  # remove numbers

    # Remove punctuation from text
    text = "".join([c for c in text if c not in punctuation])

    # Optionally, remove stop words
    if remove_stop_words:

        # load stopwords
        stop_words = stopwords.words("english")
        text = text.split()
        text = [w for w in text if w not in stop_words]
        text = " ".join(text)

    # Optionally, reduce words to their base form (lemmatization)
    if lemmatize_words:
        text = text.split()
        lemmatizer = WordNetLemmatizer()
        lemmatized_words = [lemmatizer.lemmatize(word) for word in text]
        text = " ".join(lemmatized_words)

    # Return the cleaned text
    return text


@app.get("/predict-review")
def predict_sentiment(review: str):
    """
    A simple function that receives review content and predicts its sentiment.
    :param review:
    :return: prediction, probabilities
    """
    # clean the review
    cleaned_review = text_cleaning(review)

    # perform prediction
    prediction = model.predict([cleaned_review])
    output = int(prediction[0])
    probas = model.predict_proba([cleaned_review])
    output_probability = "{:.2f}".format(float(probas[:, output]))

    # output dictionary
    sentiments = {0: "Negative", 1: "Positive"}

    # show results
    result = {"prediction": sentiments[output], "Probability": output_probability}

    return result
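Since FastAPI is built on Pydantic, you could also expose the same logic through a POST endpoint that reads the review from a JSON body. The sketch below is only a hypothetical variant (its route name and request model are made up for illustration); the rest of the article sticks with the GET endpoint.
from pydantic import BaseModel


# Hypothetical request body for a POST variant of the endpoint.
class Review(BaseModel):
    review: str


@app.post("/predict-review-json")
def predict_sentiment_json(payload: Review):
    # Reuse the same prediction logic defined above.
    return predict_sentiment(payload.review)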

Run the API

The following command runs the FastAPI app we have created.
uvicorn main:app --reload
Here are the settings we have defined for uvicorn to run our FastAPI app.
  • main: the file main.py that contains the FastAPI app.
  • app: the object created inside main.py with the line app = FastAPI().
  • --reload: enables the server to restart automatically whenever we change the code.
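Alternatively, because main.py already imports uvicorn, you can start the server programmatically by adding a standard entry-point guard at the bottom of the file. This is an optional variant, with host and port chosen here as an example; note that auto-reload is not enabled this way.
if __name__ == "__main__":
    # Start the server without calling the uvicorn CLI.
    uvicorn.run(app, host="127.0.0.1", port=8000)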
FastAPI provides an automatic, interactive API documentation page. To access it, navigate to http://127.0.0.1:8000/docs in your browser; you will see the documentation page that FastAPI creates automatically.
The documentation page shows the name of our API, the description, and its version. It also shows a list of available routes in our API that you can interact with.
To make a prediction, first click the "/predict-review" route, then click the "Try it out" button; this lets you fill in the review parameter and interact with the API directly.
Fill in the review field with a movie review of your choice. I added the following review of Zack Snyder's Justice League, released in 2021.
"I loved the movie from the beginning to the end. Just like Ray fisher said, I was hoping that the movie doesn't end. The begging scene was mind blowing, liked that scene very much. Unlike 'the Justice League' the movie show every hero is best at their own thing, make us love every character. Thanks, Zack and the whole team."
Then click the execute button to make a prediction and get the result.
Finally, the result from the API shows that our NLP model predicts the provided review has a Positive sentiment with a probability of 0.70.

Use the NLP Model in Any Python Application

To use our NLP API in any Python application, we need to install the requests package. This package lets us send HTTP requests to the FastAPI app we have developed.
To install the requests package, run the following command.
pip install requests
Then create a simple Python file called python_app.py. This file will be responsible for sending our HTTP requests.
We first import the requests package.
import requests as r
Add a movie review about the Godzilla vs. Kong (2021) movie.
# add review
review = "This movie was exactly what I wanted in a Godzilla vs Kong movie. It's big loud, brash and dumb, in the best ways possible. It also has a heart in a the form of Jia (Kaylee Hottle) and a superbly expressionful Kong. The scenes of him in the hollow world are especially impactful and beautifully shot/animated. Kong really is the emotional core of the film (with Godzilla more of an indifferent force of nature), and is done so well he may even convert a few members of Team Godzilla."
Then put the review in a dictionary of query parameters to pass with the HTTP request.

keys = {"review": review}
Finally, we send a GET request to our API to predict the sentiment of the review.
prediction = r.get("http://127.0.0.1:8000/predict-review", params=keys)
Then we can print the prediction results.
results = prediction.json()
print(results["prediction"])
print(results["Probability"])
This will show the prediction and its probability.
Here are the results.

Positive
0.54
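For reference, here is python_app.py in one piece, with an optional status-code check added; the raise_for_status() call is an extra safeguard that is not in the snippets above.
import requests as r

# add review (shortened here; use the full review text from above)
review = "This movie was exactly what I wanted in a Godzilla vs Kong movie. ..."

keys = {"review": review}

# send the request to the running FastAPI app
prediction = r.get("http://127.0.0.1:8000/predict-review", params=keys)

# optional: raise an error if the API is unreachable or returns a non-2xx status
prediction.raise_for_status()

results = prediction.json()
print(results["prediction"])
print(results["Probability"])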

Wrapping Up

Congratulations 👏👏, you have made it to the end of Part 2. I hope you have learned something new about how to deploy your NLP model with FastAPI.
If you want to learn more about FastAPI, I recommend taking this full FastAPI course created by Bitfumes.
You can download the project source code used in this article here:
If you learned something new or enjoyed reading this article, please share it so that others can see it. Until then, see you in the next article!
You can also find me on Twitter @Davis_McDavid and you can read more articles like this here.
