How to Implement Pagination Using FastAPI in Python

Use the fastapi-pagination library to paginate API responses


Introduction

Hello everyone! Today we are going to discuss an intriguing topic. Have you ever wondered how huge websites like Amazon load data on their web pages?

How does the data get populated as you scroll through a product catalog? Do they load it all at once? Definitely not; that is simply impossible given the volume of data. The best way to do this is in pages, a technique termed pagination.

Pagination, also known as paging, is the process of dividing a document into discrete pages, either electronic or printed. Users can navigate between these pages by clicking links, often in the form of numbers located at the bottom of a page. Paginated content is typically related by some common theme or purpose.
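Before touching any framework, it helps to see that a "page" is nothing more than a slice of the full result set. A tiny, library-agnostic illustration (the catalog here is made up):

def page_slice(items, page, size):
    # page numbers start at 1; the offset is how many items we skip
    offset = (page - 1) * size
    return items[offset:offset + size]


catalog = list(range(1, 74))                 # pretend these are 73 products
print(page_slice(catalog, page=1, size=10))  # first 10 products
print(page_slice(catalog, page=3, size=10))  # products 21 to 30

Every pagination scheme you will meet below is some variation on this offset-and-size arithmetic.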

We will discuss how to implement pagination using FastAPI. If you're new to FastAPI, please read this post first; it gives an introduction to FastAPI.

A simple FastAPI route:

Let’s begin with a simple, uncomplicated endpoint in FastAPI to get the hang of it.

my_pagination.py

from fastapi import FastAPI

app = FastAPI(title=" A simple pagination learning exercise",
              debug=True)


@app.get(path="/api/hello", name="hello endpoint")
async def hello():
    return {"exercise": "pagination"}

Run this code using uvicorn. Oh, I almost forgot: please install fastapi and uvicorn if you haven't already.

$ pip install fastapi
$ pip install uvicorn

$ uvicorn my_pagination:app --reload

INFO:     Will watch for changes in these directories: ['/Users/dinesh/Code/practice']
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started reloader process [13487] using StatReload
INFO:     Started server process [13489]
INFO:     Waiting for application startup.
INFO:     Application startup complete.

Database Prep:

To get started with pagination, we need

  1. A database: we will be using MySQL here
  2. The Faker library to load fake data into the database

Table Schema:

+----------+--------------+------+-----+---------+-------+
| Field    | Type         | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+-------+
| empid    | int          | NO   | PRI | NULL    |       |
| empname  | varchar(255) | YES  |     | NULL    |       |
| dept     | int          | YES  |     | NULL    |       |
| location | varchar(255) | YES  |     | NULL    |       |
| dob      | date         | YES  |     | NULL    |       |
+----------+--------------+------+-----+---------+-------+
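If you need to create this table yourself, a sketch along these lines should work; the connection string is a placeholder you will have to fill in with your own credentials:

from sqlalchemy import create_engine, text

# Placeholder credentials -- replace with your own
engine = create_engine("mysql+pymysql://<username>:<password>@<hostname>:3306/<databasename>")

create_stmt = text("""
    CREATE TABLE IF NOT EXISTS employee (
        empid    INT PRIMARY KEY,
        empname  VARCHAR(255),
        dept     INT,
        location VARCHAR(255),
        dob      DATE
    )
""")

with engine.begin() as conn:   # begin() commits automatically on success
    conn.execute(create_stmt)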

Python Code to connect to MySQL database:

The code below connects to your database using SQLAlchemy. If your password has special characters, make sure to URL-encode it with quote() before placing it in the connection string (a quick illustration follows the code block).

from sqlalchemy import create_engine, text
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import sessionmaker
from urllib.parse import quote


def connect_to_db():
    connection_url = "mysql+pymysql://<username>:<password>@<hostname>:3306/<databasename>"
    try:
        engine = create_engine(url=connection_url)
        # create a session bound to the engine
        session = sessionmaker(bind=engine)
        conn = session()
        if session:
            return conn
    except SQLAlchemyError as se:
        print(se)
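Regarding the password caveat above: the idea is simply to run the raw password through quote() before splicing it into the URL. A quick illustration with made-up credentials:

from urllib.parse import quote

raw_password = "p@ssw:rd!"                # hypothetical password with special characters
encoded = quote(raw_password, safe="")    # -> "p%40ssw%3Ard%21"
connection_url = f"mysql+pymysql://myuser:{encoded}@localhost:3306/mydb"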

Populate fake data into MySQL using Faker:

We need data if we are to test our implementation. Therefore, we will use the Faker library to populate fake data into the database.

$ pip install Faker

Collecting faker
  Using cached Faker-14.2.0-py3-none-any.whl (1.6 MB)
Requirement already satisfied: python-dateutil>=2.4 in /Users/dinesh/VirtualEnvs/practo-env/lib/python3.9/site-packages (from faker) (2.8.2)
Requirement already satisfied: six>=1.5 in /Users/dinesh/VirtualEnvs/practo-env/lib/python3.9/site-packages (from python-dateutil>=2.4->faker) (1.16.0)
Installing collected packages: faker
Successfully installed faker-14.2.0

Use the code below to feed fake data into your MySQL database. I know this may feel overwhelming, especially since we haven't even started discussing pagination yet. However, you usually won't have to do all this data prep in real-life scenarios; these libraries come in handy when you write unit tests.

import random

from faker import Faker
from sqlalchemy import create_engine, text
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import sessionmaker
from urllib.parse import quote


def connect_to_db():
    connection_url = "mysql+pymysql://<username>:<password>@<hostname>:3306/<databasename>"
    try:
        engine = create_engine(url=connection_url)
        # create a session
        session = sessionmaker(bind=engine)
        conn = session()
        if session:
            return conn
    except SQLAlchemyError as se:
        print(se)


def insert_fake_data_to_db():
    fake = Faker()
    cnx = connect_to_db()
    for i in range(1000):
        name = fake.name()
        id = random.randint(0, 2000)
        dept = random.randint(10, 20)
        dob = fake.date_of_birth()
        loc = fake.country()
        # prepare the insert query
        sql = text("insert into employee(empid, empname, dept, location, dob) "
                   "values ({},'{}',{},'{}','{}')".format(id, name, dept, loc, dob))
        cnx.execute(sql)
        cnx.commit()


insert_fake_data_to_db()

Note: If you get duplicate key errors, try inserting fewer rows at a time with a smaller randint range. Since this is for testing purposes, somewhere around 60 to 100 records should do (see the sketch below for another way to avoid collisions).
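Alternatively, a simple way to avoid primary-key collisions altogether is to hand out sequential empids instead of random ones. A sketch of that variation, reusing the connect_to_db() helper from above (the function name here is just illustrative):

import random

from faker import Faker
from sqlalchemy import text


def insert_fake_data_sequential(rows=100):
    fake = Faker()
    cnx = connect_to_db()
    sql = text(
        "insert into employee(empid, empname, dept, location, dob) "
        "values (:empid, :empname, :dept, :location, :dob)"
    )
    for empid in range(1, rows + 1):          # sequential ids -> no duplicate keys
        cnx.execute(sql, {
            "empid": empid,
            "empname": fake.name(),
            "dept": random.randint(10, 20),
            "location": fake.country(),
            "dob": fake.date_of_birth(),
        })
    cnx.commit()

Bound parameters (:empid, :empname, and so on) also spare you from quoting issues in the INSERT statement.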

MySQL output:

This is partial output of a MySQL SELECT query, just so we know that the inserts were successful; the snippet used to fetch it is shown after the table.

+-------+-----------------+------+-------------------+------------+
| empid | empname         | dept | location          | dob        |
+-------+-----------------+------+-------------------+------------+
|     1 | Dinesh          |   20 | US                | 2022-11-11 |
|    52 | James Jones     |   13 | French Guiana     | 2007-05-31 |
|   111 | Peter Dean      |   17 | Brunei Darussalam | 1967-05-27 |
|   456 | Amanda Phillips |   13 | Panama            | 1938-12-29 |
|   793 | Victoria Chang  |   16 | Moldova           | 1945-10-27 |
+-------+-----------------+------+-------------------+------------+
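The spot check itself is nothing special; with the same session helper it boils down to:

from sqlalchemy import text

cnx = connect_to_db()
for row in cnx.execute(text("select empid, empname, dept, location, dob from employee limit 5")):
    print(row)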

A complete end-to-end FastAPI route:

Now that we have all the prerequisites, let’s write a complete API endpoint that:

  1. Receives an HTTP request
  2. Connects to a database
  3. Queries the database
  4. Returns the database results as an HTTP response

Database Schema using SQLAlchemy:

from fastapi import FastAPI
from sqlalchemy import Column, Integer, String, Date
from sqlalchemy.orm import sessionmaker, declarative_base

Base = declarative_base()

app = FastAPI(title=" A simple pagination learning exercise",
              debug=True)


class Employee(Base):
    __tablename__ = "employee"

    empid = Column(Integer, primary_key=True)
    empname = Column(String)
    dept = Column(Integer)
    location = Column(String)
    dob = Column(Date)

The above code defines the table schema in SQLAlchemy. We will use it to query the MySQL database from FastAPI.

@app.get(path="/api/employees/all", name="Gets all employees")
async def get_all_employees():
    conn = connect_to_db()
    results = conn.query(Employee).all()
    return {"all_employees": results}

Output:

{
    "all_employees": [
        {
            "dept": 17,
            "empname": "Christopher Whitaker",
            "location": "Antarctica (the territory South of 60 deg S)",
            "dob": "1962-12-13",
            "empid": 53
        },
        ...
        ...
        ...
        {
            "dept": 11,
            "empname": "Tony Good",
            "location": "Liberia",
            "dob": "1930-08-02",
            "empid": 77
        },
        {
            "dept": 17,
            "empname": "Stephen Reed",
            "location": "Hong Kong",
            "dob": "1970-10-02",
            "empid": 136
        },
        {
            "dept": 13,
            "empname": "David Duncan",
            "location": "Costa Rica",
            "dob": "1971-10-05",
            "empid": 138
        },
        {
            "dept": 16,
            "empname": "Richard Morales",
            "location": "Hong Kong",
            "dob": "1958-03-29",
            "empid": 308
        },
        {
            "dept": 20,
            "empname": "Christopher Ramirez",
            "location": "Burkina Faso",
            "dob": "1909-11-10",
            "empid": 362
        },
        {
            "dept": 11,
            "empname": "Laura Peck",
            "location": "Jordan",
            "dob": "2018-06-15",
            "empid": 402
        },
        {
            "dept": 15,
            "empname": "Raymond Pierce",
            "location": "Congo",
            "dob": "1909-10-03",
            "empid": 408
        },
        {
            "dept": 13,
            "empname": "Jillian Clark",
            "location": "Burundi",
            "dob": "1950-04-30",
            "empid": 421
        },
        {
            "dept": 19,
            "empname": "Jill Jackson",
            "location": "Argentina",
            "dob": "1940-04-10",
            "empid": 691
        },
        {
            "dept": 16,
            "empname": "Hector Adams",
            "location": "China",
            "dob": "1947-03-20",
            "empid": 825
        },
        {
            "dept": 11,
            "empname": "Jessica Robinson",
            "location": "Mauritius",
            "dob": "2005-01-12",
            "empid": 867
        },
        {
            "dept": 13,
            "empname": "Nathaniel Parker",
            "location": "Faroe Islands",
            "dob": "1957-04-18",
            "empid": 1006
        },
        {
            "dept": 10,
            "empname": "Erin Bishop",
            "location": "Lithuania",
            "dob": "1981-10-29",
            "empid": 1064
        },
        {
            "dept": 18,
            "empname": "Vernon Wade",
            "location": "Congo",
            "dob": "1968-12-29",
            "empid": 1118
        },
        {
            "dept": 17,
            "empname": "Gerald Robinson",
            "location": "Holy See (Vatican City State)",
            "dob": "1962-05-31",
            "empid": 1155
        },
        {
            "dept": 13,
            "empname": "Michael Chandler",
            "location": "Malawi",
            "dob": "1922-06-06",
            "empid": 1255
        },
        {
            "dept": 18,
            "empname": "David Miller",
            "location": "Estonia",
            "dob": "1993-08-24",
            "empid": 1285
        },
        {
            "dept": 15,
            "empname": "Christopher Garcia",
            "location": "Netherlands Antilles",
            "dob": "1944-10-21",
            "empid": 1383
        },
        {
            "dept": 18,
            "empname": "Donna Valdez",
            "location": "Rwanda",
            "dob": "1956-08-26",
            "empid": 1473
        }
    ]
}

Add Pagination to the endpoint:

To implement pagination, we need to install the fastapi-pagination library.

fastapi-pagination is a library that adds pagination support to FastAPI applications.

https://uriyyo-fastapi-pagination.netlify.app/

$ pip install fastapi-pagination

Collecting fastapi-pagination
  Downloading fastapi_pagination-0.10.0-py3-none-any.whl (21 kB)
Requirement already satisfied: pydantic>=1.9.1 in /Users/dinesh/VirtualEnvs/practo-env/lib/python3.9/site-packages (from fastapi-pagination) (1.9.1)
Requirement already satisfied: fastapi>=0.80.0 in /Users/dinesh/VirtualEnvs/practo-env/lib/python3.9/site-packages (from fastapi-pagination) (0.85.0)

Successfully installed fastapi-pagination-0.10.0

Now that we have the library for pagination installed, we need to prepare a response model that will be sent as an HTTP response. Usually this will have either the same set of fields as your table schema or a subset of it. Since we don’t have any sensitive information in our model, we will simply create a replica of Employee, name it EmployeeOut, and use it as the response model.

from datetime import date

from fastapi import FastAPI
from fastapi_pagination import Page, add_pagination, paginate
from pydantic import BaseModel
from sqlalchemy import Column, Integer, String, Date
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import sessionmaker, declarative_base

Base = declarative_base()

app = FastAPI(title=" A simple pagination learning exercise",
              debug=True)


class Employee(Base):
    __tablename__ = "employee"

    empid = Column(Integer, primary_key=True)
    empname = Column(String)
    dept = Column(Integer)
    location = Column(String)
    dob = Column(Date)


class EmployeeOut(BaseModel):
    empid: int
    empname: str
    dept: int
    location: str
    dob: date

    class Config:
        orm_mode = True


@app.get(path="/api/employees/all", name="Gets all employees",
         response_model=Page[EmployeeOut])
async def get_all_employees():
    # connect_to_db() is the helper defined earlier
    conn = connect_to_db()
    results = conn.query(Employee).all()
    return paginate(results)


add_pagination(app)

We leverage the Page class and the paginate function here. By default, the page size is 50 and page numbering starts at 1. Of course, you can pass the page number and size as query parameters and modify the response accordingly.
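If you prefer to receive the paging parameters explicitly inside the route, one option is to declare fastapi-pagination's Params as a dependency and hand it to paginate() yourself. A small sketch using the app, model, and helper defined above (the /api/employees/paged path is just an illustrative name):

from fastapi import Depends
from fastapi_pagination import Page, Params, paginate


@app.get("/api/employees/paged", response_model=Page[EmployeeOut])
async def get_employees_paged(params: Params = Depends()):
    # params.page and params.size are populated from the query string
    conn = connect_to_db()
    results = conn.query(Employee).all()
    return paginate(results, params)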

Please do not forget to set orm_mode = True in the Config class; otherwise, pydantic will throw validation errors like these:

response -> 0 value is not a valid dict (type=type_error.dict)
response -> 1 value is not a valid dict (type=type_error.dict)
response -> 2 value is not a valid dict (type=type_error.dict)

Note: the default paginate() works on an in-memory sequence, so the route above still loads all rows from the database before slicing out a page (a database-level alternative is sketched after the sample output below).

Output:

{
    "items": [
        {
            "empid": 1,
            "empname": "Stephanie Brown",
            "dept": 11,
            "location": "Iran",
            "dob": "1956-12-16"
        },
        {
            "empid": 4,
            "empname": "James Keller",
            "dept": 20,
            "location": "Haiti",
            "dob": "1981-12-05"
        },
        ...
        ...
        ...

        {
            "empid": 15,
            "empname": "Dennis Hernandez",
            "dept": 13,
            "location": "Israel",
            "dob": "1956-05-03"
        },
    ],
    "total": 73,
    "page": 1,
    "size": 50
}

Above is the default HTTP response for our endpoint with pagination. The database has a total of 73 records; since the default page size is 50, we get 50 records in the response. If you wish to have just 10 records per page, you can pass the page size as a query parameter.
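As mentioned above, if loading every row into memory becomes a concern as the table grows, fastapi-pagination also ships database integrations that push the limit/offset down into the SQL query. A minimal sketch using its SQLAlchemy extension (the route path is made up for illustration, and the call assumes the Query-based API of the 0.10.x line):

from fastapi_pagination import Page
from fastapi_pagination.ext.sqlalchemy import paginate as sqlalchemy_paginate


@app.get("/api/employees/db-paged", response_model=Page[EmployeeOut])
async def get_employees_db_paged():
    conn = connect_to_db()
    # The extension applies LIMIT/OFFSET in the database instead of fetching all rows first
    return sqlalchemy_paginate(conn.query(Employee))

The response shape (items, total, page, size) stays the same; only where the slicing happens changes.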

Docs:

[Screenshot: the auto-generated /docs page, where the paginated endpoint now exposes page and size query parameters]

Pagination with custom page size parameters:

API : http://localhost:8000/api/employees/all?page=1&size=10

Setting the above query params will fetch 10 records at a time. Let’s test the output. There is no change in the code, just the query params of the API.

{
    "items": [
        {
            "empid": 1,
            "empname": "Stephanie Brown",
            "dept": 11,
            "location": "Iran",
            "dob": "1956-12-16"
        },
        {
            "empid": 4,
            "empname": "James Keller",
            "dept": 20,
            "location": "Haiti",
            "dob": "1981-12-05"
        },
        ...
        ...
        ...
        {
            "empid": 14,
            "empname": "David Smith",
            "dept": 12,
            "location": "Slovakia (Slovak Republic)",
            "dob": "1948-08-28"
        },

    ],
    "total": 73,
    "page": 1,
    "size": 10
}

Pagination with LimitOffset:

If you wish to use limit/offset pagination, you can import LimitOffsetPage and use it as the response model instead of the regular Page.

You can limit your query results if you only want to see the first few rows or implement table pagination.

The limit option restricts the number of rows returned from a query, while offset skips a specified number of rows before the start of the result set. Using limit and offset together skips a number of rows and then caps how many rows are returned (a small SQLAlchemy sketch follows below).
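As a point of reference, here is roughly how the same idea looks when expressed directly against the SQLAlchemy query, using the session and model from the previous sections:

# Skip the first 20 rows, then return the next 5 (a rough equivalent of offset=20, limit=5)
conn = connect_to_db()
rows = conn.query(Employee).offset(20).limit(5).all()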

from datetime import date

from fastapi import FastAPI
from fastapi_pagination import LimitOffsetPage, add_pagination, paginate
from pydantic import BaseModel
from sqlalchemy import Column, Integer, String, Date
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import sessionmaker, declarative_base

Base = declarative_base()

app = FastAPI(title=" A simple pagination learning exercise",
              debug=True)


class Employee(Base):
    __tablename__ = "employee"

    empid = Column(Integer, primary_key=True)
    empname = Column(String)
    dept = Column(Integer)
    location = Column(String)
    dob = Column(Date)


class EmployeeOut(BaseModel):
    empid: int
    empname: str
    dept: int
    location: str
    dob: date

    class Config:
        orm_mode = True


@app.get(path="/api/employees/all", name="Gets all employees",
         response_model=LimitOffsetPage[EmployeeOut])
async def get_all_employees():
    # connect_to_db() is the helper defined earlier
    conn = connect_to_db()
    results = conn.query(Employee).all()
    return paginate(results)


add_pagination(app)

In the auto-generated docs, the endpoint now exposes limit and offset query parameters instead of page and size.

Let’s assume you want to skip the first 20 rows and fetch the next 5 (rows 21 to 25). You can leverage limit and offset to do this.

API : http://localhost:8000/api/employees/all?limit=5&offset=20

Response:

{
    "items": [
        {
            "empid": 53,
            "empname": "Rachel Mahoney",
            "dept": 13,
            "location": "Uganda",
            "dob": "1925-12-25"
        },
        {
            "empid": 58,
            "empname": "Willie Meyers",
            "dept": 12,
            "location": "Saint Barthelemy",
            "dob": "1926-01-14"
        },
        {
            "empid": 59,
            "empname": "Heather Nicholson",
            "dept": 10,
            "location": "Comoros",
            "dob": "1993-08-21"
        },
        {
            "empid": 60,
            "empname": "Frank Walters",
            "dept": 13,
            "location": "Uganda",
            "dob": "1968-09-04"
        },
        {
            "empid": 63,
            "empname": "Paul Hunter",
            "dept": 13,
            "location": "Ethiopia",
            "dob": "1990-07-08"
        }
    ],
    "total": 73,
    "limit": 5,
    "offset": 20
}

Summary:

The fastapi-pagination library makes implementing pagination easier. The implementation is pretty straightforward; we just have to decide which style to use, the LimitOffsetPage or the Page class.
