Flask, SQLAlchemy, and Pytest

Introduction

I recently had to build a small webapp which, on requests to a Flask endpoint, wrote data to a PostgreSQL database. There’s a nice package called Flask-SQLAlchemy, written for exactly this stack, which ties Flask requests to the SQLAlchemy objects used for database interaction – sessions, engines and so on. The catch is that all database access now has to happen inside an ‘app context’ (ie in the context of a request to a Flask endpoint), which makes writing unit tests hard: I wanted to be able to call functions which write to the database without having to call my app each time. I found some good resources online about this, which I’ll link below, but none gave a complete and up-to-date guide, so I’ve put together this short-ish post which aims to do so. It owes most to this post by Alex Michael – much of what follows is an updated version of his implementation.

Note: I came across the package pytest-flask-sqlalchemy, written by the team at DataMade. I wasn’t able to get the plugin working (I was using Python 3.11.2, which is apparently supported), though it’s perfectly possible that this was my own fault rather than something wrong with the package. I’d encourage any reader to try it for themselves and see how they get on.

The key ideas

As mentioned above, we need a way to mimic the app context outside of actual Flask requests. We achieve this with pytest fixtures, which let us push an app context when the fixture is created and clean it up again on teardown. The other motivation for using fixtures is that they allow us to automatically roll back any database transactions we perform during a test, keeping our tests clean and self-contained.

Note that all my tests here write to a local PostgreSQL database running inside a Docker container – the same container is also spun up automatically in the environment I use for CI. This makes the tests much quicker and much safer – tests should never interact with production databases (and should ideally avoid network requests altogether).

Implementation

First I’ll set out a skeleton of the implementation code you need – a basic app factory (with no endpoints, since none are needed for this demo; you can add them in the usual way), which instantiates the database object and connects it to the app.

from flask import Flask


def get_db_url() -> str:
    # Placeholder – return the connection URL for whichever database
    # the app should use (see the notes below)
    return ''


def create_app() -> Flask:
    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = get_db_url()

    # db is the Flask-SQLAlchemy object defined alongside the ORM models
    from database import db
    db.init_app(app)

    # Create any missing tables; this must happen inside an app context
    with app.app_context():
        db.create_all()

    return app

You’ll also need the following SQLAlchemy code:

from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


db = SQLAlchemy(model_class=Base)


class ExampleTable(db.Model):
    """Define your tables in classes like this"""
    row_name: Mapped[str] = mapped_column(primary_key=True)
    ...

A few notes on the above:

  • get_db_url() obviously doesn’t work as implemented – I made it a function so that I can switch automatically (based on an environment variable) between the URL of the production database and that of the local dockerised database I use for testing (see the sketch after this list)
  • db should be defined wherever you set out your ORM, to keep the database object separate from the app factory
  • db.create_all() will create any tables defined in your ORM if and only if they don’t already exist in the database (ie if the tables already exist it does nothing). Note how it has to happen inside an app context, since it is a database transaction occurring outside of a Flask request.
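
As an illustration, here’s a minimal sketch of how get_db_url() might do that switching – the environment variable name (APP_ENV) and both connection strings are placeholders for whatever your own setup uses:

import os


def get_db_url() -> str:
    # APP_ENV and both URLs are hypothetical – substitute your own
    if os.environ.get('APP_ENV') == 'production':
        return 'postgresql://user:password@prod-host:5432/mydb'
    return 'postgresql://postgres:postgres@localhost:5432/test_db'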

With this sorted, we can now write the fixtures that let us test our database code safely, outside of actual Flask requests.

First we need a fake Flask app:

import pytest

from app import create_app  # adjust to wherever your app factory lives


@pytest.fixture(scope='session')
def app():
    app = create_app()
    app.config.update({'TESTING': True})
    yield app

Then we can set up various database fixtures. The key one is the app_ctx fixture, which ensures that db and session always believe we are inside a request. Note how the db fixture is scoped to the whole test session, so we only instantiate it once, whereas the session fixture is scoped to each individual test, so we roll back transactions as we go.

from database import db as _db  # the SQLAlchemy object defined with the ORM


@pytest.fixture(scope='session')
def app_ctx(app):
    # Push an app context for the whole test session, so database calls
    # behave as if they were made inside a Flask request
    with app.app_context():
        yield


@pytest.fixture(scope='session')
def db(app, app_ctx):
    _db.app = app
    _db.create_all()

    yield _db

    _db.drop_all()


@pytest.fixture(scope='function')
def session(app, db, app_ctx):
    # Open a connection and start a transaction we can roll back
    # once the individual test has finished
    connection = db.engine.connect()
    transaction = connection.begin()

    # Bind a fresh scoped session to this connection and make it the one
    # handed out during the test
    session = db._make_scoped_session(options={'bind': connection})
    db.session = session

    yield session

    transaction.rollback()
    connection.close()
    session.remove()

And that’s basically all! You can now write unit tests like the one below, where example_row0 and example_row1 are fixtures that each provide a row of ExampleTable:

from sqlalchemy import select

from database import ExampleTable


def test0_write_to_database(example_row0, session):
    session.add(example_row0)
    session.commit()

    out = select(ExampleTable).where(ExampleTable.row_name == 'row_value')
    result = session.execute(out).all()

    assert len(result) == 1

After each test, the database is rolled back to the state it was in at the start. Some may find the _db.drop_all() in the db fixture a bit over-aggressive, since we’re using a persistent local database – you can safely exclude it if you wish.
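
For reference, a row fixture like example_row0 can be as simple as the sketch below – the value 'row_value' matches the one queried in the test above, and anything beyond that depends on your own schema:

@pytest.fixture(scope='function')
def example_row0():
    # A minimal ExampleTable row matching the value used in the test above
    return ExampleTable(row_name='row_value')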

Sources