AWS is the Linux of Cloud
I’ve been a Linux user for about 25 years.
About 3 years ago I learnt some basic Python, which I've used almost exclusively to build back-end APIs on AWS Serverless, mostly Lambda. This includes a monitoring solution on AWS, an event-driven API integration solution, a load-shedding Telegram bot, a Slack bot that posts AWS news, a CDK project, and other Telegram bots. None of them are front-end web apps, and that's something that has always been a gap for me. Some years back I did some Ruby on Rails, but didn't build anything meaningful, and I've since forgotten most of it. So I've decided to learn Flask as a tool to build some web apps: primarily because it's still Python, and a micro-framework with a minimal learning curve. I also wanted to re-use what I've learned building Python back-end apps and APIs on AWS Serverless, and see how I can build front-end apps and APIs that run on AWS Serverless. Admittedly, Flask is still server-side, which means I'm still avoiding client-side web apps (React, etc.), but baby steps for now.
I'm specifically focusing on Flask web apps that return HTML, CSS and JS. For Flask APIs on AWS Serverless, there are already awesome packages like Chalice, Zappa and Powertools for AWS Lambda (Python).
There are other AWS services that can be used to run Flask on AWS, like EC2, Elastic Beanstalk, ECS or Lightsail. But I am specifically looking to use serverless because I don't want to manage servers or containers; I only want to pay for what I actually use, without having resources running all the time (and with the generous free tier for serverless on AWS, you won't pay anything to run this tutorial); I want to fully automate the deployment process; and if I eventually have to scale, I don't want to have to re-architect anything. Serverless has a much better developer experience, and allows you to quickly build things with less hassle.
So in this series of posts, we will learn to build some Flask apps on AWS, and figure things out along the way. I'll probably get some stuff wrong, so errors and omissions excepted. Onwards!
In part 1, we took the app from the How To Make a Web Application Using Flask in Python 3 tutorial, and got it running on AWS Serverless: API Gateway, Lambda and DynamoDB. In part 2, we're going to do almost the same thing, except instead of DynamoDB as the database, we're going to use Amazon Aurora Serverless for MySQL. This part assumes you didn't go through part 1, so you can start here if you want.
Aurora Serverless is an on-demand autoscaling DB cluster that scales compute capacity up and down based on your application's needs. It uses familiar SQL, so if the NoSQL DynamoDB was not your thing, then Aurora MySQL will be much closer to the tutorial.
Aurora Serverless (v1) for MySQL was announced in preview in 2017, and went GA in 2018. It scales to zero (pausing), which is really awesome. You connect to it using standard SQL. It lives in a VPC, which means connecting to it from Lambda is going to be a challenge. However, the Data API, announced in 2019, changed that: now you can connect to it from a Lambda function that does not need to be associated with your VPC, and you don't need to worry about setting up and tearing down connections. Which is really awesome, but there are a few issues, the key one for me being that Aurora Serverless v1 was (and still is) not available in many regions. But overall, it's really good.
Aurora Serverless v2 was announced in preview in 2020, and went GA in 2022. Scaling improved dramatically, and it's available in all regions. But there are two major issues: it does not scale to zero, and it doesn't support the Data API, which means the Lambda function needs to be associated with your VPC. However, as I write this on 21 Dec 2023, AWS has just announced the Data API for Aurora Serverless v2 PostgreSQL (not MySQL).
So based on these limitations, specifically that we don't have the Data API available for Aurora Serverless v2 for MySQL, I think that for a new serverless app that we are building and deploying via AWS SAM, it's better to use Aurora Serverless v1 for MySQL with the Data API, even though it's limited to specific regions.
But AWS Lambda is typically used for APIs that return JSON. For RESTful APIs you usually serve Lambda functions behind Amazon API Gateway or a Lambda Function URL, or behind AppSync for GraphQL APIs. Yes, you can have Lambda functions return HTML with some customisation, but how would we run Flask on Lambda without changing anything in Flask? The answer: by using the Lambda Web Adapter, which serves as a universal adapter between the Lambda Runtime API and HTTP. It allows developers to package familiar HTTP 1.1/1.0 web applications, such as Express.js, Next.js, Flask, SpringBoot, or Laravel, and deploy them on AWS Lambda. This removes the need to modify the web application to accommodate Lambda's input and output formats, reducing the complexity of adapting code to meet Lambda's requirements.
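The reason this works without touching Flask is that a Flask app is a plain WSGI callable, and the adapter simply forwards HTTP requests to whatever server is serving that callable. As a dependency-free illustration of the idea, here is a hand-written WSGI app (my own sketch, standing in for Flask) invoked directly with a fake environ, the way any WSGI server would:

```python
from io import BytesIO

def app(environ, start_response):
    # A minimal WSGI app: this is the interface Flask exposes under the hood
    body = f"Hello from {environ['PATH_INFO']}".encode()
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]

# Invoke the callable directly with a hand-built WSGI environ
collected = {}
def start_response(status, headers):
    collected['status'] = status
    collected['headers'] = headers

environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': '/', 'wsgi.input': BytesIO()}
print(b''.join(app(environ, start_response)).decode())  # Hello from /
```

Any WSGI server (gunicorn, in this project's run.sh) does essentially this for every incoming request, which is why the web adapter can sit in front of an unmodified Flask app.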
I should also call out the really good awsgi package (and tutorial), which can also be used to run Flask on AWS Serverless, with just a small handler in the Flask app.
In order to demonstrate how to run Flask on AWS Serverless using the Lambda Web Adapter, I'm going to take an existing Flask app, and show you how to run it on AWS. For this, we will use a very well-written tutorial on DigitalOcean: How To Make a Web Application Using Flask in Python 3. Using this tutorial as a vehicle, I will show you how to get this Flask app running on AWS, using AWS Lambda, Amazon API Gateway and Aurora Serverless for MySQL, all deployed using AWS SAM. To follow along, you may want to keep that tutorial open, as well as this blog post: I refer to the instructions in the tutorial, and advise what needs to be changed. In addition, or as an alternative to following along, you can use the resources in this project's GitHub repo, under part-2:
Besides a working Python 3 environment (in my case Python 3.12), you will also need:
Follow the tutorial and install Flask. In my case, the version of Flask I have locally installed is 3.0.0.
Follow the tutorial, and get the Hello World Flask app running locally. You can set the variables as the tutorial does, or alternatively specify them in the flask run command:
flask --app app run --debug
* Serving Flask app 'hello'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
* Restarting with stat
* Debugger is active!
* Debugger PIN: 139-148-368
(I realise the tutorial uses hello.py for this initial step, but to make things simpler later on, I've named the file app.py from now on.)
Now let's see how we can get this Hello World Flask app running on AWS. We need to create a SAM app, then build and deploy it to AWS.
We first initialise a new SAM app using the sam-cli, based off this project's part-2 repo on GitHub:
sam init --name flask-aws-serverless --location https://github.com/jojo786/flask-aws-serverless
then change to the part-2 folder, and specifically the starter sub-folder:
cd flask-aws-serverless/flask-aws-serverless-part-2/flask-aws-serverless-part-2-starter/
which contains these files and folders:
.
├── __init__.py
├── flask
│ ├── __init__.py
│ ├── app.py
│ ├── requirements.txt
│ └── run.sh
└── template.yaml
The flask folder contains the Python code that will run as Lambda functions; the app.py file contains the same base application from the tutorial. The template.yaml file describes the serverless application resources and properties for AWS SAM deployments.
We can now build the SAM app using sam build:
sam build
Starting Build use cache
Manifest is not changed for (HelloWorldFunction), running incremental build
Building codeuri: .../flask-aws-serverless-part-2/flask runtime: python3.12 metadata: {} architecture: arm64 functions: HelloWorldFunction
Running PythonPipBuilder:CopySource
Running PythonPipBuilder:CopySource
Build Succeeded
and deploy it to AWS using sam deploy. The first time we run it, we use the interactive guided workflow to set up the various parameters:
sam deploy --guided
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: flask-aws-serverless-part-2-starter
AWS Region [af-south-1]: eu-west-1
Parameter DBClusterName [aurora-flask-cluster]: aurora-flask-cluster
Parameter DatabaseName [aurora_flask_db]: aurora_flask_db
Parameter DBAdminUserName [admin_user]:
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: N
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]:
#Preserves the state of previously provisioned resources when an operation fails
Disable rollback [y/N]:
HelloWorldFunction has no authentication. Is this okay? [y/N]: y
Save arguments to configuration file [Y/n]:
SAM configuration file [samconfig.toml]:
SAM configuration environment [default]:
Looking for resources needed for deployment:
You can choose what to use for each argument. Please note, we haven't configured any authentication on Amazon API Gateway, so you will need to reply with y in order for the deployment to proceed.
In my case, I chose to deploy this to eu-west-1, the Europe (Ireland) Region, which has the Aurora Serverless v1 service. You may choose any other region, based on availability.
Once the deployment has been successful, you will find the output will list the URL of the Hello World Lambda function:
CloudFormation outputs from deployed stack
------------------------------------------------------------------------------------------------------------------
Outputs
------------------------------------------------------------------------------------------------------------------
Key HelloWorldApi
Description API Gateway endpoint URL for the Hello World function
Value https://helloabc123.execute-api.eu-west-1.amazonaws.com/
------------------------------------------------------------------------------------------------------------------
Successfully created/updated stack - flask-aws-serverless-part-2-starter in eu-west-1
You can paste your API Gateway URL into a browser, or call it from the command line using curl, to verify that the Flask app is working on AWS:
curl https://helloabc123.execute-api.eu-west-1.amazonaws.com/
Hello, World!%
You can view the logs from Amazon CloudWatch using sam logs:
sam logs --stack-name flask-aws-serverless-part-2-starter --region eu-west-1

Access logging is disabled for HTTP API ID (gqi5xjq39i)
2023/12/22/[$LATEST]b1522e565fea4016ae7f687b7ece5947 2023-12-22T09:16:30.156000 {"time": "2023-12-22T09:16:30.156Z","type": "platform.initStart","record": {"initializationType": "on-demand","phase": "init","runtimeVersion": "python:3.12.v16","runtimeVersionArn": "arn:aws:lambda:eu-west-1::runtime:5eaca0ecada617668d4d59f66bf32f963e95d17ca326aad52b85465d04c429f5","functionName": "part-2-starter-temp-HelloWorldFunction-OlVXkpFFUM5D","functionVersion": "$LATEST"}}
2023/12/22/[$LATEST]b1522e565fea4016ae7f687b7ece5947 2023-12-22T09:16:30.466000 [2023-12-22 09:16:30 +0000] [12] [INFO] Starting gunicorn 21.2.0
2023/12/22/[$LATEST]b1522e565fea4016ae7f687b7ece5947 2023-12-22T09:16:30.466000 [2023-12-22 09:16:30 +0000] [12] [INFO] Listening at: http://0.0.0.0:8000 (12)
2023/12/22/[$LATEST]b1522e565fea4016ae7f687b7ece5947 2023-12-22T09:16:30.466000 [2023-12-22 09:16:30 +0000] [12] [INFO] Using worker: sync
2023/12/22/[$LATEST]b1522e565fea4016ae7f687b7ece5947 2023-12-22T09:16:30.471000 [2023-12-22 09:16:30 +0000] [13] [INFO] Booting worker with pid: 13
2023/12/22/[$LATEST]b1522e565fea4016ae7f687b7ece5947 2023-12-22T09:16:30.990000 {"time": "2023-12-22T09:16:30.990Z","type": "platform.extension","record": {"name": "lambda-adapter","state": "Ready","events": []}}
2023/12/22/[$LATEST]b1522e565fea4016ae7f687b7ece5947 2023-12-22T09:16:30.992000 {"time": "2023-12-22T09:16:30.992Z","type": "platform.start","record": {"requestId": "604b817a-284e-4d6a-8508-4640e6a2a209","version": "$LATEST"}}
2023/12/22/[$LATEST]b1522e565fea4016ae7f687b7ece5947 2023-12-22T09:16:31.085000 {"time": "2023-12-22T09:16:31.085Z","type": "platform.report","record": {"requestId": "604b817a-284e-4d6a-8508-4640e6a2a209","metrics": {"durationMs": 92.846,"billedDurationMs": 93,"memorySizeMB": 128,"maxMemoryUsedMB": 76,"initDurationMs": 834.174},"status": "success"}}
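Each platform record in those logs is plain JSON, so you can pull out the billing and memory metrics from a platform.report entry with a few lines of stdlib Python. This is my own convenience snippet, not part of the tutorial, using the report line above (trimmed to just the JSON payload):

```python
import json

# A platform.report payload from `sam logs`, with the log-stream prefix
# and timestamp stripped off so only the JSON remains
line = ('{"time": "2023-12-22T09:16:31.085Z","type": "platform.report",'
        '"record": {"requestId": "604b817a-284e-4d6a-8508-4640e6a2a209",'
        '"metrics": {"durationMs": 92.846,"billedDurationMs": 93,'
        '"memorySizeMB": 128,"maxMemoryUsedMB": 76,"initDurationMs": 834.174},'
        '"status": "success"}}')

entry = json.loads(line)
if entry["type"] == "platform.report":
    metrics = entry["record"]["metrics"]
    print(f'billed: {metrics["billedDurationMs"]}ms, '
          f'memory: {metrics["maxMemoryUsedMB"]}/{metrics["memorySizeMB"]}MB')
```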
Your Flask app is now live on AWS! Let's review what we have accomplished thus far. We first initialised an AWS SAM app, built it, then deployed it to AWS. What SAM actually did for us in the background was to provision the following resources on AWS:
Everything in this step will be exactly the same as it is in the tutorial. After you've created all the templates in the flask folder, the file structure will now look like:
.
├── README.md
├── __init__.py
├── flask
│ ├── __init__.py
│ ├── app.py
│ ├── requirements.txt
│ ├── run.sh
│ ├── static
│ │ └── css
│ │ └── style.css
│ └── templates
│ ├── base.html
│ └── index.html
├── samconfig.toml
├── template.yaml
To test it locally, change to the flask directory, and use flask run:
cd flask/
flask --app app run --debug
And to deploy these changes to AWS, simply run:
sam build && sam deploy
And once the deploy is done, you can test using the same API Gateway URL on AWS as before in your browser.
AWS Lambda functions and their storage are ephemeral, meaning their execution environments only exist for a short time when the function is invoked. This means that we will eventually lose data if we set up an SQLite database as part of the Lambda function, because its contents are deleted when the Lambda service eventually terminates the execution environment. There are multiple options for managed serverless databases on AWS, including Amazon Aurora Serverless, which supports MySQL, a familiar SQL database much like the SQLite used in the tutorial.
We will need to make a few changes to the tutorial to use Aurora instead of SQLite. We will use SAM to deploy an Aurora Serverless v1 for MySQL DB (based off this serverlessland pattern). Add (or uncomment) the following config in template.yaml:
          AWS_REGION: !Ref AWS::Region
          DBClusterArn: !Sub 'arn:aws:rds:${AWS::Region}:${AWS::AccountId}:cluster:${DBClusterName}'
          DBName: !Ref DatabaseName
          SecretArn: !Ref DBSecret
      Policies: # Creates an IAM Role that defines the services the function can access and which actions the function can perform
        - AWSSecretsManagerGetSecretValuePolicy:
            SecretArn: !Ref DBSecret
        - Statement:
            - Effect: Allow
              Action: 'rds-data:ExecuteStatement'
              Resource: !Sub 'arn:aws:rds:${AWS::Region}:${AWS::AccountId}:cluster:${DBClusterName}'

  DBSecret: # Secrets Manager secret
    Type: 'AWS::SecretsManager::Secret'
    Properties:
      Name: !Sub '${DBClusterName}-AuroraUserSecret'
      Description: RDS database auto-generated user password
      GenerateSecretString:
        SecretStringTemplate: !Sub '{"username": "${DBAdminUserName}"}'
        GenerateStringKey: password
        PasswordLength: 30
        ExcludeCharacters: '"@/\'

  AuroraCluster: # Aurora Serverless DB Cluster with Data API
    Type: 'AWS::RDS::DBCluster'
    Properties:
      DBClusterIdentifier: !Ref DBClusterName
      MasterUsername: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:username}}'
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${DBSecret}:SecretString:password}}'
      DatabaseName: !Ref DatabaseName
      Engine: aurora-mysql
      EngineMode: serverless
      EnableHttpEndpoint: true # Enable the Data API for Aurora Serverless
      ScalingConfiguration:
        AutoPause: true
        MinCapacity: 1
        MaxCapacity: 2
        SecondsUntilAutoPause: 3600

Outputs:
  DBClusterArn:
    Description: Aurora DB Cluster Resource ARN
    Value: !Sub 'arn:aws:rds:${AWS::Region}:${AWS::AccountId}:cluster:${DBClusterName}'
  DBName:
    Description: Aurora Database Name
    Value: !Ref DatabaseName
  SecretArn:
    Description: Secrets Manager Secret ARN
    Value: !Ref DBSecret
That adds a number of resources: a Secrets Manager secret to hold the auto-generated database credentials, the Aurora Serverless v1 cluster itself (with the Data API enabled via EnableHttpEndpoint), plus the environment variables and IAM policies the Lambda function needs, and stack outputs for the ARNs and database name that we will need later.
And to deploy these changes to AWS, simply run:
sam build && sam deploy
The Output section of the sam deploy will contain the details of Aurora that we will need:
CloudFormation outputs from deployed stack
-----------------------------------------------------------------------------------------------------------
Outputs
-----------------------------------------------------------------------------------------------------------
Key SecretArn
Description Secrets Manager Secret ARN
Value arn:aws:secretsmanager:eu-west-1:1111111111:secret:cluster-temp-AuroraUserSecret-
1111111111
Key DBClusterArn
Description Aurora DB Cluster Resource ARN
Value arn:aws:rds:eu-west-1:1111111111:cluster:aurora-flask-cluster
Key DBName
Description Aurora Database Name
Value flask_db
Key HelloWorldApi
Description API Gateway endpoint URL for Hello World function
Value https://1111111111.execute-api.eu-west-1.amazonaws.com/
-----------------------------------------------------------------------------------------------------------
Successfully created/updated stack - flask-aws-serverless-part-2-starter in eu-west-1
These need to be set/exported as environment variables (along with AWS_REGION, which init_db.py below also reads). On macOS, like this:
export DBClusterArn=arn:aws:rds:eu-west-1:1111111:cluster:aurora-flask-cluster
export SecretArn=arn:aws:secretsmanager:eu-west-1:11111:secret:aurora-flask-cluster-AuroraUserSecret-1111
export DBName=aurora_flask_db
export AWS_REGION=eu-west-1
Now to set up the schema of Aurora, we will use this schema.sql. The only difference from the tutorial is that MySQL uses AUTO_INCREMENT rather than SQLite's AUTOINCREMENT:
CREATE TABLE posts (
id INTEGER PRIMARY KEY AUTO_INCREMENT,
created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
title TEXT NOT NULL,
content TEXT NOT NULL
);
Our init_db.py script will be as follows:
import os
import boto3
from botocore.config import Config

DBClusterArn = os.environ['DBClusterArn']
DBName = os.environ['DBName']
SecretArn = os.environ['SecretArn']

my_config = Config(
    region_name=os.environ['AWS_REGION'])

client = boto3.client('rds-data', config=my_config)

# Create the posts table from schema.sql
with open('schema.sql') as file:
    schema = file.read()

response = client.execute_statement(
    resourceArn=DBClusterArn,
    secretArn=SecretArn,
    database=DBName,
    sql=schema
)

# Insert the first test post
response = client.execute_statement(
    resourceArn=DBClusterArn,
    secretArn=SecretArn,
    database=DBName,
    sql="""
        INSERT INTO posts (title, content)
        VALUES (:title, :content)
    """,
    parameters=[
        {
            'name': 'title',
            'value': {'stringValue': "First Post"}
        },
        {
            'name': 'content',
            'value': {'stringValue': "Content for the first post"}
        }
    ]
)

# Insert the second test post
response = client.execute_statement(
    resourceArn=DBClusterArn,
    secretArn=SecretArn,
    database=DBName,
    sql="""
        INSERT INTO posts (title, content)
        VALUES (:title, :content)
    """,
    parameters=[
        {
            'name': 'title',
            'value': {'stringValue': "Second Post"}
        },
        {
            'name': 'content',
            'value': {'stringValue': "Content for the second post"}
        }
    ]
)
Both files should be in the same directory, e.g. in the flask directory. You can now execute it with:
python3 init_db.py
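The Data API's typed parameter format is fairly verbose. If you find yourself writing many of these statements, a small helper can build the parameters list from a plain dict. This is my own sketch, not something the tutorial or boto3 provides:

```python
def to_sql_params(values):
    """Convert a plain dict into the RDS Data API `parameters` list,
    mapping Python types to the Data API's typed value fields."""
    type_fields = {str: 'stringValue', int: 'longValue',
                   float: 'doubleValue', bool: 'booleanValue'}
    params = []
    for name, value in values.items():
        # bool must be checked before int, since bool is a subclass of int
        field = 'booleanValue' if isinstance(value, bool) else type_fields[type(value)]
        params.append({'name': name, 'value': {field: value}})
    return params

# The same parameters as the "First Post" insert above
print(to_sql_params({'title': 'First Post',
                     'content': 'Content for the first post'}))
```

You could then pass `parameters=to_sql_params({...})` straight into execute_statement, keeping the SQL and its values side by side.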
Instead of using raw boto3, you can look at libraries that make it easier to work with the Aurora Data API from Python and Flask, like aurora-data-api or sqlalchemy-aurora-data-api. Alternatively, instead of using the schema and init_db scripts to create the table and test posts, you can use the built-in visual RDS Query Editor in the AWS Console, or the AWS CLI:
aws rds-data execute-statement --region eu-west-1 --resource-arn arn:aws:rds:eu-west-1:1111111:cluster:aurora-flask-cluster --secret-arn arn:aws:secretsmanager:eu-west-1:111111:secret:aurora-flask-cluster-AuroraUserSecret-11111 --database aurora_flask_db --sql "CREATE TABLE posts (id INTEGER PRIMARY KEY AUTO_INCREMENT, created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, title TEXT NOT NULL, content TEXT NOT NULL );"
aws rds-data execute-statement --region eu-west-1 --resource-arn arn:aws:rds:eu-west-1:1111111:cluster:aurora-flask-cluster --secret-arn arn:aws:secretsmanager:eu-west-1:111111:secret:aurora-flask-cluster-AuroraUserSecret-11111 --database aurora_flask_db --sql "INSERT INTO posts (title, content) VALUES ('First Post', 'Content for first post');"
Here we will make some changes to the Flask app, to read data from Aurora. We import the boto3 package, the Python SDK for AWS, and look up the Aurora DB cluster and secret that were created by SAM. The boto3 execute_statement call, using the secret and cluster ARNs, will safely retrieve the database password before executing the SQL query.
Our app.py will now look as follows:
from flask import Flask, render_template, request, url_for, flash, redirect
import os
from werkzeug.exceptions import abort
import boto3
from botocore.config import Config

DBClusterArn = os.environ['DBClusterArn']
DBName = os.environ['DBName']
SecretArn = os.environ['SecretArn']

my_config = Config(
    region_name=os.environ['AWS_REGION'])

client = boto3.client('rds-data', config=my_config)

app = Flask(__name__)

@app.route('/')
def index():
    posts = []
    response = client.execute_statement(
        resourceArn=DBClusterArn,
        secretArn=SecretArn,
        database=DBName,
        sql="""SELECT * FROM posts"""
    )
    for record in response['records']:
        posts.append({
            'id': record[0]['longValue'],
            'created': record[1]['stringValue'],
            'title': record[2]['stringValue'],
            'content': record[3]['stringValue']
        })
    return render_template('index.html', posts=posts)
You can now see the posts in the Flask app. You can use the flask run command (remember to change to the flask directory) to run the app locally; however, you will need to provide it with the Aurora DB and secret environment variables, as above.
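Since every handler ends up repeating the same positional parsing of response['records'], you could factor it into a small helper. This is my own sketch, not part of the tutorial; note the column order must match the SELECT:

```python
POST_COLUMNS = ('id', 'created', 'title', 'content')

def record_to_post(record):
    """Map one Data API record (a list of single-key typed value dicts,
    in column order) to a post dict, extracting whichever type field
    ('longValue', 'stringValue', ...) each cell carries."""
    return {col: next(iter(cell.values()))
            for col, cell in zip(POST_COLUMNS, record)}

# Example record, shaped like one entry of response['records']
record = [{'longValue': 1},
          {'stringValue': '2023-12-22 09:16:30'},
          {'stringValue': 'First Post'},
          {'stringValue': 'Content for the first post'}]
print(record_to_post(record))
```

With this, the index loop collapses to `posts = [record_to_post(r) for r in response['records']]`.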
The only change required here is to the get_post function, which will retrieve a particular item from Aurora:
def get_post(post_id):
    post = {}
    response = client.execute_statement(
        resourceArn=DBClusterArn,
        secretArn=SecretArn,
        database=DBName,
        sql="""SELECT * FROM posts WHERE id = :id""",
        parameters=[
            {
                'name': 'id',
                'value': {'longValue': post_id}
            }
        ]
    )
    for record in response['records']:
        post['id'] = record[0]['longValue']
        post['created'] = record[1]['stringValue']
        post['title'] = record[2]['stringValue']
        post['content'] = record[3]['stringValue']
    if len(post) == 0:
        abort(404)
    return post
As usual, run sam build && sam deploy to run it on AWS, and/or flask run to test locally.
Step 7 — Modifying Posts
Our create function will create a new post in Aurora:
@app.route('/create', methods=('GET', 'POST'))
def create():
    if request.method == 'POST':
        title = request.form['title']
        content = request.form['content']

        if not title:
            flash('Title is required!')
        else:
            response = client.execute_statement(
                resourceArn=DBClusterArn,
                secretArn=SecretArn,
                database=DBName,
                sql="""
                    INSERT INTO posts (title, content)
                    VALUES (:title, :content)
                """,
                parameters=[
                    {
                        'name': 'title',
                        'value': {'stringValue': title}
                    },
                    {
                        'name': 'content',
                        'value': {'stringValue': content}
                    }
                ]
            )
            return redirect(url_for('index'))

    return render_template('create.html')
Our edit function works very similarly: we look up a particular post id, and then update that item:
@app.route('/<int:id>/edit', methods=('GET', 'POST'))
def edit(id):
    post = get_post(id)

    if request.method == 'POST':
        title = request.form['title']
        content = request.form['content']

        if not title:
            flash('Title is required!')
        else:
            response = client.execute_statement(
                resourceArn=DBClusterArn,
                secretArn=SecretArn,
                database=DBName,
                sql="""
                    UPDATE posts SET title = :title, content = :content
                    WHERE id = :id
                """,
                parameters=[
                    {
                        'name': 'title',
                        'value': {'stringValue': title}
                    },
                    {
                        'name': 'content',
                        'value': {'stringValue': content}
                    },
                    {
                        'name': 'id',
                        'value': {'longValue': id}
                    }
                ]
            )
            return redirect(url_for('index'))

    return render_template('edit.html', post=post)
The delete function is quite similar again: we look up a particular post id, then delete it:
@app.route('/<int:id>/delete', methods=('POST',))
def delete(id):
    post = get_post(id)
    response = client.execute_statement(
        resourceArn=DBClusterArn,
        secretArn=SecretArn,
        database=DBName,
        sql="""DELETE FROM posts WHERE id = :id""",
        parameters=[
            {
                'name': 'id',
                'value': {'longValue': id}
            }
        ]
    )
    return redirect(url_for('index'))
You can get all the final code from the completed folder on GitHub.
As usual, simply run sam build && sam deploy to deploy to AWS.
We've taken the excellent How To Make a Web Application Using Flask in Python 3 tutorial and, using AWS SAM, demonstrated how you can run a Flask app on AWS Serverless. With serverless, we don't need to think about or manage servers, or worry about other mundane tasks like installing or patching the OS, the database, or any software packages. The beauty of SAM is that it deploys directly to AWS for us, with very little effort. We chose Aurora Serverless as the serverless database, because it supports MySQL.