AWS Lambda is a serverless computing service that allows you to run your code without provisioning or managing servers. To get the most out of Lambda, it’s crucial to understand how to upload and manage your function’s dependencies. This article will discuss the different ways to upload dependencies, introduce the concept of Lambda Layers, and demonstrate how to create a Lambda Layer using Terraform.
Ways to Upload AWS Lambda Dependencies
There are two primary methods for uploading your AWS Lambda dependencies: zipping the dependencies with your code and extracting the dependencies into a Lambda Layer.
1. Zipping Dependencies with Code
The first method is to package your Lambda function code and its dependencies in a single zip file. This process involves the following steps:
- Install the required dependencies locally.
- Package your function code and dependencies together in a zip file.
- Upload the zip file to AWS Lambda using the AWS Management Console, AWS CLI, or SDKs.
This approach is simple and works well for small applications. However, it can become cumbersome for larger projects, leading to long deployment times and issues with package size limits.
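For a Python function, a minimal sketch of this workflow might look like the following (the function name my-function and the handler file lambda_function.py are placeholders, not values from this article):

# install the dependencies into a local build directory
pip3 install -r requirements.txt -t package/
# add the function code and zip everything together
cp lambda_function.py package/
cd package && zip -r ../function.zip . && cd ..
# upload the archive to an existing function (name is a placeholder)
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip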
2. Extracting Dependencies into Lambda Layers
The second method is to separate your dependencies from your function code using Lambda Layers. A Lambda Layer is a package containing libraries, custom runtimes, or other function dependencies. By utilizing Lambda Layers, you can manage your dependencies independently from your Lambda function code. This approach offers several benefits, including improved organization, faster deployment times, and sharing dependencies across multiple Lambda functions.
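At runtime, Lambda extracts the contents of each layer into the /opt directory. For Python, the libraries are expected under a top-level python/ folder inside the layer archive so that they end up on the import path; a typical layer zip therefore looks like this (requests and urllib3 are only placeholder dependencies):

layer.zip
└── python/
    ├── requests/
    ├── urllib3/
    └── ...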
What is a Lambda Layer?
A Lambda Layer is a distribution package that contains additional code, libraries, or custom runtimes for your AWS Lambda function. Lambda Layers allow you to manage your function’s dependencies separately, providing several benefits:
Benefits of Using Lambda Layers
- Reusability: Lambda Layers can be shared across multiple Lambda functions, reducing the need to include the same dependencies in each function.
- Organization: Lambda Layers help keep your function code clean and focused by separating dependencies from the core code.
- Versioning: Lambda Layers support versioning, allowing you to update dependencies without modifying your function code.
- Direct code editing in the console: with the dependencies moved into a layer, the function's own deployment package stays small, so you can still edit the function code directly in the Lambda console editor when necessary.
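Each published layer version gets an incrementing version number that becomes part of its ARN. Outside of Terraform, a new version can also be published manually; a minimal AWS CLI sketch, assuming a prepared layer.zip and a Python 3.9 runtime:

aws lambda publish-layer-version \
  --layer-name my_lambda_requirements_layer \
  --zip-file fileb://layer.zip \
  --compatible-runtimes python3.9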
Creating Lambda Layers with Terraform
Let’s see how to create a Lambda Layer for Python Lambda code using Terraform, with the dependencies specified in a requirements.txt file. This approach lets us manage our dependencies efficiently by reusing the file the code already relies on. Please note that directly uploaded archives are limited to roughly 50 MB (zipped); to work around this limitation, we will create a zip file and upload it to Amazon S3. The Lambda Layer will only be updated when the requirements.txt file changes.
Prerequisites
- Familiarity with the AWS Lambda service.
- Familiarity with Terraform.
Creating the Lambda Layer
To create a Lambda Layer using Terraform, add the following code block to your main.tf file. It builds the Lambda Layer from the dependencies listed in your requirements.txt file:
# define local variables
locals {
  layer_zip_path    = "layer.zip"
  layer_name        = "my_lambda_requirements_layer"
  requirements_path = "${path.root}/../requirements.txt"
}

# build the zip file from requirements.txt; runs only when that file changes
resource "null_resource" "lambda_layer" {
  triggers = {
    requirements = filesha1(local.requirements_path)
  }
  # install Python, pip, and zip on the machine, install the dependencies into python/, and package them
  provisioner "local-exec" {
    command = <<-EOT
      set -e
      apt-get update
      apt-get install -y python3 python3-pip zip
      rm -rf python
      mkdir python
      pip3 install -r ${local.requirements_path} -t python/
      zip -r ${local.layer_zip_path} python/
    EOT
  }
}

# S3 bucket for storing the Lambda Layer archives
resource "aws_s3_bucket" "lambda_layer_bucket" {
  bucket = "my-lambda-layer-bucket"
}

# upload the zip file to S3
resource "aws_s3_object" "lambda_layer_zip" {
  bucket     = aws_s3_bucket.lambda_layer_bucket.id
  key        = "lambda_layers/${local.layer_name}/${local.layer_zip_path}"
  source     = local.layer_zip_path
  depends_on = [null_resource.lambda_layer] # wait until the zip file has been built
}

# create the Lambda Layer from the S3 object
resource "aws_lambda_layer_version" "my-lambda-layer" {
  s3_bucket           = aws_s3_bucket.lambda_layer_bucket.id
  s3_key              = aws_s3_object.lambda_layer_zip.key
  layer_name          = local.layer_name
  compatible_runtimes = ["python3.9"]
  skip_destroy        = true
  depends_on          = [aws_s3_object.lambda_layer_zip] # wait until the zip file has been uploaded to the bucket
}
This code block defines the local variables for the layer.zip file, the layer name, and the path to the requirements file. The null_resource block installs the dependencies into a python/ directory (where the Python runtime expects layer content) and packages them into a zip file, while the aws_s3_bucket and aws_s3_object blocks create the bucket and upload the zip file to Amazon S3. The aws_lambda_layer_version block then publishes the Lambda Layer from the uploaded archive. The layer is rebuilt only when the requirements.txt file has changed, as controlled by:
- null_resource.lambda_layer.triggers
- aws_s3_object.lambda_layer_zip.depends_on
- aws_lambda_layer_version.my-lambda-layer.depends_on
After applying your Terraform configuration, Terraform will create the Lambda Layer with the dependencies specified in your requirements.txt file. You can then reference the layer in your Lambda function by its ARN, for example by exposing it as a Terraform output, as sketched below.
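A minimal sketch of wiring this up, assuming an existing IAM execution role and a zipped function package that are not part of the configuration above:

# expose the layer version ARN after `terraform apply`
output "lambda_layer_arn" {
  value = aws_lambda_layer_version.my-lambda-layer.arn
}

# attach the layer to a (hypothetical) function
resource "aws_lambda_function" "my_function" {
  function_name = "my-function"
  runtime       = "python3.9"
  handler       = "lambda_function.lambda_handler"
  filename      = "function.zip"               # assumed: zipped function code
  role          = aws_iam_role.lambda_exec.arn # assumed: existing IAM execution role
  layers        = [aws_lambda_layer_version.my-lambda-layer.arn]
}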
Conclusion
In this article, we covered the different ways to upload AWS Lambda dependencies, the benefits of using Lambda Layers, and how to create a Lambda Layer with Terraform from a requirements.txt file, rebuilt only when that file changes. By properly managing your Lambda dependencies, you can improve the reusability, organization, and deployment times of your serverless applications.
P.S. There are several methods to create AWS Lambda Layers. If your layer requires testing prior to creation, it is recommended to incorporate the layer creation process into a Continuous Integration (CI) pipeline, such as GitHub Actions. This approach ensures that your layer is tested and validated before being deployed to your AWS Lambda functions.