Using AWS Lambda functions with Docker Containers: A Tutorial

At re:Invent in December 2020, AWS announced a major update to Lambda: support for container images in Lambda functions. By nature, Lambda, like any other Function-as-a-Service offering, provides benefits like managed scaling, fault tolerance, and high availability, along with pay-as-you-go billing.

In this article, we will see how to integrate Docker containers with AWS Lambda functions, and what the benefits and use cases are.

What is the need for Docker containers in serverless functions?

Before AWS joined forces with Docker for Lambda, there were two options for deploying code: the built-in code editor on the Lambda console, or a zip package. This zip file contains the code along with the dependencies and libraries required for the code to run. Users can upload this zip file manually or use automation such as AWS SAM or third-party tools like the Serverless Framework. Since many development teams have already invested in Docker-based deployments and CI/CD, with this change in effect, developers can combine the benefits of Docker with serverless functionality to create a uniform development process.

Benefits of using docker with lambda functions

Lambda functions are designed so that every invocation runs in an isolated, immutable environment, and this remains true for container-based functions as well. When a container-based Lambda function is invoked, the image runs as-is, resulting in a uniform, immutable deployment package across local development, CI/CD, and the Lambda execution environment.

Another benefit is that the deployment package size can now be extended up to 10 GB, compared to the previous 250 MB limit, which prevented many workloads from using Lambda. The new size limit opens up dependency-heavy and data-heavy workloads, allowing machine learning and data analytics developers to use packages like NumPy, PyTorch, or other large libraries. Lambda layers exist as a workaround, but they come with their own limitations; container-based functions can now tackle all those issues. This also unlocks increased portability across AWS services like AWS Fargate and Amazon EC2.

How does it work under the hood?

Container-based Lambda supports images that use the Docker image manifest V2 schema 2 (or the OCI image specification 1.0 or later) and are stored in Amazon Elastic Container Registry (ECR). Lambda currently provides many base images with pre-installed runtimes, including Python, Node.js, Go, Java, Ruby, and .NET. These images are created and maintained by AWS. Apart from this, developers can create their own runtime images based on a Linux distribution.

If developers use a custom Linux base image for a different runtime (say C++, PHP, Elixir, etc.), they don't get the RIC and RIE, which come pre-installed in the base images provided by AWS.

RIC and RIE are discussed at length below.

Introducing RIC and RIE

The Runtime Interface Client (RIC) and the Runtime Interface Emulator (RIE) are the two new components introduced by AWS at re:Invent 2020.

About the AWS Lambda Runtime Interface Client (RIC):

  • RICs are typically wrappers that integrate and connect custom code with Lambda's Runtime API.
  • RICs are pre-installed in base images provided by AWS.
  • For custom images, developers must ensure the RIC for their runtime is present. RICs for all major runtimes are available on AWS's GitHub organization.
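Conceptually, a RIC runs a simple loop against the Runtime API: fetch the next invocation, run the handler, and post the result back. The sketch below imitates that loop with an in-memory stand-in for the Runtime API (the real client speaks HTTP to the endpoint in the AWS_LAMBDA_RUNTIME_API environment variable); runtimeLoop, fakeApi, and the handler are illustrative names, not part of any AWS library:

```javascript
// Simplified sketch of a Runtime Interface Client's core loop.
// The real RIC talks HTTP to the Lambda Runtime API endpoints.
async function runtimeLoop(api, handler) {
  // 1. Ask the Runtime API for the next invocation.
  const { requestId, event, context } = await api.next();
  // 2. Run the user's handler with the event and context.
  const result = await handler(event, context);
  // 3. Post the handler's result back to the Runtime API.
  await api.postResponse(requestId, result);
  return result;
}

// In-memory stand-in for the Runtime API, for illustration only.
const fakeApi = {
  next: async () => ({ requestId: "req-1", event: { name: "world" }, context: {} }),
  postResponse: async (id, body) => { fakeApi.lastResponse = body; },
};

const handler = async (event) => `Hello, ${event.name}!`;

runtimeLoop(fakeApi, handler).then((result) => console.log(result)); // prints "Hello, world!"
```

In the real execution environment this loop repeats for every invocation, which is why custom images without a pre-installed RIC cannot receive events.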

About AWS Lambda Runtime Interface Emulator:

  • RIE allows developers to mimic the Runtime API and test code locally, instead of publishing it to Lambda and testing it there.
  • It exposes an HTTP endpoint when the Docker container is running locally, which accepts a JSON payload and passes it as the event to a Lambda function invocation.
  • RIE is open source and can be found on GitHub.

Changes from the previously-existing container images support

What are the changes?

  • Larger artifacts up to 10 GB are possible.
  • Container images must include a RIC to talk to the Lambda Runtime API.
  • No automatic updates to the managed runtime without redeploying. If a new update is available, developers have to manually point Lambda to a new Docker image with updated dependencies.

What stays the same?

  • Invoke model remains the same.
  • Custom code inside the container still needs a handler function that takes an event and a context as parameters.
  • Storage options remain the same as before:
    • Memory: 128 MB to 10,240 MB
    • /tmp space: 512 MB
  • Container lambda functions still have the same support for logging, metrics, AWS X-Ray, and newly launched lambda extensions.
  • Billing remains the same as for zip-based deployments.

Let's start with the coding:

To begin with, let's create an example Lambda function with the AWS CLI, Docker, and Node.js.

  • Create a directory for the application, set up npm, and install faker.js for generating fake test data, which should look like this:
mkdir lambda-docker; cd $_
npm init -y
npm install faker
touch app.js
  • Open the app.js file and add the following code. This handler code is the same as for a zip-based Lambda function:
const faker = require("faker");
module.exports.handler = async (event, context) => {
  return faker.helpers.createCard();
};
  • Now that we have a basic app.js set up, let's create a Dockerfile with the following contents. This instructs Docker to start from the AWS-provided Node.js base image, install the necessary dependencies, and build a container image from them. The CMD instruction is an important step, as it tells the RIC which handler function to invoke; since app.js exports module.exports.handler, the handler string is "app.handler":
FROM public.ecr.aws/lambda/nodejs:12
COPY app.js package*.json ./
RUN npm install
CMD [ "app.handler" ]
  • Build the docker image using the following code:
docker build -t lambda-docker-demo .
  • Now, we have to create a new ECR repository on AWS and push the Docker image to it. In the commands below, replace <accountID> with your AWS account ID and <region> with your AWS region:
aws ecr create-repository --repository-name lambda-docker-demo --image-scanning-configuration scanOnPush=true

docker tag lambda-docker-demo:latest <accountID>.dkr.ecr.<region>.amazonaws.com/lambda-docker-demo:latest

aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <accountID>.dkr.ecr.<region>.amazonaws.com

docker push <accountID>.dkr.ecr.<region>.amazonaws.com/lambda-docker-demo:latest

The output should look like this:

The push refers to repository []
787f8ab2d1e8: Pushed
e0fd0b7ddcc7: Pushed
5bff738ecfb2: Pushed
9104caec20a8: Pushed
d6fa53d6caa6: Pushed
b0c5f6ff5d8a: Pushed
6cd95e8f3d80: Pushed
fe6098a9ee94: Pushed
latest: digest: sha256:6f30e39d7a8a372e6dd1377a741cb5bede8de0315b82d4caaa3666643b555b4d size: 1998

After the push completes, the image will appear in the ECR console.


  • As the next step, let's connect this Docker image to a new Lambda function. Continue to the Lambda console and choose the Create Function button, then click Container Image and select the newly uploaded image from the Browse images option. Lastly, press Create Function to create and deploy the function.


Following a successful deployment, the Lambda function can be tested the same way as regular Lambda functions. Move to the Test tab, create a new event, and invoke the function with an empty object; you will see the output from faker.js.


AWS SAM for automation of container-based image deployment

We can automate the whole image build and deployment process using the AWS Serverless Application Model (SAM). For this, the AWS SAM CLI has to be installed locally. In this walkthrough, the same function will be deployed, but using the SAM CLI. SAM's init command creates a basic function and a Dockerfile; then the build and deploy commands automate building the image and publishing the function to Lambda.

To start off, follow these instructions from the local terminal:

  • Enter sam init in the project directory to start SAM Wizard.
  • Now, choose '1 - AWS Quick Start Templates' from the options provided.
  • Next, you will receive an option for zip and image-based deployment from where you'll have to choose 2 - Image.
  • Choose the preferred runtime base image. In our example, it will be 1 – amazon/nodejs12.x-base.


  • Enter lambda-docker-demo as the project name; this will create a sample project with a README and unit tests.
  • Now go to hello-world/app.js and replace the handler code with the following to generate a fake response:
const faker = require("faker");
module.exports.handler = async (event, context) => {
  return faker.helpers.createCard();
};
  • Run npm install faker from the hello-world directory to install faker.js.
  • From the root directory of the project, you'll have to run the following:
sam build
  • Once the build is successful, run the following command to start the deployment process:
sam deploy --guided


  • When prompted for the Stack Name, enter the project title lambda-docker-demo, and then enter your AWS region. For the Image Repository, we can use the same repository URL as in the previous implementation from the AWS ECR console. This will create the function and add an API endpoint using AWS API Gateway.



The AWS-provided RIE (Runtime Interface Emulator) can be used to test Lambda functions locally before deployment. To test, open two terminals: in one, start the Docker container; from the other, send a POST request to the container. To proceed, run the following:

# Starting docker container and publishing port 9000.
docker run -p 9000:8080 lambda-docker-demo:latest

This will start an HTTP server where you can send a POST request, as shown:

# From other terminal use cURL to send a request.
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"test": "value"}'


Closing Thoughts

With the new Docker integration, the power of containers can be used in Lambda's execution environment. The new 10 GB size limit opens up many use cases for Lambda that were simply too hard to achieve before. AWS SAM can also be used to reduce boilerplate and handle the build and deployment process.

Thank you so much for reading 😁.