How to Deploy Strapi Docker Container On AWS Elastic Beanstalk

A step-by-step guide to hosting your Strapi application on AWS Elastic Beanstalk.

Author: Omar Khairy

AWS provides many services for hosting applications; in this tutorial, I will show you how to deploy Strapi as a Docker container on AWS Elastic Beanstalk.

The article focuses on deploying Strapi as a Docker container connected to a PostgreSQL database, with a load balancer monitoring the health of the instance.

Prerequisites

To follow up through this article, you should have the following:

  1. Basic knowledge of JavaScript
  2. Understanding of Docker
  3. Basic understanding of AWS cloud concepts
  4. AWS account
  5. AWS CLI installed (if you don't have it, click here to get started).
  6. Basic understanding of Strapi
  7. Node.js downloaded and installed.
  8. Yarn as Node package manager
  9. VS Code or any code editor

What is Strapi?

Strapi is the leading open-source, customizable, headless CMS based on Node.js; it is used to develop and manage content using RESTful APIs and GraphQL.

With Strapi, you can scaffold an API faster and consume the content via APIs using any HTTP client or GraphQL-enabled frontend.

Scaffolding a Strapi Project

In this article, I'll use the blog template to quickly scaffold a Strapi project. You can apply what we do here to any Strapi project or template.

    yarn create strapi-app bloggy --template blog

The command will create a new folder called “bloggy” under the current working directory, containing all the project files generated by the Strapi command. Once the server is running, you can access the Strapi dashboard at localhost:1337/admin.

By default, the generated project uses SQLite as its database; you need to change that to PostgreSQL in both development and production.

Connect to PostgreSQL Database

Create a new PostgreSQL container with the following command:

    docker run --name strapi-bloggy-db -v my_dbdata:/var/lib/postgresql/data -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=strapi -d postgres:13.6

This command creates a new Docker container called strapi-bloggy-db that listens on port 5432, with the database username and password both set to postgres and a pre-initialized database called strapi.
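To confirm the database is up, you can list the databases inside the container (a quick sanity check; it assumes the container started by the command above is running):

```shell
# List all databases in the strapi-bloggy-db container
docker exec strapi-bloggy-db psql -U postgres -c '\l'
```

You should see the pre-initialized strapi database in the output.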

All Databases in the container

Now, it's time to change the connection from SQLite to PostgreSQL with the new configurations. In config/env/development/database.js, add these lines:

    const path = require('path');

    module.exports = ({ env }) => ({
      connection: {
        client: 'postgres',
        connection: {
          host: env('DATABASE_HOST', '127.0.0.1'),
          port: env.int('DATABASE_PORT', 5432),
          database: env('DATABASE_NAME', 'strapi'),
          user: env('DATABASE_USERNAME', 'postgres'),
          password: env('DATABASE_PASSWORD', 'postgres'),
          schema: env('DATABASE_SCHEMA', 'public'),
          ssl: env.bool('DATABASE_SSL', false),
        },
        debug: false,
      },
    });

Strapi requires the pg package to establish the connection with Postgres; add it using the command below:

    yarn add pg

You've successfully changed the SQLite database to PostgreSQL; test it by running:

    yarn develop

http://localhost:1337/admin

In production, do not run PostgreSQL as a Docker container; managing database operations such as backup, restore, and monitoring would be a hassle. Instead, delegate these tasks to the AWS Relational Database Service (RDS). A mismatch in database type or version between development and production can introduce problems, so keep them the same.

Moving forward, Elastic Beanstalk uses its own naming convention for database credentials. Here is an overview of what the names look like.

AWS RDS Naming Conventions
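Concretely, inside the Elastic Beanstalk environment the connection details arrive as RDS_* environment variables. A small sketch with illustrative values (the hostname and database name below are made up; Elastic Beanstalk injects the real ones automatically):

```shell
# Illustrative values only -- Elastic Beanstalk sets these for you
export RDS_HOSTNAME=aa1bb2cc3dd4.us-east-1.rds.amazonaws.com
export RDS_PORT=5432
export RDS_DB_NAME=ebdb
export RDS_USERNAME=postgres

# The Postgres connection Strapi will effectively use in production
echo "postgres://${RDS_USERNAME}@${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}"
```

This is why the production config file below reads RDS_HOSTNAME, RDS_PORT, and so on rather than the DATABASE_* names used in development.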

You'll need to create a new file that uses these expected names for the database credentials; in config/env/production/database.js, add these lines:

    module.exports = ({ env }) => ({
      connection: {
        client: 'postgres',
        connection: {
          host: env('RDS_HOSTNAME', ''),
          port: env.int('RDS_PORT', undefined),
          database: env('RDS_DB_NAME', ''),
          user: env('RDS_USERNAME', ''),
          password: env('RDS_PASSWORD', ''),
          ssl: env.bool('DATABASE_SSL', false)
        }
      }
    });

Here is an overview of how the config folder looks now.

Config Folder Structure

Build the Strapi Docker Image

You should currently be running the Strapi server directly on your local machine; now, you need to run Strapi as a Docker container. Create two Docker files: Dockerfile for production and Dockerfile.dev for development.

Here are the two files. Dockerfile:

    FROM node:16
    ENV NODE_ENV=production
    WORKDIR /opt/
    COPY ./package.json ./yarn.lock ./
    ENV PATH /opt/node_modules/.bin:$PATH
    RUN yarn install
    WORKDIR /opt/app
    COPY . .
    RUN yarn build
    EXPOSE 1337
    CMD ["yarn", "start"]

Dockerfile.dev:

    FROM node:16
    ENV NODE_ENV=development
    WORKDIR /opt/
    COPY ./package.json ./yarn.lock ./
    ENV PATH /opt/node_modules/.bin:$PATH
    RUN yarn install
    WORKDIR /opt/app
    COPY . .
    RUN yarn build
    EXPOSE 1337
    CMD ["yarn", "develop"]

If you do not understand any step in the Docker file creation, there is a detailed blog post by Simen Daehlin that will guide you.

The only differences between the two files are the NODE_ENV value (production vs. development) and the CMD, which runs yarn start in production and yarn develop in development.

Add .dockerignore to ignore these files during the build step:

    .tmp/
    .cache/
    .git/
    build/
    node_modules/

Build the Docker image, tagging it as bloggy:v1.0:

    docker build -t bloggy:v1.0 .

If you try to run the “bloggy” container, it won't connect to the Postgres container because, from Docker's perspective, they are running on different networks. Hence, you need to create a network and connect both containers to it.

You can create a network with the Docker CLI and connect both containers to it, but to simplify this, let's introduce Docker Compose.
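For reference, the manual Docker CLI route looks roughly like this (a sketch; strapi-net is a name chosen here for illustration, and it assumes the Postgres container from earlier is running and the bloggy image is built):

```shell
# Create a user-defined bridge network
docker network create strapi-net

# Attach the already-running Postgres container to it
docker network connect strapi-net strapi-bloggy-db

# Run the Strapi image on the same network; on a user-defined network,
# containers can reach each other by container name, so DATABASE_HOST
# points at strapi-bloggy-db instead of 127.0.0.1
docker run --name bloggy --network strapi-net -p 1337:1337 \
  -e DATABASE_HOST=strapi-bloggy-db bloggy:v1.0
```

With Docker Compose, all of this network wiring is declared once in a single file instead.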

Putting It All Together with Docker Compose

Docker Compose simplifies running multiple Docker containers from a single YAML file, including the configuration of volumes and networks.

You can create a docker-compose file from scratch or use an excellent tool called strapi-tool-dockerize to quickly generate it by answering a few questions.

strapi-tool-dockerize can generate either a Docker file or a docker-compose file.

To get started with strapi-tool-dockerize, run the following command:

    npx @strapi-community/dockerize

Provide the answers to the questions below as shown:

    ✔ Do you want to create a docker-compose file? 🐳 … No / Yes
    ✔ What environments do you want to configure? › Development
    ✔ Whats the name of the project? … strapi
    ✔ What database do you want to use? › PostgreSQL
    ✔ Database Host … localhost
    ✔ Database Name … strapi
    ✔ Database Username … postgres
    ✔ Database Password … ********
    ✔ Database Port … 5432

After you answer these questions, the tool adds two files to the project: docker-compose.yml and .env.

After running dockerize, check your .dockerignore file. The tool may add /data to the ignore list; we are using that folder in our app, so make sure to remove the entry. The file should look like the following:

    .tmp/
    .cache/
    .git/
    build/
    node_modules/
    .env

The generated Dockerfile is slightly different; you can revert it to our version. There is no big difference: the dockerize tool just adds some packages for sharp compatibility.

In docker-compose.yml, I edited the image value of the strapiDB service to postgres:13.6 to match the versions supported on AWS, so docker-compose.yml now looks like this:

If you are using an M1 Mac, change the platform value to linux/arm64/v8 instead of linux/amd64.

    version: '3'
    services:
      strapi:
        container_name: strapi
        build: .
        image: strapi:latest
        restart: unless-stopped
        env_file: .env
        environment:
          DATABASE_CLIENT: ${DATABASE_CLIENT}
          DATABASE_HOST: strapiDB
          DATABASE_NAME: ${DATABASE_NAME}
          DATABASE_USERNAME: ${DATABASE_USERNAME}
          DATABASE_PORT: ${DATABASE_PORT}
          JWT_SECRET: ${JWT_SECRET}
          ADMIN_JWT_SECRET: ${ADMIN_JWT_SECRET}
          DATABASE_PASSWORD: ${DATABASE_PASSWORD}
          NODE_ENV: ${NODE_ENV}
        volumes:
          - ./config:/opt/app/config
          - ./src:/opt/app/src
          - ./package.json:/opt/package.json
          - ./yarn.lock:/opt/yarn.lock
          - ./.env:/opt/app/.env
          - ./public/uploads:/opt/app/public/uploads
        ports:
          - '1337:1337'
        networks:
          - strapi
        depends_on:
          - strapiDB
      strapiDB:
        container_name: strapiDB
        platform: linux/arm64/v8 # for platform error on Apple M1 chips
        restart: unless-stopped
        env_file: .env
        image: postgres:13.6
        environment:
          POSTGRES_USER: ${DATABASE_USERNAME}
          POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
          POSTGRES_DB: ${DATABASE_NAME}
        volumes:
          - strapi-data:/var/lib/postgresql/data/ # using a volume
          #- ./data:/var/lib/postgresql/data/ # if you want to use a bind folder
        ports:
          - '5432:5432'
        networks:
          - strapi
    volumes:
      strapi-data:
    networks:
      strapi:
        name: Strapi
        driver: bridge

Docker Compose reads environment variables from the .env file. After strapi-dockerize adds the database variables, .env should look like the following:

    HOST=0.0.0.0
    PORT=1337
    APP_KEYS=zz1kt2QS2I7BBuP8EuIjlA==,L8XX/OEbybRFh40q8DzIng==,yt4yAvYgK83xycthu5yxtA==,X7Gcx1VVAUm8d+A7rTZ7Yw==
    API_TOKEN_SALT=MWPCH4U70a2E8ubTlAC6Yg==
    ADMIN_JWT_SECRET=hJXXOaTmQl8A4zXbiqTicQ==
    JWT_SECRET=aUnqqM5AwuUQAyxXE6LQnQ==
    # @strapi-community/dockerize variables
    DATABASE_HOST=localhost
    DATABASE_PORT=5432
    DATABASE_NAME=strapi
    DATABASE_USERNAME=postgres
    DATABASE_PASSWORD=postgres
    NODE_ENV=development
    DATABASE_CLIENT=postgres
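The keys above (APP_KEYS, JWT_SECRET, and so on) are random base64 strings; if you want to rotate them rather than reuse the ones shown here, you can generate fresh values yourself. One way, using openssl:

```shell
# 16 random bytes, base64-encoded -- the same shape as the secrets above
openssl rand -base64 16
```

Generate one value per key (APP_KEYS takes a comma-separated list of them).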

To build docker-compose, run:

    docker-compose build

To start all containers with docker-compose, run:

    docker-compose up

After that, you should see the Strapi container up and running, connected to the Postgres container.

proof

How to Deploy a Docker Container in AWS

AWS provides many services to deploy Docker containers; the most popular services are:

  • AWS App Runner
  • Elastic Container Service
  • Elastic Beanstalk

AWS App Runner is the simplest: you can set up and run a container without much work, but it is not available in all regions. Elastic Container Service (ECS) is a lower-level service for running and managing clusters of containers. In our case, we just need to run one container, so Elastic Beanstalk is a perfect choice, and it is available in all regions.

Deploying the Docker Image on Elastic Beanstalk

View this page before you go further: AWS Elastic Beanstalk FAQs - Amazon Web Services (AWS)

Elastic Beanstalk is a higher-level abstraction over compute (EC2), storage (S3), CloudWatch (logging and monitoring), and Elastic Load Balancing. It facilitates provisioning and managing the backend infrastructure.

Below are the steps to take to deploy the image in Elastic Beanstalk as a Docker container:

A flow diagram of the Elastic Beanstalk process

  1. Build and push the Docker image to Elastic Container Registry (ECR).
  2. Create a file, Dockerrun.aws.json, and upload it to an S3 bucket. You can reference it later when creating an Elastic Beanstalk environment to pull the image from ECR.
  3. Create the environment.
  4. Configure the environment.

Push the Strapi Docker Image to AWS ECR

You can use any container registry, such as Docker Hub or Google Container Registry. However, that would require extra fields in the “Dockerrun.aws.json” configuration file we are going to add, so AWS can authenticate to that service; with ECR, authentication works out of the box.

  1. In the ECR service, choose “Create repository”.

The Amazon ECR Repository Page

  2. Choose a unique name for the image.

Choosing the image name

  3. Leave the other config as it is, then choose “Create repository”. Create repository

  4. Click on “bloggy” under the repository name; it will display all images you have under that repository. View of all repositories

  5. Click on “View push commands”. This step makes the Docker CLI authenticate to your repository using the authentication token AWS provides.

“View push commands” displays the steps needed to push the image to the “bloggy” repository. Follow these steps to authenticate with the repository.

Steps

Setup a New IAM User

To get started with the AWS CLI, you need to obtain an Access Key ID and Secret Access Key. These keys allow the CLI to make programmatic calls, such as creating or updating resources. In our case, we need to push an image to the ECR repository. Start with the following steps.

  1. In AWS Dashboard, search and navigate to IAM. Click on “Users” in the sidebar.

Steps

  2. Click on “Add users”.

Addusers

  3. Choose an appropriate user name. Check “Select AWS credential type → Access key - Programmatic access”, then click on “Next: Permissions”.

credentialtype

  4. Choose “Attach existing policies directly”, check “AdministratorAccess”, then click on “Next: Tags”.

AdministratorAccess gives the created user full permission to manipulate all AWS resources, so you need to save the keys in a secret place. Alternatively, you can choose a policy scoped to ECR only; in that case, the CLI would not be allowed to affect any other resources or services.

attachpolicies

  5. Adding tags is optional, so you can skip this step by clicking on “Next: Preview”.

tags

  6. The preview page gives an overview of what we have done. After reviewing, click on “Create user”.

previewpage

  7. Download and save the keys on your PC by clicking on “Download .csv”.

download

Configure AWS CLI to Interact with AWS Resources and Services

Now that we have the Access Key ID and Secret Access Key, we can configure the AWS CLI by running the following command:

    aws configure

Add your AWS Access Key ID and AWS Secret Access Key.

  1. Run the first command so the Docker CLI can authenticate to the repository.

I am using Windows, but I can still use the macOS/Linux commands without any problems.

    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 391161446417.dkr.ecr.us-east-1.amazonaws.com

You should get the following result

result

  2. Run the docker build command.

     docker build -t bloggy:v1.0 .
    
  3. Run the tag command. I changed the tag name to v1.0. It is important not to keep building the image under the same tag (latest), because each build would override the previous one; when a deployment turns out unhealthy, you can then roll back to the last healthy version.

     docker tag bloggy:v1.0 391161446417.dkr.ecr.us-east-1.amazonaws.com/bloggy:v1.0
    
  4. Run the push command; it could take a while to upload.
     docker push 391161446417.dkr.ecr.us-east-1.amazonaws.com/bloggy:v1.0
    

Finally, the image is now in your repository. It will be pulled in the following steps.

bloggy
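The fully qualified image name used in the tag and push commands follows a fixed ECR pattern; here is a small sketch of how it is assembled (using the account ID, region, repository, and tag from this article):

```shell
# Pieces of an ECR image URI
ACCOUNT_ID=391161446417
REGION=us-east-1
REPOSITORY=bloggy
TAG=v1.0

# <account>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPOSITORY}:${TAG}"
echo "$IMAGE_URI"
```

The same URI is what you will put in the “Name” field of Dockerrun.aws.json later.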

Create an S3 Deployment Bucket

In the AWS console, search and navigate to S3 and follow the steps outlined below.

  1. Click on “Create bucket.”

Create a bucket button

  2. Choose the bucket name: “bloggy-eb-deployment”

Create a bucket page

  3. Click on “Create bucket”; you can leave the other options as they are.

Further configurations

  4. Create a “Dockerrun.aws.json” file. “Dockerrun.aws.json” is a configuration file Elastic Beanstalk needs in order to figure out where it can get the image, along with some metadata such as the container port. You can keep the file in the project folder; changing it does not require rebuilding or pushing the image.

    {
      "AWSEBDockerrunVersion": "1",
      "Image": {
        "Name": "391161446417.dkr.ecr.us-east-1.amazonaws.com/bloggy:v1.0",
        "Update": "true"
      },
      "Ports": [
        {
          "ContainerPort": "1337"
        }
      ]
    }

  5. Upload “Dockerrun.aws.json” to the bucket.

upload page

Choose the “Dockerrun.aws.json” file via “Add files”, then click “Upload”.

You should get the following result:

result

Before moving from S3 to Elastic Beanstalk, you need to get the object URL of “Dockerrun.aws.json” by clicking on “Dockerrun.aws.json” under the “Name” column in the previous view.

json file details page
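The object URL shown on that page follows a predictable pattern; a quick sketch of the format (assuming the bucket lives in us-east-1, as in this article):

```shell
# Pieces of an S3 object URL
BUCKET=bloggy-eb-deployment
REGION=us-east-1
KEY=Dockerrun.aws.json

# https://<bucket>.s3.<region>.amazonaws.com/<key>
OBJECT_URL="https://${BUCKET}.s3.${REGION}.amazonaws.com/${KEY}"
echo "$OBJECT_URL"
```

Keep this URL handy; you will paste it into the Elastic Beanstalk source code origin form below.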

Create Elastic Beanstalk Application

In the AWS console, search for and navigate to the Elastic Beanstalk page.

  1. Click on “Create Application.”

elastic beanstalk page

  2. Choose an application name, “bloggy”, and for Platform, choose Docker.

Further configurations

  3. Choose “Upload your code”.

application code page

  4. Under source code origin, change the version label to “bloggy-v1” and paste the object URL of “Dockerrun.aws.json” that you got in the previous step.

source code origin details

  5. Choose “Configure more options” to add extra configuration, such as the database engine, environment variables, etc.

configuration page

  6. In the capacity section, click on “Edit”.

  7. Remove “t2.micro” and keep only “t2.small”; t2.small is the minimum instance size Strapi requires to run properly. Click on “Save”, then continue with the other settings.

instance types details

  8. In the software section, click on “Edit”.

Details

  9. From the .env file, copy the values of APP_KEYS, API_TOKEN_SALT, ADMIN_JWT_SECRET, and JWT_SECRET, then add them as key-value pairs like below and click “Save”.

You do not have to add database credentials here; as stated above, Elastic Beanstalk sets them by default.

Environment properties details

  10. In the database section, click on “Edit” and set the following:

  • Engine: postgres
  • Engine version: 13.6 (same as local development)
  • Instance class: db.t4g.micro
  • Storage: 5 GB
  • Choose a username and password.

Leave other options as they are and click on “Save”.

screenshot

  11. Click on “Create app”.

Create app page

It takes a few minutes for AWS to create the database instance and the server instance. If everything goes fine, you get a Health status of “Ok” and the application URL.

server instance details page

How to Automate Deployment with CI/CD

To automate deployment in a real-world CI/CD environment, you need to follow three steps:

  1. Build and push a new docker image to ECR. Check out this link: Push to Amazon ECR · Actions · GitHub Marketplace
  2. Change image tag in “Dockerrun.aws.json”, then upload the file to S3. Check out this link: S3 File Upload · Actions · GitHub Marketplace
  3. Deploy a new version in Elastic Beanstalk. The following link could be helpful: Beanstalk Deploy · Actions · GitHub Marketplace
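Outside of GitHub Actions, the same three steps can be sketched with the Docker and AWS CLIs. A rough outline (the v1.1 tag and the environment name bloggy-env are assumptions here; substitute your own):

```shell
# 1. Build and push a new image version to ECR
docker build -t bloggy:v1.1 .
docker tag bloggy:v1.1 391161446417.dkr.ecr.us-east-1.amazonaws.com/bloggy:v1.1
docker push 391161446417.dkr.ecr.us-east-1.amazonaws.com/bloggy:v1.1

# 2. Upload Dockerrun.aws.json (with its image tag bumped to v1.1) to S3
aws s3 cp Dockerrun.aws.json s3://bloggy-eb-deployment/Dockerrun.aws.json

# 3. Register the new application version and deploy it to the environment
aws elasticbeanstalk create-application-version \
  --application-name bloggy \
  --version-label bloggy-v1.1 \
  --source-bundle S3Bucket=bloggy-eb-deployment,S3Key=Dockerrun.aws.json
aws elasticbeanstalk update-environment \
  --environment-name bloggy-env \
  --version-label bloggy-v1.1
```

The GitHub Actions linked above wrap exactly these calls, so you can mix and match them with your own scripts.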

Conclusion

In this tutorial, you learned how to deploy Strapi as a Docker container on AWS Elastic Beanstalk, connected to PostgreSQL on the Relational Database Service (RDS). You also saw an overview of how to automate these deployment steps in CI/CD pipelines such as GitHub Actions.

You can check out the full source code of the project here.

Resources