written by relyq
published on: 2023-05-19
dockerizing a webapp & deploying to terraformed aws infrastructure + github-based ci/cd pipeline
this project was partly based on the aws cloud resume challenge
the main consideration i took while doing this was that i already had my issue tracker webapp i planned on migrating to AWS, plus this website on the relyq.dev domain
this project page is a summary of the first few posts on my blog, and it basically consists of two parts:
infra
cicd
the first thing i did was create two EC2 t2.micro instances for my angular frontend and my dotnet API. on my homelab server i always use debian 11, but this time i went with amazon linux 2023 just to try amazon's distro - it's also supposed to be optimized for the cloud, so it should perform a bit better
i'm also terraforming my networking resources (vpc, subnet, security groups). i won't go into much detail about them here beyond a rough sketch further down - it's all available in my infrastructure repo if you want to take a look
resource "aws_instance" "api_server" {
ami = "ami-02396cdd13e9a1257"
instance_type = "t2.micro"
key_name = data.aws_key_pair.kp_w11.key_name
subnet_id = aws_subnet.tracker_subnet.id
vpc_security_group_ids = [aws_security_group.allow_api_port.id, aws_security_group.allow_ssh.id]
iam_instance_profile = data.aws_iam_instance_profile.tracker_ec2_instance_profile.name
user_data = data.cloudinit_config.api_config.rendered
tags = {
Name = "TrackerAPI"
}
}
resource "aws_instance" "frontend_server" {
ami = "ami-02396cdd13e9a1257"
instance_type = "t2.micro"
key_name = data.aws_key_pair.kp_w11.key_name
subnet_id = aws_subnet.tracker_subnet.id
vpc_security_group_ids = [aws_security_group.allow_https.id, aws_security_group.allow_ssh.id]
iam_instance_profile = data.aws_iam_instance_profile.tracker_ec2_instance_profile.name
user_data = data.cloudinit_config.frontend_config.rendered
tags = {
Name = "TrackerFrontend"
}
}
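as for the networking resources, here's a rough idea of their shape - this is just an illustrative sketch (the vpc name & CIDR blocks are made up, only tracker_subnet and the security group names match what the instances reference), the real definitions live in the infrastructure repo

# illustrative sketch - see the infrastructure repo for the real definitions
resource "aws_vpc" "tracker_vpc" {
  cidr_block = "10.0.0.0/16" # example CIDR
}

resource "aws_subnet" "tracker_subnet" {
  vpc_id                  = aws_vpc.tracker_vpc.id
  cidr_block              = "10.0.1.0/24" # example CIDR
  map_public_ip_on_launch = true          # the instances need public IPs for the DNS records below
}

resource "aws_security_group" "allow_ssh" {
  name   = "allow_ssh"
  vpc_id = aws_vpc.tracker_vpc.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}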
i store my secrets in AWS SSM parameter store
data "aws_ssm_parameter" "cert_relyq_dev_privkey" {
name = "relyq.dev-privkey"
with_decryption = true
}
data "aws_ssm_parameter" "porkbun_api-key" {
name = "porkbun_api-key"
with_decryption = true
}
...
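in case you're wondering how the secrets get into SSM in the first place, parameters like these can be created with the aws cli - roughly like this (a sketch, not the exact commands i ran)

# illustrative only
aws ssm put-parameter --name "relyq.dev-privkey" --type SecureString --value file://privkey.pem
aws ssm put-parameter --name "porkbun_api-key" --type SecureString --value "..."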
next i add two new DNS A records to point my domain at the new instances, using the porkbun provider
resource "porkbun_dns_record" "frontend_dns" {
domain = "relyq.dev"
name = "aws"
type = "A"
content = aws_instance.frontend_server.public_ip
notes = "autogenerated by terraform"
ttl = "600"
lifecycle {
replace_triggered_by = [ aws_instance.frontend_server.public_ip ]
}
}
resource "porkbun_dns_record" "api_dns" {
domain = "relyq.dev"
name = "aws-tracker-api"
type = "A"
content = aws_instance.api_server.public_ip
notes = "autogenerated by terraform"
ttl = "600"
lifecycle {
replace_triggered_by = [ aws_instance.api_server.public_ip ]
}
}
i'm using the following shell script, defined as a terraform local, to import my certificate
locals {
  import_cert_script = <<-EOT
    #!/bin/bash
    mkdir "/etc/pki/ca-trust/source/anchors/relyq.dev/"
    echo -e "${data.aws_acm_certificate.cert_relyq_dev.certificate}" > "/etc/pki/ca-trust/source/anchors/relyq.dev/cert.pem"
    echo -e "${data.aws_ssm_parameter.cert_relyq_dev_privkey.value}" > "/etc/pki/ca-trust/source/anchors/relyq.dev/privkey.pem"
    chmod -R 0700 "/etc/pki/ca-trust/source/anchors/relyq.dev"
    update-ca-trust
  EOT
}
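the certificate body referenced above comes from a plain ACM data source lookup - roughly something like this, though the exact filter arguments in my repo might differ

data "aws_acm_certificate" "cert_relyq_dev" {
  domain      = "relyq.dev"
  most_recent = true
}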
the import script then gets injected into my new instances using cloud-init
frontend cloud-init
data "cloudinit_config" "frontend_config" {
gzip = true
base64_encode = true
part {
filename = "import_cert.sh"
content_type = "text/x-shellscript"
content = <<-EOT
${local.import_cert_script}
chown -R nginx:nginx "/etc/pki/ca-trust/source/anchors/relyq.dev"
EOT
}
part {
filename = "init.yaml"
content_type = "text/cloud-config"
content = file("../cloud-init/frontend.yaml")
}
}
dotnet api cloud-init
here i'm also injecting my systemd unit file & the cron job that runs my demo cleaning script
data "cloudinit_config" "api_config" {
gzip = true
base64_encode = true
part {
filename = "import_cert.sh"
content_type = "text/x-shellscript"
content = <<-EOT
${local.import_cert_script}
chown -R tracker:tracker "/etc/pki/ca-trust/source/anchors/relyq.dev"
EOT
}
part {
filename = "import_unit_file.sh"
content_type = "text/x-shellscript"
content = <<-EOT
#!/bin/bash
echo "${data.aws_ssm_parameter.tracker_api-unit_file.value}" > "/etc/systemd/system/tracker.service"
EOT
}
part {
filename = "import_crontab.sh"
content_type = "text/x-shellscript"
content = <<-EOT
#!/bin/bash
crontab -l -u tracker > /home/ec2-user/cron_tracker
echo -e "Secrets__JanitorPassword='${data.aws_ssm_parameter.tracker-janitor_password.value}'" >> /home/ec2-user/cron_tracker
echo -e "Tracker__BaseUrl='https:--aws\x2dtracker\x2dapi.relyq.dev:7004'" >> /home/ec2-user/cron_tracker
echo -e "@daily /usr/bin/python3 /opt/tracker/api/scripts/demo_clean.py" >> /home/ec2-user/cron_tracker
crontab -u tracker /home/ec2-user/cron_tracker
rm /home/ec2-user/cron_tracker
EOT
}
part {
filename = "init.yaml"
content_type = "text/cloud-config"
content = file("../cloud-init/api.yaml")
}
}
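the unit file itself lives in SSM rather than in a repo, but it's a pretty standard systemd service - something along these lines (a sketch from memory, the env handling & exact options may differ)

# tracker.service (sketch)
[Unit]
Description=tracker api
After=network.target

[Service]
User=tracker
WorkingDirectory=/opt/tracker/api
ExecStart=/opt/tracker/api/Tracker
Restart=always

[Install]
WantedBy=multi-user.target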
for my frontend i'm basically installing nginx, writing out my nginx.conf (which has the cert setup), and pulling my latest build from my S3 bucket
#cloud-config
users:
  - default

package_update: true
package_upgrade: true
package_reboot_if_required: true

packages:
  - nginx

write_files:
  - path: /etc/nginx/nginx.conf
    permissions: "0644"
    defer: true
    content: |
      ...
      server {
          ...
          listen 443 ssl default_server;
          listen [::]:443 ssl default_server;
          ssl_certificate /etc/pki/ca-trust/source/anchors/relyq.dev/cert.pem;
          ssl_certificate_key /etc/pki/ca-trust/source/anchors/relyq.dev/privkey.pem;
          ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
          ssl_ciphers HIGH:!aNULL:!MD5;
          ...
      }

runcmd:
  - aws s3 cp --region us-east-1 s3://relyq-tracker-bucket/frontend/frontend.tar.gz /home/ec2-user/
  - mkdir /home/ec2-user/build
  - tar -xf /home/ec2-user/frontend.tar.gz -C /home/ec2-user/build/
  - rm -rf /usr/share/nginx/html/
  - mv /home/ec2-user/build/tracker/ /usr/share/nginx/html/
  - chmod +x /home/ec2-user/build/post_install.sh
  - /home/ec2-user/build/post_install.sh
  - rm -rf /home/ec2-user/build
  - rm -rf /home/ec2-user/frontend.tar.gz
  - systemctl start nginx
and for my dotnet api i'm just installing libicu (a dotnet runtime dependency) and cronie (for the python cron job), then pulling from my S3 bucket
#cloud-config
users:
  - default
  - name: tracker
    homedir: /opt/tracker
    shell: /bin/bash
    ssh_redirect_user: true

package_update: true
package_upgrade: true
package_reboot_if_required: true

packages:
  - cronie
  - libicu

runcmd:
  - aws s3 cp --region us-east-1 s3://relyq-tracker-bucket/api/api.tar.gz /home/ec2-user/
  - mkdir /home/ec2-user/build
  - tar -xf /home/ec2-user/api.tar.gz -C /home/ec2-user/build/
  - mv /home/ec2-user/build/publish/ /opt/tracker/api
  - chmod +x /home/ec2-user/build/post_install.sh
  - /home/ec2-user/build/post_install.sh
  - systemctl start tracker
my ci/cd pipeline starts with two github actions workflows that build each project together with its codedeploy appspec.yml & scripts, tar everything, and push it to my S3 bucket
here’s the gh action for my frontend
name: Node.js CI

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 18.x
        uses: actions/setup-node@v3
        with:
          node-version: 18.x
          cache: 'npm'
      - run: npm ci
      - run: npm run build --if-present
      - name: copy codedeploy files
        run: cp .aws/* dist/
      - name: tar artifact
        id: tar
        run: mkdir build && tar -czf build/frontend.tar.gz -C dist .
      - name: upload tar.gz build artifact to s3
        uses: jakejarvis/s3-sync-action@master
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1'
          SOURCE_DIR: 'build'
          DEST_DIR: 'frontend'
      - name: upload build artifact to github
        uses: actions/upload-artifact@v3
        with:
          name: dist
          path: dist/tracker/
and the one for my dotnet api
name: .NET

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Setup .NET
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: 6.0.x
      - name: publish
        run: dotnet publish --self-contained -r linux-x64
      - name: copy codedeploy files
        run: cp .aws/* bin/Debug/net6.0/linux-x64/
      - name: tar artifact
        id: tar
        run: mkdir build && tar -czf build/api.tar.gz -C bin/Debug/net6.0/linux-x64/ publish appspec.yml clean.sh post_install.sh start_api.sh stop_api.sh
      - name: upload tar.gz build artifact to s3
        uses: jakejarvis/s3-sync-action@master
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1'
          SOURCE_DIR: 'build'
          DEST_DIR: 'api'
      - name: upload build artifact to github
        uses: actions/upload-artifact@v3
        with:
          name: build
          path: bin/Debug/net6.0/linux-x64/publish
next in the pipeline i have two AWS lambdas set up to trigger on S3 bucket updates. these lambdas create a new codedeploy deployment, which copies the build artifact to the ec2 instances and runs the scripts defined in the appspec.yml files
import boto3

codedeploy = boto3.client('codedeploy')

def lambda_handler(event, context):
    return codedeploy.create_deployment(
        applicationName='tracker',
        deploymentGroupName='[api/frontend]',
        revision={
            'revisionType': 'S3',
            's3Location': {
                'bucket': '[tracker_bucket]',
                'key': '[api/frontend]/artifact.tar.gz',
                'bundleType': 'tgz'
            }
        },
    )
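the trigger wiring itself is just an s3 bucket notification plus a lambda invoke permission - in terraform it would boil down to something like this (a sketch, the resource names here are illustrative)

resource "aws_lambda_permission" "allow_bucket" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.deploy_api.function_name # hypothetical name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.tracker_bucket.arn             # hypothetical name
}

resource "aws_s3_bucket_notification" "artifact_updated" {
  bucket = aws_s3_bucket.tracker_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.deploy_api.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "api/"
  }
}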
the final steps in my ci/cd pipeline are two aws codedeploy deployment groups, which copy my build artifacts to my instances using the following appspec.yml files & scripts
frontend deployment
# appspec.yml
version: 0.0
os: linux
files:
  - source: ./tracker/
    destination: /usr/share/nginx/html/
file_exists_behavior: OVERWRITE
hooks:
  BeforeInstall:
    - location: ./clean.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: ./post_install.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: ./start_nginx.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: ./stop_nginx.sh
      timeout: 300
      runas: root
# clean.sh
#!/bin/bash
rm -rf /usr/share/nginx/html/*
# post_install.sh
#!/bin/bash
chown -R nginx:nginx /usr/share/nginx
chmod -R 0755 /usr/share/nginx
# start_nginx.sh
#!/bin/bash
systemctl start nginx
# stop_nginx.sh
#!/bin/bash
systemctl stop nginx
dotnet api deployment
# appspec.yml
version: 0.0
os: linux
files:
  - source: ./publish/
    destination: /opt/tracker/api/
file_exists_behavior: OVERWRITE
hooks:
  BeforeInstall:
    - location: ./clean.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: ./post_install.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: ./start_api.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: ./stop_api.sh
      timeout: 300
      runas: root
# clean.sh
#!/bin/bash
rm -rf /opt/tracker/api/*
# post_install.sh
#!/bin/bash
chown -R tracker:tracker /opt/tracker/api
chmod -R 0755 /opt/tracker/api
mv /opt/tracker/api/appsettings.aws.json /opt/tracker/api/appsettings.json
mv /opt/tracker/api/appsettings.aws.Development.json /opt/tracker/api/appsettings.Development.json
chmod +x /opt/tracker/api/scripts/*.py
chmod +x /opt/tracker/api/Tracker
# start_api.sh
#!/bin/bash
systemctl start tracker
# stop_api.sh
#!/bin/bash
systemctl stop tracker
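on the aws side, the codedeploy app & deployment groups are pretty small - i won't claim this is exactly what's in my repo, but in terraform they look roughly like this, targeting the instances by their Name tag (the service role is assumed to exist)

resource "aws_codedeploy_app" "tracker" {
  name = "tracker"
}

resource "aws_codedeploy_deployment_group" "api" {
  app_name              = aws_codedeploy_app.tracker.name
  deployment_group_name = "api"
  service_role_arn      = aws_iam_role.codedeploy.arn # hypothetical role name

  ec2_tag_set {
    ec2_tag_filter {
      key   = "Name"
      type  = "KEY_AND_VALUE"
      value = "TrackerAPI"
    }
  }
}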
installing the services directly on the host like this is a nice thing to know how to do, but it's just not how things are usually done anymore. deploying the app as separate docker containers, one per service, keeps things consistent and makes it easy to run everything on a single instance - which is cheaper & lets me keep it running 24/7 while staying in the free tier
after learning iac, writing dockerfiles is more or less trivial. here are the dockerfiles for both containers
for the dotnet container i compile the api using microsoft’s dotnet sdk image, then run it on the asp.net runtime image
# syntax=docker/dockerfile:1
FROM mcr.microsoft.com/dotnet/sdk:6.0 as build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -o /publish
FROM mcr.microsoft.com/dotnet/aspnet:6.0 as runtime
RUN apt-get update && apt-get install -y python3
WORKDIR /publish
COPY --from=build /publish .
RUN chmod +x scripts/*
EXPOSE 7004
ENTRYPOINT ["dotnet", "Tracker.dll"]
and for the angular frontend i build it with node and run it on an nginx image. i keep the nginx.conf file in the repo and copy it into the container here
# syntax=docker/dockerfile:1
FROM node:latest as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
FROM nginx:alpine
COPY --from=node /app/dist/tracker /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
EXPOSE 443
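that nginx.conf is essentially the same server block as the one from the cloud-init config earlier, just pointing at the bind-mounted cert and the angular bundle - roughly like this (a trimmed sketch; the filenames under /etc/ssl depend on what the compose bind mount provides)

# sketch of the container nginx.conf server block
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/cert.pem;     # from the bind mount
    ssl_certificate_key /etc/ssl/privkey.pem;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html;      # angular client-side routing
    }
}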
i orchestrate my containers with a simple docker compose file
there’s not much to say about this - the most important thing here is setting up the bind mount for the ssl certificate & the environment variables
version: "3.9"
services:
nginx:
image: "relyq/tracker-nginx:master"
volumes:
- "${TRACKER_CERT_PATH}:/etc/ssl/"
ports:
- "80:80"
- "443:443"
api:
image: "relyq/tracker-dotnet:master"
ports:
- "7004:7004"
volumes:
- "${TRACKER_CERT_PATH}:/etc/ssl/"
environment:
- ASPNETCORE_URLS=https://+:7004
- ASPNETCORE_HTTPS_PORT=7004
- ASPNETCORE_CONTENTROOT=/publish
- ASPNETCORE_ENVIRONMENT
- Secrets__SQLConnection
- Jwt__Key
- Secrets__SMTPPassword
- Tracker__BaseUrl
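the bare variable names at the bottom (ASPNETCORE_ENVIRONMENT, Secrets__SQLConnection, etc.) get passed through from the host environment - they come from an env file on the host (/root/.tracker.env, which you'll see sourced further down). it looks something like this, with the values obviously made up here

# /root/.tracker.env - illustrative values only
export TRACKER_CERT_PATH=/etc/pki/ca-trust/source/anchors/relyq.dev
export ASPNETCORE_ENVIRONMENT=Production
export Secrets__SQLConnection='...'
export Jwt__Key='...'
export Secrets__SMTPPassword='...'
export Tracker__BaseUrl='https://...'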
i also had to update my terraform code, since before this i was getting my build artifacts from an S3 bucket & running my services natively
the main change is the removal of all the code to build or pull the artifacts - the two services now live on a single instance that only has to pull the docker images from dockerhub and run docker compose
i don't even have to pull the images explicitly, as docker compose takes care of that by itself. other than that, i just get my docker-compose.yml and docker_update.sh files from my S3 bucket
#cloud-config
users:
  - default
  - name: tracker
    homedir: /opt/tracker
    shell: /bin/bash
    ssh_redirect_user: true

package_update: true
package_upgrade: true
package_reboot_if_required: true

packages:
  - cronie
  - docker

runcmd:
  - mkdir -p /usr/local/lib/docker/cli-plugins/
  - curl -SL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 -o /usr/local/lib/docker/cli-plugins/docker-compose
  - chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
  - aws s3 cp s3://relyq-tracker-bucket/docker-compose.yml /opt/tracker/docker-compose.yml
  - aws s3 cp s3://relyq-tracker-bucket/docker_update.sh /opt/tracker/docker_update.sh
  - chmod +x /opt/tracker/docker_update.sh
  - source /root/.tracker.env
  - systemctl start docker
  - docker compose -f /opt/tracker/docker-compose.yml up -d
now all my build code has moved into my two dockerfiles, and the github actions workflows look the same for both services, since their only job is to build the image and push it to dockerhub. once that's done, the workflow sends an update command over SSH which runs my update script
name: docker

on:
  push:
    branches: ["master"]
  pull_request:
    branches: ["master"]
  workflow_dispatch:

jobs:
  push_docker:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: relyq/tracker-dotnet
      - name: Build and push Docker image
        uses: docker/build-push-action@v4
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      - name: pull image & restart containers
        run: |
          eval `ssh-agent -s` &&
          ssh-add - <<< "${{ secrets.SSH_KEY }}" &&
          ssh -o StrictHostKeyChecking=no ${{ secrets.SSH_HOST }} "sudo /opt/tracker/docker_update.sh"
here’s the update script - it just sources the env, pulls the images, and restarts docker compose
#!/bin/bash
ENV_FILE_PATH=/root/.tracker.env
docker pull relyq/tracker-nginx:master
docker pull relyq/tracker-dotnet:master
source "$ENV_FILE_PATH"
docker compose -f /opt/tracker/docker-compose.yml up --force-recreate --build -d
@relyq on discord!!!