Automated Deployments

Deploying to our host environment takes a lot of manual steps: logging in, navigating to the folder, deleting files, and adding the new files. An FTP client can automate much of this for us, but we still need to manually open the software, connect, and upload. What we need is something that works in the background, triggered by a process we are already doing. Usually we set up continuous integration (CI) to build any time a commit is added to a repository's branch.

Our new host allows us to deploy whenever our Git repository changes. Although our website currently has static content, that content is not within our repository. We only have the source code – not the final artifacts built from that code. We need to set up a build process so that each time our repository changes, the source code is compiled and the build artifacts are deployed to our website. This is Continuous Integration/Deployment (CI/CD).

Git Flow

Before we jump into build scripts, let's prepare our environment with Git Flow. There are other branching strategies such as GitHub Flow and GitLab Flow. For those of you who are interested, a git-flow cheatsheet is available. In simplest terms, this process adds multiple branches to your repository: primarily master (which we already have by default) and develop. Our build server will only look at changes on master and deploy those changes to app.periplux.io. We could take it a step further and deploy changes on the develop branch to dev.periplux.io so that we could test upcoming changes without interfering with the production environment. To keep develop stable, many teams work in feature branches based on develop and merge them back into develop once the feature is complete. If feature branches tend to hang around for a long time (a few months to a few years), you could skip merging into develop and instead merge feature branches into a new release branch for staging, but you'll need to merge that release branch into both develop and master so that everyone is up to date. Either way works – the main mantra is that master should always reflect what has been deployed to the production environment.
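
As a sketch of the feature-branch cycle described above: the feature name is hypothetical, and the commands below run inside a throwaway repo so they are self-contained. In a real project you would already have master and develop.

```shell
# Throwaway repo standing in for a real project
demo=$(mktemp -d) && cd "$demo"
git init -q -b master . && git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "initial commit"
git switch -q -c develop                  # develop branches off master

# One feature cycle: branch, work, merge back with --no-ff
git switch -q -c feature/contact-form     # hypothetical feature name
git commit -q --allow-empty -m "feature work"   # stand-in for real feature work
git switch -q develop
git merge -q --no-ff feature/contact-form -m "merge feature/contact-form"
git branch -d feature/contact-form
```

The --no-ff flag keeps a merge commit even when a fast-forward would be possible, so the feature's history stays visible on develop.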

How do you set up Git Flow in an existing project?

  • Manual: git switch -c develop origin/master
  • Automated: git flow init
  • GitKraken: Repo-Specific Preferences -> Gitflow

Walking through the git flow initialization, you can see more details of how git flow works:

$ git flow init

Which branch should be used for bringing forth production releases?
    - master
Branch name for production releases: [master]
Branch name for "next release" development: [develop]

How to name your supporting branch prefixes?
Feature branches? [feature/]
Bugfix branches? [bugfix/]
Release branches? [release/]
Hotfix branches? [hotfix/]
Support branches? [support/]
Version tag prefix? []
Hooks and filters directory? [D: /git-flow/.git/hooks]

My GUI tool of choice is GitKraken since it is available on Mac, Linux, and Windows (one of the selling points for me, as I could consolidate my GUI environments and only pay for one piece of software across machines). There is a free tier, but I pay for added features. You simply go to your repository's preferences and initialize Gitflow.

(Screenshots: Preferences, Initializing, Branching)

Once we have our develop branch, we push it to origin with git push -u origin develop.

GitHub Actions

GitHub Actions watches your repository and performs various tasks within a virtual machine. We want to do the following:

  • Build a node.js project
  • Deploy via FTP

The majority of prebuilt actions let us build a Node project and deploy to various cloud services (Azure, ECS, GKE, Kubernetes, OpenShift). FTP is a bit of old-school tech.

Keeping Secrets

We don’t want sensitive information in the repository itself. For this, we add secrets to GitHub. Go to your repository's security section and add secrets for your actions.

We have both secrets and variables. Add the credentials as secrets and the non-sensitive host as variables.
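
If you prefer the command line over the web UI, the GitHub CLI can set the same values. These are repository-setup commands, not something to run blindly: they require an authenticated gh inside the repo directory, and the example values are hypothetical (the names match the workflow later in this post).

```shell
# Requires an authenticated GitHub CLI (gh auth login), run inside the repo
gh secret set FTP_APP_USERNAME --body "deploy-user"     # hypothetical value
gh secret set FTP_APP_PASSWORD                          # prompts for the value
gh variable set FTP_APP_HOST --body "ftp.example.com"   # hypothetical host
```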

YAML Ain’t Markup Language

GitHub Actions uses YAML files to define the jobs in a workflow. Depending on how you set things up, a workflow may have multiple jobs running in parallel to speed things up. I’ve worked on one system that took 24 hours to run tests, but we were able to split the tests and distribute them to multiple machines, reducing the build time to an hour. We are not getting that complex here… We are just separating our workflow into two jobs – build and deploy. Each job may run on a different machine, and each starts with a fresh environment with no previous build output, specific Node installation, or npm packages.

Going with a free plan, I only have access to a finite number of resources, so I’ve added some safeguards to kill the job if it takes too long to build or deploy. This should bring immediate attention to work out what is going wrong if my build or deployments get out of hand, and prevent a hanging job from using up all of my minutes.

Limitations

  • Artifacts and logs retained for 90 days max
  • 20 concurrent jobs
  • 2,000 minutes per month. Minute multipliers vary by platform (Linux 1x, Windows 2x, macOS 10x), giving effective limits of:
    • Linux: 2,000 max
    • Windows: 1,000 max
    • macOS: 200 max
  • 10 GB Cache
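
The per-platform limits in that list fall out of the minute multipliers. Assuming the free plan's 2,000 base minutes and the 1x/2x/10x multipliers, the math is just division:

```shell
base=2000               # free-plan minutes per month
linux=$((base / 1))     # Linux minutes bill at 1x
windows=$((base / 2))   # Windows minutes bill at 2x
macos=$((base / 10))    # macOS minutes bill at 10x
echo "Linux: $linux, Windows: $windows, macOS: $macos"
# → Linux: 2000, Windows: 1000, macOS: 200
```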

Here is an overview of what I’ve got.

  • Any time a commit is pushed to the master branch, the action runs.
  • The Build job runs on Linux
    • Fails if it runs for more than 2 minutes
    • Checks out the latest code
    • Switches to the Node version specified in my .nvmrc file (v22.2.0)
    • Installs package dependencies from npmjs.org
    • Runs my build script npm run build
    • Compresses files in the /dist/ folder
    • Uploads the artifact to the individual run
  • The Deploy job runs on Linux
    • Depends on build to complete
    • Only works with the latest commit on the master branch
    • Fails if it runs for more than 1 minute
    • Grabs the artifact that was just uploaded and decompresses to local folder
    • Connects to FTP using Secrets and Variables
    • Forces SSL
    • Deletes all files on FTP server
    • Uploads all files in local folder to FTP server

At most, a run will take 3 minutes, which means I could run 666 jobs per month at the limit. Normally these finish in about 1 minute, so I could potentially run just under 2,000 jobs per month. This is way more than enough. The master branch rarely gets an update, and develop shouldn’t change often once feature branches come into play – it only gets a commit when a feature is complete.
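
The back-of-the-envelope capacity math above is integer division against the 2,000 Linux minutes:

```shell
minutes=2000                # monthly free-plan Linux minutes
worst=$((minutes / 3))      # 2-minute build + 1-minute deploy ceiling
typical=$((minutes / 1))    # runs normally finish in about a minute
echo "worst case: $worst jobs, typical: $typical jobs"
# → worst case: 666 jobs, typical: 2000 jobs
```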

on:
  push:
    branches:
      - master

name: Deploy on push
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    timeout-minutes: 2 # Limited server resources

    steps:
      - name: Get latest code
        uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version-file: '.nvmrc'

      - name: Install Dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Upload Artifact
        uses: actions/upload-artifact@v4
        with:
          name: production-files
          path: ./dist

  deploy:
    name: Deploy
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/master'
    timeout-minutes: 1 # Don't let FTP hang

    steps:

      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: production-files
      
      - name: Deploy via FTP
        uses: sebastianpopp/ftp-action@releases/v2
        with:
          host: ${{ vars.FTP_APP_HOST }}
          user: ${{ secrets.FTP_APP_USERNAME }}
          password: ${{ secrets.FTP_APP_PASSWORD }}
          forceSsl: 'true'
          options: "--delete"

Along the way, I started running into errors about missing certificates. I only need them for my local development environment when running Vite as a server, so I changed the logic so that those files are only read when running as a server. In addition, I moved the content of the webmanifest into its own file.

/* eslint-env node */
import { defineConfig } from 'vite';
import compression from 'vite-plugin-compression';
import { readFileSync } from 'fs';
import { VitePWA } from 'vite-plugin-pwa';
import manifest from './manifest';

export default defineConfig(({ command }) => ({
  server:
    command === 'serve'
      ? {
          host: '0.0.0.0',
          port: 5173,
          https: {
            key: readFileSync('./localhost.key'),
            cert: readFileSync('./localhost.crt'),
          },
        }
      : undefined,
  build: {
    sourcemap: true,
  },
  plugins: [
    compression({
      algorithm: 'brotliCompress',
      ext: '.br',
      threshold: 1040,
      deleteOriginFile: false,
    }),
    VitePWA({
      registerType: 'autoUpdate',
      devOptions: {
        enabled: true,
      },
      manifest
    }),
  ],
}));

Eventually we could add more steps such as

  • Linting
  • Check for outdated packages
  • Perform package auditing
  • Versioning
  • Unit testing
  • Code coverage
  • Deployed package analysis of bundled files
  • Accessibility reports
  • Compiling documentation
  • NPM Package deployment
  • Deploy reports for online review
  • Confirming the deployment was successful
  • Integration testing
  • PR Builds

We are limited on resources – minutes. Much of this stuff can already be done locally in the development environment. Deploying to production was the main target for today.

The other thing I want to do is set up a development version on Hostinger so that I can see and test changes before we deploy to master.

  • Add subdomain dev.periplux.io (Domains \ Subdomains)
  • Add FTP account (Files \ FTP Accounts)
  • Create new account
    • Username
    • Password
    • Directory: /public_html/dev.periplux.io/

As a result of having two target environments, I decided to name my variables differently.

  FTP        Development        Production
  Host       DEV_FTP_HOST       PROD_FTP_HOST
  Username   DEV_FTP_USER       PROD_FTP_USER
  Password   DEV_FTP_PASSWORD   PROD_FTP_PASSWORD

That should keep things fairly generic. I also renamed my two workflows:

  • Development Build
  • Production Build

Here is the development workflow

on:
  push:
    branches:
      - develop

name: Development Build
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    timeout-minutes: 2 # Limited server resources

    steps:
      - name: Get latest code
        uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version-file: '.nvmrc'

      - name: Install Dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Upload Artifact
        uses: actions/upload-artifact@v4
        with:
          name: develop-files
          path: ./dist

  deploy:
    name: Deploy
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/develop'
    timeout-minutes: 1 # Don't let FTP hang

    steps:
      - name: Download artifact
        uses: actions/download-artifact@v4
        with:
          name: develop-files

      - name: Deploy via FTP
        uses: sebastianpopp/ftp-action@releases/v2
        with:
          host: ${{ vars.DEV_FTP_HOST }}
          user: ${{ secrets.DEV_FTP_USER }}
          password: ${{ secrets.DEV_FTP_PASSWORD }}
          forceSsl: 'true'
          options: '--delete'

It’s practically the same as the production build, except it works with a different branch and uses different secrets.

For some reason, my service worker was caching files with the incorrect MIME types. I traced it down to how I configured my server to serve precompiled Brotli files. For anyone interested, here is how I’m doing it after the fix:

# php -- BEGIN cPanel-generated handler, do not edit
# Set the "ea-php83" package as the default "PHP" programming language.
<IfModule mime_module>
  AddHandler application/x-httpd-ea-php83___lsphp .php .php8 .phtml
</IfModule>
# php -- END cPanel-generated handler, do not edit

# Cache Control Header for 1 year
<FilesMatch "\.(css|js|png|svg|ogg|jpg|jpeg|gif|ico)$">
  Header set Cache-Control "max-age=31536000, public"
</FilesMatch>

<IfModule mod_rewrite.c>
  # Serve Brotli Compressed Files
  RewriteEngine On
  RewriteCond %{HTTP:Accept-Encoding} br
  RewriteCond %{REQUEST_FILENAME}.br -f
  RewriteRule ^(.+)\.(css|js|html|svg|xml) $1.$2.br [QSA]
  RewriteRule \.css\.br$ - [T=text/css,E=no-gzip:1]
  RewriteRule \.js\.br$ - [T=application/javascript,E=no-gzip:1]
  RewriteRule \.html\.br$ - [T=text/html,E=no-gzip:1]
  RewriteRule \.svg\.br$ - [T=image/svg+xml,E=no-gzip:1]
  RewriteRule \.xml\.br$ - [T=application/xml,E=no-gzip:1]
  <FilesMatch "\.(css|js|html|svg|xml)\.br$">
    Header set Content-Encoding br
    Header append Vary Accept-Encoding
  </FilesMatch>
</IfModule>

<IfModule mod_negotiation.c>
    Options -MultiViews
    AddEncoding br .br
    AddType text/html .html
    AddType text/html .html.br
    AddType text/css .css
    AddType text/css .css.br
    AddType application/javascript .js
    AddType application/javascript .js.br
    <IfModule mod_mime.c>
        AddCharset utf-8 .html .css .js
    </IfModule>
</IfModule>

<IfModule mod_mime.c>
  AddType application/manifest+json .webmanifest
  AddType application/javascript .js
  AddType application/javascript .js.br
  AddType text/css .css
  AddType text/css .css.br
  AddType text/html .html
  AddType text/html .html.br
  AddType image/svg+xml .svg
  AddType image/svg+xml .svg.br
  AddType application/xml .xml
  AddType application/xml .xml.br
</IfModule>

Production Branch

One last thing I decided to do was to rename the master branch to production. From experience, everyone should know that the master branch is the production branch – so why not name it as such? It may reduce the risk of someone thinking it’s the main branch to develop from. Usually the branching strategy is explained before you touch code in a repository, but naming the branch production gives everyone immediate clarity on what it is for.
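
The rename itself is a sketch like the following. It runs in a throwaway repo with a local bare repository standing in for GitHub so it is self-contained; on GitHub you would change the default branch in the repository settings before deleting master.

```shell
# Throwaway repo with a local bare "remote" standing in for GitHub
sandbox=$(mktemp -d) && cd "$sandbox"
git init -q --bare remote.git
git init -q -b master work && cd work
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "initial commit"
git remote add origin ../remote.git
git push -q -u origin master

# The rename: move the local branch, publish it, then retire master
git branch -m master production
git push -q -u origin production
# Local stand-in for switching the default branch in GitHub's settings
git --git-dir=../remote.git symbolic-ref HEAD refs/heads/production
git push -q origin --delete master
```

After the rename, anything that still points at master has to follow – the workflow trigger (branches: - master), the deploy job's if condition, and any branch protection rules.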

Wrap Up

What did we do today?

  • Set up Git Flow in our repository
  • Set up GitHub secrets
  • Set up GitHub Actions to trigger on commits to the production/develop branches
    • Build and deploy our code to development
    • Build and deploy our code to production
  • Addressed Brotli compression issues with incorrect MIME types
  • Modified our code to be buildable on a machine without server resources (certificates)

Other things I did today: I converted a three-page DOCX form into a web form on another WordPress site, printable without the site's graphical theme.
