New blog, who dis?

In this installment of burning time doing stuff that doesn't really need to be done, we'll take a look at replacing all the tooling I used to build my last blog, which was working perfectly fine!

New Blog, Who Dis?

To be totally honest, this entire thing came about because I could not remember what all needed to be installed to get Jekyll working on a new laptop to write posts, and I was tired of fighting Ruby gem install issues. Ultimately, I wanted a platform for writing posts that stayed out of the way and just enabled me to write content more easily. I already have plenty of reasons NOT to write posts, ha ha ha.

Everything from my previous blog post about auto-deploying my blog is still accurate; however, I switched from Jekyll to Hugo. I will be doing a follow-up to this where I set up converting/serving media, and live streaming. I realize I can do all of this through Twitch and YouTube, but I want to do it all myself.

Hugo

Hugo is a Go-based blog tool. It uses Markdown and theming to generate blog pages. I realllllly like it a lot. To install it, you literally just need Go, and installing Go takes seconds. Then to install Hugo (and check that it worked), simply run:

go install -tags extended github.com/gohugoio/hugo@latest
hugo version

After that, you can create a site and start blogging. Theming can be complicated if you don't understand Go modules, but it's no more complicated than Jekyll, and it's significantly easier to set up and use.
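
If you want to see it end to end, here's roughly what spinning up a new site looks like (the site and post names below are just placeholders):

hugo new site my-blog
cd my-blog
hugo new posts/hello-world.md
hugo server -D

hugo server watches for changes and rebuilds the site near-instantly, which is a big part of why writing posts feels so frictionless.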

Continuous Deployment

IAM User & Policy

Since the only features I'm using are listing/syncing files to an S3 bucket and invalidating the CloudFront cache (this is my CDN), the permissions are pretty straightforward. I created an IAM user and attached this policy.

I have redacted some fields; if you go down this path, make sure you put real resources into those fields. It should be obvious which ones.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::$BUCKET_NAME"
            ]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": [
                "arn:aws:s3:::$BUCKET_NAME/*"
            ]
        },
        {
            "Sid": "ManageCloudFrontInvalidations",
            "Effect": "Allow",
            "Action": [
                "cloudfront:CreateInvalidation",
                "cloudfront:GetInvalidation",
                "cloudfront:ListInvalidations"
            ],
            "Resource": "$DISTRIBUTION_RESOURCE"
        }
    ]
}
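
If you prefer the CLI over the console for this part, creating the user and attaching the policy looks roughly like this (the user and policy names are placeholders I made up, and policy.json is the document above):

aws iam create-user --user-name blog-deployer
aws iam put-user-policy --user-name blog-deployer --policy-name blog-deploy --policy-document file://policy.json
aws iam create-access-key --user-name blog-deployer

The last command prints the access key ID and secret that the pipeline below uses as credentials.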

Base Image

What I generally do for clients and employers is build a set of CI base images. Each one serves a specific group of needs; I may build one per language version in the case of Rust, Go, NodeJS, or PHP. This allows me to group code quality tools, configurations, etc. into an easy-to-use bundle so nobody has to install anything. Simply add configuration files to your repository and extend a preconfigured job.
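
To make "extend a preconfigured job" concrete, in GitLab CI that's the extends keyword pointing at a hidden job template; a rough sketch (the job names and script here are hypothetical):

.rust-quality:
  image: $HOST/ops/ci-base:rust
  script:
    - cargo fmt --check
    - cargo clippy -- -D warnings

lint:
  extends: .rust-quality

Jobs prefixed with a dot never run on their own, so a base image can ship a whole catalog of them for projects to opt into.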

ops/ci-base:hugo

This image is used solely for building my blog, so all it needs to have on it is Go and Hugo.

FROM golang:1.20

# CGO is required to build Hugo's extended edition (Sass/SCSS support)
RUN CGO_ENABLED=1 go install --tags extended github.com/gohugoio/hugo@latest
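
Building and pushing it to the registry is the usual Docker dance ($HOST stands in for my registry hostname, same as in the pipeline below):

docker build -t $HOST/ops/ci-base:hugo .
docker push $HOST/ops/ci-base:hugo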

ops/ci-base:aws

Some people might ask why you would want to build/install the AWS CLI instead of using the existing AWS CLI Docker images. There are several reasons, but the primary rationale in my mind is that I will likely want to copy in many scripts for common AWS tasks, and GitLab also requires the image's entrypoint to be a shell. Since the awscli and hugo images don't do this, I build my own so I can bundle tools with them.

FROM ubuntu:22.10

RUN apt-get update && apt-get install -y curl zip unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip
RUN ./aws/install
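
For what it's worth, GitLab can also override an image's entrypoint per job, so the official image is usable if you don't need bundled scripts; a sketch of that alternative (untested on my end):

deploy:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]

I still prefer my own image, since it gives me a place to bundle those common-task scripts.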

Deployment

I'm still using a GitLab runner to deploy my blog. GitLab is very easy and just gets out of your way, providing many tools and options for you to use, but it's all opt-in; it doesn't force you to do anything. In fact, to be able to perform the build, deploy, and CDN cache invalidation, I simply set my AWS credentials as environment variables (in either the runner configuration, the group/project configuration, the system, or the pipeline) and then add the following file to the repository.

On each push, the .gitlab-ci.yml file gets evaluated and, based on the rules in the file, it will execute any number of jobs in a specific order, under whatever conditions I specify. The documentation for pipelines is available here: https://docs.gitlab.com/ee/ci/pipelines/.

My .gitlab-ci.yml

This will first execute a build job, which builds the static files for the site. Then, if it succeeds, the deploy job runs. The deploy job is the most interesting: it syncs the files from the local public/ directory to the remote bucket, then invalidates the CDN cache for the whole website.

stages:
  - build
  - deploy

cache:
  key: $CI_COMMIT_SHA
  untracked: true

build:
  stage: build
  image: $HOST/ops/ci-base:hugo
  script:
    - hugo
    
deploy:
  stage: deploy
  image: $HOST/ops/ci-base:aws
  script:
    - aws s3 sync public/ s3://$BUCKET_NAME
    - aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths "/*"
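
Since this runs on every push, one refinement worth noting: a rules block can limit the deploy job to the default branch. A sketch using GitLab's predefined variables (this would be added to the deploy job above):

deploy:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH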

Where to Go from Here

Well, I recently started submitting my blog's sitemap to Google so the site gets indexed. I will likely automate this process, as well as write more about how I intend to serve video on demand on my site.
