While setting up this site I looked around and found several different sets of instructions for how to do things. I had played around with Jekyll a bit before and used GitHub Pages for some small things, so using Jekyll to build the site seemed like a good way to get it up and running fairly quickly.

Since I had recently rebuilt the CDN configuration at work I also thought: let’s make this small site able to handle more traffic than it will ever get. It isn’t completely free, but close enough while the traffic is low.

The only requirements for following along here are a GitHub account and an AWS account.

Complaining about Jekyll

As I said, this site uses Jekyll, but when I re-introduced myself to it for this project I almost turned away after reading the setup instructions.

Sure, they are fairly short, but they require a complete Ruby install, then installing Jekyll using gem, and then possibly tweaking dependencies if your Ruby version isn’t the right one. If you use Ruby regularly this might feel normal, but for someone who just wants a tool to generate a site it seems more like the setup for developing that tool than for installing it.

Building the site with Jekyll

Luckily most of that can be avoided by using the provided Docker container. I’m not the biggest fan of Docker either, for various reasons that I might complain about in a different post, but in this case it wraps things up nicely.

Creating a new site can be done by running the following:

docker run --rm -it \
  --volume="$PWD:/srv/jekyll" \
  --volume="$PWD/vendor/bundle:/usr/local/bundle" \
  jekyll/jekyll:latest \
  jekyll new my-site

This will create a directory named my-site with a basic Jekyll site and configuration that you can start from. Replace my-site with whatever good name you’ve thought up for your site. You should create a git repository in this directory and push it to GitHub.

If you want to be even more minimal you can just create a directory with only a Gemfile. This is the one I use for this site:

source "https://rubygems.org"
gem "jekyll"
group :jekyll_plugins do
  gem "jekyll-feed"
end

This file just says to use Jekyll and adds a plugin that creates an RSS feed for the posts. From there you can build your site structure from scratch following the Jekyll documentation.
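
If you go that route, a minimal skeleton might look something like this; the file names below are just an illustration, not something this site uses verbatim:

mkdir my-site && cd my-site
# The Gemfile shown above goes in the root
mkdir _posts                                # posts are named YYYY-MM-DD-title.md
printf -- '---\n---\nHello!\n' > index.md   # empty front matter makes Jekyll process the file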

For local development I have a file called _run.sh in the root of the site. The underscore makes it so that the script isn’t included in the built site. This exclusion can be configured in the _config.yml file as well, but I always feel that less configuration is better.

#!/usr/bin/env bash
docker run --rm -it \
  --volume="${PWD}:/srv/jekyll" \
  --volume="${PWD}/vendor/bundle:/usr/local/bundle" \
  -p 4000:4000 jekyll/jekyll:latest \
  jekyll serve --drafts --future

This script keeps running and detects changes, so you don’t have to deal with the startup time on every change.

Publishing

This site is published using GitHub Actions, which completely removes the need for a CI server or any local configuration for pushing the built site somewhere else.

So you’ll need to push your site to a GitHub repository on your account. Just remember to update your .gitignore to ignore _site, .jekyll-cache, and vendor, which will be constantly updated and re-generated.
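
For this setup a minimal .gitignore looks like this:

_site/
.jekyll-cache/
vendor/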

Actions live in the .github/workflows directory in your repository, and this is the jekyll.yml file I’m using. The secret names in the ${{ secrets.* }} expressions below are just what I happened to call them; use any names you like as long as they match the secrets we’ll configure in GitHub later.

name: Build and deploy site

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
    - name: Checkout
      uses: actions/checkout@v2
      
    - name: Build the site
      run: |
        docker run \
        -v ${{ github.workspace }}:/srv/jekyll -v ${{ github.workspace }}/_site:/srv/jekyll/_site \
        jekyll/builder:latest /bin/bash -c "chmod -R 777 /srv/jekyll && jekyll build --future"
    
    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ secrets.AWS_REGION }}

    - name: Sync output to S3
      run: |
        aws s3 sync --delete ${{ github.workspace }}/_site/ s3://${{ secrets.AWS_S3_BUCKET }}

    - name: Invalidate distribution
      run: |
        aws cloudfront create-invalidation --distribution-id ${{ secrets.AWS_CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"

It checks out the code, builds the site using the builder variant of the same Docker image as before, configures AWS credentials, syncs the built site to an S3 bucket, and then invalidates the CloudFront distribution connected to that bucket.

Before this action actually works, all the AWS configuration needs to be in place, and we’ll get to that now. Brace yourself for walls of text.

S3 bucket

The bucket can be created in any region of your choice; since all traffic will be going through CloudFront it doesn’t matter where. Give it a good name and leave all the settings as they are, blocking all public access.
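
If you prefer the CLI over the console, the same thing looks roughly like this, with the bucket name and region as placeholders:

aws s3api create-bucket \
  --bucket your-unique-bucket-name \
  --region eu-north-1 \
  --create-bucket-configuration LocationConstraint=eu-north-1

# Keep all public access blocked, CloudFront will read through an OAI instead.
aws s3api put-public-access-block \
  --bucket your-unique-bucket-name \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true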

When it is created there is one setting that needs to be updated to not break things later, and that is the CORS settings. Go into the newly created bucket and find the Permissions tab. At the bottom of it you should find the Cross-origin resource sharing (CORS) section. Edit this and set it to the following.

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]

You can of course lock this down if you ever start building a payment solution or get popular enough for others to embed your content in frames, but for most sites this will do.
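
The same configuration can be applied from the CLI; note that s3api wants the rules wrapped in a CORSRules key, unlike the console which takes the bare array:

aws s3api put-bucket-cors \
  --bucket your-unique-bucket-name \
  --cors-configuration '{"CORSRules": [{"AllowedHeaders": ["*"], "AllowedMethods": ["GET"], "AllowedOrigins": ["*"], "ExposeHeaders": []}]}'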

Domain and certificate

I like to manage both the domain and the certificate inside AWS for this kind of site, since everything becomes easy and automatic that way. It’s good to have this set up before configuring the CloudFront distribution, since you will otherwise need to go back and update it later.

Just go into Certificate Manager, request a new certificate, and go through the validation steps. Remember to add both the base domain and the wildcard domain when requesting it, since this will make things easier for you later, e.g. example.com and *.example.com.
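
One detail worth knowing: certificates used by CloudFront must be requested in the us-east-1 region, regardless of where your bucket lives. From the CLI the request would look something like this:

aws acm request-certificate \
  --region us-east-1 \
  --domain-name example.com \
  --subject-alternative-names "*.example.com" \
  --validation-method DNS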

CloudFront function

CloudFront Functions is the newer, simpler variant of Lambda@Edge that we’ll use to correct some basic things that CloudFront doesn’t do for you. For instance, it will automatically serve the index.html file at the root, but it will not do the same for subdirectories.

Create a new function with a good and descriptive name and replace the example code with the following.

function handler(event) {
    var request = event.request;
    var headers = request.headers;
    // Make sure there is always an Origin header so that CloudFront
    // caches one variant regardless of whether the browser sent it.
    if (!headers.origin) {
        var host = headers.host ? headers.host.value : 'example.com';
        headers.origin = {
            value: `https://${host}`
        };
    }
    var lastSlashIndex = request.uri.lastIndexOf('/');
    if (lastSlashIndex === -1) {
        // No slash at all, leave the URI alone.
        return request;
    }
    var endsWithSlash = lastSlashIndex + 1 === request.uri.length;
    if (endsWithSlash) {
        // A directory request, serve its index file.
        request.uri += 'index.html';
        return request;
    }
    // If the last path segment contains a dot it is most likely a file.
    var lastPart = request.uri.substr(lastSlashIndex);
    var dotIndex = lastPart.indexOf('.');
    if (dotIndex !== -1) {
        return request;
    }
    // Otherwise treat it as a directory missing its trailing slash.
    request.uri += '/index.html';
    return request;
}

What the function does is first ensure that the headers contain an Origin header, which is important for CORS to function. Some browsers will not send it on all requests, and that will break the caching in CloudFront if not taken care of like this. Then it tries to figure out whether the URI is actually a directory where we should serve the index.html.
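
To make that concrete, here is what the function does to a few example URIs:

/                -> /index.html
/posts/          -> /posts/index.html
/posts/my-post   -> /posts/my-post/index.html
/css/style.css   -> /css/style.css (left alone since the last part contains a dot)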

This is easy to expand if you need specific redirects and such. Remember to save, and when you make changes, go to the Publish tab and click Publish function; just saving will not push anything live.
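
If you ever tire of clicking, publishing can be scripted as well; the function name here is a placeholder for whatever you called yours:

# The saved (DEVELOPMENT stage) code has an ETag we must pass along.
ETAG=$(aws cloudfront describe-function --name my-function --query 'ETag' --output text)

# Promote the saved code to the LIVE stage.
aws cloudfront publish-function --name my-function --if-match "$ETAG"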

CloudFront distribution

Now that we have a bucket, a function, and a certificate, you’ll want to create the distribution that your visitors will actually access.

On the create screen, choose your new S3 bucket as the Origin domain. Then go to the S3 bucket access setting and change it to use an origin access identity (OAI); otherwise you’ll need to allow public bucket access, which is a bad idea. I like to create a new OAI and also have it update the bucket policy so I don’t have to do that later.

Going further down, I like to set the Viewer protocol policy to Redirect HTTP to HTTPS, since there is no good reason not to use HTTPS.

Then, for the Cache policy and origin request policy, select CachingOptimized as the Cache policy and CORS-S3Origin as the Origin request policy. This combination caches aggressively while still passing along the headers needed for CORS to work properly.

Going even further down we get to the Function associations section, where for Viewer request we select the function type CloudFront Functions and then the name of the function we created earlier.

One of the two last settings to update is Alternate domain name, where we set both the base domain and the www subdomain, e.g. example.com and www.example.com.

Then finally we will set the Custom SSL certificate to the one we created earlier.

All the other settings have good defaults and we can just click Create distribution.

The information to take note of, since we will need it a little later, is the distribution id and the domain name associated with the newly created distribution.
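
Both can be found on the distribution overview page, or fetched with the CLI:

aws cloudfront list-distributions \
  --query 'DistributionList.Items[].{Id: Id, Domain: DomainName}'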

Making the distribution accessible

At some point in the past I seem to remember CloudFront updating your Route53 settings to point towards the distribution, but I haven’t seen that happen lately, so you will probably need to do it manually.

Go into the zone for the domain the site will live on and create both an A and an AAAA record. Click Add another record at the top to create both at once. Use the tiny toggle switch on the right on both of them to turn them into aliases. Select the option Alias to CloudFront distribution and then find the domain name of the newly created distribution. Create them either with an empty record name or with www, depending on what you want: nothing if you want your visitors to see example.com in the address bar, and www if you want them to see www.example.com.

Add one more record, but this time a CNAME, where you set the record name to www and the value to your domain, e.g. example.com. Or the other way around, depending on how you chose to have the URL look in the address bar.
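
If you would rather script the records, here is a sketch for the A record; run it again with Type set to AAAA for IPv6. Your hosted zone id and the distribution domain are placeholders, while Z2FDTNDATAQYW2 is the fixed zone id shared by all CloudFront distributions:

aws route53 change-resource-record-sets \
  --hosted-zone-id YOUR_HOSTED_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'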

IAM user and policy

We’re getting close to the finish line now. We only need a user with the access required to update the S3 bucket and invalidate the CloudFront distribution, so that the GitHub action can function.

So let’s visit the IAM dashboard and go to the Users section and click Add users.

Give the new user a name that you will still understand in a couple of months, check Programmatic access, and go to the next page.

Here we’ll select Attach existing policies directly and then click Create policy. First we select the service S3 and add the four actions PutObject, DeleteObject, GetObject, and ListBucket. With these actions you will need to add two resource specifications in the Resources section: one that points to your bucket and one that points to any object in that bucket, e.g. arn:aws:s3:::your-unique-bucket-name for the bucket ARN and arn:aws:s3:::your-unique-bucket-name/* for the object ARN.

Then we’ll add the necessary CloudFront access by clicking Add additional permissions and selecting the service CloudFront. The only action we’ll add here is CreateInvalidation. In the Resources section, add the ARN for the distribution we recently created by filling in the distribution id.
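
If you prefer the JSON tab over clicking around, the finished policy should look roughly like this, with the bucket name, account id, and distribution id as placeholders:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::your-unique-bucket-name/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::your-unique-bucket-name"
        },
        {
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "arn:aws:cloudfront::123456789012:distribution/YOUR_DISTRIBUTION_ID"
        }
    ]
}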

When we have all that configured we can click next until we need to choose a name. I like to name the policy the same as the user, since it will only be used by this one user.

When done you should be back at the Attach existing policies directly screen. Click the little refresh button on the right and select the new policy. Then click next until you get to create the user. Make sure to take note of the Access key ID and Secret access key, since we will need them to configure the GitHub secrets.

GitHub secrets

Now we go back to our GitHub repository, go to the Settings tab, and find Secrets. For the action specified above to work you will need to add the following secrets: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the values from the IAM user we just created, AWS_REGION with the region your bucket lives in, AWS_S3_BUCKET with the name of your bucket, and AWS_CLOUDFRONT_DISTRIBUTION_ID with the id of your distribution.

With all these configured you should now be ready to push changes to your site to the repository and it will be published automatically.

Conclusion

That was a lot of text to write down, but if you followed along you should now have a static site that can handle almost any amount of traffic. As long as you pay Amazon enough to serve it, that is.

This is of course complete overkill for a site like this but I’m sure you can figure out some other uses for this setup.

If I missed something, if something isn’t understandable, or if something just doesn’t work, don’t hesitate to get in touch and I’ll update the article.