Serving SPA through S3 to CloudFront using GitHub Actions for CD

Serving an SPA (Angular, React, Vue, or any other framework) from S3 through CloudFront, using GitHub Actions for continuous delivery. This tutorial includes all the steps.


Serving an SPA is a pain, especially when it is a high-traffic app. Services like Netlify and Vercel make deployment easy with built-in continuous delivery, but their prices climb as the traffic serving those SPAs grows.

Moreover, when your entire stack is already on AWS, why not serve the SPA from S3?

The downside is that we have to set everything up from the ground up ourselves, but the end result is automated, costs a lot less, and spares us the hassle of maintaining multiple providers (AWS, Netlify, etc.).

Here we will do the following:

  • First, we will create an S3 bucket configured for static web hosting
  • Then we will create a CloudFront distribution with the S3 bucket as its origin, configured to serve the SPA
  • Finally, we will set up CD with GitHub Actions so that next time your content goes public hassle-free.

Create an S3 bucket:

  • Go to S3 in the AWS console and click the Create bucket button
  • Choose a bucket name; I will be using sulman-bucket-1
  • Select a region suitable to you; I will be using us-east-1
  • In Object Ownership, select ACLs enabled and then Object writer
  • Unselect Block all public access and confirm it by checking the acknowledgment below this block
  • Leave the rest of the settings as is and create the bucket.
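If you prefer the terminal, the console steps above can be sketched with the AWS CLI. This is a rough equivalent assuming the bucket name and region used in this article:

```shell
# Create the bucket (us-east-1 needs no LocationConstraint;
# other regions require --create-bucket-configuration)
aws s3api create-bucket --bucket sulman-bucket-1 --region us-east-1

# ACLs enabled, with Object writer ownership
aws s3api put-bucket-ownership-controls \
  --bucket sulman-bucket-1 \
  --ownership-controls 'Rules=[{ObjectOwnership=ObjectWriter}]'

# Mirrors unchecking "Block all public access" in the console
aws s3api put-public-access-block \
  --bucket sulman-bucket-1 \
  --public-access-block-configuration \
  BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
```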

Build SPA for Production to upload for the first time:

  • Build the SPA for production; it has to be uploaded manually the first time
  • I use yarn, so I will run yarn build
  • This should create a dist folder
  • Click Upload in the S3 bucket, drag and drop the contents of the folder (including index.html) into the root of the bucket, and upload them
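The build and first upload can also be done from a terminal in one go. A sketch, assuming a yarn project that emits its output into dist (Angular typically nests it under dist/&lt;project-name&gt;, and Create React App uses build instead):

```shell
# Build the production bundle
yarn build

# Upload the build output to the bucket root
aws s3 sync ./dist s3://sulman-bucket-1
```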

Change permissions of the bucket:

  • Go to the Permissions tab of the bucket in the S3 console
  • Click Edit in the Bucket policy section
  • Enter the following in the text box
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sulman-bucket-1/*"
    }]
  }
  • Click Edit in Access control list (ACL)
  • Select both checkboxes under Everyone (public access)
  • Save changes
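The same policy can be applied from the AWS CLI instead of the console; a sketch assuming the bucket name from this article:

```shell
# Attach the public-read policy to the bucket
aws s3api put-bucket-policy \
  --bucket sulman-bucket-1 \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sulman-bucket-1/*"
    }]
  }'
```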

Enable Static Web Hosting:

  • Go to the Properties tab and at the end of it there is the Static Web Hosting section
  • Enable static website hosting
  • Hosting type → Host a static website
  • Index document → index.html
  • Error document → index.html (as we will be using SPA)
  • Save changes
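This step too has a CLI equivalent, again assuming the bucket from this article:

```shell
# Enable static website hosting with index.html serving
# both as the index and the error document (SPA routing)
aws s3 website s3://sulman-bucket-1/ \
  --index-document index.html \
  --error-document index.html
```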

Create CloudFront Distribution:

  • Go to CloudFront in the AWS console and click the Create distribution button.
  • In Origin domain, select the S3 bucket where you just uploaded the static site.
  • In the Default cache behavior section, Viewer protocol policy → select Redirect HTTP to HTTPS
  • In the Default cache behavior section, Allowed HTTP methods → select the option containing all HTTP methods
  • In the Settings section, Alternate domain name (CNAME) - optional → click Add item and enter your domain name
  • In the Settings section, Custom SSL certificate - optional → click Request certificate. This opens the public certificate request; validate the domain by adding the provided CNAME record at your DNS provider.
  • After the certificate is issued, click the refresh button and select the certificate you just created
  • In the Settings section, Default root object - optional → enter index.html
  • Click Create Distribution

Add CloudFront as a CNAME in DNS:

  • After the CloudFront distribution is created, copy the Distribution domain name
  • Create a CNAME record at your DNS provider and paste the domain name as its value
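Once the record has propagated, you can verify it from a terminal; app.example.com below is a placeholder for your own domain:

```shell
# Should print your distribution's *.cloudfront.net domain
dig +short CNAME app.example.com
```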

Error Pages in CloudFront:

  • As we are serving an SPA, reloading the page on a deep link makes CloudFront return an error, so we have to resolve that
  • In the distribution, go to the Error pages tab
  • Click Create custom error response
  • Select 404 in HTTP error code
  • Select Yes for Customize error response
  • Then enter /index.html in Response page path
  • Set HTTP response code to 404
  • Create the custom error response

GitHub Actions CD:

I am a great advocate of mono-repos, so my front-end lives in the folder front. Here is the complete workflow:
name: Deploy Front Production

on:
  push:
    branches:
      - 'main'
    paths:
      - 'front/**'

jobs:
  deploy:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: 'front'
    steps:
      - uses: actions/checkout@v2

      - name: Cache modules
        uses: actions/cache@v1
        id: yarn-cache
        with:
          path: node_modules
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: ${{ runner.os }}-yarn-

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Setup Node.js environment
        uses: actions/setup-node@v3
        with:
          node-version: 16

      - name: Install and build
        run: |
          yarn install
          yarn build

      - name: Deploy
        run: aws s3 sync ./dist s3://${{ secrets.AWS_BUCKET_FRONT }}

      - name: Invalidate CloudFront
        uses: chetan/invalidate-cloudfront-action@v2
        env:
          DISTRIBUTION: ${{ secrets.DISTRIBUTION_FRONT }}
          PATHS: "/index.html"
          AWS_REGION: "${{ secrets.AWS_REGION }}"
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

  • Create a folder .github in the root of the repo, and inside it another folder workflows; save the workflow above there as a .yml file
  • Then create the secrets: go to your repo → Settings → Secrets → Actions
  • Create a secret named AWS_BUCKET_FRONT with your bucket name as the value
  • Another secret named DISTRIBUTION_FRONT with the distribution ID as the value
  • Also AWS_REGION with the region of your bucket as the value
  • AWS_ACCESS_KEY_ID with an access key ID that has permissions for S3 and CloudFront
  • Lastly, AWS_SECRET_ACCESS_KEY with the matching secret access key.
  • working-directory points to the mono-repo folder where the SPA code lives
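If you use the GitHub CLI, the same secrets can be created from a terminal; the values below are placeholders you would replace with your own:

```shell
# Each command stores one repository secret for Actions
gh secret set AWS_BUCKET_FRONT --body "sulman-bucket-1"
gh secret set DISTRIBUTION_FRONT --body "YOUR_DISTRIBUTION_ID"
gh secret set AWS_REGION --body "us-east-1"
gh secret set AWS_ACCESS_KEY_ID --body "YOUR_ACCESS_KEY_ID"
gh secret set AWS_SECRET_ACCESS_KEY --body "YOUR_SECRET_ACCESS_KEY"
```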

Here we first check out the repo, then build the SPA in its folder, then sync the output to the S3 bucket, and lastly invalidate the CloudFront cache so that it serves the current S3 content as new.
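The invalidation can also be triggered manually from the AWS CLI, which is handy for testing before the workflow is wired up; the distribution ID below is a placeholder:

```shell
# Force CloudFront to fetch a fresh index.html from S3
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/index.html"
```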

Now, whenever code is pushed to the main branch and something has changed in the front folder, your changes will automatically be deployed to S3 and CloudFront without any hassle.

Special thanks to the following people who helped me run my own distributions first:

Happy Coding!