Using S3 as a source action for CodePipeline

A fairly new feature is the ability to have an S3 bucket trigger CodePipeline (AWS blog). Scheduled polling between CodePipeline and S3 has been available for a while, but since March 2018 a ‘push’ option is also possible. There are good references available on setting up CodePipeline, so that part is skipped in this blog post. Let's run through a scenario where a new version arrives via Lambda. Say the S3 bucket to be used is s3sourcebucket and the code is in file.zip.
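For concreteness, the Lambda part of the scenario could look roughly like the minimal boto3 sketch below; the staging bucket name is just a placeholder for wherever the new package is produced, not part of the original setup.

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Copy the freshly built package into the pipeline's source location.
    # "some-staging-bucket" is a placeholder name.
    s3.copy_object(
        Bucket="s3sourcebucket",
        Key="file.zip",
        CopySource={"Bucket": "some-staging-bucket", "Key": "file.zip"},
    )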

First, versioning has to be turned on for the S3 bucket, because CodePipeline identifies each source revision by the object's version ID. Once versioning is enabled, the S3 bucket is all set.
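If you prefer the SDK over the console, turning versioning on is a single call; a minimal boto3 sketch:

import boto3

s3 = boto3.client("s3")

# Enable versioning so CodePipeline can identify revisions by version ID.
s3.put_bucket_versioning(
    Bucket="s3sourcebucket",
    VersioningConfiguration={"Status": "Enabled"},
)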

Then the focus moves to CodePipeline. Let's use the CodePipeline console to configure s3://s3sourcebucket/file.zip as the code source, because (according to the documentation) the required CloudTrail trail and CloudWatch Events rule are then created and linked to the pipeline automatically. Whenever a new version is uploaded to the bucket, CloudTrail logs the API call, the CloudWatch Events rule matches the logged event and invokes the corresponding pipeline.

Below is an example of the automatically generated CloudWatch Events rule. By default, the event pattern is built to match the entire event.

{
  "source": [ "aws.s3" ],
  "detail-type": [ "AWS API Call via CloudTrail" ],
  "detail": {
    "eventSource": [ "s3.amazonaws.com" ],
    "eventName": [ "PutObject" ],
    "resources": {
      "ARN": [ "arn:aws:s3:::s3sourcebucket/file.zip" ]
    }
  }
}

As the snippet points out, the CloudWatch Events rule is looking for “PutObject” as the eventName. That is the correct event name for a ‘put’, i.e. when an IAM user uploads a fresh file to the bucket. However, if the file arrives in the bucket through a copy operation, for example when a Lambda function copies the file from somewhere else into the bucket, the eventName does not match and the pipeline won't be invoked. The correct event name for that case is “CopyObject”. As a general debugging hint, double-check the CloudTrail log for the actual event name and confirm that the same string is used in the CloudWatch Events rule.
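One way to cover both cases is to widen the rule's event pattern so it matches both event names. A minimal boto3 sketch; the rule name below is a placeholder, so check the actual name of the auto-generated rule in the CloudWatch console:

import json
import boto3

events = boto3.client("events")

# Match both a direct upload (PutObject) and a copy into the bucket (CopyObject).
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject", "CopyObject"],
        "resources": {"ARN": ["arn:aws:s3:::s3sourcebucket/file.zip"]},
    },
}

# Overwrite the pattern of the existing rule ("codepipeline-s3sourcebucket-rule" is a placeholder name).
events.put_rule(
    Name="codepipeline-s3sourcebucket-rule",
    EventPattern=json.dumps(pattern),
)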

-Tero

CodePipeline & CodeBuild & S3 website

Single Page Applications (SPAs) are convenient as they provide a smooth user experience, and React is a good choice for building one. But this post is not about React; it is about the AWS services that can be used to deploy an SPA automatically. The services discussed are CodePipeline, CodeBuild, CodeCommit, and S3.

S3

S3 is an appealing service not only from a storage perspective but also because it can be configured to work as a static website, combining low price and high scalability. An S3 website is also a ‘serverless’ approach. The lack of a dedicated IP address can be handled with Route 53 (using the S3 website endpoint as an alias). But let's move on and say the name of the static-website bucket is www.myreactapp.com.
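For reference, the website configuration itself is just one API call; a minimal boto3 sketch, assuming index.html exists in the build output (pointing the error document to index.html is a common choice for SPAs with client-side routing):

import boto3

s3 = boto3.client("s3")

# Serve the SPA from the bucket; send errors back to index.html so client-side routing works.
s3.put_bucket_website(
    Bucket="www.myreactapp.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "index.html"},
    },
)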

CodePipeline

CodePipeline can be used as the main framework for continuous development. The pipeline can include several stages, and each stage can be one of a handful of types. A basic pipeline contains just the source of the code (CodeCommit) and a build stage (CodeBuild); there is no need to set up any deployment stage. CodePipeline stores the output artifacts of the stages in an S3 bucket, and if those artifacts are used, for example, by a Lambda function in another stage of the pipeline, an IAM policy with suitable permissions on the pipeline's artifact bucket should be attached to the Lambda's execution role.
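Such a policy could look roughly like the boto3 sketch below; the role and artifact bucket names are placeholders, so use the ones from your own pipeline:

import json
import boto3

iam = boto3.client("iam")

# Placeholder names: replace with the Lambda's execution role and the pipeline's artifact bucket.
ROLE_NAME = "my-pipeline-lambda-role"
ARTIFACT_BUCKET = "codepipeline-eu-west-1-123456789012"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": f"arn:aws:s3:::{ARTIFACT_BUCKET}/*",
        }
    ],
}

iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="AllowReadPipelineArtifacts",
    PolicyDocument=json.dumps(policy),
)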

CodeCommit

CodeCommit's Git integration makes it a convenient starting point for the pipeline. Once Git credentials are set up for the IAM user, the user is able to connect to CodeCommit. Using CodeCommit from the command line is no different from using any other Git remote; the user experience is exactly the same. Setting up CodeCommit as the source for CodePipeline is presented in figure 1.

Figure 1. Setting up CodePipeline

CodeBuild

A CodeBuild project can be set up through the CodeBuild console, and an existing project can then be selected in the CodePipeline console. However, creating the CodeBuild project through the CodePipeline console avoids some permission issues and odd errors. CodeBuild will be invoked by CodePipeline, not by CodeCommit. Once the CodeBuild project has been created through the CodePipeline console, the source is correct (the CodeBuild project shows "Current source: AWS CodePipeline"). Setting up CodeBuild through CodePipeline is presented in figure 2.

Figure 2. Creation of CodeBuild project through CodePipeline console

The heart of the CodeBuild project is the buildspec.yml file. The build process is divided into several phases that can be used to run custom commands and scripts (see the example buildspec.yml below). The formal syntax of the YAML file is crucial, and syntax errors are typically not very conveniently identified from the logs, so make sure all those spaces are correct! As shown below (post_build phase), the build output is synced to the www.myreactapp.com bucket. A sufficient IAM policy with permission to access that bucket should be attached to the CodeBuild service role.

version: 0.2

phases:
  pre_build:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - aws s3 sync --delete build/ s3://www.myreactapp.com --acl public-read

Also notice that the version number (0.2) is not arbitrary; it is the buildspec specification version defined by AWS.

-Tero

Website failover to S3

On a good day everything works fine. Occasionally it is not a good day. A website is something that should be accessible every day, and with AWS that can be achieved rather easily. I am going to discuss a solution that utilizes S3 and Route 53.

The steps are not super hard and there are lots of examples around. Yet those might not cover all the aspects, at least that is what I found during my endeavour. Below are the steps for the basic case (HTTP, no load balancer).

S3

Naturally we need a static website that will be used in case the primary fails. The console steps are listed below, followed by a scripted sketch.

  1. Create a bucket with the domain name. If your domain is example.com, then create a bucket named example.com.
  2. Upload the static files to your bucket.
  3. Turn S3 website hosting on.
  4. Make the uploaded files public (recap: select the file, go to “More” -> “Make public”).
  5. Confirm that http://example.com.s3-website.<region>.amazonaws.com (the exact endpoint is shown in the bucket's static website hosting settings) is working.
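The same steps can also be scripted; a minimal boto3 sketch, assuming the domain example.com, the us-east-1 region, a single index.html, and that public access is allowed on the bucket:

import boto3

BUCKET = "example.com"  # bucket named after the domain

s3 = boto3.client("s3", region_name="us-east-1")

# 1. Create the bucket (outside us-east-1, add a CreateBucketConfiguration with your region).
s3.create_bucket(Bucket=BUCKET)

# 2. + 4. Upload the static file and make it public.
s3.upload_file(
    "index.html", BUCKET, "index.html",
    ExtraArgs={"ACL": "public-read", "ContentType": "text/html"},
)

# 3. Turn S3 website hosting on.
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# 5. The website endpoint to check (us-east-1): http://example.com.s3-website-us-east-1.amazonaws.com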

Route 53

The second AWS service is Route 53. The following steps assume that there is already a domain and a hosted zone. First some kind of health measurement has to be created, and then we use it to build automation that switches the traffic from the primary to the secondary. (A scripted sketch follows each of the lists below.)

Health Check

  1. Create Health Check

    1. Choose a fancy name
    2. What to monitor: Endpoint
    3. Specify endpoint by: IP address*
    4. Protocol: HTTP
    5. IP address: <ip address of your instance>
    6. Host name: <empty>
    7. Port: 80
    8. Path: / (or whatever is preferred)
    9. Advanced configuration -> changes are not necessarily required
    10. SNS notifications: if preferred.

*) Selecting “Domain name” does not fit here, as the domain itself should appear healthy all the time; only the routing should change according to the health check (either to Primary or Secondary).
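For reference, the same health check can be created programmatically; a minimal boto3 sketch with a placeholder IP address:

import uuid
import boto3

route53 = boto3.client("route53")

# HTTP health check against the primary instance (203.0.113.10 is a placeholder IP).
response = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # any unique string
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",
        "Port": 80,
        "Type": "HTTP",
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
health_check_id = response["HealthCheck"]["Id"]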

Hosted Zone

  1. Select your domain’s A-record and modify it

    1. Alias -> no
    2. Routing policy -> Failover
    3. Failover Record Type : Primary
    4. Set ID: Primary (or a similar phrase, whatever you prefer)
    5. Health Check: yes (associate the health check created above)
  2. Create new Record Set

    1. Alias -> yes
    2. Alias target -> s3-website.<region>.amazonaws.com    (notice that there is no bucket name)
    3. Routing Policy -> Failover
    4. Failover Record Type: Secondary
    5. Set ID: Secondary (or similar phrase)
    6. Health Check: no
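Put together, the two record sets could be created roughly as in the boto3 sketch below. The hosted zone ID, health check ID and IP address are placeholders, and the S3 website endpoint and its hosted zone ID are region-specific (the values below are for us-east-1; check the AWS endpoints documentation for other regions):

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1234567890ABC"  # placeholder: your domain's hosted zone ID
HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"  # placeholder: Id from create_health_check
DOMAIN = "example.com"

# Region-specific values for the S3 website endpoint (us-east-1 shown here).
S3_WEBSITE_ZONE_ID = "Z3AQBSTGFYJSTF"
S3_WEBSITE_ENDPOINT = "s3-website-us-east-1.amazonaws.com"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                # Primary: plain A record to the instance, guarded by the health check.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "Primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": HEALTH_CHECK_ID,
                },
            },
            {
                # Secondary: alias to the S3 website endpoint, no health check.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "A",
                    "SetIdentifier": "Secondary",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": S3_WEBSITE_ZONE_ID,
                        "DNSName": S3_WEBSITE_ENDPOINT,
                        "EvaluateTargetHealth": False,
                    },
                },
            },
        ]
    },
)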

And that's pretty much it. When Route 53 finds that the primary is not healthy, it routes traffic to the secondary. The steps are similar when a load balancer is in front of the instances: Route 53 can utilize the load balancer's health check, so configuring a separate Route 53 health check for the instances is not required.

-Tero