Sharing an AMI between two accounts

Configuring and tuning an EC2 instance can be fun. At least once. At some point it is good to have an image of the finalized EC2 instance at hand to spin up similar instances more quickly. Amazon calls these images Amazon Machine Images (AMIs). Such hand-made and polished AMIs can be used conveniently, for example, in Launch Configurations: with a hand-made AMI, the amount of further configuration needed during autoscaling is decreased. Even though AMIs are regional and private by default, it is possible to share them between two accounts. If the AMI is needed in another region, it has to be copied to that region first. Nevertheless, the first step is to create the AMI.

Step 1 – Create the image

Even though stopping the instance before image creation is not strictly a must, it is highly recommended; at the very least, a reboot should be allowed during the process. Quite often an image does not work because it was created from a running instance without a reboot. After the instance is stopped, just go to Actions and select Create Image (figures 1 & 2).
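
The same step can also be scripted. Below is a minimal sketch using boto3 (the AWS SDK for Python); the region, instance id and image name are made-up placeholders.

    import boto3

    ec2 = boto3.client('ec2', region_name='eu-west-1')  # assumed region

    # Create the image from a (preferably stopped) instance.
    # NoReboot=False lets AWS reboot the instance for a consistent image.
    response = ec2.create_image(
        InstanceId='i-0123456789abcdef0',   # placeholder instance id
        Name='my-polished-ami',             # placeholder image name
        Description='Hand-made AMI for launch configurations',
        NoReboot=False,
    )
    image_id = response['ImageId']

    # Wait until the AMI reaches the 'available' state (see Step 2).
    ec2.get_waiter('image_available').wait(ImageIds=[image_id])
    print(image_id)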

 

Figure 1. Stop the instance and go to Actions

Figure 2. Create image

Step 2 – Share the image

The image creation status can be found under Images -> AMIs right after the image creation has been started. An image is backed by an EBS snapshot, so the snapshot creation status can be found under Elastic Block Store -> Snapshots. After a few minutes the image reaches the available status.

To share the image, select Modify Image Permissions and add a permission for the target AWS account id (figure 3).
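
For reference, the same permission change can be done with boto3; the AMI id and the target account id below are placeholders.

    import boto3

    ec2 = boto3.client('ec2', region_name='eu-west-1')

    # Grant launch permission for the AMI to the target account.
    ec2.modify_image_attribute(
        ImageId='ami-0123456789abcdef0',                         # placeholder AMI id
        LaunchPermission={'Add': [{'UserId': '123456789012'}]},  # target account id
    )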

Figure 3. Add permissions to the AMI

Step 3 – Launch the image on the target account

Log in to the target AWS account and select Launch Instance. Choose My AMIs -> Shared with me and pick the AMI that was just shared (figure 4). As a side note, sharing RDS snapshots between accounts is a similarly simple task, and the process has a lot in common with the one described here.
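
The launch can be scripted as well. A rough boto3 sketch, run with credentials of the target account; the AMI id and instance settings are placeholders:

    import boto3

    ec2 = boto3.client('ec2', region_name='eu-west-1')  # same region as the shared AMI

    # Launch an instance from the AMI that was shared to this account.
    ec2.run_instances(
        ImageId='ami-0123456789abcdef0',  # the shared AMI id
        InstanceType='t2.micro',
        MinCount=1,
        MaxCount=1,
    )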

Figure 4. Select & Launch the AMI that is Shared with me

-Tero

VPN between on-prem and AWS

Sometimes it is necessary to have a secure connection between AWS services and on-premises devices, or between two AWS accounts, or between two VPCs in separate regions (VPC peering used to be suitable only for VPCs within the same region -> update: inter-region peering is now possible between certain regions, yet Security Groups are not visible across regions). For these occasions, the solution can be based on the AWS VPN Connections service found in the VPC console. The setup is divided into three parts: a Virtual Private Gateway (VGW), a Customer Gateway (CGW) and the VPN tunnel between them (i.e. the VPN Connection). The VPN solution is based on two IPsec tunnels in active-passive mode for redundancy.

The first step is to create the CGW or, to be more precise, to tell AWS the public IP address of the on-prem VPN gateway. The crux of configuring the CGW is therefore knowing the IP address of the on-prem gateway. In addition, there is the decision between static and dynamic routing. The second step is creating the VGW. That is a straightforward task: just give it a proper name and remember to attach it to the VPC.
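
Both gateways can also be created with boto3. A minimal sketch; the region, IP address, ASN and VPC id are placeholders:

    import boto3

    ec2 = boto3.client('ec2', region_name='eu-west-1')

    # Customer Gateway: tell AWS the public IP of the on-prem VPN device.
    cgw_id = ec2.create_customer_gateway(
        Type='ipsec.1',
        PublicIp='203.0.113.10',   # placeholder on-prem gateway IP
        BgpAsn=65000,              # only relevant for dynamic routing
    )['CustomerGateway']['CustomerGatewayId']

    # Virtual Private Gateway: create it and attach it to the VPC.
    vgw_id = ec2.create_vpn_gateway(Type='ipsec.1')['VpnGateway']['VpnGatewayId']
    ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId='vpc-0123456789abcdef0')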

Configuring the VPN tunnel between these is the next step. Select the previously created VGW and CGW. Static or dynamic routing? Naturally the choice should be consistent with the earlier decision. Static routing is somewhat quicker to set up as there is no need to advertise routes; otherwise the static and dynamic setups are very similar. For dynamic routing, the ASN should be between 64512 and 65534, and routes to the AWS VPC CIDR block should be advertised at the CGW.
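
As a sketch, the same with boto3 using static routing (the gateway ids and the on-prem CIDR are placeholders). Note that the response also contains the CustomerGatewayConfiguration, i.e. the same configuration text that can be downloaded from the console:

    import boto3

    ec2 = boto3.client('ec2', region_name='eu-west-1')

    # VPN connection between the VGW and the CGW; static routing in this sketch.
    vpn = ec2.create_vpn_connection(
        Type='ipsec.1',
        CustomerGatewayId='cgw-0123456789abcdef0',
        VpnGatewayId='vgw-0123456789abcdef0',
        Options={'StaticRoutesOnly': True},
    )['VpnConnection']

    # With static routing, the on-prem CIDR block has to be added as a static route.
    ec2.create_vpn_connection_route(
        VpnConnectionId=vpn['VpnConnectionId'],
        DestinationCidrBlock='192.168.0.0/16',   # placeholder on-prem CIDR
    )

    # The on-prem device configuration (the downloadable file) is also in the response.
    print(vpn['CustomerGatewayConfiguration'])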

After the VPN connection is created, select the connection and download the configuration file for the on-prem setup. There are numerous configuration files available for different devices, and even if the exact right one is absent, the available ones help in finding the right steps and values. The file contains step-by-step instructions on how to configure the CGW. (The majority of the lines can be copy-pasted and only a few require tuning. Double-check the IP values and CIDR blocks, and keep in mind that on-prem is the <left> side and AWS is the <right> side.)

Finally, update the VPC route tables. Route all traffic with “destination: <on-prem CIDR block>” to the VGW in the route tables where it is needed (most likely the private route table(s)). Don’t forget to update security groups to allow traffic from the on-prem CIDR block.
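
A boto3 sketch of the routing update (the route table id, VGW id and CIDR block are placeholders); with dynamic routing, route propagation can be enabled instead of adding a static route:

    import boto3

    ec2 = boto3.client('ec2', region_name='eu-west-1')

    # Static routing: send on-prem traffic to the VGW.
    ec2.create_route(
        RouteTableId='rtb-0123456789abcdef0',
        DestinationCidrBlock='192.168.0.0/16',   # on-prem CIDR block
        GatewayId='vgw-0123456789abcdef0',
    )

    # Dynamic routing alternative: let the VGW propagate routes into the table.
    ec2.enable_vgw_route_propagation(
        RouteTableId='rtb-0123456789abcdef0',
        GatewayId='vgw-0123456789abcdef0',
    )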

The AWS side of the task is then done.

 

-Tero

WordPress site: Lessons learnt

I had somewhat limited experience with WordPress sites until I decided to host one on EC2. I kinda like to learn new things and I was ready to learn the hard way. Luckily the internet is full of WordPress-related examples, tips, hints and nice-to-know items. They are scattered around the net; someone might find that frustrating, but I think it is part of the fun. As a whole, it has been a great journey with ups and downs. In this post I share a few interesting notes from it.

Write permissions

As everyone is aware, just installing WordPress is not enough. WordPress has to be granted permission to work with its files. So (example commands follow the list):

  • give write permission to wp-content and wp-includes folders
  • ownership of the WordPress files should be (for example) apache
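
For example, on an Apache-based setup something along these lines (the paths assume the default /var/www/html document root):

    # give ownership of the WordPress files to the web server user
    sudo chown -R apache:apache /var/www/html

    # allow the web server to write to wp-content and wp-includes
    sudo chmod -R 775 /var/www/html/wp-content /var/www/html/wp-includes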

Permalinks

Changing the default settings for permalinks can make the site more or less unreachable. I find that the reason is not WordPress itself but the web server: the web server has to allow the rewritten addresses. For example, in Apache the file to be tuned is the configuration file (an example excerpt follows the list).

  • make sure that there is AllowOverride All
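
In a default Apache setup the relevant part of the configuration (e.g. /etc/httpd/conf/httpd.conf; the document root is an assumption) would look roughly like this:

    <Directory "/var/www/html">
        # Allow .htaccess to override settings so WordPress can rewrite permalinks
        AllowOverride All
        Require all granted
    </Directory>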

PHP

The default values in /etc/php.ini are not suitable for serious WordPressing. I keep wondering why; I am confident that there is an excellent reason, and maybe I will find it someday. Meanwhile, I propose some tuning to php.ini (an example excerpt follows the list).

  • increase max upload size to 100M
  • increase memory_limit to, for example, 256M
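
In php.ini terms, the bullets above correspond roughly to these directives (exact values depend on the site; post_max_size usually has to be raised along with the upload size):

    ; /etc/php.ini
    upload_max_filesize = 100M
    post_max_size = 100M
    memory_limit = 256M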

Beyond these notes, I strongly advise looking for the best configuration for each WordPress site you are hosting. Clearly these notes are not the full list of recommended changes, but hopefully they are a good start. 🙂

 

-Tero

Website failover to S3

On a good day everything works fine. Occasionally it is not a good day. A website, however, should be accessible every day, and with AWS that can be achieved rather easily. I am going to discuss a solution that utilizes S3 and Route 53.

The steps are not super hard and there are lots of examples around. Yet they might not cover all the aspects, at least that is what I found during my endeavour. Below are the steps for a basic case (HTTP, no load balancer).

S3

Naturally we need a static website that will be used in case of a failure at the primary. The steps are listed below; a scripted sketch follows the list.

  1. Create a bucket with the domain name. If your domain is example.com, then create the bucket example.com.
  2. Upload the static files to your bucket
  3. Turn S3 Static Website Hosting on
  4. Make the uploaded files public (recap: select the file, go to “More” -> Make public)
  5. Confirm that http://<example.com>.s3-website.<region>.amazonaws.com is working
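
The same S3 steps as a boto3 sketch; the bucket name, region and file names are placeholders, and the public-read ACL matches the console steps above:

    import boto3

    s3 = boto3.client('s3', region_name='eu-west-1')
    bucket = 'example.com'   # bucket name must match the domain name

    # 1. Create the bucket.
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'},
    )

    # 2. & 4. Upload the static files and make them public.
    s3.upload_file('index.html', bucket, 'index.html',
                   ExtraArgs={'ACL': 'public-read', 'ContentType': 'text/html'})

    # 3. Turn static website hosting on.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={'IndexDocument': {'Suffix': 'index.html'},
                              'ErrorDocument': {'Key': 'index.html'}},
    )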

Route 53

The second AWS service is Route 53. The following steps assume that there is already a domain and a hosted zone. First a health check has to be created, and then it is used to switch the traffic automatically from primary to secondary. (A scripted sketch follows each list.)

Health Check

  1. Create Health Check

    1. Choose a fancy name
    2. What to monitor: Endpoint
    3. Specify endpoint by: IP address*
    4. Protocol: HTTP
    5. IP address: <ip address of your instance>
    6. Host name: <empty>
    7. Port: 80
    8. Path: / <or whatever is preferred…>
    9. Advanced configuration -> changes are not necessarily required
    10. SNS notifications: if preferred.

*) Selecting Domain name won’t work here, as the domain will always resolve (to either Primary or Secondary); the health check must monitor the primary endpoint directly so that the routing can change according to its health.
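
A boto3 sketch of the health check creation; the IP address is a placeholder and CallerReference just has to be unique:

    import boto3, uuid

    route53 = boto3.client('route53')

    # HTTP health check against the primary instance's IP on port 80.
    health_check_id = route53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={
            'Type': 'HTTP',
            'IPAddress': '203.0.113.20',   # placeholder instance IP
            'Port': 80,
            'ResourcePath': '/',
            'RequestInterval': 30,
            'FailureThreshold': 3,
        },
    )['HealthCheck']['Id']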

Hosted Zone

  1. Select your domain’s A-record and modify it

    1. Alias -> no
    2. Routing policy -> Failover
    3. Failover Record Type : Primary
    4. Set ID: Primary (or whatever phrase you prefer)
    5. Health Check: yes
  2. Create new Record Set

    1. Alias -> yes
    2. Alias Target -> s3-website.<region>.amazonaws.com    (notice that there is no bucket name)
    3. Routing Policy -> Failover
    4. Failover Record Type: Secondary
    5. Set ID: Secondary (or similar phrase)
    6. Health Check: no
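
Both record sets can also be created or updated with a single boto3 call. Everything below (hosted zone ids, IP address, health check id, region) is a placeholder; note that the S3 alias needs the hosted zone id of the S3 website endpoint for the region, not your domain’s zone id, and the endpoint format (dash or dot) varies by region:

    import boto3

    route53 = boto3.client('route53')

    route53.change_resource_record_sets(
        HostedZoneId='Z1111111111111',          # your domain's hosted zone id
        ChangeBatch={'Changes': [
            {   # Primary: plain A record pointing at the instance, tied to the health check.
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'example.com.', 'Type': 'A',
                    'SetIdentifier': 'Primary', 'Failover': 'PRIMARY',
                    'TTL': 60,
                    'ResourceRecords': [{'Value': '203.0.113.20'}],
                    'HealthCheckId': '<health-check-id>',
                },
            },
            {   # Secondary: alias to the S3 website endpoint (no bucket name in the DNS name).
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'example.com.', 'Type': 'A',
                    'SetIdentifier': 'Secondary', 'Failover': 'SECONDARY',
                    'AliasTarget': {
                        # S3 website endpoint zone id for the region (eu-west-1 shown; check the docs for yours)
                        'HostedZoneId': 'Z1BKCTXD74EZPE',
                        'DNSName': 's3-website-eu-west-1.amazonaws.com.',
                        'EvaluateTargetHealth': False,
                    },
                },
            },
        ]},
    )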

And that’s pretty much it. When Route 53 finds that the Primary is not OK, it routes traffic to the Secondary. The steps are similar when a load balancer is in front of the instances: Route 53 can utilize the load balancer’s own health check through the alias record, so configuring a separate Route 53 health check for the instances is not required…

-Tero

AWS reached an all-time high in Q4/17

AWS performed extremely strongly last year. Once again, they hit a record. Public cloud service providers in general performed very well in 2017; all the big players increased their revenues by double digits. The buzz around cloud remains at a high level. It seems that cloud services are not appealing just to certain industries but to all of them. Fast-paced business environments depend more and more on the dynamic possibilities of x-as-a-service based solutions.

Figure 1. A section from Amazon’s quarterly result

Based on the quarterly result, the income generated by AWS is highly important for Amazon. In the last quarter alone AWS was responsible for an income of $1.35B, which was a huge part of Amazon’s total income. Furthermore, AWS generated 1/10 of all of Amazon’s net sales in 2017. If the trend continues, there is a possibility that AWS becomes bigger than SAP within a few years.

What do those financial numbers mean for the current year? AWS is once again starting the new year from the leading position. In 2017 they introduced 1430 new services and features, and it is expected that they will introduce even more this year. It has become hard to find a nice-to-have service that is not provided by AWS. Moreover, the trend is to increase the number of managed services, and I expect that trend to continue. Who really wants to look under the hood, people just like to ride.

 

-Tero