Angular on Amazon Web Services

How to easily and quickly deploy an Angular application to Amazon Web Services.

At interfacewerk, we deploy Angular applications in a variety of environments: on HMI displays such as point-of-sale machines, as well as on more common web infrastructure. Given the growing importance of cloud computing, let's look at how to deploy an Angular application on Amazon Web Services. We will show a basic architecture for the backend application and set up a domain over HTTPS for our Angular application. We won't explore the details of the actual Angular application or the backend, but we will share some commands and some tips and tricks - not a full step-by-step process. So go ahead and deploy your first Angular project on AWS! The estimated cost for this setup, with no other usage, is around 10 USD/month. We will use the following services for our setup: EC2, S3, CloudFront, Route 53 and AWS Certificate Manager.

EC2 provides scalable compute capacity and is the foundation of many AWS setups. To begin, launch a new instance based on the Amazon Linux 2 AMI. For testing purposes, t2.micro is recommended, as the AWS free tier includes several hundred hours of t2.micro usage per month. Continue with the default settings and select the default VPC security group. Note that this is not recommended for production, as it opens all protocols and ports. Then SSH into your instance and install Docker:

sudo yum update -y
sudo yum install -y docker
sudo usermod -aG docker ec2-user
sudo service docker start

Pull your Docker image, which contains your web server, from your registry. When you run it, map its container port to port 80 on the host.

S3 is an object store that comes with many features such as versioning and high availability. Build your Angular application as you normally would. Create a bucket in S3 with the following command:
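As a sketch, the pull-and-run step could look like this, assuming a hypothetical image name `registry.example.com/my-webserver` and a web server listening on container port 8080 (replace both with your own):

```shell
# Pull the web server image from your registry (hypothetical image name).
docker pull registry.example.com/my-webserver:latest

# Run it detached, mapping container port 8080 to host port 80;
# --restart unless-stopped keeps it running across instance reboots.
docker run -d --restart unless-stopped -p 80:8080 registry.example.com/my-webserver:latest
```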

aws s3api create-bucket --bucket YOURBUCKETNAME --region YOURAWSREGION
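One caveat: for any region other than us-east-1, `create-bucket` also needs the region repeated as a location constraint. Once the bucket exists, you can upload the Angular build output (by default in `dist/`, assuming a standard Angular CLI project) with `aws s3 sync`:

```shell
# Outside us-east-1 the region must also be passed as a location constraint:
aws s3api create-bucket --bucket YOURBUCKETNAME --region YOURAWSREGION \
    --create-bucket-configuration LocationConstraint=YOURAWSREGION

# Upload the production build; --delete removes files that no longer
# exist locally, so stale assets don't linger in the bucket.
ng build --prod
aws s3 sync dist/ s3://YOURBUCKETNAME --delete
```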

Then, add a bucket policy that allows public read access for your bucket:

aws s3api put-bucket-policy --bucket YOURBUCKETNAME --policy "$policy_json"

The $policy_json looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOURBUCKETNAME/*"
    }
  ]
}
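One way to build this policy into the `$policy_json` shell variable is a heredoc (the bucket name is a placeholder); a quick JSON check before sending it to S3 saves a round trip:

```shell
BUCKET=YOURBUCKETNAME

# Build the policy document; note the plain ASCII hyphens in the Version
# date -- S3 rejects typographic dashes.
policy_json=$(cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${BUCKET}/*"
    }
  ]
}
EOF
)

# Sanity-check that the document is valid JSON before applying it.
echo "$policy_json" | python3 -m json.tool > /dev/null && echo "policy OK"
```

You can then pass `$policy_json` to the `put-bucket-policy` command shown above.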

In our setup, we use Amazon's CDN called CloudFront to distribute our content to various edge locations. This reduces latency around the world. Create a distribution using the following command:

aws cloudfront create-distribution \
    --origin-domain-name YOURBUCKETNAME.s3.amazonaws.com \
    --default-root-object index.html

Tip: Create an invalidation every time you upload new content to S3. This invalidates your old cached content, and CloudFront immediately fetches your new content into the edge locations.
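Invalidating everything after a deploy might look like this (the distribution ID is a placeholder for your own):

```shell
# Invalidate all cached paths; CloudFront re-fetches from S3 on the next
# request. Invalidation paths beyond the free monthly quota are billed.
aws cloudfront create-invalidation --distribution-id YOURDISTRIBUTIONID --paths "/*"
```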

Next, request certificates with AWS Certificate Manager. This can also be done externally, but it is often more convenient to stay within the AWS services. In addition, public certificates from AWS Certificate Manager are free of charge. Note that a certificate used by CloudFront must be requested in the us-east-1 region.
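A request with DNS validation could look like this (the domain name is a placeholder):

```shell
# CloudFront only accepts certificates from us-east-1; the load balancer
# below needs one requested in its own region instead.
aws acm request-certificate \
    --domain-name YOURDOMAIN \
    --validation-method DNS \
    --region us-east-1
```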

Next, we use an Elastic Load Balancer, which is part of the EC2 service. A load balancer is usually appropriate when you want to distribute traffic across multiple EC2 instances. In this setup, we use it for another reason: it lets us terminate HTTPS and map the requests to port 80 (HTTP) on our instance. Create an application load balancer that listens for the HTTPS protocol. Select the certificate for your domain and, in the target group, point to your instance on port 80. When you register your targets, select the instance we created above. If you have multiple EC2 instances running the same Docker container, select as many as you want for better load balancing.

Finally, we use Route 53 to route our domains. We want two domains:

1. one that the user calls to load the Angular application (the frontend domain), and
2. one that the Angular application calls to reach our backend (the backend domain).

Tip: This approach also works across AWS accounts if the domain is registered on another AWS account.

1. For the frontend domain, create an A record with Alias and set the Alias Target to the 'domain name' of the CloudFront distribution.

2. For the backend domain, create an A record with Alias and copy the DNS name of your Elastic Load Balancer into the Alias Target. Note that the prefix 'dualstack.' is added before the alias target to support IPv6.
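Scripted with the CLI, the frontend alias record might look like this (the hosted zone ID, domain and distribution domain name are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID that all CloudFront aliases use):

```shell
# UPSERT creates the record or updates it if it already exists.
aws route53 change-resource-record-sets \
    --hosted-zone-id YOURHOSTEDZONEID \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "YOURDOMAIN",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "YOURDISTRIBUTIONID.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```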

This setup can be considered an introduction to AWS services. Compared to more advanced setups, such as using AWS ECS or Kubernetes to orchestrate backend containers, it is very easy to set up. However, most of the Angular-related setup would remain the same in any other scenario. With this setup, every time you update your backend, you have to SSH into your instance(s) to pull and restart your Docker containers. This would be simplified by using one of the container orchestration tools mentioned above.
