10 Intro EC2 Concepts DevOps Needs To Know


EC2, or Elastic Compute Cloud, is AWS’s service for renting virtual servers in the cloud. Server stacks that used to live in-house are now available in the cloud as EC2 instances.

Just like physical servers, EC2 instances can be pre-configured with all the dependencies and software you need at launch. You can dictate how much memory, CPU, and network capacity each instance has. Instances can also scale dynamically to handle increased load, improving your application’s fault tolerance and availability.

Below are the fundamental concepts I’ve found most useful for understanding what EC2 instances can do.


Amazon Machine Images (AMIs)

An AMI, or Amazon Machine Image, is how you pre-configure, or provision, a server so that it’s ready to host your application. AMIs can be stored, and you can set rules to automatically create instances from a given AMI. You can build an AMI yourself or use one created by AWS or other users. Since AWS knows what people commonly need, most standard server configurations are already covered by an Amazon-provided AMI.

AMIs are region specific, and their underlying storage lives in AWS (EBS snapshots or S3, depending on the AMI type). If you want to learn more about AMIs, the AWS documentation covers how to set up an AMI, use an existing AMI, and many other useful tasks.
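As a sketch, launching an instance from an AMI takes one AWS CLI call. All of the IDs and names below are placeholders you would swap for your own:

```shell
# Launch one t3.micro instance from an AMI (all IDs/names are placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --count 1
```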

Regions and Availability Zones (AZs)

AWS has data centers scattered across the world, in an effort to get data as close to the users requesting it as possible. Each geographical area is a Region, and each Region contains multiple isolated locations called Availability Zones (AZs), each backed by one or more data centers. Deploying instances in multiple AZs within a Region increases your application’s fault tolerance and availability.
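You can see which AZs a Region offers with a single CLI call; this is useful when deciding where to spread instances:

```shell
# List the Availability Zones available in a given Region
aws ec2 describe-availability-zones --region us-east-1
```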

Security Groups

If you need to control which traffic can reach your EC2 instances, Security Groups will be your friend. They act as virtual firewalls that allow traffic based on the protocols, ports, and IP ranges you give them; anything you don’t explicitly allow is denied. Multiple security groups can be attached to a single EC2 instance.

These are handy when connecting services that need to talk to various EC2 instances. A Security Group that allows traffic from one EC2 instance to another lets your application talk to an EC2-hosted API, or vice versa. If you want to learn more about setting up security groups, see the AWS documentation.
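A minimal sketch of creating a security group and opening one port, using placeholder VPC and group IDs:

```shell
# Create a security group in a VPC (placeholder VPC ID)
aws ec2 create-security-group \
  --group-name web-sg \
  --description "Allow inbound HTTPS" \
  --vpc-id vpc-0123456789abcdef0

# Allow inbound HTTPS (TCP 443) from anywhere (placeholder group ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```

For instance-to-instance traffic, you can reference another security group as the source instead of a CIDR range.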

Key Pairs

Key pairs are how you SSH into an instance. Toward the end of the instance creation process, AWS asks whether you want to use an existing key pair or create a new one. If you create a new one, you can download the private key, which you then use to SSH into the instance.

Key pairs exist to add a layer of security to accessing your instances since you need the private key to get into the server.
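A sketch of doing the same from the CLI; the key name and hostname are placeholders:

```shell
# Create a key pair and save the private key locally
aws ec2 create-key-pair \
  --key-name my-key-pair \
  --query 'KeyMaterial' --output text > my-key-pair.pem

# SSH refuses private keys with loose permissions
chmod 400 my-key-pair.pem

# Connect using the instance's public DNS name (placeholder)
ssh -i my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
```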

Elastic IP Addresses

By default, EC2 instances have dynamic public IP addresses: if your instance stops and starts again, its IP address may change. An Elastic IP address gives you a static one. If you need an IP address that does not change, an Elastic IP is the best solution. Common use cases include hosting a website or web application, or running network-based applications that peers must reach at a fixed address.
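Allocating and attaching one is two CLI calls; the instance and allocation IDs below are placeholders:

```shell
# Allocate an Elastic IP in the VPC
aws ec2 allocate-address --domain vpc

# Associate it with a running instance (placeholder IDs)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0
```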

Elastic Block Store

EBS provides block-level storage volumes for your EC2 instances; the root volume typically holds the Operating System (OS). Volumes can be attached to and detached from instances, and because a volume persists independently of any one instance’s lifecycle, your data can outlive the instance that wrote it.
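A sketch of the volume lifecycle with placeholder IDs; note the volume must be created in the same AZ as the instance it will attach to:

```shell
# Create a 20 GiB gp3 volume in the instance's AZ
aws ec2 create-volume \
  --availability-zone us-east-1a --size 20 --volume-type gp3

# Attach it to an instance as a secondary device (placeholder IDs)
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/sdf

# Detach later; the volume and its data persist on their own
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
```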

Auto Scaling

Auto scaling is a way to scale the number of instances up or down based on criteria you define. For example, if your web application sees a sudden increase in network traffic, an auto scaling group can add instances to better handle the load, then remove them when traffic subsides. You can scale on many different metrics; it just depends on your needs. To read more about auto scaling, see the AWS documentation.
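A sketch of an auto scaling group plus a target-tracking policy that keeps average CPU near 50%. The launch template name and subnet IDs are placeholders:

```shell
# Create an Auto Scaling group from a launch template (placeholder names/IDs)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-0123456789abcdef0,subnet-0fedcba9876543210"

# Target-tracking policy: add/remove instances to hold average CPU near 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'
```

Spanning the group across two subnets in different AZs is what buys the multi-AZ fault tolerance discussed earlier.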

Load Balancers

Think of these as traffic guards that direct network traffic based on the rules they’re given. If an instance becomes unhealthy, the load balancer redirects traffic away from it and toward the remaining healthy instances. Having a dynamic traffic controller increases fault tolerance and availability.

There are a few kinds of load balancers, but the most commonly used one is the Application Load Balancer. This operates at Layer 7 of the OSI Model. Because of this, it allows for load balancing in many more ways than AWS’s Classic Load Balancer does. It can route traffic based on the content of the request, the path of the request, or which target group is defined for the request. I'll talk more about load balancers in upcoming articles.
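A sketch of the pieces involved: an ALB, a target group with health checks, and a path-based routing rule. All names, IDs, and ARNs are placeholders:

```shell
# Create an Application Load Balancer across two subnets (placeholder IDs)
aws elbv2 create-load-balancer \
  --name web-alb \
  --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --security-groups sg-0123456789abcdef0

# Target group with a health check; unhealthy targets stop receiving traffic
aws elbv2 create-target-group \
  --name web-targets --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --health-check-path /health

# Layer 7 path-based rule: send /api/* to a separate target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/web-alb/abc123/def456 \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-targets/abc123
```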

Fault Tolerance

I’ve mentioned fault tolerance a few times in this discussion. If you’re wondering what that is, it essentially boils down to making sure your application can still run when something fails. If a component or subsystem fails, your application is fault tolerant if that failure does not take it down.

The most common strategy to ensure fault tolerance is redundancy. Having multiple instances that are running the same application means that if one fails, the others can take over the load. Redundancy can be accomplished through auto scaling groups. Load balancing is also critical to this because if an instance fails, the load balancer will redirect traffic away from the unhealthy instance and towards the healthy ones.


Availability

I’ve also mentioned availability a few times in this discussion. It essentially boils down to how reliably users can access an application. Optimally, your application has high availability, meaning it’s almost always up and users can almost always access it.

To ensure high availability you have to maintain fault tolerance, as discussed above, and make sure that you can recover from failures quickly.


Hopefully, these descriptions shed some light on the capabilities of EC2. This article doesn’t go in depth on any particular AWS technology, but it covers a range of concepts worth knowing when beginning your journey with EC2. Future articles will discuss each concept in more depth to better explain what it is capable of.

In my experience, the most important concepts on this list are Application Load Balancing, Auto Scaling, and Security Groups. I use these services regularly in my current DevOps position, so expect articles on those topics first. A solid grasp of them will be very beneficial to your career.

Other Useful Resources

Creating an EC2 Instance: https://www.youtube.com/watch?v=iHX-jtKIVNA

Full EC2 Basics Course: https://www.youtube.com/watch?v=iHX-jtKIVNA&list=PLt1SIbA8guuvvqyRA7BJMrSVtsrGD8fvo