Introduction
Amazon Web Services (AWS), widely regarded as a pioneer among cloud service providers, has been a consistent frontrunner in the IaaS and PaaS space. Organizations of every size have chosen AWS as their go-to cloud provider, making it the undisputed leader in cloud services. With the gap between AWS and its closest competitor continuing to widen, its dominance can be attributed to a carefully crafted market strategy: AWS retains the top position through a continuous innovation approach and an ever-expanding partner ecosystem.
Let us delve deeper into each of these attributes that make AWS stand out so emphatically in such a competitive market.
Continuous Innovation Strategy
In the first half of this year alone, AWS has added 422 services to its already illustrious portfolio. It has gone further by integrating analytics and machine learning capabilities into its offerings, which helps AWS stay abreast of the latest cloud trends and ahead of its competition.
Partner Ecosystem
AWS is also constantly expanding its partner ecosystem, adding several credible names to the list, which adds to its reliability as a global cloud service provider. Apart from that, AWS is extending its footprint across the world with a strong and dependable global network.
According to a CloudTech report, AWS continues to hold its position as the global leader in cloud, with more than 45% worldwide market share compared to competitors like Microsoft, IBM, and Google.
The RightScale 2017 State of the Cloud Report reiterates that AWS continues to lead in public cloud adoption.
Though AWS offers a myriad of services, one can't help but notice that most of them are either under-utilized or untapped to their true potential. The reason behind this under-utilization is often a simple lack of know-how. This article therefore gathers insights from different AWS use cases. The primary objective is to offer your organization practical ways to save cost on cloud infrastructure and manage it effectively.
Let’s explore each of these tips and tricks.
20 Tips and Tricks to Get the Most Out Of AWS
- Elastic IP can be a Free of Cost Feature
AWS provides one free Elastic IP (EIP) with every running instance. However, additional EIPs for that instance are chargeable, and even the first one can cost you in certain scenarios: AWS charges for an EIP when it is either not associated with any instance or attached to a stopped instance. Make sure stopped instances don't have EIPs attached to them until required.
On the other hand, EIPs can be remapped up to 100 times per month without incurring any extra charge.
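As a quick check, here is a minimal boto3 sketch (the region is illustrative) that lists Elastic IPs which are not associated with any instance and are therefore accruing charges:

```python
import boto3

# List Elastic IPs that are not associated with any instance, since those are
# the ones AWS bills for. The region name is only an example.
ec2 = boto3.client("ec2", region_name="us-east-1")

for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print("Unassociated EIP (billable):", address["PublicIp"])
        # Once you are sure it is no longer needed, release it to stop the charge:
        # ec2.release_address(AllocationId=address["AllocationId"])
```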
- Save Big by Judicious Use of ALBs
Regular and rampant use of Classic Load Balancers can be detrimental to an organization's budget; an organization pays at least around $18 per month for each load balancer. The Application Load Balancer (ALB) comes to the rescue in such situations. Not only is an ALB cheaper than a Classic Load Balancer, it also supports path-based routing, host-based routing, and HTTP/2. AWS ECS supports ALB well, and with proper utilization a single ALB can replace up to 75 ELBs. However, ALBs only support HTTP and HTTPS, so if you are load balancing raw TCP traffic you will still have to use an ELB.
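The consolidation works through listener rules. The sketch below (boto3, with placeholder listener and target group ARNs) adds a path-based rule so one ALB can front a service that would otherwise need its own load balancer:

```python
import boto3

# Add a path-based routing rule to an existing ALB listener so a single ALB
# can route /api/* traffic to a dedicated target group. ARNs are placeholders.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api-service/...",
    }],
)
```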
- Record Aliases over the CNAME when Using Route 53
When creating DNS records for AWS resources such as ALBs or CloudFront distributions, using an Alias record instead of a CNAME provides some additional benefits. AWS doesn't charge for queries to Alias record sets, and Alias records save time because Route 53 automatically tracks changes in the underlying record sets. Also, Alias record sets are not visible in replies from Route 53 DNS servers, making them more secure.
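For illustration, a minimal sketch of creating an Alias record for a CloudFront distribution with boto3 (the domain names and hosted zone ID are placeholders; Z2FDTNDATAQYW2 is CloudFront's fixed hosted zone ID for alias targets):

```python
import boto3

# Create an A-type Alias record pointing at a CloudFront distribution
# instead of a CNAME. Zone ID and names below are placeholders.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",  # your hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",          # CloudFront alias zone
                    "DNSName": "d111111abcdef8.cloudfront.net",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```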
- Follow the Best Practices of EBS Provisioning
EBS volumes are an essential part of the EC2 infrastructure and need special attention when provisioned. It is better practice to start with smaller EBS volumes, since AWS has recently launched a feature for resizing EBS volumes as and when the application demands. This removes the need to provision large EBS volumes up front just to accommodate future requirements, and it no longer requires scheduling downtime to upgrade EBS capacity. However, the command to acquire the new capacity still needs to be issued manually. Since Provisioned IOPS EBS volumes generally cost more than General Purpose volumes, starting with smaller volumes also saves on cost.
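A minimal sketch of that manual resize with boto3 (volume ID and target size are placeholders):

```python
import boto3

# Grow an existing EBS volume in place instead of over-provisioning up front.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)  # size in GiB

# Track the modification until the resize (and optimization phase) completes.
state = ec2.describe_volumes_modifications(
    VolumeIds=["vol-0123456789abcdef0"]
)["VolumesModifications"][0]["ModificationState"]
print("Modification state:", state)

# Note: the filesystem still has to be extended inside the OS afterwards
# (e.g. growpart followed by resize2fs/xfs_growfs).
```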
- Use Multi-AZ RDS for Effective Application Backup and Recovery
In case of failure, AWS automatically switches the same RDS endpoint over to the standby instance, which can take up to 30 seconds, while applications keep working seamlessly. Backups and maintenance are performed on the standby instance first, followed by an automatic failover when needed, making the whole process smoother. Also, IO activity on the primary is not suspended during backups, because backups are taken from the standby instance.
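Enabling this is a single flag at provisioning time. The sketch below uses boto3 with placeholder identifiers and credentials:

```python
import boto3

# Provision an RDS instance with Multi-AZ enabled so AWS maintains a
# synchronous standby in another AZ and fails over automatically.
rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.m4.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder credential
    MultiAZ=True,                            # standby replica + automatic failover
    BackupRetentionPeriod=7,                 # backups are taken from the standby
)
```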
- Configure Multiple Alarms on CloudWatch to Escalate Notifications
CloudWatch alarms are triggered only when a metric breaches a defined threshold. If a metric has already breached its threshold and notified a team, the alarm will only notify again once the metric recovers and crosses the threshold a second time; CloudWatch has no built-in option for escalating an alarm to a different team if it is not resolved within a stipulated time period. Therefore, to tap CloudWatch to its fullest, configure multiple alarms for a single incident at different time intervals to notify different teams.
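One way to sketch this with boto3 is two alarms on the same metric, one that pages the on-call team quickly and one that notifies an escalation team if the condition persists (SNS topic ARNs and the instance ID are placeholders):

```python
import boto3

# Two alarms on the same CPU metric with different evaluation windows,
# notifying different SNS topics.
cw = boto3.client("cloudwatch", region_name="us-east-1")

common = dict(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    Period=300,
)

# Fires after 5 minutes above threshold: notify the on-call team.
cw.put_metric_alarm(AlarmName="cpu-high-oncall", EvaluationPeriods=1,
                    AlarmActions=["arn:aws:sns:us-east-1:111122223333:oncall"], **common)

# Fires only after 30 minutes above threshold: notify the escalation team.
cw.put_metric_alarm(AlarmName="cpu-high-escalation", EvaluationPeriods=6,
                    AlarmActions=["arn:aws:sns:us-east-1:111122223333:escalation"], **common)
```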
- Make Sure to Delete Snapshots while Deregistering AMIs
EBS snapshots are stored in S3 and incur storage cost, which means the more AMIs you create, the more you pay for storage. When you deregister an AMI, it is removed from your account, but its EBS snapshots are not deleted automatically. These are called zombie or orphan snapshots, since their parent AMI no longer exists, and they quietly eat up storage costs. To combat this, manually delete these snapshots to free up storage.
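A small boto3 sketch of the clean-up (the AMI ID is a placeholder): capture the AMI's snapshot IDs before deregistering it, then delete them so nothing is left orphaned.

```python
import boto3

# Record the snapshot IDs behind an AMI, deregister the AMI, then delete
# those snapshots so they don't linger on the bill.
ec2 = boto3.client("ec2", region_name="us-east-1")
image_id = "ami-0123456789abcdef0"

image = ec2.describe_images(ImageIds=[image_id])["Images"][0]
snapshot_ids = [m["Ebs"]["SnapshotId"]
                for m in image.get("BlockDeviceMappings", []) if "Ebs" in m]

ec2.deregister_image(ImageId=image_id)
for snap_id in snapshot_ids:
    ec2.delete_snapshot(SnapshotId=snap_id)
    print("Deleted orphan snapshot:", snap_id)
```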
- Leverage Compression at the Edge Feature in Cloudfront
Organizations can use CloudFront's compression-at-the-edge feature while serving web content. CloudFront automatically compresses assets and delivers the compressed content to the client, which not only saves data transfer cost but also speeds up content download. With this, you can serve compressed content directly from an S3 origin.
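For an existing distribution, this amounts to flipping the Compress flag on the cache behavior. A sketch with boto3 (the distribution ID is a placeholder):

```python
import boto3

# Turn on "Compress objects automatically" for an existing CloudFront distribution.
cf = boto3.client("cloudfront")
dist_id = "E1234567890ABC"

resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

config["DefaultCacheBehavior"]["Compress"] = True  # serve compressed assets from the edge

cf.update_distribution(Id=dist_id, IfMatch=etag, DistributionConfig=config)
```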
- Efficient Use of S3 Enables Higher Savings
S3 can produce huge savings when dealing with large data sets by leveraging Reduced Redundancy Storage (RRS) and the Infrequent Access storage class. S3 keeps data in a single region, backed up across multiple AZs rather than replicated to all regions; this doesn't mean the data is less secure. Less critical, easily reproducible data can still be placed in RRS.
You can also use Glacier for data archival; depending on the retrieval option, Glacier supports data retrieval within minutes to a few hours.
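Tiering typically happens through a lifecycle rule rather than manual moves. A sketch with boto3 (bucket name, prefix, and the 30/90-day cut-offs are placeholders):

```python
import boto3

# Lifecycle rule: move objects under logs/ to Infrequent Access after 30 days
# and archive them to Glacier after 90 days.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-data-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-old-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```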
- Replicating Critical S3 Data
To replicate or back up critical S3 data to other regions, companies can use cross-region replication of an S3 bucket. As soon as data is uploaded to the source bucket, it is automatically replicated to the destination bucket in a different region. This feature only works between buckets in different regions, not within the same region.
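A minimal replication configuration might look like the boto3 sketch below. It assumes versioning is already enabled on both buckets, and the IAM role ARN and bucket names are placeholders:

```python
import boto3

# Enable cross-region replication from a source bucket to a destination
# bucket in another region (versioning must already be on for both).
s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-critical-data",
            "Status": "Enabled",
            "Prefix": "",   # empty prefix = replicate everything
            "Destination": {"Bucket": "arn:aws:s3:::backup-bucket-eu-west-1"},
        }]
    },
)
```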
- Save Time and Effort while Data Retrieval from Glacier
Glacier is very helpful for data archival. However, data retrieval from Glacier can be an expensive affair. Even though it is slightly more tedious up front, data that would otherwise be archived as one large zipped file should be stored as smaller chunks. It is ideal to move data in small chunks instead of large ones because it is easier, faster, and cheaper to restore a small file from a small chunk: retrieving a large data chunk just to get at one small file means paying for the retrieval of the entire chunk.
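As an illustration, the sketch below uploads an archive as monthly chunks and later restores just one of them. Bucket and file names are placeholders, and the objects are assumed to have been transitioned to Glacier by a lifecycle rule like the one shown earlier:

```python
import boto3

# Archive a large dataset as many small objects rather than one huge zip,
# so a later restore from Glacier only pays for the chunk actually needed.
s3 = boto3.client("s3")
bucket = "my-archive-bucket"

for part in ["logs-2017-01.tar.gz", "logs-2017-02.tar.gz", "logs-2017-03.tar.gz"]:
    s3.upload_file(part, bucket, f"archive/{part}")

# Months later, restore just one small chunk instead of a monolithic archive.
s3.restore_object(
    Bucket=bucket,
    Key="archive/logs-2017-02.tar.gz",
    RestoreRequest={"Days": 2, "GlacierJobParameters": {"Tier": "Standard"}},
)
```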
- Avoid using T2 Instances in Production Environment
T2 instances shouldn't be used in a production environment because they are not designed to handle continuous production workloads for a long duration. T2 instances accrue a specific number of CPU credits per hour, which can absorb heavy workloads for a limited time. Once the instance consumes all of its CPU credits, it falls back to its baseline performance, degrading any CPU-intensive, mission-critical task running on the production server.
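If you do run T2 instances somewhere important, it is worth watching their credit balance. A boto3 sketch (instance ID and region are placeholders):

```python
import boto3
from datetime import datetime, timedelta

# Check a T2 instance's remaining CPU credits over the last hour, to spot
# workloads that are about to be throttled back to baseline performance.
cw = boto3.client("cloudwatch", region_name="us-east-1")

stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```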
- Optimize IO Performance for High-performance Instances
Even high-performance instances can run into IO-related issues. Be aware that EBS-optimized instances offer better throughput and faster response between EC2 and EBS volumes. New-generation EC2 instances such as the C4 and M4 series are EBS-optimized by default. For older instances such as C3 or M3, you still need to enable the EBS-optimized option manually for high performance, and there is an additional cost to avail this feature.
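For those older families, the flag is set at launch time. A boto3 sketch (the AMI ID is a placeholder):

```python
import boto3

# Explicitly request EBS optimization when launching an older-generation
# instance type (newer families such as C4/M4 have it on by default).
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m3.xlarge",
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,   # dedicated throughput to EBS, billed at an extra hourly rate
)
```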
- Use Versioned Object Names to avoid Cloudfront invalidations
Purging the CloudFront cache is a time-consuming process, and only the first 1,000 invalidation paths each month are free. To avoid invalidating the CloudFront cache repeatedly, companies can use versioned object names. This helps clients fetch the updated object without overwriting the existing object on CloudFront and is a much faster and more reliable approach.
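In practice this just means deploying assets under a new key per release, as in this small sketch (bucket and file names are placeholders):

```python
import boto3

# Upload an asset under a new, versioned key so CloudFront can keep serving
# the old object while new pages reference the new one: no invalidation needed.
s3 = boto3.client("s3")

build_version = "v42"
s3.upload_file("dist/app.js", "my-static-assets", f"assets/app-{build_version}.js")

# The HTML/templates then reference /assets/app-v42.js instead of /assets/app.js,
# so edge caches simply fetch a brand-new object on the next deploy.
```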
- Save Costs on AWS Snapshots
When you create an EBS volume from a snapshot, the new volume begins as a replica of the original volume. EBS snapshots stored in S3 are incremental in nature: the first snapshot of a volume is roughly the same size as the disk usage, but if the data is not changing much, subsequent snapshots add very little to the accumulated storage because only the changed blocks are stored.
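So a nightly snapshot job like the sketch below (volume ID is a placeholder) is usually much cheaper than it looks on paper:

```python
import boto3

# Take a nightly snapshot of a volume. Only blocks changed since the previous
# snapshot are stored, so frequent snapshots of slowly changing data cost little.
ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup",
)
print("Started snapshot:", snapshot["SnapshotId"])
```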
- Monitor your AWS Usage
You don't need a third-party application to keep tabs on your AWS account. AWS comes with a billing alert feature to keep customers apprised of their usage and expenses, allowing them to plan their AWS budget and catch any sudden spikes in the bill.
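Billing alerts are ordinary CloudWatch alarms on the EstimatedCharges metric, which lives in us-east-1 and requires "Receive Billing Alerts" to be enabled in the account preferences. A sketch with a placeholder SNS topic and an illustrative $500 threshold:

```python
import boto3

# Alarm when the month-to-date estimated bill crosses a chosen threshold.
cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="monthly-bill-over-500-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # 6 hours; billing data updates a few times a day
    EvaluationPeriods=1,
    Threshold=500.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:billing-alerts"],
)
```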
- Give Your Account the AWS Limits Advantage
Every AWS service comes with soft limits in each region. If you have a clear idea of your region-specific server requirements, it helps to decrease the limits of regions you don't use to zero. Then, even if your account's security is compromised, AWS limits will prevent attackers from performing any malicious activity in other regions, keeping the AWS bill under control. These limits can be adjusted for EC2, RDS, and ECS, among other AWS services.
- Remove Unnecessary CloudWatch Resources to Save Cost
You can remove alerts and notifications related to insufficient data, stop monitoring non-production EC2 instances, delete redundant CloudWatch alarms, and create custom metrics only for the parameters that matter. It is also useful to remove non-functional and unused dashboards. This exercise will help you cut some of the superfluous costs incurred.
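A small clean-up sketch with boto3 (the dashboard name is a placeholder, and you should review the list before deleting anything):

```python
import boto3

# Delete alarms stuck in INSUFFICIENT_DATA (often leftovers from terminated
# instances) and drop an unused dashboard.
cw = boto3.client("cloudwatch", region_name="us-east-1")

stale = cw.describe_alarms(StateValue="INSUFFICIENT_DATA")["MetricAlarms"]
if stale:
    cw.delete_alarms(AlarmNames=[alarm["AlarmName"] for alarm in stale])

cw.delete_dashboards(DashboardNames=["old-staging-dashboard"])
```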
- Save by Identifying and Segregating EC2 Workloads
Organizations can leverage Spot instances for ad-hoc tasks and Spot fleets for QA/Test environments to optimize EC2 spend. You can also migrate applications to Docker and use ECS to make better use of EC2 by running multiple containers on the same servers, avoiding additional instance costs.
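As an example, an ad-hoc worker can be launched as a Spot instance rather than On-Demand. The AMI ID, instance type, and maximum price below are placeholders:

```python
import boto3

# Launch a worker for an ad-hoc/QA job as a one-time Spot instance.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c4.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"MaxPrice": "0.05", "SpotInstanceType": "one-time"},
    },
)
```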
- Warm-up your ELB resources before the Big Day
An anticipated influx of heavy traffic around a particular occasion or event can negatively impact an application; the website may become unresponsive once traffic exceeds what the application and its load balancer are scaled to handle. In such a scenario, it is advisable to plan and prepare your ELB to handle the surge. You can inform AWS about the expected spike at least a day in advance by raising a ticket with AWS Support so that they can add more nodes behind the load balancer. This makes the application more reliable under such workloads and helps your organization prevent downtime caused by traffic.
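If you prefer to automate the request, the Support API can open the case for you. The sketch below assumes a Business or Enterprise support plan, and the subject, severity, and case details are purely illustrative:

```python
import boto3

# Open a case with AWS Support ahead of an expected traffic spike so the
# load balancer can be pre-warmed. Requires a Business/Enterprise support plan.
support = boto3.client("support", region_name="us-east-1")

support.create_case(
    subject="ELB pre-warming request for product launch",
    severityCode="high",
    communicationBody=(
        "We expect roughly 50,000 requests/minute against "
        "my-alb-1234.us-east-1.elb.amazonaws.com between 09:00 and 18:00 UTC "
        "tomorrow. Please pre-warm the load balancer accordingly."
    ),
)
```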
Conclusion
AWS has rightly earned its status as the most sought-after cloud provider, giving its users some compelling benefits. Since infrastructure entails significant capital expenditure, organizations need to be particular about how they maintain their infrastructure on the cloud, and a little wisdom about cloud expenditure can help you avoid paying for services you will not use. AWS has effectively changed the world of startups with its user-friendliness, flexibility, scalability, pricing, and many other advantages. If you are still contemplating whether or not to choose AWS, you can read our blog on AWS benefits to gain clarity. You can also download our whitepaper on 20 Tips and Tricks Companies Must Know While Working on AWS for a more in-depth take on these best practices. If you have already chosen AWS, these tips and tricks can make a huge difference in cloud cost and help you utilize your cloud infrastructure services to their full potential.