cc by-sa flurdy

ec2 - Amazon Elastic Compute Cloud tips and howtos

ec2 instances

This page is part of a larger set of tips & howtos on ec2 by flurdy.

Instance types

If you read Amazon's list of instance types, you start at the micro and small instances, which seem okay. Then the rest sound better and better, eventually really powerful.

Unfortunately it is just a dream, as all the others are really too expensive unless you are a large business that can afford the investment. The only viable options are the "Small instance" and the new "Micro instance".

The next option up is the "Large instance". It is 64 bit and has 7.5GB of memory. It is, however, 4 times the price of a small instance.

The "small instance" is not a bad option and is by far the most common instance in Amazon ec2 cloud. True, it is only 32bit, yes, the memory is only 1.7GB and it is not a virtualised "multi core".

But it does fulfil most needs, and works very well as part of a more fault tolerant set up: by splitting services across several small instances you still keep the cost below the "Large" option. Even with the paltry memory you can comfortably run a large number of services on one of these instances.

The instance I would really like is the "High Memory Double Extra Large" instance. It has a whopping 34GB of memory and many cores. But at 14 times the cost of a "Small instance", I'd rather have 14 small machines instead of one.

The "Micro instance" is a very nice affordable test instance. And good for low performance single purpose servers. So this is the solution if you have static web servers without 10,000s of visitors, or need a simple mail server etc that just do not need GB of memory. You may need to add a swap disk to compensate for low memory.


Instance suggestions

These are my suggestions for a server/instance network layout in Amazon ec2 based on my needs.

For two years I ran just one "Small instance". And it served its purpose very well and limited my costs. It was my mail, web and webapp server, as well as code repository, VNC tunnel, backup etc.

However, every now and then the instance would die, about once or twice a year. That is to be expected, but it always caught me at bad times. Since I used my own data backup strategies the data loss was never catastrophic, and once Amazon introduced EBS it was minimal. It was still tedious, though, to remember the exact configuration changes since the last backup, and to spend the time setting the server up again.

So lately I have applied a more fault tolerant, separation of concerns instance strategy. It costs more, but it is more reliable and easier to reinstate if anything fails.

So here is my current set up and what I recommend:

- a web server
- a webapp server
- a mail server
- a remote access tunnel server, booted on demand
- a development/experiment server, booted on demand

(These are all small instances; however, the web, mail and tunnel machines could be micro instances.)

Basically this consists of one server running Apache httpd to host the websites. It relays any calls to web applications via mod_proxy to another instance, the webapp server, which runs Apache Tomcat. By separating these servers, memory or load intensive webapps will not interfere with my websites. This setup is detailed on my Apache & Tomcat ec2 howto page.
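As a rough sketch of that relay, assuming placeholder hostnames and a hypothetical webapp context path of /mywebapp, the httpd virtual host could look like:

    # requires mod_proxy and mod_proxy_http to be enabled
    <VirtualHost *:80>
        ServerName www.example.com
        # relay the webapp to the Tomcat instance
        ProxyPass        /mywebapp http://webapp.example.internal:8080/mywebapp
        ProxyPassReverse /mywebapp http://webapp.example.internal:8080/mywebapp
    </VirtualHost>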

A third instance is my mail server. It was very annoying when website load or a memory leak in a webapp caused the instance to hang, so that no email was received until I relaunched and reconfigured another instance. A separate instance isolates that concern, and I keep ready made backup MX servers at hand as an extra safety net. I detail this set up on my Mail server on ec2 howto page.
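The backup MX safety net is plain DNS. A sketch of the relevant zone file records, with placeholder hostnames:

    ; lower preference value = tried first
    example.com.    IN  MX  10  mail.example.com.
    example.com.    IN  MX  20  backupmx.example.com.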

I also use ec2 as an easy way of helping people via VNC or SSH. By tunnelling via ec2, I can configure the firewall on e.g. my in-laws' PC to only accept traffic from certain ec2 IPs, making it irrelevant where I am. Since this instance does not have to be permanently on, the cost is minimal. I detail this set up on my remote access on ec2 howto page.
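A minimal sketch of such a tunnel, assuming illustrative hostnames, the default VNC port and SSH access to the ec2 instance from both ends:

    # on the remote PC: expose its VNC port on the ec2 instance
    ssh -N -R 5900:localhost:5900 user@tunnel.example.com
    # on my machine: pull that port back down locally
    ssh -N -L 5900:localhost:5900 user@tunnel.example.com
    # then point a VNC client at localhost:5900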

Sometimes I run development and other hacking directly on a server. Having an instance ready when I need it, one which does not interfere with or risk the "live"/"production" servers, is reassuring.

Having an Ubuntu server image ready on demand is also very handy. Whenever you need to experiment, try some new software or do some general hacking, a completely separate server is advised. Use this image as the base for future images.
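With Amazon's ec2 API tools, launching such an on demand instance is then a one liner (the AMI id and key pair name are placeholders):

    ec2-run-instances ami-12345678 -k my-keypair -t m1.small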

Lately I have also created an Ubuntu desktop image for experimenting with GUI apps via an NX client. It is either an extension of my Ubuntu server image, similar to alestic's karmic tip, or based on an alestic desktop image. (Note: the kernel module fuse only works with karmic based images.)

Good practices

In addition it is good practice to periodically make images (AMIs) of your permanent instances. That way you can quickly relaunch them if they crash, and you can test new configurations on short lived experimental instances of these images before making changes to live servers. You can find more details and tips on how to make images on my AMI making tips page.
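With the ec2 API tools and an EBS backed instance, making an image can be as simple as (the instance id and image name are placeholders):

    ec2-create-image i-12345678 -n "webserver-backup"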

Also have a routine of booting your on demand images now and then, updating their packages and creating newer AMI versions of them. This will save you time when you need them in a rush, as you will not have to update them first or run them with outdated security patches.
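On an Ubuntu based image that refresh boils down to:

    sudo apt-get update
    sudo apt-get dist-upgrade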

Having a good strategy for using EBS disks will minimise the risk of data loss and enable quick data transfer between instances. More on my EBS tips page.
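A sketch of creating and attaching an EBS volume with the ec2 API tools (the ids, zone and device names are placeholders):

    # create a 10GB volume in the instance's availability zone
    ec2-create-volume -s 10 -z us-east-1a
    # attach it to the instance as /dev/sdf
    ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdf
    # then, on the instance, format and mount it
    sudo mkfs.ext3 /dev/sdf
    sudo mkdir /vol && sudo mount /dev/sdf /vol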

Tip: Remember to tie down your images, so that when you launch them they are already secured. Have your local SSH keys on all images and instances. Give each image and instance its own SSH key, and add the public keys of all core images and instances to every other image and instance.
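A sketch of generating a dedicated key pair and handing its public half to another instance (the hostname and file names are illustrative):

    # generate a key pair for this instance
    ssh-keygen -t rsa -f ~/.ssh/webserver_key
    # append its public key to another instance's authorized_keys
    cat ~/.ssh/webserver_key.pub | \
      ssh ubuntu@mail.example.com 'cat >> ~/.ssh/authorized_keys'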



Costs

It is worth estimating what your costs of using Amazon ec2 will be.

Amazon's own calculator will give you a precise estimate of your costs. However, it is very over engineered, and I certainly had no clue what my data/traffic needs would be in such detail before I started using ec2.

In general terms your Amazon AWS costs will consist of:

- instance hours, i.e. your permanently on servers plus any short term instances
- S3 storage, for backups and AMI images
- EBS volumes and snapshots
- elastic IP addresses
- data transfer in and out

Everyone's requirements of AWS (ec2/S3) will be different, but in general the 24/7 instances will be about 90% of your costs. S3 costs will gradually increase over time as you dump more and more data into it, but after 2 years of use it is still only about 5% of my costs.

So a simpler calculation is to estimate the number of permanently on instances + the required S3 usage + any extra short term instance use. This equates to the number of permanent instances * 730 (average hours in a month) * 0.085 (the small instance hourly cost). Then add S3/IP/EBS costs and any instance testing, which usually come to about 5-10%. So your estimated monthly cost will be in the region of:

number of 24/7 instances * 0.085 * 730 * 1.1 = monthly AWS cost
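For example, two permanently on small instances would come to roughly 2 * 0.085 * 730 * 1.1 ≈ $136 a month.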

If you can predict the instances you will require for the rest of the year, which after the initial setup chaos you can usually deduce after a few months, then using reserved instances will reduce your cost per instance by ~35%. The calculation is then:

( number of 24/7 instances * 227.50 / 12
  + number of 24/7 instances * 0.03 * 730 ) * 1.1
= monthly AWS cost
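Using the same two instances as above: (2 * 227.50/12 + 2 * 0.03 * 730) * 1.1 ≈ $90 a month, roughly a third less than the on demand estimate.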

Keeping costs down

If the estimated costs seem like quite a lot, they probably are. :) However, if you consider the service you get, and what a similarly flexible set up would cost you elsewhere, then it is not expensive.

Colocation usually costs about the same, especially as traffic is normally pricier with conventional service providers. And it is not flexible at all.

Hosting e.g. 3 servers yourself at home would, after a while, actually cost quite a lot. You need to consider the electricity used, the hardware, future maintenance and upgrades, and your time spent configuring the actual machines. Your ISP's upload bandwidth may be limited, and you might actually be in violation of their terms. Virtualised images can simulate a multi server layout, but you quickly hit hardware limits.

But you can keep ec2 costs down:

- use reserved instances for servers you know will be running all year
- use micro instances for low load, single purpose servers
- only boot on demand instances, such as development or tunnel servers, when you actually need them, and terminate them afterwards
- periodically clean out old AMIs, EBS snapshots and S3 data you no longer need