Comparing instance prices on the Amazon cloud

As the largest cloud computing provider, Amazon Web Services (AWS) offers various options for using compute power on an “as-needed” basis. You can choose the size and type of machine, how many machines you need, and a pricing model in which you “bid” for the resource. Bidding means you might have to wait longer to get the capacity, but you get an impressive discount. You can choose your machines from the AWS dashboard, or query the prices programmatically, as sketched below.
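Here is a minimal sketch, assuming Python with the boto3 library and AWS credentials already configured (the instance types are just examples). It uses the EC2 DescribeSpotPriceHistory call to fetch the most recent spot prices in eu-central-1, the Frankfurt region used in the table that follows:

```python
import boto3
from datetime import datetime, timezone

# Sketch: fetch the most recent Linux/UNIX spot prices in eu-central-1 (Frankfurt).
# Assumes AWS credentials are already configured (environment or ~/.aws/credentials).
ec2 = boto3.client("ec2", region_name="eu-central-1")

response = ec2.describe_spot_price_history(
    InstanceTypes=["m4.large", "m4.xlarge", "m4.2xlarge"],  # example types
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc),  # "now" returns only the latest price per type and AZ
    MaxResults=20,
)

for entry in response["SpotPriceHistory"]:
    print(entry["InstanceType"], entry["AvailabilityZone"], entry["SpotPrice"])
```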

Here is a comparison of the current prices for “General Purpose – Current Generation” AWS machines in the EU (Frankfurt) region (as of 13/04/2017):

| Instance Type | vCPU | ECU | Memory (GiB) | Instance Storage (GB) | On-Demand Price per Hour (Linux/UNIX) | Spot Price per Hour | Saving % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| m4.large | 2 | 6.5 | 8 | EBS Only | $0.129 | $0.0336 | 74 |
| m4.xlarge | 4 | 13 | 16 | EBS Only | $0.257 | $0.0375 | 85 |
| m4.2xlarge | 8 | 26 | 32 | EBS Only | $0.513 | $0.1199 | 77 |
| m4.4xlarge | 16 | 53.5 | 64 | EBS Only | $1.026 | $0.3536 | 66 |
| m4.10xlarge | 40 | 124.5 | 160 | EBS Only | $2.565 | $1.1214 | 56 |
| m4.16xlarge | 64 | 188 | 256 | EBS Only | $4.104 | $0.503 | 88 |
| m3.medium | 1 | 3 | 3.75 | 1×4 SSD | $0.079 | $0.0114 | 86 |
| m3.large | 2 | 6.5 | 7.5 | 1×32 SSD | $0.158 | $0.0227 | 86 |
| m3.xlarge | 4 | 13 | 15 | 2×40 SSD | $0.315 | $0.047 | 85 |
| m3.2xlarge | 8 | 26 | 30 | 2×80 SSD | $0.632 | $0.1504 | 76 |

This is only a selection of the available machine options, and the prices obviously change over time, but the message should be clear: for these machines, the spot price was between 56% and 88% below the on-demand price.
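The "Saving %" column is just simple arithmetic on the two hourly prices; as a quick sketch (the helper function is ours, not something AWS provides):

```python
# Sketch: the spot saving as a percentage of the on-demand hourly price.
def spot_saving_percent(on_demand: float, spot: float) -> float:
    return (1 - spot / on_demand) * 100

# Example from the table: m4.xlarge at $0.257 on-demand vs $0.0375 spot.
print(round(spot_saving_percent(0.257, 0.0375)))  # 85
```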

Machine categories / families

To get an idea of what the different machines are, here are the current instance categories (a sketch for listing these programmatically follows the tables below):

| Instance Family | Current Generation Instance Types |
| --- | --- |
| General purpose | t2.nano, t2.micro, t2.small, t2.medium, t2.large, t2.xlarge, t2.2xlarge, m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge, m4.16xlarge, m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.12xlarge, m5.24xlarge |
| Compute optimized | c4.large, c4.xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge, c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.18xlarge |
| Memory optimized | r4.large, r4.xlarge, r4.2xlarge, r4.4xlarge, r4.8xlarge, r4.16xlarge, x1.16xlarge, x1.32xlarge, x1e.xlarge, x1e.2xlarge, x1e.4xlarge, x1e.8xlarge, x1e.16xlarge, x1e.32xlarge |
| Storage optimized | d2.xlarge, d2.2xlarge, d2.4xlarge, d2.8xlarge, h1.2xlarge, h1.4xlarge, h1.8xlarge, h1.16xlarge, i3.large, i3.xlarge, i3.2xlarge, i3.4xlarge, i3.8xlarge, i3.16xlarge |
| Accelerated computing | f1.2xlarge, f1.16xlarge, g3.4xlarge, g3.8xlarge, g3.16xlarge, p2.xlarge, p2.8xlarge, p2.16xlarge, p3.2xlarge, p3.8xlarge, p3.16xlarge |

 
and slightly older models:

| Instance Family | Previous Generation Instance Types |
| --- | --- |
| General purpose | m1.small, m1.medium, m1.large, m1.xlarge, m3.medium, m3.large, m3.xlarge, m3.2xlarge |
| Compute optimized | c1.medium, c1.xlarge, cc2.8xlarge, c3.large, c3.xlarge, c3.2xlarge, c3.4xlarge, c3.8xlarge |
| Memory optimized | m2.xlarge, m2.2xlarge, m2.4xlarge, cr1.8xlarge, r3.large, r3.xlarge, r3.2xlarge, r3.4xlarge, r3.8xlarge |
| Storage optimized | hs1.8xlarge, i2.xlarge, i2.2xlarge, i2.4xlarge, i2.8xlarge |
| GPU optimized | g2.2xlarge, g2.8xlarge |
| Micro | t1.micro |

Source: AWS
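The family tables above will also age. If you want an up-to-date list, current-generation instance types and their specs can be pulled from the EC2 API as well; here is a minimal sketch, again assuming Python with boto3 and configured credentials, using the DescribeInstanceTypes call and its current-generation filter:

```python
import boto3

# Sketch: list current-generation instance types with their vCPU count and memory.
ec2 = boto3.client("ec2", region_name="eu-central-1")

paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "current-generation", "Values": ["true"]}]
)

for page in pages:
    for itype in page["InstanceTypes"]:
        name = itype["InstanceType"]
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        print(f"{name}: {vcpus} vCPU, {mem_gib:.1f} GiB")
```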