Cloud hosting compared

Written by Guido Jansen in November 2009


Today I wanted to test cloud hosting for my own websites. Although I'm not a programmer/techie guy (although that seems to depend on the perception of the people I'm with), I think it's an interesting technology and I'm quite certain storage and hosting will rely on it more and more in the future. And if you (still) think cloud computing is something mystical or 'for others': if you're using any of the Google services, Facebook, Twitter or something like Dropbox or Jungledisk, you are already using it!

So what I did today was create a Rackspace Cloud Sites account and compare it with my current dedicated server solution. Some of the tests were also executed on a shared hosting provider I've always been quite content with. Both the dedicated and shared servers are located in The Netherlands; the Rackspace servers are in Texas, USA. Tests were executed at a local time between 12 and 2 AM, so we can assume the local servers weren't too busy. That is certainly a disadvantage for Rackspace, so I might redo the tests when the local time for Rackspace is also between 12 and 2 AM.

Notice: although these measurements are specific to my location and situation, I think they can still be useful for people in the same situation, or for people who want to run the tests on their own and compare the results.

Specs of the dedicated server:

Intel Dual Core Xeon 3065, 2.33GHz/4MB, 1333MHz FSB, 4GB 667MHz DDR2 RAM

Specs of the shared and cloud hosting were not available.

To test performance, I used 5 tests:

1) ping
2) wget
3) loading a (Drupal) webpage
4) httperf (HTTP server benchmarking tool)
5) Laying Siege

Limitations of my own fiber connection: 50MB/s down, 6MB/s up. So how did they compare? (You can also skip the technical gibberish and jump straight to the conclusion.)

TEST 1: ping time (99 requests)

cloud hosting: 130.53ms on average (min 130.00, max 132.00)
dedicated hosting: 7.34ms on average (min 6.88, max 10.60)
shared hosting: 12.41ms on average (min 11.90, max 16.80)

TEST 2: wget file (25,139,090 bytes)

cloud hosting: 7.7s (3.09 MB/s)
dedicated hosting: 6.0s (4.01 MB/s)

TEST 3: Loading a webpage (Drupal CMS frontpage, average over 10 reloads, measured with YSlow)

Tested conditions:
- normal: normal reload, just hitting Ctrl+R in my Firefox browser
- cleaned: cleaned Firefox cache, hitting Ctrl+Shift+R in my browser
- drupal on: Drupal performance enhancements 'on': normal caching mode on, page compression, block cache, optimized CSS and optimized JavaScript enabled
- drupal off: performance enhancements 'off': normal caching mode off, page compression, block cache, optimized CSS and optimized JavaScript disabled

Average over 10 loads (normal | cleaned | drupal on | drupal off):

cloud hosting: 0.873s | 1.876s | 1.168s | 1.728s
dedicated hosting: 0.672s | 0.792s | 0.737s | 0.920s

What we can see here: clearing the browser cache always slows down the page load, and turning on Drupal's performance enhancers (almost?) always speeds up the load time.

TEST 4: httperf (HTTP server benchmarking tool)

httperf is a tool to measure web server performance. It speaks the HTTP protocol in both its HTTP/1.0 and HTTP/1.1 flavors and offers a variety of workload generators. The command below makes a total of 100 connections, created at a fixed rate of 10 per second:

httperf --hog --server --num-conn 100 --rate 10 --timeout 5

Results from cloud hosting:

Connection rate: 9.3 conn/s (107.3 ms/conn, <=11 concurrent connections)
Connection time [ms]: min 601.5 avg 738.2 max 1294.6 median 706.5 stddev 117.4
Connection time [ms]: connect 130.8
Connection length [replies/conn]: 1.000

Request rate: 9.3 req/s (107.3 ms/req)
Request size [B]: 66.0

Reply rate [replies/s]: min 8.8 avg 9.3 max 9.8 stddev 0.7 (2 samples)
Reply time [ms]: response 461.9 transfer 145.4
Reply size [B]: header 537.0 content 9959.0 footer 2.0 (total 10498.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0

CPU time [s]: user 1.56 system 9.12 (user 14.5% system 85.0% total 99.6%)
Net I/O: 96.2 KB/s (0.8*10^6 bps)
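As a quick sanity check on these numbers: httperf's reported Net I/O should roughly equal the average reply rate times the total reply size. Plugging in the cloud hosting figures above with a throwaway awk one-liner:

```shell
# Net I/O sanity check: avg reply rate (9.3 replies/s) x total reply size
# (10498 bytes), converted to KB/s. Figures taken from the cloud results above.
awk 'BEGIN { printf "%.1f KB/s\n", 9.3 * 10498 / 1024 }'
# prints 95.3 KB/s
```

That's close to the 96.2 KB/s httperf reports; the small gap is request traffic and rounding.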

Results from dedicated hosting:

Connection rate: 10.0 conn/s (99.9 ms/conn, <=4 concurrent connections)
Connection time [ms]: min 84.2 avg 107.8 max 498.8 median 85.5 stddev 63.0
Connection time [ms]: connect 7.8
Connection length [replies/conn]: 1.000

Request rate: 10.0 req/s (99.9 ms/req)
Request size [B]: 70.0

Reply rate [replies/s]: min 10.0 avg 10.0 max 10.0 stddev 0.0 (1 samples)
Reply time [ms]: response 90.6 transfer 9.3
Reply size [B]: header 470.0 content 9967.0 footer 2.0 (total 10439.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0

CPU time [s]: user 1.55 system 8.20 (user 15.5% system 82.1% total 97.6%)
Net I/O: 102.7 KB/s (0.8*10^6 bps)

Results from shared hosting:

Connection rate: 6.7 conn/s (149.2 ms/conn, <=55 concurrent connections)
Connection time [ms]: min 2574.2 avg 4911.0 max 7712.1 median 4872.5 stddev 1831.0
Connection time [ms]: connect 12.8
Connection length [replies/conn]: 1.000

Request rate: 6.7 req/s (149.2 ms/req)
Request size [B]: 70.0

Reply rate [replies/s]: min 0.6 avg 0.9 max 1.2 stddev 0.4 (2 samples)
Reply time [ms]: response 2422.4 transfer 2656.4
Reply size [B]: header 482.0 content 26382.0 footer 2.0 (total 26866.0)
Reply status: 1xx=0 2xx=9 3xx=0 4xx=0 5xx=0

CPU time [s]: user 1.43 system 13.02 (user 9.6% system 87.3% total 96.8%)
Net I/O: 16.3 KB/s (0.1*10^6 bps)

TEST 5: Laying Siege

Siege is an HTTP load testing and benchmarking utility. It was designed to let web developers measure their code under duress, to see how it will stand up to load on the internet. Siege supports basic authentication, cookies, and the HTTP and HTTPS protocols. It lets its user hit a web server with a configurable number of simulated web browsers. Those browsers place the server "under siege." (Take a look at the Siege website for more information.)

What I executed was:

Test 5.1: siege -c25 -t1M (25 concurrent users, one minute stress test)

dedicated hosting results:

Transactions:                    1027 hits
Availability:                   97.44 %
Elapsed time:                   59.56 secs
Data transferred:                3.00 MB
Response time:                   0.06 secs
Transaction rate:               17.24 trans/sec
Throughput:                      0.05 MB/sec
Concurrency:                     1.02
Successful transactions:         1027
Failed transactions:               27
Longest transaction:             9.03
Shortest transaction:            0.03

cloud hosting results:

Transactions:                     877 hits
Availability:                   97.55 %
Elapsed time:                   59.11 secs
Data transferred:                2.64 MB
Response time:                   0.35 secs
Transaction rate:               14.84 trans/sec
Throughput:                      0.04 MB/sec
Concurrency:                     5.14
Successful transactions:          877
Failed transactions:               22
Longest transaction:            14.32
Shortest transaction:            0.28

shared hosting results:

Transactions:                     153 hits
Availability:                  100.00 %
Elapsed time:                   59.28 secs
Data transferred:                2.32 MB
Response time:                   8.45 secs
Transaction rate:                2.58 trans/sec
Throughput:                      0.04 MB/sec
Concurrency:                    21.80
Successful transactions:          153
Failed transactions:                0
Longest transaction:            13.75
Shortest transaction:            2.50

So let's increase the stakes a bit...

Test 5.2: siege -c300 -t2M (300 concurrent users, two minute stress test)

dedicated hosting results:

Transactions:                    3346 hits
Availability:                   80.43 %
Elapsed time:                  119.24 secs
Data transferred:                9.81 MB
Response time:                   0.89 secs
Transaction rate:               28.06 trans/sec
Throughput:                      0.08 MB/sec
Concurrency:                    24.88
Successful transactions:         3346
Failed transactions:              814
Longest transaction:            26.08
Shortest transaction:            0.03

cloud hosting results:

Transactions:                    2935 hits
Availability:                   82.05 %
Elapsed time:                  119.40 secs
Data transferred:                8.59 MB
Response time:                   1.21 secs
Transaction rate:               24.58 trans/sec
Throughput:                      0.07 MB/sec
Concurrency:                    29.85
Successful transactions:         2935
Failed transactions:              642
Longest transaction:            26.39
Shortest transaction:            0.29

shared hosting results:

Transactions:                      32 hits
Availability:                    3.59 %
Elapsed time:                  119.75 secs
Data transferred:                0.61 MB
Response time:                  19.42 secs
Transaction rate:                0.27 trans/sec
Throughput:                      0.01 MB/sec
Concurrency:                     5.19
Successful transactions:           32
Failed transactions:              859
Longest transaction:            39.65
Shortest transaction:            0.00

(This actually killed the server... whoops... luckily it came back quickly.)
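If you want to double-check siege's arithmetic, the Availability figure is simply successful transactions divided by total attempts (successful + failed). Recomputing it for the dedicated hosting run of test 5.2 with a throwaway awk one-liner:

```shell
# Availability = successful / (successful + failed) * 100
# Figures from the dedicated hosting results of test 5.2 above:
# 3346 successful transactions, 814 failed.
awk 'BEGIN { printf "%.2f %%\n", 3346 / (3346 + 814) * 100 }'
# prints 80.43 %
```

That matches the Availability line siege reported, so the failed transactions are indeed counted against it.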

So the above only takes performance into account. Let's take a look at pricing:

My dedicated server: €160/month
Rackspace cloud hosting: $100 (around €75)/month
Shared hosting: €30 - €70/month (price has no impact on performance, only on disk space and data limit)

Of course specs and performance can vary wildly, but the above is what I tested.

The Conclusion

As you can see, the dedicated hosting almost always outperformed both cloud and shared hosting. The only noticeable advantage for cloud hosting compared to dedicated hosting is in test 5.2, when there are many requests at the same time. Cloud hosting also outperforms shared hosting in tests 4 and 5; shared hosting couldn't even handle test 5.2 and crashed. For me, the performance advantage of cloud hosting under many requests isn't really a big advantage: my sites don't need to handle that kind of traffic. I believe the main disadvantage of cloud hosting is the location, and thus the high ping/connection time from where I'm living.

Taking costs into account, I think cloud hosting (even with the high ping times) is a very good alternative to shared hosting. It's not much more expensive (when you're on high-end shared hosting), but performance is much better and more stable. And as long as normal page loads don't exceed 2 seconds (it's still below 1 second), it's quite acceptable. It comes close to dedicated performance for less than 50% of the price!

Rackspace let me know that they are working on providing the same service in their European datacenters somewhere in Q2 next year, which would mean much better ping times. I'm looking forward to redoing the tests when that happens!
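For what it's worth, the "less than 50% of the price" claim checks out against the prices listed above (€75/month for cloud versus €160/month for dedicated):

```shell
# Price ratio: Rackspace cloud (~EUR 75/month) vs my dedicated server (EUR 160/month)
awk 'BEGIN { printf "%.0f%%\n", 75 / 160 * 100 }'
# prints 47%
```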
