Block Devices Are Tied to the Instance Type
On EC2, each instance type has a predefined CPU and memory size, but thanks to Elastic Block Store (EBS, which is managed independently from the actual instance), you can make your block devices as large or as small as you want. You can also attach additional block devices as needed. This gives you a lot of flexibility to provision the appropriate resources for your specific application and to grow things as you need to. RackSpace Cloud has no EBS equivalent, so the size of your disk seems to be static and tied to the instance type. This means if you start to run out of space, you apparently have no choice but to upgrade to the next instance size, regardless of whether you actually need the additional CPU/memory. Based on a conversation I had with support, I’m guessing this has to do with the fact that all block devices are created locally on the physical VM host, rather than on a SAN. So I can definitely see how this architecture would make it difficult (or even impossible) for RackSpace to implement any of the features made possible by Amazon’s EBS.
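As a sketch of what that flexibility looks like in practice (the volume ID, instance ID, device name, and mount point below are all made up), growing an EC2 instance's storage with the EC2 API tools goes roughly like this:

```shell
# Create a 50 GB EBS volume in the same availability zone as the instance.
ec2-create-volume -s 50 -z us-east-1a

# Attach it to the running instance as a new block device
# (IDs are placeholders).
ec2-attach-volume vol-1234abcd -i i-1234abcd -d /dev/sdf

# Then, on the instance itself: make a filesystem and mount it.
mkfs.ext3 /dev/sdf
mount /dev/sdf /data
```

No instance resize, no reboot, and no extra CPU/memory you didn't ask for.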
See The ability to choose amount of RAM and HD space separately on the RackSpace Cloud feedback forum.
Password Logins by Default
On EC2, one of the first things you do is set up an SSH keypair for your account. This saves you from having to set a root password for new instances. You just select the appropriate keypair when creating the new instance and log in with your SSH key. As far as I know, there is no such feature in the RackSpace Cloud. After you request a new instance, you have to wait for a randomized root password to be emailed to you. Let me repeat that in case you missed it. Your root password is emailed to you in plain text over the Internet. Hmm…
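For comparison, the EC2 workflow looks roughly like this (the keypair name, AMI ID, and hostname are placeholders):

```shell
# Generate a keypair. EC2 stores the public half; the private key is
# printed once, and only the BEGIN/END block should go in the file.
ec2-add-keypair my-keypair > my-keypair.pem   # then trim the header line
chmod 600 my-keypair.pem

# Launch an instance with that keypair.
ec2-run-instances ami-12345678 -k my-keypair

# Log straight in with the key -- no root password is ever created,
# and nothing sensitive is ever sent over email.
ssh -i my-keypair.pem root@ec2-203-0-113-10.compute-1.amazonaws.com
```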
See Do not send root password by email on the RackSpace Cloud feedback forum.
Unable to Stop Instances
Yup, it’s just like being in the old days of EC2 before root EBS volumes. Once an instance is started, you can reboot or terminate it, but you can’t actually stop it to save money. At my previous company, part of our continuous deployment process was to automatically spin up a staging environment to test new code before actually deploying it into production. We also had a dedicated testing environment which we would spin up on demand for testing various things. Traditionally, it was very expensive to run duplicate (or triplicate) environments for testing, but EC2 makes this sort of thing trivially inexpensive, since the instances don’t actually have to be running most of the time. I don’t think something like this would be feasible in the RackSpace Cloud, because constantly terminating and rebuilding every instance in every environment would make things a lot slower and more difficult to manage in general. I realize the process could be sped up a bit by creating a bunch of VM images, but I don’t even want to get started on why I hate that idea. Configuration management has made images obsolete as far as I’m concerned.
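With an EBS-backed instance, the pause/resume cycle that this kind of staging workflow relies on is a one-liner each way (instance ID is a placeholder):

```shell
# Stop the staging instance when nobody is using it: compute billing
# stops, but the root volume and all its state persist.
ec2-stop-instances i-1234abcd

# Bring it back, state intact, when it's needed again.
ec2-start-instances i-1234abcd
```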
See Need option to suspend servers to save money on the RackSpace Cloud feedback forum.
No Concept of Security Groups
I guess I just got used to the peace and security of EC2 security groups, because I took it for granted that RackSpace Cloud would have something similar. So boy was I surprised when I discovered that my first new instances were essentially sitting wide open on the Internet! Now if you’re using a configuration management system, it’s not a huge deal to set up a local firewall on all your instances. But it can definitely be scary, because the lack of real console access in the cloud means there’s a very real possibility that you could accidentally lock yourself out of an instance while testing new firewall rules.
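One way to take some of the fear out of testing firewall rules without console access (the file paths here are just my own convention) is to schedule an automatic rollback before applying anything:

```shell
# Save the current, known-good ruleset.
iptables-save > /root/iptables.known-good

# Schedule an automatic rollback in 5 minutes, in case the new rules
# lock us out of SSH (requires the at daemon).
echo "iptables-restore < /root/iptables.known-good" | at now + 5 minutes

# Apply the new rules.
iptables-restore < /root/iptables.new

# If SSH still works, cancel the pending rollback:
#   atq            # find the job number
#   atrm <job-id>  # remove it
```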
See Create EC2-like security groups, so you don’t have to configure iptables for each instance on the RackSpace Cloud feedback forum.
Public Hostnames Don’t Resolve to Internal IPs
One of the nice things about the way DNS is configured on EC2 is that when you resolve an instance’s public hostname from another instance, you’ll actually get the internal IP address. This means you can use your public hostnames everywhere, and everything will continue to work just fine. Since DNS doesn’t work this way in RackSpace, things just get a bit more complicated, but again, this is mostly just an annoyance to me right now.
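For example (the hostname and addresses here are made up), the same lookup gives different answers depending on where you run it:

```shell
# Hypothetical public hostname for an EC2 instance:
HOST=ec2-203-0-113-10.compute-1.amazonaws.com

# Run from your laptop, this returns the public address (203.0.113.10).
# Run from another EC2 instance, the very same lookup returns the
# instance's private 10.x address, so traffic between your servers
# stays on the internal network even though the config only ever
# mentions public hostnames.
dig +short "$HOST"
```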
Unable to Change Filesystem
The default filesystem on Ubuntu is ext3. Want to convert to ext4 in order to (for example) run MongoDB according to 10gen’s official recommendations? Oops, too bad.
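For comparison, on a system where you do control the block device, the in-place upgrade is straightforward (the device name and mount point are placeholders; take a backup first, and only run this against an unmounted filesystem):

```shell
# Turn on the ext4 features on the existing ext3 filesystem.
tune2fs -O extents,uninit_bg,dir_index /dev/sdf

# A full fsck is required after changing filesystem features.
e2fsck -fpD /dev/sdf

# Mount (and update /etc/fstab) as ext4. Note that only newly written
# files will use extents; existing files keep their old block mapping.
mount -t ext4 /dev/sdf /data
```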
Cloud Load Balancers Do Not Support SSL Termination
In EC2, it’s possible to upload your SSL certificates to an Elastic Load Balancer (ELB) and have your SSL connections terminate right there (i.e. to accept and decrypt SSL traffic on the ELB and forward it in plain text to the back end).
              +------------+
              | Amazon ELB |   <- SSL terminates here
              +------------+
               /          \
        +-------+      +-------+
        | app01 |      | app02 |   <- receive plain-text HTTP
        +-------+      +-------+
It’s nice to be able to offload some work to the ELB, but it’s (almost) necessary if you have something like HAProxy or Varnish in front of your application servers (HAProxy and Varnish will not be able to read your SSL-encrypted traffic, and therefore will not be able to make decisions based on the requested URL, headers, etc.). Since RackSpace’s load balancers can’t terminate SSL, you’ll have to stick something like stunnel between the load balancer and HAProxy/Varnish/whatever to handle the SSL decryption.
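As a sketch, the stunnel side of that workaround might look something like this (the certificate paths and the back-end port are assumptions; here HAProxy or Varnish would be listening on 8080):

```
; /etc/stunnel/stunnel.conf (sketch)
cert = /etc/ssl/certs/example.com.pem
key = /etc/ssl/private/example.com.key

[https]
; Encrypted traffic arrives from the RackSpace load balancer...
accept = 443
; ...and is forwarded, decrypted, to HAProxy/Varnish on localhost.
connect = 127.0.0.1:8080
```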
See Support SSL termination on Cloud Load Balancers on the RackSpace Cloud feedback forum.
No X-Forwarded-For, X-Forwarded-Port, or X-Forwarded-Proto Headers
These headers are pretty important (especially X-Forwarded-For) if you want to know anything about the clients connecting to your servers. Not having them means all your HTTP requests will appear to come from your load balancer, which is essentially useless. RackSpace support told me X-Forwarded-For would be available in Q3 of this year, and that X-Cluster-Client-Ip can be used in the meantime (though it appears that X-Cluster-Client-Ip still isn’t sent with HTTPS requests!), but there are apparently no plans to support X-Forwarded-Port or X-Forwarded-Proto.
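In the meantime, you can at least get the real client IP into your logs by using the header RackSpace does send. A sketch for Apache (and remember, this header apparently isn’t sent with HTTPS requests):

```
# Log X-Cluster-Client-Ip in place of the connection's source address,
# which would otherwise always be the load balancer's IP.
LogFormat "%{X-Cluster-Client-Ip}i %l %u %t \"%r\" %>s %b" rackspace_lb
CustomLog /var/log/apache2/access.log rackspace_lb
```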
See add the x-forwarded-for header to traffic from your cloud load balancer. on the RackSpace Cloud feedback forum.
HTTPS Health Checks on Cloud Load Balancers Occur in Plain Text
How on Earth did this get past QA? Basically what this means is if you set up an HTTPS load balancer (e.g. listening on port 443 and forwarding to 443 on the backend), and you set up an HTTPS health check from the load balancer (i.e. to check the HTTPS version of your site at https://host.example.com/health), you’ll discover that the load balancer essentially makes requests for http://host.example.com:443/health, which will obviously never work, and will result in the load balancer removing all of your instances from rotation. The only workaround is to use the CONNECT health check method, which can only ensure that a port is listening.
Update: This should be fixed as of October 4th, 2011.
Based on what I’ve seen so far, I don’t think RackSpace’s Cloud offering even comes close to Amazon’s right now in terms of features and flexibility. EC2 feels to me like something that was designed from the ground up to be essentially “programmable infrastructure,” whereas RackSpace Cloud feels more like a thin wrapper around a Xen or VMware cluster. Though I fully admit that I’ve only been using it for a couple of weeks at this point, so I could be totally missing things, in which case I would love to get some feedback on some of the issues I’ve raised above.
One thing I think RackSpace does have over Amazon is the ability to mix virtual instances with physical servers. I could definitely see the value in, for example, running some application servers in the cloud for flexibility and running your database on physical hardware for performance (I think the problems with EBS’s IO are pretty well known at this point).