For a while now, Amazon has offered a simple load balancing solution called Elastic Load Balancing (ELB). For simple sites, this avoids the need to run dedicated load balancer instances. Unfortunately, Amazon’s ELB solution is fairly limited feature-wise, so you may be forced to run your own load balancer instance anyway. Below are some of the limitations I ran into at SocialMedia.com:
- No ACL feature. This means it’s impossible to make forwarding decisions to backend servers based on URLs, headers, etc.
- Once the ELB is created, it’s impossible to change the port configuration. If you ever need to do this, you’ll need to create a brand new ELB and update the appropriate DNS records to point to the new ELB. Annoying. (Update: It appears that there may be a way to change the port configuration after all…)
- More dynamic/automated environments need to be careful when managing instances that are behind an ELB. I found that it’s not a good idea to simply stop your load-balanced instances without de-registering them from the ELB first. It’s also important to ensure that your instances are in a “running” state before registering them with an ELB. You can read why in a thread I started on the AWS Developer Forum.
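If you automate that register/deregister dance, it looks roughly like the following. This is a sketch using the modern AWS CLI for classic ELBs (at the time I was using the old ELB API tools); the load balancer name and instance ID are placeholders:

```shell
# Deregister the instance before stopping it
aws elb deregister-instances-from-load-balancer \
    --load-balancer-name my-elb \
    --instances i-0123456789abcdef0

# ... stop / replace the instance here ...

# Make sure the instance is actually "running" before re-registering it
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-elb \
    --instances i-0123456789abcdef0
```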
My biggest issue was the first one. For one of my latest projects at SocialMedia.com, I needed the ability to accept all connections on the same external port and redirect them to different internal ports depending on the request. For example, a request for http://api.example.com/services/foo/v1/something needed to get forwarded to port 123, whereas a request for http://api.example.com/services/bar/v1/something needed to get forwarded to port 456. You can do this easily with HAProxy ACLs.
First I define a “frontend” section. This frontend listens on external port 80 and contains two ACLs, one for each of the requests in my example above. Line 4 defines an ACL called services_foo_v1 which will match any request with a path that starts with /services/foo/v1/. Line 5 forwards all requests that match services_foo_v1 to the “backend” with the same name. The default_backend line simply forwards all other requests to the “website” backend:
    frontend http
        bind *:80

        acl services_foo_v1 path_reg ^/services/foo/v1/
        use_backend services_foo_v1 if services_foo_v1

        acl services_bar_v1 path_reg ^/services/bar/v1/
        use_backend services_bar_v1 if services_bar_v1

        default_backend website
Next I define my “backend” sections. Each backend contains a list of servers/ports that are able to handle certain requests. Lines 2 and 7 are a bit of magic that you may or may not need for your environment. They ensure that (for example) the original request for /services/foo/v1/something on the external interface gets rewritten to /something before being passed on to the backend servers.
    backend services_foo_v1
        reqrep ^([^\ ]*)\ /services/foo/v1/(.*) \1\ /\2
        server app01 app01.example.com:123
        server app02 app02.example.com:123
        server app03 app03.example.com:123
    backend services_bar_v1
        reqrep ^([^\ ]*)\ /services/bar/v1/(.*) \1\ /\2
        server app01 app01.example.com:456
        server app02 app02.example.com:456
        server app03 app03.example.com:456
    backend website
        server webserver www.example.com:80
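To make the behavior concrete, here’s a tiny Python sketch — my own illustration, not anything HAProxy runs — of what the frontend ACLs and backend reqrep rewrites above do to an incoming request path:

```python
# Toy model of the HAProxy config above: each route pairs a path
# prefix (the ACL) with a backend port, and stripping the prefix
# stands in for the reqrep rewrite.
ROUTES = [
    ("/services/foo/v1/", 123),
    ("/services/bar/v1/", 456),
]

def route(path):
    """Return (backend_port, rewritten_path) for a request path."""
    for prefix, port in ROUTES:
        if path.startswith(prefix):                # acl ... path_reg ^<prefix>
            return port, "/" + path[len(prefix):]  # reqrep strips the prefix
    return 80, path                                # default_backend website

print(route("/services/foo/v1/something"))  # (123, '/something')
print(route("/services/bar/v1/something"))  # (456, '/something')
print(route("/index.html"))                 # (80, '/index.html')
```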
HAProxy High Availability on EC2
In a normal (non-EC2) environment, high-availability is achieved by running two HAProxy instances with a shared IP address and a heartbeat protocol between the instances. The idea is that if one HAProxy instance goes down, the other will simply take over the shared IP address. Unfortunately, it’s just not possible to share private IP addresses like this in EC2. So what other options are there?
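For reference, the classic setup looks something like this keepalived (VRRP) fragment — all values here are illustrative — and it’s exactly this shared-IP takeover trick that EC2’s network won’t allow:

```
vrrp_instance LB_VIP {
    state MASTER            # BACKUP on the second HAProxy box
    interface eth0
    virtual_router_id 51
    priority 101            # lower priority on the backup
    virtual_ipaddress {
        10.0.0.100          # the shared IP the pair hands off
    }
}
```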
The wrong solution is to use round-robin DNS records to distribute traffic between the two load balancer instances:
    api.example.com.    300    IN    A    184.108.40.206
    api.example.com.    300    IN    A    220.127.116.11
This will sort-of work while both instances are running, but if one goes down, half of your traffic will be sent to a dead load balancer. Remember kids, round-robin DNS records are not a high availability solution. ;-)
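To see why, here’s a toy simulation — plain Python, not real DNS — of resolvers rotating through the two A records above after one balancer has died:

```python
import itertools

# The two A records from the round-robin setup above; pretend the
# second load balancer has crashed.
health = {"184.108.40.206": "alive", "220.127.116.11": "dead"}

# Resolvers hand out the records in rotation, oblivious to health.
rotation = itertools.cycle(health)
results = [health[next(rotation)] for _ in range(100)]

print(results.count("dead"))  # 50 -- half the traffic hits the corpse
```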
Other people have suggested using an Amazon elastic IP in conjunction with the load balancer instances. The idea is to detect the failure of one of the instances (via your existing monitoring system, etc.) and automatically reassign the elastic IP to the other instance. Although this solution sounds simple, uptime is important enough to my company that I don’t trust myself to make it totally automated and 100% foolproof. It’s the kind of thing I just don’t want to have to worry about.
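For completeness, the reassignment itself is a single API call — the hard part is triggering it reliably. A sketch with the modern AWS CLI (EC2-Classic addressing; the instance ID and IP are placeholders):

```shell
# On detecting failure of the active HAProxy instance, move the
# elastic IP over to the standby instance:
aws ec2 associate-address \
    --instance-id i-0standby0000000000 \
    --public-ip 203.0.113.10
```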
Fortunately, there’s another much simpler solution. Just stick an ELB in front of your HAProxy instances:
                         +-----------+
                         | Amazon ELB|
                         +-----+-----+
                               |
                 +-------------+-------------+
                 |                           |
           +-----+-----+               +-----+-----+
           | haproxy01 |               | haproxy02 |
           +-----+-----+               +-----+-----+
                 |                           |
                 +-------------+-------------+
                               |
                 +-------------+-------------+
                 |             |             |
             +---+---+     +---+---+     +---+---+
             | app01 |     | app02 |     | app03 |
             +-------+     +-------+     +-------+
Elastic load balancers already have built-in redundancy (a single ELB is actually backed by a pool of several load balancers which automatically grows and shrinks according to current load), so we don’t have to worry much about that. Then we can stick each HAProxy instance in its own EC2 availability zone to guard against internal EC2 network issues.

Now, assuming all HAProxy instances are configured identically (synchronized via Chef, of course ;-), either instance can go down and it won’t matter, because the ELB will simply route traffic to the remaining live instance. Another nice thing about this solution is that both HAProxy instances handle requests at the same time (as opposed to having a backup that sits idle until an emergency), so you get some extra capacity on top of your redundancy. Obviously, you’ll want to keep an eye on the total load across all HAProxy instances to ensure that you always have enough spare capacity to survive a failure.
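One detail worth handling: the ELB needs a health check target on each HAProxy instance so that a dead one is actually pulled from rotation. HAProxy’s monitor-uri directive can answer those checks itself without bothering your backends — a sketch, with the port and URI being my own choices:

```
listen health_check
    bind *:8080
    mode http
    monitor-uri /elb-check   # HAProxy answers 200 OK here by itself;
                             # point the ELB's health check at this URI
```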