This is probably the least "shiny" thing AWS has announced, but to me it's the biggest one yet: I can finally have a single LB for all of prod. It's also the first thing they've announced that I will be using on day one.
It is kind of unbelievable that a feature such as this took so flipping long to add. But hey, it's here now.
Have been waiting for this for a while. However, is needing to "warm up" the load balancer to deal with sudden spikes still an issue? That was a deterrent to going all-in on the ALB despite its other benefits (SSL termination, etc.)
Also, I'm wondering if there are any obvious cons (perhaps performance?) to using one ALB to handle multiple routes vs. separating them.
You are welcome to check with AWS Support based on your predicted traffic load for extra assurance, but in my experience the new ALB no longer requires priming, even for significant traffic spikes.
If any of you are going to Dockercon in Austin stop by the AWS booth and I'd be happy to show you a demo app that uses an ALB with about ten rules to route traffic to different microservices running as tasks under EC2 Container Service. The demo includes a load test to drive traffic to the ALB and the services behind the ALB, verifying the consistent performance as traffic scales up.
I asked one of Amazon's ELB Product Managers this a few months ago and was told that no, there's no longer a need to warm up an ELB/ALB except perhaps in extreme cases.
Wow, if so, this should be front-and-center news. This best-practices page [1], last updated in 2014, still talks about "Pre-Warming".
Tangent: I wonder how often we utilize outdated "truths" in our best practices. For example, the "you must minimize the number of assets" rule was obsoleted with HTTP/2, but will probably take years to take hold.
With this announcement, is there still any benefit to having CloudFront in front of an ALB if you don't do any caching? I assume that AWS has less latency on their internal network, but I'd love to see some numbers.
Does that mean that if I have 4 EC2 instances sitting behind an ALB, I can target and hit 1 of the 4 instances if need be, e.g. for debugging purposes?
That was already possible. You can make any number of 'target groups', each of which has one or more instances. In your case you would have all 4 instances in your 'service1' group, and 1 instance in your 'debug' group. You'd then point example.com/service to 'service1' and example.com/debug to 'debug'.
With this announcement, you can point service.example.com or www.example.com to 'service1' and private.debug.example.com to 'debug'. And you could also have free, fully managed SSL on all subdomains as well.
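As a rough sketch of the setup described above, both rule types can be created with the AWS CLI's `elbv2` commands. The target-group name, VPC ID, ARNs, and domains here are all placeholders, and priorities are arbitrary:

```shell
# Create a target group for the single debug instance
# (names and VPC ID are placeholders)
aws elbv2 create-target-group \
  --name debug \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0

# Path-based rule (already available): example.com/debug/* -> 'debug'
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 10 \
  --conditions Field=path-pattern,Values='/debug/*' \
  --actions Type=forward,TargetGroupArn="$DEBUG_TG_ARN"

# Host-based rule (the new feature): private.debug.example.com -> 'debug'
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 20 \
  --conditions Field=host-header,Values='private.debug.example.com' \
  --actions Type=forward,TargetGroupArn="$DEBUG_TG_ARN"
```

Lower priority numbers are evaluated first; anything that matches no rule falls through to the listener's default action.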
You can only configure a single SSL listener per load balancer, and that listener can only use a single certificate. That means you do indeed still need to use either wildcard certs or certs with multiple hostnames. Luckily you can very easily create those through the AWS certificate manager for free.
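For reference, requesting such a multi-hostname cert from ACM is a one-liner. The domain names are placeholders; ACM then asks the domain owner to confirm the request before issuing:

```shell
# Request a free ACM certificate covering the apex domain
# and all first-level subdomains via a wildcard SAN
aws acm request-certificate \
  --domain-name example.com \
  --subject-alternative-names '*.example.com'
```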
I think ALB has only ever supported SNI. CloudFront has an offering that supports the older standard, but that's really expensive because of the need for all the dedicated static IPs.
I think the load balancers never had the option of static IPs, so it's been SNI from the start.
But yeah, like another comment said, with the built-in ACM, managing certs is a non-issue. It's like having an AWS-specific Let's Encrypt for any service you want.