fooshards
he came, he saw, he paid, he imaged, he left
Eric Fusciardi
Higher capacity to handle load
Eventually you run out of hardware to throw at a single instance
Redundancy for failover
Redundancy in general for zero-downtime deployments / restarts
I'm sure you've heard of F5
Software Solution
Hardware Solution
.. and your sanity
Until you're operating at Alexa Top 500 scale, with millions of requests an hour, where 1-2ms of latency is the difference between life and death for your servers and customers....
...simpler
Front End
Back End
Front end listening is a basic TCP/HTTP Server
Back end proxying the requests to the final endpoints is a basic TCP/HTTP Client
And then there's some logic in between for forwarding each front end connection to the correct back end
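Gluing those three pieces together, a minimal HAProxy sketch might look like this (addresses from the diagram in these slides; the `ft_main`/`bk_main` names are made up):

```haproxy
# The "server" half: listen for client connections
frontend ft_main
    bind 10.6.4.153:80
    mode http
    default_backend bk_main

# The "client" half: proxy each connection to a real endpoint
backend bk_main
    mode http
    balance roundrobin        # the logic in between: pick a server per request
    server apsrv11 10.0.2.11:80 check
    server apsrv12 10.0.2.12:80 check
```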
10.6.4.153
DNS for
agy.sircon.com
points here
10.0.2.11
apsrv11
VERTICAL HOLD!
10.0.2.12
apsrv12
10.0.2.13
apsrv13
10.0.2.14
apsrv14
I received a request on my listening (frontend) address.
I'm just gonna toss it to any of these remote (backend) server IP addresses, let them handle it, then send it back.
It doesn't care if the request came in on port 80, 443, or wasn't even an HTTP protocol, like ldaps:636. Whatever.
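In HAProxy terms, that port-agnostic behavior is `mode tcp`: bytes in, bytes out, no protocol awareness. A sketch using the LDAPS port from the slide (section names are illustrative):

```haproxy
frontend ft_ldaps
    bind *:636
    mode tcp                  # Layer 4: just shovel bytes, no HTTP parsing
    default_backend bk_ldaps

backend bk_ldaps
    mode tcp
    balance roundrobin
    server apsrv11 10.0.2.11:636 check
    server apsrv12 10.0.2.12:636 check
```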
I received a request on my listening (frontend) address on port 80.
I'm just gonna toss it to any of these remote (backend) server IP addresses, but translate it to port 8001, since that's what those webservers are using. I'll let them handle it, then send it back.
If I got a request on a port I wasn't listening on, it's gonna be dropped on the floor or sent to some default handler.
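The port translation is nothing fancy: the backend servers are simply declared on a different port than the frontend bind (80 in, 8001 out, per the slide):

```haproxy
frontend ft_web
    bind *:80                 # listen on 80
    mode http
    default_backend bk_web

backend bk_web
    mode http
    balance roundrobin
    # translate to 8001, since that's what the webservers are using
    server apsrv11 10.0.2.11:8001 check
    server apsrv12 10.0.2.12:8001 check
    server apsrv13 10.0.2.13:8001 check
    server apsrv14 10.0.2.14:8001 check
```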
I received a request on my listening (frontend) address on port 80. It's for http://herpderp.sircon.com
I need to read the request to figure out which backend to send this to. Since it's herpderp.sircon.com, I'll route it to these webservers on port 8001. If this had been a request for undertale.sircon.com or herpderp.sircon.com/blog, I'd have sent it elsewhere. I'll let them handle it, then send it back.
If I got a request on a port for a context I wasn't listening on, it's gonna be dropped on the floor or sent to some default handler.
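Reading the request to pick a backend is Layer 7 territory; in HAProxy that's ACLs on the Host header and path. A sketch using the hostnames from the slide (the backend names are made up):

```haproxy
frontend ft_web
    bind *:80
    mode http
    acl is_herpderp  hdr(host) -i herpderp.sircon.com
    acl is_undertale hdr(host) -i undertale.sircon.com
    acl is_blog      path_beg /blog
    use_backend bk_blog      if is_herpderp is_blog
    use_backend bk_undertale if is_undertale
    use_backend bk_herpderp  if is_herpderp   # webservers on port 8001
    default_backend bk_default                # or drop it on the floor
```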
I received a request on my listening (frontend) address on port 80. It's for http://herpderp.sircon.com. This is your second request, and you told me your servers need to retain session state.
I need to read the request to find your source IP, or the session cookie I added that identifies the specific backend server I sent you to. I'll send you to that same server again, let it handle it, then send it back.
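Both flavors of stickiness exist in HAProxy: `balance source` hashes on your source IP, or a `cookie` line inserts a cookie naming the exact server you landed on. A cookie-based sketch:

```haproxy
backend bk_herpderp
    mode http
    balance roundrobin
    # insert a cookie identifying the chosen server; repeat visitors stick to it
    cookie SERVERID insert indirect nocache
    server apsrv11 10.0.2.11:8001 check cookie apsrv11
    server apsrv12 10.0.2.12:8001 check cookie apsrv12
```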
Since the load balancer is both a server and a client, the server can be listening on SSL, translate the request back to clear HTTP, and push that to the webservers.
This allows the load balancer to absorb the overhead of cryptography before sending the request to the backend servers. Modern systems can have dedicated crypto processors or will delegate to a GPU to efficiently handle these operations. Make sure your load balancer supports this, and you're in business.
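In HAProxy this is a `bind ... ssl` on the frontend and plain HTTP to the backend (the certificate path here is made up):

```haproxy
frontend ft_ssl
    bind *:443 ssl crt /etc/haproxy/certs/sircon.pem   # hypothetical cert path
    mode http
    default_backend bk_web    # decrypted here; clear HTTP from this point on

backend bk_web
    mode http
    server apsrv11 10.0.2.11:8001 check
```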
Except that magical F5 box you put in your datacenter in place of a nice router (Layer 3)
is now acting at Layer 7 like some sort of monster
not fulfilling what a high performance router would do,
but still somehow chaining you into their vampiric support model
making you learn their crazy language of rule writing
sapping productivity
and wallets
Simple, yet deep configuration.
Software based.
Free. (as in beer, and speech)
But you'll run into headaches when you start doing anything non-trivial.
IT'S TIME TO MAKE SPAGHETTI DEMO
svn+ssh://botd-svnsvr.devop.vertafore.com/svn/chef/sircon_weblogic/templates/default/haproxy.cfg.erb
The config file is 222 lines
And it supplies all the routing and load balancing for an entire environment of the Sircon, G2, and supporting systems.
Including context-sensitive routing.
And SSL offloading.
The raw export of the F5 config for UAT is 8,000 lines of XML fragment garbage.
... But in reality, the F5 is a super high performance load balancer. It takes all the latency weaknesses of using a host to load balance and integrates the work into the network layer, for the absolute pinnacle of performance and stability... as long as you pay the $$$$.
It shouldn't be heckled like this.
By fooshards