A crucial point. A few years back, I had to deal with some rather "chatty" applications that were attempting to span filesets/databases located in both the U.S. and India. Latency on a single data exchange might be annoying; multiply that by hundreds or thousands of exchanges per human-level interaction with the application, and the whole thing ground down to something that made a "crawl" look high-speed.
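The arithmetic behind that slowdown is worth making explicit. A quick back-of-the-envelope sketch (the numbers here are illustrative, not measurements from that deployment) shows how sequential round trips compound:

```python
# Back-of-the-envelope: how per-exchange latency compounds when an
# application makes many sequential round trips per user interaction.
# All figures below are hypothetical, for illustration only.

def interaction_time(round_trips: int, latency_s: float) -> float:
    """Wall-clock time for one user-level interaction that issues
    `round_trips` sequential request/response exchanges."""
    return round_trips * latency_s

# ~0.5 ms is a plausible LAN round trip; ~250 ms is a plausible
# U.S.-to-India WAN round trip.
lan = interaction_time(round_trips=1000, latency_s=0.0005)
wan = interaction_time(round_trips=1000, latency_s=0.250)

print(f"LAN: {lan:.1f} s, WAN: {wan:.1f} s")  # → LAN: 0.5 s, WAN: 250.0 s
```

A thousand exchanges that feel instantaneous on a LAN become four-plus minutes per click over the ocean, and nothing about the application's code changed.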
Partly the vendor's fault, for not accurately representing this situation. Partly the Company's fault for neither anticipating nor investigating it (nor listening to me when I advocated doing so); further, for attempting a non-standard deployment to save a few bucks.
I guess I'll blame the vendor again for such an unnecessarily verbose and crap-laden application design. Anyway, the whole thing turned into one giant, festering pool of fail that the users actively avoided when they could, and loathed and sweated over when they couldn't. Eventually, turnover at the pointy-haired-boss level made it politically feasible to kill more and more of the pieces, and now, some years later, the Company has supposedly ponied up the money for a solution that actually works (well, the demos look and work better, and the basic design seems capable of allowing it to work).
When I think of the countless hours and frustration wasted on that doomed deployment, I still wince.
So, anyway: watch your network latency, know your application design, and pay attention to the implications. When testing, measure network performance, and inject latency, noise, and the like to get a picture of what things will look like in the real world.
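A minimal sketch of one way to do that injection at the application level: wrap whatever call your test client makes with an artificial, jittered delay. The function names and delay figures here are hypothetical stand-ins, not any particular tool's API:

```python
import random
import time
from functools import wraps

def inject_latency(mean_s: float, jitter_s: float = 0.0):
    """Decorator that sleeps for a randomized delay before each call,
    roughly simulating WAN round-trip latency in a test environment."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            delay = max(0.0, random.gauss(mean_s, jitter_s))
            time.sleep(delay)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_latency(mean_s=0.250, jitter_s=0.050)  # ~250 ms, like a U.S.-India hop
def fetch_record(record_id: int) -> dict:
    # Stand-in for the real client call under test.
    return {"id": record_id}

start = time.monotonic()
fetch_record(42)
print(f"one exchange took {time.monotonic() - start:.3f} s")
```

For system-level testing, operating-system tools can do the same thing below the application, e.g. Linux's `tc` with the `netem` queueing discipline can impose delay, jitter, and packet loss on a real network interface, so the whole stack feels the latency rather than just the one call you remembered to wrap.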
Sounds simple; nonetheless, this otherwise technologically sophisticated company still managed to completely screw it up.