The web is a new platform, especially the modern web-2.0, AJAX-driven incarnation of it. It has been part of public awareness for barely five years. Five years! Windows has been around, since its 3.11 incarnation, for over twenty years. At long last, however, we have what the computing world has been dreaming about for decades: a ubiquitous platform for writing applications that use the network natively. From “communicate with people everywhere” (Gmail and other web-based mail services) to “buy anything from anyone” (eBay and other online shops), we have a new platform with endless new possibilities, many of them unexplored simply because there has not been enough time to explore them.
There is only one way to beat the others, to be first, to be the Amazon everyone remembers rather than one of the millions of web stores that vanished into obscurity: make things fun and easy for the user. In turn, that means three things: features, features, and features. Great UI design and ease of use only look simple, like a juggler’s trick. Underneath, they are composed of many features, which often need the extra smarts to know when to turn themselves off or to change dynamically. This means that programming the next web 2.0 success is hard. The only way to stay ahead is to implement more features, and better ones.
There is one thing your users do not care about: how many web servers you have, or how you scale your applications. If you spend too much time scaling the “right way”, keeping everything clean and orderly, you are spending valuable engineering time, time your competitors are spending on making their web 2.0 applications better than yours. Their datacenter might feature five layers of interleaved load balancers; they might need twice as many servers as you do; but if your application has no users, your tidiness does not help you.
After a few rounds of this competition, the successful web 2.0 startups look like a mess inside. They hired many software engineers, who worked nights and weekends delivering the features they needed. They had to scale up as fast as they could, which meant patching things as they went, with little room for planning. Some startups have documented their scaling travails (YouTube being one), and they make fascinating reading, but it always comes down to the same fact: scaling fast is always possible if you work hard and do not care how clean the result is.
In the end, it is hard for anyone to know what is happening inside the application. The features were written quickly, with little design or documentation. The servers were connected together in whatever way seemed most expedient at the time. As the company grows, and users call in with more and more problem tickets, managing the issues and finding the root causes becomes an exercise in voodoo and futility, with too many rules of thumb, too much folklore, and too few organized procedures. To reiterate: getting to this stage is *necessary* if the venture is to succeed in a world filled with competitors.
Network performance solutions must fit into this environment: an environment where nobody has time to configure them, and where the person configuring them may not even know the right parameter values. They must be able to map the structure of the data center, both physical and at the application level, by themselves, and to give high-quality feedback that allows the operations group to pinpoint issues in a way the R&D group can act on. A network performance tool that needs to be heavily configured, or worse, reconfigured every time the structure of the data center or the application changes, is not a solution; it is a new problem.