My new blog on technology entrepreneurship and start-ups

Note – this is my old blog on data center virtualization and cloud related technologies.
My current blog about technology entrepreneurship and start-ups is here.

What would it take for the clouds to rain?


  1. Cloud is definitely the hot word in infrastructure talk these days. In a recent article in Network World, Frank Dzubeck discussed five open questions that need to be addressed for clouds to flourish, most of all in the enterprise environment with its share of regulations and procedures. Before addressing those questions from a technical perspective, it is important to also address them from the business angle, as the solution often lies there.
    The questions that need to be addressed are: 1. security, 2. performance, 3. management, 4. regulation and 5. variable cost structure.
  2. On the business side – enterprise cloud services are definitely a high-caliber problem to solve, with multiple technical variables entangled with legal contracts, so reducing it to an evolutionary process by looking at existing services we already consume turns a tangled web into a solvable problem. Adapting solutions from the worlds of voice services, power services and temporary workforce placement services is where to start looking for solutions and practices. For example, the security commitment and compensation question has already been addressed and put into practice by many international companies outsourcing offshore. Performance has been addressed in voice services more by creating a commoditized voice service that lets the customer switch to a different provider if he hears noise or long latency on the line than by any strict SLAs. Management is a purely technical question that assumes the underlying application infrastructure is manual and hands-on rather than commoditized in nature the way power is. A century ago companies still generated and managed their own electric power; once it became commoditized, even the largest consumers stopped asking for the ability to manage and troubleshoot their service provider's grid. Last, to the challenge of a variable, on-demand cost for computing, I again defer to the power consumption model, in which demand forecasting provides a rough estimate but never a precise one.
  3. On the technology side – we are looking at a new type of infrastructure with two main capabilities:
    1. The ability to apply business policies to applications. In this model the application owner does not just ask for features but also associates service level requirements (latency, availability, security) as well as a budget for running the application (otherwise we know everyone will want millisecond latency and 100% availability for everything); a rough sketch of what such a policy could look like follows this list.
    2. Application infrastructure has to be automated in real time based on the above business policies, without constant hands-on work from engineers. In this world the infrastructure grows and shrinks based on the changing nature of the application, the demand from users and the changing needs of the business.
  4. As both the business side and the technology side get sorted out, we will see the cloud start to reach massive commodity consumption that resembles the power grid more than the manual outsourcing model it has today.
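
To make the first capability concrete, here is a minimal sketch, in Python, of what a business policy attached to an application might look like. The field names (max_latency_ms, availability_target, monthly_budget_usd) and the violations check are hypothetical illustrations, not any existing product API.

```python
from dataclasses import dataclass

@dataclass
class AppPolicy:
    """Business policy the application owner attaches to an application.

    All field names here are illustrative assumptions, not a real API.
    """
    name: str
    max_latency_ms: float        # service level: worst acceptable response time
    availability_target: float   # e.g. 0.999 for "three nines"
    monthly_budget_usd: float    # cap on what running the app may cost

def violations(policy: AppPolicy, observed_latency_ms: float,
               observed_availability: float, projected_cost_usd: float) -> list[str]:
    """Compare observed behavior against the owner's policy."""
    issues = []
    if observed_latency_ms > policy.max_latency_ms:
        issues.append(f"{policy.name}: latency {observed_latency_ms}ms over target")
    if observed_availability < policy.availability_target:
        issues.append(f"{policy.name}: availability below target")
    if projected_cost_usd > policy.monthly_budget_usd:
        issues.append(f"{policy.name}: projected cost exceeds budget")
    return issues

# Example: a checkout service allowed 200ms latency on a $5,000/month budget.
checkout = AppPolicy("checkout", max_latency_ms=200, availability_target=0.999,
                     monthly_budget_usd=5000)
print(violations(checkout, observed_latency_ms=250,
                 observed_availability=0.9995, projected_cost_usd=4200))
```

The point of the budget field is exactly the trade-off mentioned above: an owner who wants tighter latency or higher availability has to pay for it explicitly.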

Cut your data center weight by half! – ask me how.

I have been getting complaints that our posts are on the longer side, so in order to accommodate the ever-shrinking attention span of blog readers I will keep this short and to the point (already wasted three lines just now…).

Let's start by cutting through the vague, fluffy terminology here. Green is the new black and obviously the euphemism du jour. By saying that we are building a green data center we are saying it has been wasteful so far and we are looking to reduce its costs, while giving that an ideological facade (how about we call it "the polar bear saving" data center?).

Making an existing data center more efficient is a very complex problem. One useful thing middle-school math taught us is that sometimes the only way to solve a complex problem is to reduce it to a simple problem we can solve and extrapolate from there. In this case, let's look at the daily, universal problem of dieting and weight control and see what we can extrapolate from it to the green data center problem.

In the world of dieting there are three approaches. The first is to try to cut everything we eat in half (or to a tenth) and hope to see results in a week. From personal experience we all know that this approach usually holds for a couple of days and ends in huge disappointment after a week, which translates into a food binge that gets us back to the same place we started (in the best case). The second approach replaces the large fries in our Big Mac + milkshake combo with small fries, only to leave us surprised at the end of the month that we lost just one pound (out of the 200+ we already have). The third approach aligns your lifestyle, personal suffering thresholds and weight-loss goals to achieve a long-term lifestyle change that can be sustained and measured.

Greening the data center isn't any different. The common approach everyone is taking today is looking for the low-hanging fruit in the form of consolidation through virtualization of all the unimportant applications that no one really cares about or even knows are still in use. In our analogy, this is reducing the size of your fries order while not touching the Big Mac and shake, and the results are the same: you cut some costs but end up finding they are marginal compared to your expectations.

The second, aggressive approach we are seeing out there is IT groups setting high expectations of consolidating EVERYTHING, ASAP, and expecting to drop costs drastically by next month. This approach ends up like most crash diets: very fast and in a worse position than the one you started in. Problems will appear in the form of push-back from users and application owners complaining about their experience (and in our analogy, we know these folks represent a never-ending appetite for consumption), and each push-back will end in adding larger error buffers, to the point of giving up altogether or gradually finding ourselves eating two orders of trans-fat-free onion rings instead of the fries we used to.

That leaves us with the only sustainable approach to achieving a significantly greener data center: one that takes into account the business or application owners' goals and pain thresholds in the form of performance-to-cost-saving trade-offs. Organizations trying to approach the green data center from a bottom-up perspective will find either marginal results or colossal failures.

It is time for infrastructure groups to open a conversation with the business and start to understand their infrastructure in the context of business processes, end-user experience and service level goals. It is only through such a conversation that we can answer questions such as which machines we can shut down, when we should bring them up again, or even the simplest sizing question of how much CPU is enough for an application process.
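
As a toy illustration of the kind of decision that conversation enables, here is a hedged sketch of a shutdown check driven by business input rather than by raw utilization alone. The host names, utilization numbers, criticality flags and the 20% threshold are all invented for the example.

```python
# Toy consolidation check: a host is a shutdown candidate only if it is lightly
# loaded AND runs no application the business has flagged as critical.
# Numbers and thresholds below are illustrative, not measured data.

hosts = {
    "host-a": {"cpu_used": 0.15, "apps": ["reporting"]},
    "host-b": {"cpu_used": 0.70, "apps": ["checkout", "search"]},
    "host-c": {"cpu_used": 0.10, "apps": ["batch-archive"]},
}

# Business input from the application owners: how critical is each app?
business_critical = {"checkout": True, "search": True,
                     "reporting": False, "batch-archive": False}

SHUTDOWN_CPU_THRESHOLD = 0.20  # assumed: below this a host is barely doing work

def shutdown_candidates(hosts, business_critical, threshold=SHUTDOWN_CPU_THRESHOLD):
    """Return hosts that are lightly loaded and run no business-critical apps."""
    candidates = []
    for name, info in hosts.items():
        lightly_loaded = info["cpu_used"] < threshold
        # Unknown apps default to critical, so we err on the side of caution.
        no_critical_apps = not any(business_critical.get(a, True) for a in info["apps"])
        if lightly_loaded and no_critical_apps:
            candidates.append(name)
    return candidates

print(shutdown_candidates(hosts, business_critical))  # ['host-a', 'host-c']
```

Without the business_critical input, a purely bottom-up script would happily flag any idle host, which is exactly the failure mode described above.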

In our experience in the field, this approach yields results an order of magnitude larger, without compromising the business for which all the infrastructure exists in the first place.

Happy data center diet!


So you want to start a web 2.0 startup?

The web is a new platform, especially the modern web-2.0-AJAX platform. It has only been part of public awareness for about five years. Five years! Windows has been around, since its 3.11 incarnation, for over twenty years. At long last, however, we have what the computing world has been dreaming about for decades — a ubiquitous platform for writing applications that use the network natively. From "communicate with people everywhere" (Gmail and other web-based mail services) to "buy anything from anyone" (eBay and other online shops), we have a new platform with infinite new possibilities, many of them unexplored, simply because there has not been enough time to do so.

The barriers to entry are practically non-existent. One programmer, less money per month than what the programmer drinks in coffee per day, and a little time invested can bring a new web 2.0 application onto the web. The platform may have its problems (JavaScript incompatibilities, an awkward communication protocol), but it is out there, and it is the great equalizer between multi-billion dollar behemoths and the lone programmer in the garage. And so, there is only one thing standing between the new web 2.0 thing and huge, Facebook-level success. Well, actually many things: all the other new web 2.0 things, which use the same low barrier to entry to compete for the same users.

There is only one way to beat the others, to be the first, to be the Amazon that everyone remembers and not one of the millions of other web stores that vanished into obscurity, and that is to make things fun and easy for the user. In turn, that means three things — features, features and features. Great UI design and ease of use only look simple, like a juggler's trick. Inside, they are composed of lots of features, which often need the extra smarts of knowing when to turn themselves off, or to change dynamically. This means programming the next web 2.0 success is hard. The only way to stay ahead is to implement more, and better.

There is one thing your users do not care about. They do not care about how many web servers you have, or how you scale your applications. If you spend too much time scaling the "right way", if you "keep it clean and orderly", you are spending valuable engineering time, time that your competitors are spending on making the web 2.0 applications they write better than yours. Their data center might feature five layers of interleaved load balancers and they might need twice as many servers as you, but if you do not have any users for your application, your clean architecture does not help you.

After a few phases of competition, the successful web 2.0 startups look like a mess inside. They hired a lot of software engineers who worked nights and weekends delivering the features needed. They had to scale up as fast as they could, which meant patching things as they went, with little room for planning, fixing problems as they appeared. Some startups have documented their scaling travails (YouTube being one), and they make fascinating reading — but it always comes down to the fact that scaling fast is possible if you work hard and don't care about how clean it is.

In the end, it is hard for anyone to know what is happening inside the application. The features were written fast, with little design or documentation. The servers were connected together in whatever way seemed most expedient at the time. As the company grows, and users call in with more and more problem tickets, managing the issues and finding the problems becomes an exercise in voodoo and futility, with too many rules of thumb and too much folklore, and too few organized procedures. To reiterate: getting to this stage is *necessary* if the venture is to succeed in a world filled with competitors.

Network performance solutions must fit into this environment. An environment where nobody has time to spend configuring them, if indeed the person configuring them even knows the right parameters. They must be able to map the structure of the data center — both physical and at the application level — themselves, and give high-quality feedback that allows the operations group to pinpoint issues in a way the R&D group can respond to. A network performance tool that needs to be heavily configured, or even worse, reconfigured as the structure of the data center or application changes, is not a solution — it is a new problem.
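
As a rough sketch of the "map it yourself" idea, the snippet below builds an application dependency map from observed connections rather than from hand-maintained configuration. The sample connection list and tier names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical observed connections (client -> server), e.g. gathered from
# flow records or connection tables. The sample data here is made up.
observed_connections = [
    ("web-1", "app-1"), ("web-2", "app-1"),
    ("app-1", "db-1"), ("app-1", "cache-1"),
]

def build_dependency_map(connections):
    """Group observed connections into a per-service dependency map."""
    deps = defaultdict(set)
    for client, server in connections:
        deps[client].add(server)
    return deps

for service, depends_on in build_dependency_map(observed_connections).items():
    print(f"{service} depends on {sorted(depends_on)}")
```

Because the map is derived from what is actually happening on the wire, it stays current as the startup rewires things, which is exactly the property a heavily configured tool loses.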


SOA – are we there yet?

Given that it is the end of January, this is also the end of the official prediction season. In this last 2008 prediction post I will cover SOA as we see it in the market.

The challenge in determining how much SOA is actually happening is that it is very hard to define. As with anything IBM leads the way in defining, it is a complex beast, involving everything from business processes to the byte structure of web services in one package, making it impossible to follow.

In a recent investor talk, Larry Ellison mentioned a slow and long penetration for SOA. The view from the user side is quite different.

To answer the question simply: SOA is happening, big time! While it might not follow the exact prescribed service structure, anyone writing a new application today is going to architect it around separate, interchangeable logical modules that communicate through web services. It doesn't get any more SOA than that.
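
For illustration only, here is a minimal sketch of that style: one module calling another over HTTP instead of through an in-process function call. The endpoint URL and payload shape are assumptions made up for the example, not a real service.

```python
# Minimal sketch of two loosely coupled modules talking over a web service.
# The URL and JSON field below are hypothetical, for illustration only.
import json
from urllib import request

def get_price_quote(item_id: str) -> float:
    """Ask a separate pricing module for a quote over HTTP."""
    req = request.Request(
        f"http://pricing.internal/quote?item={item_id}",
        headers={"Accept": "application/json"},
    )
    with request.urlopen(req, timeout=2) as resp:
        return json.load(resp)["price"]

# The calling module neither knows nor cares how pricing is implemented;
# swapping in a different pricing service behind the same URL changes nothing here.
```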

What is driving this architectural change? As with all successful technology evolutions – the business. The business puts tremendous pressure on developers to come up with new features faster and cheaper. No other architecture today can match the "copy-paste" approach offered by composite applications connecting through web services. It is the de facto standard.

The challenges facing SOA are now moving from development and QA into production and infrastructure. The dynamics of a constantly changing environment, where some piece of code is changing somewhere all the time, are best described by one word: chaos. One developer can flap his wings on one side of the data center and, by the time the ripple reaches the business service relying on that code on the other side, that service goes down altogether.

For SOA to move to the next step, we need SOI (Service Oriented Infrastructure), which is the equivalent of the checks and balances placed in any open and dynamic system.

SOI is aware of application state and dependencies and is able to ensure that not every user and service is treated equally, as they are not created equal either.
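
A hedged sketch of what "not treated equally" could mean in practice: a queue that drains revenue-critical requests before background ones. The service tiers and their ordering are invented for the example.

```python
import heapq

# Assumed business-context tiers; lower number = served first (illustrative only).
PRIORITY = {"checkout": 0, "search": 1, "reporting": 2, "batch": 3}

class ContextAwareQueue:
    """Queue that drains requests in business-priority order, not arrival order."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a tier

    def submit(self, service: str, payload: str) -> None:
        priority = PRIORITY.get(service, max(PRIORITY.values()) + 1)
        heapq.heappush(self._heap, (priority, self._seq, service, payload))
        self._seq += 1

    def next_request(self):
        _, _, service, payload = heapq.heappop(self._heap)
        return service, payload

q = ContextAwareQueue()
q.submit("reporting", "nightly report")
q.submit("checkout", "order #42")
print(q.next_request())  # ('checkout', 'order #42') is served first
```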

The business is the reason for SOA and the business context is the missing link to make SOA a viable platform (meaning affordable) in production.


Virtualization 2.008 – what is that all about?

Conversations around virtualization in the second half of 2007 reminded me a lot of that once-new term "The Internet" circa 1997. A TV sketch portrayed a host showing a guest around the house, and as they pass through the rooms she explains: "This is our living room, we have two internets, one behind the sofa and the other above the TV. This is our kitchen, and there is an internet right by the fridge and one above the oven. Of course there is also one in the restroom and one in each bedroom." Finally the guest stops her to ask if she has any idea what the internet is, to which she admits: "not at all."

A lot of conversations about virtualization sound just like that, with people going on and on about how virtualization is a strategic shift in the data center and how they have two of it everywhere, without ever explaining why and how.

So what is virtualization anyway? To put it in simple words, virtualization is the ability to separate the logical from the physical. Bringing this definition down from the religious sphere, virtualization is about making one physical entity appear as multiple logical ones, or making multiple physical entities appear as one logical entity. The first type includes server virtualization, where a hypervisor allows multiple OS environments to run on one physical server; it has made a second coming since 2000 thanks to VMware, but it has been around since the good ole' mainframe days. The second type has also been in the data center since the mid '90s, at first as part of the network functionality and more recently in application switches (most commonly referred to as load balancers or application delivery controllers).
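
As a toy illustration of the second, many-to-one type, here is a minimal round-robin front end that makes a pool of physical servers appear as a single logical endpoint. The server names are placeholders, and a real application switch obviously does far more (health checks, persistence, SSL offload).

```python
import itertools

class LogicalService:
    """Presents a pool of physical servers as one logical endpoint (many-to-one)."""
    def __init__(self, servers):
        self._pool = itertools.cycle(servers)  # simple round-robin rotation

    def route(self, request: str) -> str:
        backend = next(self._pool)
        return f"request '{request}' handled by {backend}"

# Three physical machines, one logical service name as far as clients know.
service = LogicalService(["web-01", "web-02", "web-03"])
for r in ["a", "b", "c", "d"]:
    print(service.route(r))
```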

Looking past this brief history and forward, what we are going to see in 2008 will mostly be driven by the move of virtualization from development and lab environments into production. As this shift happens, we will see the hypervisor commoditize and virtualization climb up the stack to address the needs of production applications.

While this might come as a shock to a lot of people, the future of virtualization is closely tied to loosely coupled applications (mostly placed under that vague SOA umbrella, which we will touch on in the next post). That's right: as in all previous shifts in infrastructure, it will be the application driving it forward.

To make it short, the dynamic needs of the business are driving a more dynamic environment at the application level, which requires more real-time adaptability from the infrastructure. In fact, it requires a business-policy-driven infrastructure: a lot of small moving virtual components that shrink and grow based on business needs and can be moved dynamically between physical machines and data centers seamlessly.

This means three things will happen this year with regard to virtualization:

1. A move away from the silo approach of treating server virtualization and network virtualization as two different things, toward a unified approach that synchronizes both into what will simply be application infrastructure.

2. The creation of a management tier that links infrastructure resources to application and business needs.

3. Data center automation will become a must (reader beware – shameless plug to follow). At the current pace of change at the application level, code updates daily, and demand for each application service and module changes daily, unpredictably and, most often, exponentially. Manual operations to adjust to such changes are not only non-scalable cost-wise, they are impossible to tune accurately with existing trial-and-error methodologies. The only way to turn off those expensive lights in the data center is to automate the application infrastructure in real time, based on service level policies.
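
A minimal sketch of what such a policy-driven loop could look like: measure a service level, compare it against the policy, and grow or shrink capacity accordingly. The latency target, instance limits and the fake measurement function are assumptions for illustration, not a real product.

```python
import random

# Assumed policy: keep latency under 300 ms using between 2 and 10 instances.
POLICY = {"latency_target_ms": 300, "min_instances": 2, "max_instances": 10}

def measure_latency_ms() -> float:
    """Stand-in for a real monitoring feed; returns a fake measurement."""
    return random.uniform(100, 500)

def reconcile(current_instances: int, latency_ms: float, policy=POLICY) -> int:
    """Return the new instance count implied by the service level policy."""
    if latency_ms > policy["latency_target_ms"]:
        return min(current_instances + 1, policy["max_instances"])   # grow
    if latency_ms < policy["latency_target_ms"] * 0.5:
        return max(current_instances - 1, policy["min_instances"])   # shrink
    return current_instances

instances = 2
for _ in range(5):                       # five iterations of the control loop
    latency = measure_latency_ms()
    instances = reconcile(instances, latency)
    print(f"latency={latency:.0f}ms -> running {instances} instances")
    # a real loop would sleep between checks and call an actual provisioning API
```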

Bottom line – be on the lookout for the rise of Service Oriented Infrastructure, built out of server virtualization coupled with network virtualization and orchestrated by an application-aware automation tier that can take service level policies from the business and translate them into infrastructure changes.

2008 is all about the virtualization ecosystem.


Psst… want to know what’s going to happen in the data center in 2008?

We at B-hive spend a lot of time helping customers achieve optimal service levels for their business applications while improving the cost-effectiveness of their data center operations. That gives us good insight into what is happening inside the data center and what is going to happen in the near future.

I thought an appropriate way to start this post is by sharing our predictions for 2008 as we see them from the fluorescent-lit, cold, raised floors of our customers' data centers.

This post will tackle the future of Web 2.0.

Web 2.0 – there has been much talk recently about the end of the Web 2.0 world as we know it. That sentiment has been exemplified by two recent events – the media backlash against the Web 2.0 poster child (probably peaking with the Beacon story), and the VC community's declining interest in financing new Web 2.0 ventures, illustrated recently by one of the largest Silicon Valley VCs, Kleiner Perkins, stating that it will not finance any new Web 2.0 companies.

The way I see it, Web 2.0 is here to stay – the concepts of user-generated content and social connectivity are just a technological adaptation of some of the most basic human traits (we share information, and we need to belong to groups that define who we are and who we aren't). As such, we will see the same technology making its way into the enterprise (my CRM could use some more collaboration tools, and I am sure crowd wisdom will turn out to be a very important information source on virtual financial trading floors) or as a progression of any online application today (fantasy baseball, mommy groups). Sure, some of the recent users on Facebook might stop using it, but those will mostly be the users who were not supposed to be on it in the first place (that includes most of the media writers covering Facebook today and all of my mom's co-workers who are poking me on Facebook so they can see baby pictures of my 7-month-old).

Facebook's target audience will just keep on using it, in the same way we didn't stop using IM or shopping online after the .com bust. On the contrary, the number of users will continue to climb, and I am sure that, given enough time, the financial model behind it will be fine-tuned.

As for the VC angle, yes, VCs make it easier for new companies to launch, especially when they have a long technology development period. Guess what? Launching a web 2.0 application today has become quite commoditized, with platforms like Ning and many others allowing users to focus on creating the community while the rest is up to that magic viral touch, in the same way that anyone today can open an online shop on Yahoo! or eBay without the need to raise Webvan-style funding (more than $800m raised and they didn't even have a sock puppet on the payroll!). Web 2.0 is now a commodity application, maturing from a hit-driven market into a long-tail, evergreen phase.

Next week – The future of the virtualized data center.
