Category Archives: Virtualization

What would it take for the clouds to rain?


  1. Cloud is definitely the hot word in infrastructure talk these days. In a recent article in Network World, Frank Dzubeck discussed five open questions that need to be answered for clouds to flourish, most of all in the enterprise environment with its share of regulations and procedures. Before addressing those questions from a technical perspective, it is important to also address them from the business angle, as the solution often lies there.
     The questions that need to be answered are: 1. security, 2. performance, 3. management, 4. regulation and 5. variable cost structure.
  2. On the business side – enterprise cloud services are definitely a high-caliber problem to solve, with multiple technical variables entangled with legal contracts, so reducing it to an evolutionary process by looking at existing services we already consume helps turn a tangled web into a solvable problem. Adapting solutions from the worlds of voice services, power services and temp workforce placement is the place to start looking for practices. For example, the security commitment and compensation question has already been addressed and put into practice by many international companies outsourcing offshore. Performance has been addressed in voice services more by creating a commoditized service that lets the customer switch to a different provider if he hears noise or long latency on the line than by any strict SLA. Management is a purely technical question that assumes the underlying application infrastructure is manual and hands-on rather than commoditized the way power is: a century ago companies still generated and managed their own electric power, but once it became a commodity, even the largest consumers stopped asking for the ability to manage and troubleshoot their service provider's grid. And as for the challenge of a variable, on-demand cost for computing, I again defer to the power consumption model, in which demand forecasting provides a rough estimate but never a precise one.
  3. On the technology side – we are looking at a new type of infrastructure with two main properties:
    1. The ability to apply business policies to applications. In this model the application owner does not just ask for features but also associates service level requirements (latency, availability, security) as well as a budget for running the application (otherwise we know everyone will want millisecond latency and 100% availability for everything); see the sketch after this list.
    2. Application infrastructure has to be automated in real time based on the above business policies, without constant hands-on work from engineers. In this world the infrastructure grows and shrinks based on the changing nature of the application, the demand from users and the changing needs of the business.
  4. As we see both the business side and the technology side getting sorted out, the cloud will start to reach massive commodity consumption, resembling a power grid far more than the manual outsourcing model it has today.
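To make the policy idea from point 3 concrete, here is a minimal sketch of what a service level policy tied to a budget could look like. The field names and the scaling rule are my own illustration, not a reference to any specific product:

```python
from dataclasses import dataclass

@dataclass
class ServiceLevelPolicy:
    """A business policy the application owner attaches to an application.
    All field names are illustrative assumptions, not a real product's schema."""
    app_name: str
    max_latency_ms: float      # latency requirement
    min_availability: float    # e.g. 0.999 for "three nines"
    monthly_budget_usd: float  # cap on what the application may spend

def should_scale_out(policy: ServiceLevelPolicy,
                     observed_latency_ms: float,
                     projected_monthly_cost_usd: float) -> bool:
    """Toy decision rule: add capacity only while the latency goal is missed
    and the budget still has headroom, so nobody gets millisecond latency
    for free."""
    sla_violated = observed_latency_ms > policy.max_latency_ms
    budget_headroom = projected_monthly_cost_usd < policy.monthly_budget_usd
    return sla_violated and budget_headroom

# An order service that wants 200 ms latency under a $5,000/month cap:
policy = ServiceLevelPolicy("order-service", 200.0, 0.999, 5000.0)
print(should_scale_out(policy, observed_latency_ms=350.0,
                       projected_monthly_cost_usd=3200.0))  # True: grow it
```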

Cut your data center weight by half! – ask me how.

I have been getting complaints that our posts are on the longer side, so in order to accommodate the ever-shrinking attention span of blog readers I will keep this short and to the point (I already wasted three lines just now…).

Let's start by cutting through the vague, fluffy terminology here. Green is the new black and obviously the euphemism du jour; by saying that we are building a green data center we are admitting it has been wasteful so far and we are looking to reduce its costs while giving the effort an ideological facade (how about we call it "the polar bear saving" data center?).

Making an existing data center more efficient is a very complex problem. One useful thing middle-school math taught us is that sometimes the only way to solve a complex problem is to reduce it to a simple problem we can solve and extrapolate from there. In this case, let's look at the daily, universal problem of dieting and weight control and see what we can carry over to the green data center problem.

In the world of dieting there are three approaches. The first is to cut everything we eat in half (or to a tenth) and hope to see results within a week; from personal experience we all know that this approach usually holds for a couple of days and ends in huge disappointment after a week, which translates into a food binge that gets us back to where we started (in the best case). The second approach replaces the large fries in our Big Mac + milkshake combo with small fries, only to leave us surprised at the end of the month that we lost just one pound (out of the 200+ we already have). The third approach aligns your lifestyle, personal suffering thresholds and weight-loss goals to achieve a long-term lifestyle change that can be sustained and measured.

Greening the data center isn't any different. The common approach everyone takes today is to look for the low-hanging fruit, consolidating through virtualization all the unimportant applications that no one really cares about or even knows are still in use. In our analogy, this is reducing the size of your fries order without touching the Big Mac and shake, and the results are the same: you cut some costs but find they are marginal compared to your expectations.

The second, aggressive approach we are seeing out there is IT groups setting high expectations of consolidating EVERYTHING, ASAP, and expecting costs to drop drastically by next month. This approach ends up like most crash diets: very fast, and in a worse position than the one you started in. Problems will appear in the form of push-back from users and application owners complaining about their experience (and in our analogy, we know these folks represent a never-ending appetite for consumption), and each push-back will end in adding larger error buffers, to the point of giving up altogether or gradually finding ourselves eating two orders of trans-fat-free onion rings instead of the fries we used to.

That leaves us with the only sustainable approach to a significantly greener data center: one that takes into account the business and application owners' goals and pain thresholds in the form of performance-to-cost-saving trade-offs. Organizations trying to approach the green data center from a bottom-up perspective will find either marginal results or colossal failures.

It is time for infrastructure groups to open a conversation with the business and start to understand their infrastructure in the context of business processes, end-user experience and service level goals. It is only through such a conversation that we can answer questions such as which machines we can shut down, when we should bring them up again, or even the simplest sizing question of how much CPU is enough for an application process.
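As a toy illustration of that last sizing question, here is a minimal sketch; the idle threshold and the function names are my own assumptions, not anyone's recommendation:

```python
# Toy sizing check: is a machine a candidate for shutdown? The 10% idle
# threshold below is an illustrative assumption, not a recommendation.

def shutdown_candidate(cpu_samples: list[float],
                       latency_goal_met: bool,
                       idle_threshold: float = 0.10) -> bool:
    """A host is a candidate for powering down only if it is nearly idle
    AND the application is currently meeting its service level goal,
    so consolidation never trades user experience for power savings."""
    avg_cpu = sum(cpu_samples) / len(cpu_samples)
    return latency_goal_met and avg_cpu < idle_threshold

# A host averaging ~4% CPU while the app meets its latency goal: safe to drain.
print(shutdown_candidate([0.03, 0.05, 0.04], latency_goal_met=True))   # True
# The same idle host while users already see slow responses: keep it up.
print(shutdown_candidate([0.03, 0.05, 0.04], latency_goal_met=False))  # False
```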

In our experience in the field, this approach yields results an order of magnitude larger, without compromising the business for which all that infrastructure exists in the first place.

Happy data center diet!


Virtualization 2.008 – what is that all about?

Conversations around virtualization in the second half of 2007 reminded me a lot of that once-new term "the Internet" circa 1997. A TV sketch portrayed a host showing a guest around her house; as they pass through the rooms she explains: "this is our living room, we have two Internets, one behind the sofa and the other above the TV. This is our kitchen, and there is an Internet right by the fridge and one above the oven. Of course there is also one in the restroom and one in each bedroom." Finally the guest stops her to ask whether she has any idea what the Internet is, to which she admits: "not at all."

A lot of conversations about virtualization sound just like that, with people going on and on about how virtualization is a strategic shift in the data center and how they have two of it everywhere, but without ever explaining why and how.

So what is virtualization anyway? To put it in simple words, virtualization is the ability to separate the logical from the physical. Bringing this definition down from the religious sphere, virtualization is about making one physical entity appear as multiple logical ones, or making multiple physical entities appear as one logical entity. The first type includes server virtualization, where a hypervisor allows multiple OS environments to run on one physical server; it has made a second coming since 2000 thanks to VMware, but has been around since the good ole' mainframe days. The second type has also been in the data center since the mid '90s, at first as part of the network functionality and more recently in application switches (most commonly referred to as load balancers or application delivery controllers).

Looking beyond this brief history of virtualization, what we are going to see in 2008 will mostly be driven by the move of virtualization from development and lab environments into production. As this shift happens, we will see the hypervisor commoditize and virtualization climb up the stack to address the needs of production applications.

While this might come as a shock to a lot of people, the future of virtualization is closely tied to loosely coupled applications (mostly placed under that vague SOA umbrella, which we will touch on in the next post). That's right: as in all previous shifts in infrastructure, it will be the application driving it forward.

To make it short, the dynamic needs of the business are driving a more dynamic environment at the application level, which requires more real-time adaptability from the infrastructure. In fact, it requires a business-policy-driven infrastructure: a lot of small moving virtual components that shrink and grow based on business needs and can be moved dynamically between physical machines and data centers, seamlessly.

This means three things will happen this year with regard to virtualization:

1. A move away from the silo approach that treats server virtualization and network virtualization as two different things, toward a unified approach that synchronizes both into what will simply be application infrastructure.

2. The creation of a management tier that links infrastructure resources to application and business needs.

3. Data center automation will become a must (reader beware – shameless plug to follow). At the current pace of change at the application level – code is updated daily, and demand for each application service and module changes daily, unpredictably and, most often, exponentially – manual operations to adjust to such changes are not only non-scalable cost-wise, they are impossible to tune accurately with existing trial-and-error methodologies. The only way to turn those expensive lights out in the data center is to automate the application infrastructure in real time, based on service level policies.
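To make the automation idea concrete, here is a minimal sketch of such a policy-driven control loop, assuming a hypothetical policy format and a simulated monitoring feed; none of the names here come from a real product:

```python
import random

# Hypothetical service level policy for one application; field names are
# my own illustration, not any vendor's schema.
POLICY = {"app": "checkout", "max_latency_ms": 250,
          "min_instances": 2, "max_instances": 20}

def observed_latency_ms(app: str, instances: int) -> float:
    """Stand-in for a real monitoring feed: simulated latency that
    improves as more instances share the load."""
    return 2000.0 / instances + random.uniform(-20.0, 20.0)

def next_instance_count(policy: dict, latency: float, current: int) -> int:
    """Grow the application when it misses its latency goal, shrink it
    when there is comfortable headroom, and never act outside the bounds
    the business set in the policy."""
    if latency > policy["max_latency_ms"]:
        return min(current + 1, policy["max_instances"])  # scale out
    if latency < 0.5 * policy["max_latency_ms"]:
        return max(current - 1, policy["min_instances"])  # scale in
    return current                                        # steady state

instances = POLICY["min_instances"]
for tick in range(10):  # a real loop would run forever on a timer
    latency = observed_latency_ms(POLICY["app"], instances)
    instances = next_instance_count(POLICY, latency, instances)
    print(f"t={tick}: latency={latency:.0f} ms, instances={instances}")
```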

Bottom line – be on the lookout for the rise of Service Oriented Infrastructure: server virtualization coupled with network virtualization, orchestrated by an application-aware automation tier that takes service level policies from the business and translates them into infrastructure changes.

2008 is all about the virtualization ecosystem.
