Is VMware’s Hyper-Growth Phase Over?

VMware’s Opportunity to Expand Into and Potentially Disrupt Adjacent Markets

By George Gilbert and Juergen Urbanski

We’ve talked to a fair number of VMware customers and investors over the past few weeks.  In the process, we’ve repeatedly been asked whether VMware is done with its phase of hyper-growth.  While it isn’t likely to grow anywhere near triple digits again, it is likely to grow into a strategic platform provider for both data centers and desktops, though this will require solid execution in a tough macro environment.  Its opportunity comes from the chance both to expand and to disrupt a series of large adjacent markets.  The ripple effects of this sea change in computing will also touch many markets in which VMware has no plans to compete, though that will be fodder for future posts.  (Disclosure: the authors own shares in VMware.)

VMware’s biggest near-term challenge is that it over-sold both units and high levels of functionality with its enterprise license agreements.  These ELAs were an attempt to encourage customers to deploy more virtual servers with richer functionality ahead of Microsoft’s entry into the market this past summer.  While this may have had some success in making adoption of Microsoft technology more challenging in some accounts, it also had an unintended side effect: it left VMware competing with its own inventory of licenses already sitting on customers’ shelves.  While VMware works its way out of that near-term hole, some have lost sight of the bigger-picture opportunity.

The Many Waves of Virtualization Technology That Start at the Server

Once VMware, followed by Citrix and Microsoft, inserted a thin layer of software between the server hardware and all the operating system and application software that now sits above it, a whole range of value creation opportunities opened up.

Everyone knows the story about the server consolidation opportunity.  The biggest misconception, though, is that the high-growth story is over because Microsoft has commoditized server virtualization technology.  What is really going on is that multiple waves of functionality are continuing to penetrate the installed base of servers.  Roughly in descending order of immediacy, these waves are server consolidation, business continuity, desktop virtualization, data center automation, and cloud virtualization.  These new waves greatly expand the addressable market for server virtualization because VMware is democratizing functionality that often exists in other markets at much higher cost and complexity.

Conventional wisdom doesn’t seem to take into account that each successive wave can reach existing and/or additional servers at price points that are anything but commodity-level.  The business continuity wave is already upon us.  In particular, high availability and disaster recovery are building on or replacing the original server consolidation value proposition just as Microsoft is entering with the previous wave.

The waves that are somewhat further out are desktop virtualization (which is actually managed by servers), data center automation (which turns internal IT infrastructure into a private cloud of resources and application services), and federation with public clouds like those that an Amazon or an IBM might run.

Just because VMware plans to deliver these new waves of functionality doesn’t guarantee its success.  In fact, the competitive dynamics of each wave are likely to be dramatically different.

The Server Consolidation Wave is Mostly Well Understood

The biggest misconception in this phase of market development is that Microsoft has driven price points per physical server (or per two sockets in VMware’s case) toward zero and that the number of workloads consolidated per physical server continues to climb with improving server price/performance.  The reality is that ease of managing planned downtime via live VMotion remains a differentiator for VMware for now.  In addition, the consolidation ratio of workloads per physical server is trending down, not up, raising the price per workload.

Some estimates put the total percentage of physical servers that have been consolidated in the mid-teens and the total percentage of workloads that have been consolidated at least in the mid-twenties.  These numbers need more analysis because the consolidation ratio for early workloads, particularly test & development and tier 3 applications such as directory servers, DNS servers, and file & print servers, has supposedly ranged from 10:1 to as high as 20:1.  But this consolidation ratio is not continuing to climb, and hence the price per workload is not continuing to drop, even as server price/performance continues to improve.  Customer plans tell a different story.  Those we’ve talked to indicate that as applications become increasingly sensitive to downtime, customers become increasingly cautious about stacking too many workloads on a single server.  They are starting with tier 2 business-critical applications and extending that to tier 1 mission-critical and customer-facing applications.  However, the consolidation ratios we are hearing about are closer to 3:1 or 6:1.  This may increase over time as customers get more comfortable with measuring and managing utilization rates on these more performance-sensitive applications and as the hypervisor “performance tax”, currently 10-20%, continues to diminish.  But for now, the price per workload is going up, not down.
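
To make the pricing point concrete, here is a minimal back-of-the-envelope sketch in Python.  The per-host license figure is a hypothetical placeholder, not an actual VMware quote; the consolidation ratios are the ranges quoted above.

```python
# Back-of-the-envelope arithmetic: how the consolidation ratio drives the
# effective price per workload.  The per-host license price is a hypothetical
# placeholder, not an actual quote.

LICENSE_PER_TWO_SOCKET_HOST = 1000  # hypothetical $ per 2-socket physical server

def price_per_workload(consolidation_ratio: float) -> float:
    """Spread one host's license cost across the workloads stacked on it."""
    return LICENSE_PER_TWO_SOCKET_HOST / consolidation_ratio

# Early, downtime-tolerant workloads (test/dev, DNS, file & print): 10:1 to 20:1
print(price_per_workload(20))  # 50.0  -> cheapest per workload
print(price_per_workload(10))  # 100.0

# Business-critical tier 1/2 workloads customers describe today: 3:1 to 6:1
print(price_per_workload(6))   # ~166.7
print(price_per_workload(3))   # ~333.3 -> price per workload rises as ratios fall
```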

Figure 1: The average 4-CPU Oracle database performance requirement is 1/80th of VMware ESX capacity, leaving plenty of room for consolidation.  Data based on an actual analysis of 700,000 VMware customer servers.  Source: VMware

In terms of where in particular Microsoft is gaining share, Hyper-V is proving attractive to customers migrating to Windows Server 2008 who need a “compatibility box” for hosting older versions of server applications.  Specifically, those who need to bring along Microsoft server applications running on previous versions of Windows find Hyper-V hosted on Windows Server 2008 attractive.  It’s not clear that Citrix has the channel to reach server infrastructure buyers generally, but they do have the most natural claim to consolidate the 1 million servers running what used to be called Citrix Presentation Server.

The Business Continuity Wave is Imminent as the Next Growth Driver

At a meeting for Wall Street analysts at VMworld, CEO Paul Maritz said the company had two years to come up with something to follow the server consolidation opportunity.  The thought that the original growth driver might be tailing off and that the company hadn’t identified the next leg of its growth strategy spooked investors.  The market promptly punished the stock, which fell somewhere between 15% and 20% the following day.  The event highlighted that consensus opinion doesn’t yet realize a follow-on wave is imminent.  And post-Diane Greene, the company is now overly cautious about hyping its opportunities.

VMware has a rapidly maturing set of products for business continuity that builds on and extends the server consolidation value proposition.  The percentage of workloads covered by high availability (HA) and disaster recovery (DR) in a pre-virtual world is low because of the cost and complexity of implementation and operation.  Today, somewhere between 10% and 20% of workloads are covered by HA, and even fewer by DR.  In a virtual world, that cost and complexity drop dramatically, and customer deployment plans appear to call for penetration rates 2-5x higher than in the pre-virtual world.  Current VMware list prices for this functionality are 3x the level for server consolidation functionality, or roughly $3K per dual socket.  Even if street prices drop as volumes go up, and despite ever richer functionality, this opportunity appears bigger than the server consolidation wave.
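
A rough way to see why this wave could outgrow consolidation, using only the multiples quoted above; the dollar figures are illustrative placeholders, not company guidance.

```python
# Rough relative sizing of the business continuity (BC) wave versus the server
# consolidation wave.  All inputs are illustrative assumptions drawn from the
# ranges in the text, not company guidance.

CONSOLIDATION_PRICE_PER_HOST = 1000                     # hypothetical $ per 2-socket host
BC_PRICE_PER_HOST = 3 * CONSOLIDATION_PRICE_PER_HOST    # ~3x list, roughly $3K per host

def bc_to_consolidation_revenue_ratio(bc_attach_rate: float) -> float:
    """BC revenue relative to consolidation revenue across the virtualized base,
    where bc_attach_rate is the share of virtualized hosts that also buy BC."""
    return (bc_attach_rate * BC_PRICE_PER_HOST) / CONSOLIDATION_PRICE_PER_HOST

for attach in (0.20, 0.34, 0.50, 1.00):
    print(f"attach rate {attach:.0%}: ratio = {bc_to_consolidation_revenue_ratio(attach):.2f}")
# At a 3x price premium, BC revenue passes consolidation revenue once roughly
# a third of virtualized hosts attach it.
```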

Figure 2: Today, application-specific clustering from Microsoft, Oracle, and IBM provides application fault tolerance, but with complex setup and maintenance that severely limit the applicability of HA functionality.  VMware, on the other hand, provides failover and restart upon hardware failure with minimal to no administrative overhead.  Citrix, in conjunction with Marathon Technologies, provides fault-tolerant application failover.  Source: VMware

While potentially bigger, the business continuity wave is unlikely to take off at the same rate as server consolidation.  For one, the ROI is not as easily measurable.  In addition, VMware, Citrix, and their channel partners have to step beyond the traditional server infrastructure administrator and address the application owners and administrators.  This is a new buyer for both.  Citrix is the first to admit its ambitions on the server are limited because of its channel’s limited reach with this buyer.  For VMware, however, the opportunity starts with the large number of previously consolidated servers.  Because many of these carry so many workloads, customers now consider them too important to fail.  This is the entry point for HA deployments.

For HA, the principal competition comes from the existing server application and operating system vendors that offer product-specific approaches such as Microsoft Cluster Server or Oracle Real Application Clusters (RAC).  These legacy solutions tend to be very expensive and complex, partly because administrators have to maintain replica servers in the cluster across the entire hardware and software stack, keeping configuration, patches, and versions in sync at each and every layer.  VMware and Citrix, by contrast, treat the whole software stack as a file that can move between servers, vastly simplifying maintenance and operation.  It is likely only a matter of time before Microsoft is compelled to adopt this approach as well.  Today VMware only protects against hardware failure, unlike Citrix (in conjunction with Marathon Technologies) and the server software vendors, who protect against software failures as well.

But VMware is building application awareness into its product and plans to offer full software-level fault tolerance to applications.  At the same time, it will enable its ISV partners to do the same for their own products as an alternative offering.  Customers regard virtualization-based application fault tolerance as the next killer feature to follow live migration.  HA, where the recovery time for the application and the recovery point for the data are measured in minutes, and FT, where both are instantaneous, are likely to become standard service levels.  Now that applications have many tiers or components, the availability of one is dependent on the availability of all.  It’s no different from stacking 10 workloads on a single server and then deciding the server has become too important to fail.  Customers talk about applying HA or FT to just about all physical servers and their workloads now that these capabilities are becoming so inexpensive and simple to deploy and maintain.  For example, Oracle’s Real Application Clusters cost an additional $25K per socket at list and are so fragile that customers for the most part deploy them only in 2-server FT clusters, not for scalability.  The mid-point of VMware’s HA functionality today, though not yet application-level FT, is roughly $3K per socket.  And the cost of ownership is roughly an order of magnitude lower: administrators can manage clusters with an order of magnitude more servers.
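
Using the list prices cited above, here is a minimal license-cost comparison for a small fault-tolerance cluster.  The server and socket counts are illustrative assumptions, and the operating cost (which, as argued above, is roughly an order of magnitude lower for the virtualization approach) is ignored.

```python
# License-cost comparison for a small 2-server fault-tolerance cluster, using
# list prices cited in the text.  Server and socket counts are illustrative.

RAC_PER_SOCKET = 25_000        # Oracle RAC add-on, $ per socket at list
VMWARE_HA_PER_SOCKET = 3_000   # mid-point of VMware's HA functionality today

def cluster_license_cost(price_per_socket: int, servers: int = 2,
                         sockets_per_server: int = 2) -> int:
    """Total cluster license cost for a given per-socket price."""
    return price_per_socket * servers * sockets_per_server

# A typical 2-server RAC pair versus the same hardware protected with VMware HA
print(cluster_license_cost(RAC_PER_SOCKET))        # 100000 -> $100K at list
print(cluster_license_cost(VMWARE_HA_PER_SOCKET))  # 12000  -> $12K at list
```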

The Data Center Automation Wave Represents a Holy Grail Bigger Than Consolidation and Business Continuity Combined

Today the proliferation of x86 servers, as well as VMs, has created management chaos.  It has left some administrators wishing the days of the mainframe would return.  As an industry, we’ve collectively traded the centralized administration of the mainframe for the better price/performance of client-server systems and greater departmental control over resources.  VMware, Microsoft, and Citrix want to deliver the best of both worlds.

This wave represents the biggest value creation layer thus far.  But it is not likely to yield for anyone the kind of monopoly control and rents that Microsoft achieved with Windows.  Although it appears to deliver the control and switching costs of managing the underlying hardware infrastructure (servers, storage, and networking), it’s not clear it delivers the same switching costs for applications.

While this data center layer represents the emergence of a new operating system platform, significant differences exist.  Like traditional operating systems, the data center layer masks the complex and distinct elements of the underlying hardware infrastructure with software tied into interfaces it controls.  Like traditional operating systems, it provides services to applications that enhance their capabilities, such as availability (discussed above), scalability, security, and manageability.  Most significantly, it provides these capabilities to an application service that is actually made up of many components or tiers.  At the simplest level, it would ensure that the web, application, and database tiers all operate in sync to deliver the desired service levels.  Unlike traditional operating systems, however, it takes existing, unmodified applications and provides them with all these additional services.  In other words, the self-reinforcing network effects that come from greater ISV support and the resulting higher unit shipments don’t appear to exist at the application layer.
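
To illustrate the idea of managing a whole application service rather than a single OS image, here is a purely hypothetical sketch (not VMware’s actual interface, and every name in it is invented for illustration) of how such a layer might describe a multi-tier service and the protection level each tier needs, while the guest software itself runs unmodified.

```python
# Purely illustrative sketch (not VMware's actual API): how a data-center-level
# "OS" might model a multi-tier application service and the service level each
# tier needs, without requiring the application to be rewritten.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Tier:
    name: str            # e.g. "web", "app", "db"
    vm_template: str     # unmodified guest image the tier boots from
    min_instances: int   # capacity floor the platform must keep running
    protection: str      # "HA" (restart in minutes) or "FT" (instantaneous failover)

@dataclass
class ApplicationService:
    name: str
    tiers: List[Tier] = field(default_factory=list)

    def required_vms(self) -> int:
        # The service is only as available as its least available tier, so
        # every tier's capacity floor has to be satisfied together.
        return sum(t.min_instances for t in self.tiers)

order_entry = ApplicationService(
    name="order-entry",
    tiers=[
        Tier("web", "web-tier.vmdk", min_instances=4, protection="HA"),
        Tier("app", "app-tier.vmdk", min_instances=2, protection="FT"),
        Tier("db",  "db-tier.vmdk",  min_instances=2, protection="FT"),
    ],
)
print(order_entry.required_vms())  # 8 VMs the platform must keep in service
```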

Figure 3: VMware’s Virtual Data Center OS shares some of the properties of an OS.  It manages the interfaces to the underlying hardware infrastructure and provides services that enhance applications.  However, unlike an OS, it doesn’t require an application to write to its programming interfaces to leverage those application services.  Combined with heavy competition from traditional vendors, that will probably limit its ability to achieve dominance via self-reinforcing network effects.  Source: VMware

Pricing is not yet clear, but Microsoft won’t be giving this functionality away.  Unlike with its server virtualization layer, it has serious revenue ambitions for its System Center management tools.  For VMware, the price point is likely to be higher than its current mid-range $3K per socket.  Unlike VMware, Microsoft has the opportunity to leverage the millions of developers using Visual Studio .NET.  As part of its Dynamic Systems Initiative, its management tools will be able to capture the information coming out of the developer tools that explains how all the elements of a distributed application work together.  Combined with a map of a customer’s infrastructure, Microsoft has much of the information it needs to keep things running at customer-defined service levels.

Managing workloads at the Windows or Linux layer as VMware does, or at the .NET layer as Microsoft does, isn’t the only approach to management automation.  IBM is taking J2EE applications and isolating and managing how they deliver on service levels at the application server level.  And the management software Big Four (CA, HP, IBM, and BMC) are leveraging their technical assets in managing services, independent of whether those services are virtualized.  But these different approaches are a topic for a future post.

Desktop Virtualization Has the Potential to Remake How Users Interact With Their Desktops and the Business Model for Vendors

Conventional wisdom is that desktop virtualization is a fancier, richer version of what Citrix has delivered for many years with its Presentation Server product, now called XenApp.  It’s actually growing into something much different.  For starters, VMware, Microsoft, and Citrix are all trying to change how users interact with their desktops.  Today users’ desktop environments – all their settings, applications, and data – are tied to a physical box.  In the future, that environment will be associated with a user, following them around to whatever desktop or other device they might log into.  Citrix would appear to have the pole position in this market.  It can start by upgrading its existing 100 million Presentation Server users and leverage the mature channel it has built to reach them.  In addition, it has Microsoft referring the desktop business its way.

But VMware is trying to go a big step beyond this approach to a place where Microsoft and their partner Citrix will be hesitant to follow.  The way Microsoft and Citrix are pursuing the desktop opportunity appears to assume Windows already resides on whatever desktop, laptop, or thin client a user logs into.  If online, the user’s environment is downloaded from a centrally managed set of servers.  If offline, the environment updates the centrally managed servers, to the extent policy allows, the next time the user is online.  This respects the very heart of Microsoft’s strategy, namely to put a copy of Windows on every box that ships anywhere in the world.  Beyond the basic Windows licensing cost, Microsoft currently charges $100 per year per client that remotely connects to a server.

VMware wants to break that business model by offering users greater choice.  In this approach, an OEM version of Windows doesn’t have to be fused to the client device first.  Instead, a bare metal hypervisor can host any version of Windows, Mac OS X, or just a browser for running rich internet applications.  The idea is that users, not IT, can choose whatever client device they want, and IT just provisions a virtual machine that runs on the client with appropriate access permissions and applications.  Where Windows traditionally managed all access to the hardware and, therefore, had to be purchased with the box, now it’s just one of several potential personalities.  This doesn’t mean Windows goes away.  It does mean, though, that customers will have more flexibility when deciding about Windows upgrades or whether to accept Windows as the OEM-installed default OS.  Citrix could clearly build a bare metal hypervisor as well, but would it venture into territory that would annoy its key partner?

4 Responses to “Is VMware’s Hyper-Growth Phase Over?”

  1. Gaurav Says:

    Great article!
    You are right that business continuity will be the next wave to ride.  Business continuity is becoming extremely important for data centers and data-intensive industries, and VMware has made it even more critical.  VMware made server consolidation possible, resulting in multiple virtual servers running on a single physical server and a heavy workload on it.  This better utilization also turns that server into a single point of failure (SPOF).  To avoid that, the industry now needs to move to HA clustering solutions.

    You correctly said that right now the clustering solutions from OS or application vendors are limited to failing over their own applications and are not robust or mature products.  VMware provides only hardware FT failover, but there are some products mature enough to handle complete server failover.  A few days back I was reading an article about NEC’s ExpressCluster product.  I did a few days of research on the internet and I think it’s a robust product.  I could not find any other product, from Microsoft or any other big vendor, that comes close.  It provides complete failover and DR functionality.  The best part is that the administrator need not worry at all once the cluster is configured.  It automatically detects not only some application issues, but any type of failure in any application, the OS, or even the hardware.  On detection, it can automatically fail the applications and related data over to a standby server.  At the same time it informs the administrator of the failure in the active server, whether through email, SMS, or by switching on a light in the server room.  To learn more about it, do some googling on “NEC ExpressCluster”.

    I am also in search of other cool products like ExpressCluster that provide at least the same set of functionality, so that I can compare these products for my organization.

  2. Gaurav Says:

    Hey, I forgot to give the link to ExpressCluster in my last comment.  Just to help other readers, here it is: http://www.necam.com/EntSw/ExpressCluster/

    Just download the evaluation copy and try it out.  I am sure you will start loving it.  If anyone knows of other products providing similar functionality, please feel free to share the information with me.  I would love to try them too.

  3. De volgende fasen voor VMware « EarlyBert Says:

    [...] 26, 2008 by Bert Bouwhuis: George Gilbert and Juergen Urbanski of TechStrategy Partners give an analysis of the growth VMware has achieved in the past. They also offer free advice on the question [...]

  4. The 3 waves VMware will be riding, aren’t they forgetting something ? | Virtualfuture.info Says:

    [...] stumbled on a nice post on TechStrategyPartners from George Gilbert and Juergen Urbanski. It gives a clear view on which areas VMware will be [...]
