Archive for the ‘Enterprise Software’ Category

How Network Effects Are Likely To Power The Cloud

November 6, 2008

The recent posts about the economics of cloud computing between Nick Carr and Tim O’Reilly (here and here) and the panel at this week’s Web 2.0 Summit have created a lot of buzz. The central question is of great consequence: will the emerging “cloud” operating system generate the monopoly rents and industry control that Microsoft enjoyed with Windows? For the sake of argument, let’s assume VMware is leading in private enterprise clouds and that, for now, Amazon leads public ones, with Google, Yahoo, Rackspace, and Microsoft as contenders. So far, it seems likely that Microsoft will have a significant advantage in private clouds, though not to the same extent as with Windows. Public clouds are years further out, so they’re harder to handicap.

But at the center of the argument is whether dominance in either variety will come via Web 2.0-style “harnessing collective intelligence” or the more traditional “network effects.”  I believe we will see Microsoft emerge as a leader in private clouds in 3-5 years, and that it will get there on account of the traditional kind.  Market share won’t accrue to the leader by virtue of capturing more information every time someone uses the cloud.  Instead, it will accumulate the way Windows steadily accumulated application and device support over time.

The two critical success factors in cloud computing: virtualizing the hardware and managing the software components

There are two key assets to leverage for success in this market. VMware and Microsoft are likely to share self-reinforcing leadership in the first: the ability to make a sprawling, heterogeneous collection of servers, storage, and networking look like a single machine. Exposed through automation interfaces, this capability dramatically changes the economics of administering data centers. The second key asset is the ability to combine infrastructure and application management. That is the critical requirement for turning an IT operation into a private cloud that can deliver rock-solid online services.
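To make these two assets concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the class and method names are illustrative, not VMware’s or Microsoft’s actual automation APIs. It shows only the shape of the two capabilities, pooling heterogeneous hardware behind one interface and mapping application components onto that pool.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """One physical machine in an otherwise heterogeneous fleet."""
    name: str
    cpus: int
    ram_gb: int

@dataclass
class ResourcePool:
    """Asset #1: aggregate many hosts so they look like a single machine."""
    hosts: list = field(default_factory=list)

    def add_host(self, host: Host) -> None:
        self.hosts.append(host)

    def capacity(self) -> dict:
        # Callers see only aggregate capacity, never individual hosts.
        return {"cpus": sum(h.cpus for h in self.hosts),
                "ram_gb": sum(h.ram_gb for h in self.hosts)}

@dataclass
class ServiceManager:
    """Asset #2: tie application components to the infrastructure pool."""
    pool: ResourcePool
    components: dict = field(default_factory=dict)

    def deploy(self, component: str, cpus: int, ram_gb: int) -> None:
        # A toy admission check; a real scheduler would track per-host
        # allocations, placement, and failover.
        cap = self.pool.capacity()
        if cpus > cap["cpus"] or ram_gb > cap["ram_gb"]:
            raise RuntimeError("insufficient pooled capacity")
        self.components[component] = {"cpus": cpus, "ram_gb": ram_gb}

pool = ResourcePool()
pool.add_host(Host("rack1-a", cpus=8, ram_gb=32))
pool.add_host(Host("rack2-b", cpus=16, ram_gb=64))

mgr = ServiceManager(pool)
mgr.deploy("order-entry-web", cpus=4, ram_gb=8)
print(pool.capacity(), mgr.components)
```

The division of labor is the point: administrators and developers provision against aggregate capacity without touching individual machines, while the service manager gives operations one place to see which application components are running on that capacity.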

Leveraging network effects to make the hardware infrastructure look like one big machine

As Tim explains, the cloud has multiple layers. The bottom-facing utility layer sits on the hardware infrastructure and uses this generation’s equivalent of device drivers to make that hardware look like a uniform pool of resources to the software above. This is VMware’s strength by virtue of its lead in virtualization. Market share begets network effects at this layer in the form of device support: the leader gets the widest range of supported storage devices and access to all their unique features. Over time, however, it’s hard to imagine Microsoft failing to leverage its Windows Server and associated Hyper-V unit volume to achieve similar device coverage. So although the utility layer intrinsically has low value-add, vendor concentration in private clouds will probably preserve prices and margins to some degree.

Leveraging network effects to deliver end-to-end application service management

To deliver end-to-end online services, the upward-facing cloud software layer has to orchestrate and manage an untold number of application components, many from third parties, some from corporate developers. VMware has some leading-edge management technology that automatically wraps around applications (courtesy of the B-hive acquisition). But as long as commercial and corporate developers primarily target Windows as their application deployment platform, Windows will have a self-reinforcing advantage over VMware.

Microsoft’s self-reinforcing advantages are twofold, building on its leadership both as a deployment and development platform.

  1. First, since Windows accounts for roughly 80% of x86 server unit shipments, software developers of just about any stripe have to do at least some work to wrap Microsoft management tools around their applications.
  2. Second, because Microsoft is the leading provider of development tools on Windows, it will be able to capture even more management information about the subset of applications that use its tools. Even if Microsoft hasn’t yet exploited either of these structural advantages, it’s hard to see that situation lasting as it rolls out its Dynamic Systems Initiative.

VMware’s best potential for upside in this market is threefold.

  1. First, to the extent that commercial or corporate software developers use development platforms independent of Windows, neither company has an advantage in collecting management information. Examples of these platforms include J2EE, PHP, Ruby on Rails, Spring, and Hibernate. Some of these platforms don’t require any conventional operating system.
  2. Second, to the extent that VMware proliferates its platform ubiquitously, it may have more of an advantage in managing both the infrastructure it sits on and the applications that sit on it.
  3. Third, virtualized environments make it possible to deploy applications as “appliances” that include all the bits required to run: operating system, middleware, and multiple application components. Today, these appliances are deployed using Linux because of Windows licensing restrictions. If appliances take off independent of Windows, that would help VMware tilt the platform competition in its favor.

Public clouds are likely to be more diverse, like a set of services, but with the core or anchor services having similar economics

Windows Azure has all the economic characteristics of a private cloud – masking the infrastructure, managing the application services – but with the margin-depressing overhead of tightly integrated data centers. It will clearly have higher-level services like SQL Services, Exchange, and .NET, but it will also be easy to integrate premise-based software as well as third-party services such as Salesforce.com’s Force.com or Google AdWords.  In other words, if Microsoft delivers on its promise, Azure will be an anchor platform at a higher-value, higher-margin layer than Amazon, but with bridges to other services.  Considering how it builds on Microsoft’s on-premise software tools and interfaces, it is likely to capture a leading market share.


Is VMware’s Hyper-Growth Phase Over?

October 19, 2008

VMware’s Opportunity to Expand Into and Potentially Disrupt Adjacent Markets

By George Gilbert and Juergen Urbanski

We’ve talked to a fair number of VMware customers and investors over the past few weeks.  In the process, we’ve repeatedly been asked whether VMware is done with its phase of hyper-growth.  While it isn’t likely to grow anywhere near triple digits again, it is likely to grow into a strategic platform provider for both data centers and desktops, though this will require solid execution in a tough macro environment.  Its opportunity lies in expanding into, and potentially disrupting, a series of large adjacent markets.  The ripple effects of this sea change in computing will also affect many markets VMware has no plans to compete in, though that is fodder for future posts.  (Disclosure: the authors own shares in VMware.)

VMware’s biggest near-term challenge is that it over-sold both units and high-end functionality through its enterprise license agreements (ELAs).  These ELAs were an attempt to encourage customers to deploy more virtual servers with richer functionality ahead of Microsoft’s entry into the market this past summer.  While this may have had some success in making adoption of Microsoft technology more challenging in some accounts, it also had an unintended side effect: it left VMware competing with its own inventory of licenses already sitting on customers’ shelves.  While VMware works its way out of that near-term hole, some have lost sight of the bigger-picture opportunity.


Economic Fallout From Virtualization In The Data Center

September 1, 2008

This is our first set of hypotheses about how virtualization is impacting each of the layers of the IT stack. We will elaborate and refine them as we continue to collect insights from vendors and our upcoming survey of IT decision makers.

The Ultimate Objective

  • It’s more than just the savings from server consolidation, and more than just greater flexibility in managing planned downtime (VMotion) and unplanned downtime (disaster recovery, high availability)

  • Ultimately, it’s about automating the data center in order to make it easier for companies to deliver online business and consumer services. The iconic example of an online service that complemented a traditional business was the Sabre travel reservation system, born in the ‘60s. It was based on purpose-built infrastructure that required intense collaboration between the customer, American Airlines, and the vendor, IBM. More recent examples include FedEx package tracking and the familiar dot-com services from Amazon, eBay, and Google. To make it easier for businesses to build or assemble end-to-end services from existing assets, technology vendors have to convert “assets” into “pools of services” using virtualization at every layer of the IT stack.

Looking at the IT Stack Layer by Layer


Death of the VAR in a SaaS World

August 20, 2008

In general, offline channels have not played a big role in SaaS go-to-market (GTM) strategies. The early focus of business and infrastructure SaaS solutions has been on small and midsize businesses (SMBs). SaaS delivery makes it economical to serve SMBs, and online channels make it economical to reach them. As SaaS grows up, though (see our earlier post), what role will offline sales channels, in particular value-added resellers (VARs), be able to play?

We are very skeptical that VARs will be able to prosper with their current business model as SaaS adoption continues to gather momentum. The reason is that the total cost of ownership (TCO) model promised by SaaS drastically reduces the revenue pool accessible to channel partners. (As we point out in our earlier post, the TCO model still needs to be proven in the long run.) In a June 2007 article, the McKinsey Quarterly compared the TCO for a 200-seat CRM license as on-premise ($2.3m) vs. SaaS ($1.6m). More interesting than the headline 28% reduction in TCO is the fact that the non-software revenue pool accessible to channel partners shrinks by 90%. Specifically, in this midmarket example, the $1.1m spent in the on-premise model for implementation, deployment, and ongoing operations shrinks to a meager $106k in the SaaS world.
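For readers who want to check the arithmetic, here is the calculation spelled out in Python, using the figures exactly as quoted above. (Note that the quoted TCO figures imply roughly a 30% reduction; the 28% headline presumably reflects McKinsey’s unrounded underlying numbers.)

```python
# Figures as quoted from the June 2007 McKinsey Quarterly comparison
# (200-seat CRM); this block only reproduces the arithmetic.

on_premise_tco = 2_300_000
saas_tco       = 1_600_000

tco_reduction = 1 - saas_tco / on_premise_tco
print(f"TCO reduction: {tco_reduction:.0%}")  # ~30% on the rounded figures

# Non-software revenue pool accessible to channel partners
# (implementation, deployment, and ongoing operations).
pool_on_premise = 1_100_000
pool_saas       =   106_000

pool_shrinkage = 1 - pool_saas / pool_on_premise
print(f"Channel revenue pool shrinks by: {pool_shrinkage:.0%}")  # ~90%
```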


When, If Ever, Will SaaS Crack Core, Mission-Critical Processes In The Enterprise?

July 17, 2008

It’s no secret that Software as a Service (SaaS) has generated tremendous excitement among many customers for its apparently transformational adoption model and ownership experience.  Unlike client-server applications, SaaS delivers faster time to value, often via a viral buying cycle, as well as lower-risk deployment.  The early adopter focus has been on small and midsize businesses (SMBs) because SaaS makes it economical to reach them with broad penetration for the first time.  Where SaaS has carved out successes in large enterprises, it has largely been in more independent, non-mission-critical departmental functions that have no capex budgets, such as HR, CRM, or marketing, not in end-to-end suites.  Despite the undoubted progress SaaS is making, we believe its adoption for core, mission-critical processes (financials, order management, and industry-specific processes such as manufacturing or securities processing) in large enterprises is still many years out, owing to a variety of technical and business challenges.

SMBs have been the early SaaS suite adopters because traditional vendors couldn’t reach them

SMBs have been the low-hanging fruit for early SaaS adoption because they’ve historically been underserved by application vendors.  Small deal sizes and bare-bones cost-of-ownership requirements were typically the critical stumbling blocks: small deals mean vendors have to reach these customers through a much lower-cost channel than direct sales.


Web 2.0 Turns The Enterprise Inside Out

June 18, 2008

A couple of good examples emerged from the Churchill Club session yesterday on “Succeeding with Web 2.0 within the Enterprise”:

  • Serena Software is using Facebook as its corporate intranet, and it now seems to be morphing into a sort of extranet. To overcome adoption challenges among its employee base, most of whom are 45 and over, Serena brought in a group of 16-year-olds for Facebook Fridays. Serena’s SVP of Marketing, Rene Bonvanie, claims 90% of employees are now using it. The primary benefit seems to be increased collaboration: Bonvanie says it makes it easy for both employees and customers to identify the right person for a specific question. Conversations have become more open and better informed. Thus, marketing and sales are losing some of their monopoly power as touch points with the outside world. In addition, knowing more about your previously faceless co-workers may also help increase a sense of common purpose at the workplace, says Bonvanie
  • Best Buy’s Steve Bendt shared how their internally focused ‘Blue Shirt Nation’ network helps generate recommendations that can increase store sales. Giving this online marketplace of ideas the look and feel of ESPN and online games was key to driving adoption among Best Buy’s young workforce, where turnover of 60% per year is the norm
  • Paul Pedrazzi from Oracle shared how its internally focused Oracle Connect and externally focused Oracle Mix social networks do a great job of filtering content. One use case for Mix – once the traffic picks up more – is prioritizing customer needs and feature requests. A common challenge is that product managers tend to overweight feedback that is recent, local, or comes from the largest customers – those who can afford to send their folks to Oracle’s executive briefing centers
  • Shiv Singh from Avenue A / Razorfish shared how their own internal wiki, now used by 75% of employees, aims to increase internal information sharing. There is no silver bullet for overcoming employees’ tendency to ‘keep information close to their chests to impress their boss’. As you would expect, measures that can drive the right behavior range from executive sponsorship to making sharing fun to incorporating collaboration in the informal and formal reward and recognition systems


When Applications Talk To Each Other Via SOA, What Happens To User-Based Pricing?

June 16, 2008

Alphabet soup of evolving application design patterns: SOA, EDA, BPM

It’s clear that SaaS doesn’t represent a threat to client/server pricing.  Consider what SOA might represent. In case the terminology is new, here are the definitions first; bear with the abstractions. To be technically correct, we have to include Event-Driven Architecture (EDA) and Business Process Management (BPM) technologies as well, since customers need them to get the full value out of autonomous services communicating with each other with few users involved. (A minimal sketch of all three patterns follows the list below.)

  • In the case of enterprise applications, SOA means functionality takes the form of business processes composed of services that communicate through each other’s interfaces by exchanging data. These interfaces might be implemented as Web services. A supplier electronically submitting an invoice to a customer is the relevant example here.
  • EDA allows services in an SOA to be more loosely coupled. They can communicate by publishing and subscribing to events without either side talking directly to the other. A retailer tracking delivery of goods to its distribution center via RFID would be an example here.
  • BPM orchestrates services and includes users in a workflow where necessary to manage a complete process. A BPM agent for a supplier may be managing the order-to-cash process when a customer places an order that exceeds its credit limit. The BPM agent may escalate this exception to the finance department as well as the sales account manager for resolution.
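Here is that sketch: an illustrative Python rendering of the three patterns using the invoice and order examples above. Every class and topic name is made up for this post; real SOA/EDA/BPM stacks rely on web services, message brokers, and workflow engines rather than a few dozen lines of code.

```python
from collections import defaultdict

# --- EDA: a tiny publish/subscribe event bus --------------------------------
class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Publishers and subscribers never talk to each other directly.
        for handler in self.subscribers[topic]:
            handler(payload)

# --- SOA: services expose interfaces and exchange data ----------------------
class FinanceService:
    CREDIT_LIMIT = 10_000

    def check_credit(self, order):
        return order["amount"] <= self.CREDIT_LIMIT

# --- BPM: orchestrate services, pulling users in on exceptions --------------
class OrderToCashProcess:
    def __init__(self, bus, finance):
        self.finance = finance
        bus.subscribe("order.placed", self.handle_order)

    def handle_order(self, order):
        if self.finance.check_credit(order):
            # The straight-through path involves no users at all.
            print(f"order {order['id']}: approved automatically")
        else:
            # The exception path escalates to people in the workflow.
            print(f"order {order['id']}: escalated to finance and the account manager")

bus = EventBus()
process = OrderToCashProcess(bus, FinanceService())
bus.publish("order.placed", {"id": 42, "amount": 25_000})  # escalates
bus.publish("order.placed", {"id": 43, "amount": 4_000})   # straight through
```

Notice how few humans appear: one order sails through with no user involved at all, which is exactly the pricing problem the title asks about.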


Why SaaS Isn’t The Real Threat To Enterprise Application Pricing

June 16, 2008

Whether subscriptions or perpetual licenses, it’s still about user-based pricing

Imagine for a moment that you are at IBM and a small supplier of components from the Far East has just submitted an invoice, having just shipped an order of printed circuit boards to IBM’s networking equipment division in upstate New York. IBM receives the invoice, and a clerk in its invoice-processing department enters it into the ERP system. Whether IBM bought a client/server or software-as-a-service (SaaS) ERP system doesn’t matter: the clerk has to fill out and navigate as many as 20 screens to enter the invoice so the purchase-to-payment process can move to the next step.

But go back to the distinction between client/server and SaaS applications. Conventional wisdom says that SaaS vendors such as Salesforce.com, with their subscription licenses, represent a threat to the perpetual licenses and business models of traditional client-server companies such as Oracle or SAP. Stretching payments out over multiple years, as SaaS does, makes it harder to show the profitability and growth that come from the upfront payments of perpetual licenses. The reality is somewhat different. As many know, SaaS actually takes in significantly more revenue over the product’s lifecycle. And the two pricing models have much more in common than not: both charge based on the number of users accessing the application.
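A simple model makes both points. All the dollar figures below are illustrative assumptions, not vendor list prices, and the 20%-of-license maintenance fee is just a common industry convention. Notice that both models meter exactly the same thing: users.

```python
# Illustrative comparison of cumulative revenue from one 200-user customer.
# All prices are assumptions for the sake of the example.

years = 6
users = 200

# Perpetual: large up-front license plus annual maintenance (~20% of license).
license_per_user = 1_500
maintenance_rate = 0.20

perpetual = [users * license_per_user]                      # year 1
perpetual += [perpetual[0] * maintenance_rate] * (years - 1)

# SaaS: the same users, billed as a flat subscription every year.
subscription_per_user = 800
saas = [users * subscription_per_user] * years

for y in range(years):
    print(f"year {y + 1}: perpetual cum = ${sum(perpetual[:y + 1]):>9,.0f}   "
          f"saas cum = ${sum(saas[:y + 1]):>9,.0f}")
```

On these assumptions the subscription stream overtakes the perpetual stream in year three and keeps widening, which is why lifecycle revenue favors SaaS even though year-one optics favor perpetual licenses.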


Roadmap to Improving IT Services Profitability

June 11, 2008

Pricing excellence can lift the profitability of services businesses by 300-500 basis points. Getting there requires a well-structured, multi-functional approach with strong executive sponsorship. The size of the prize, though, is well worth the pain.

CONTEXT

The Professional Services (PS) business at product-led enterprise technology vendors often fails to live up to its potential. Managed properly, PS can play a key role in enabling customer loyalty, deepening account relationships, and channeling insights from the frontline back into product development. At many vendors, though, the PS business falls short of delivering on these objectives and is plagued by low overall profitability.

This post lays out an approach to improving PS profitability which we have refined over the course of working closely with several Fortune 100 technology providers.

CHALLENGE

Managers in Professional Services businesses often focus on reported utilization, i.e., volume, as the primary driver of overall profitability, followed by structural labor cost (e.g., the on-shore vs. off-shore mix). Compounding the challenge, billable utilization is often dragged down by the need to remediate product quality issues in the field.

RESOLUTION

Pricing, though, can yield large gains in aggregate profitability, but it is often an undermanaged area because it resides at the intersection of services product management (strategic pricing), the services field (tactical pricing), and services operations (enabling infrastructure). The sketch below shows why pricing improvements fall almost entirely to the bottom line.
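A back-of-the-envelope calculation illustrates the leverage. The revenue and cost figures are illustrative assumptions, not client data; the mechanism is simply that better realized pricing raises revenue with essentially no added delivery cost.

```python
# Hypothetical PS business: how realized-price improvements flow to margin.
revenue = 100_000_000   # annual PS revenue (assumed)
cost    =  85_000_000   # delivery cost: labor, subcontractors, overhead (assumed)

base_margin = (revenue - cost) / revenue
print(f"baseline margin: {base_margin:.1%}")

for lift in (0.01, 0.02, 0.03, 0.04, 0.05):
    lifted_revenue = revenue * (1 + lift)
    margin = (lifted_revenue - cost) / lifted_revenue
    print(f"{lift:.0%} price lift -> margin {margin:.1%} "
          f"(+{(margin - base_margin) * 10_000:.0f} bps)")
```

On these assumptions, a few points of better price realization, through less discounting and tighter scoping, produce the kind of 300-500 basis point improvement cited above.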


The Possible Paths From Today’s Virtualization To Cloud Computing

May 26, 2008

George Gilbert

From Virtualization To Cloud Computing

Virtualization and cloud computing have been getting a ton of buzz. But there has been less discussion of how virtualization, now known mainly for its server consolidation capability, will morph into cloud computing. For that to happen, servers, storage, and networks all have to fuse into one virtual machine from a developer’s and an administrator’s perspective. If the rumors that Cisco will buy EMC (and, by extension, its majority stake in VMware) prove true, the industry will have the first vendor with a credible shot at putting together all the pieces. This post and the one that follows attempt to lay out the different ways this transition could unfold. (Disclosure: I’m an investor in VMware.)

Cloud computing, previously known as utility computing, is where all computing resources in Internet data centers look to users, developers, and administrators like one giant computer. It offers seamless scalability and radically reduced administrative overhead. There is more than one path from today’s virtualization to tomorrow’s cloud computing, and they’re not necessarily straightforward.

Ray Ozzie highlighted the importance of the transition from virtualization to cloud computing as one of the “three core principles that we’re using to drive the reconceptualization of our software so as to embrace this world of services that we live in… Most major enterprises are, today, in the early stages of what will be a very, very significant transition from the use of dedicated application servers to the use of virtualization and commodity hardware for consolidating apps on computing grids and storage grids within their data center. This trend will accelerate as apps are progressively refactored, horizontally refactored, to make use of this new virtualization-powered utility computing model. A model that will span from the enterprise data center, and ultimately, into the cloud…”
