Marketing Strategies for a Recession

June 16, 2008

It is not uncommon for large enterprises to spend $300m or more per year on marketing alone (e.g., $10bn in sales at a 3% marketing budget). The bulk of that money is typically spent on lead generation (i.e., driving interest in the company’s products and services) rather than high-level awareness. Is that money well spent? Three facts suggest not:

  • Our own interviews with 100+ Sales & Marketing executives confirm that 85-90% of leads generated by Marketing are never followed up by Sales
  • The average tenure of US enterprise CMOs is 18-24 months, the highest turnover of any C-level function
  • Third-party studies regularly point to marketing performance optimization as a top-3 marketing pain point

Read the rest of this entry »


When Applications Talk To Each Other Via SOA, What Happens To User-Based Pricing?

June 16, 2008

Alphabet soup of evolving application design patterns: SOA, EDA, BPM

It’s clear that SaaS doesn’t represent a threat to client/server pricing; now consider what SOA might. In case the terminology is new, the definitions come first; bear with the abstractions. To be technically correct, we also have to include Event-Driven Architecture (EDA) and Business Process Management (BPM) technologies, because customers get the full value only when autonomous services communicate with each other with few users involved.

  • In the case of enterprise applications, SOA means functionality delivered as business processes composed of services that communicate through each other’s interfaces by exchanging data. These interfaces might be implemented as Web services. The supplier electronically submitting an invoice to a customer is the relevant example here.
  • EDA allows services in an SOA to be more loosely coupled. They can communicate by publishing and subscribing to events without either side talking directly to the other. A retailer tracking delivery of goods to its distribution center via RFID would be an example here.
  • BPM orchestrates services and includes users in a workflow where necessary to manage a complete process. A BPM agent for a supplier may be managing the order to cash process when a customer places an order that goes beyond their credit limit. The BPM agent may escalate this exception to the finance department as well as the sales account manager for resolution.
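To make the EDA bullet concrete, the loose coupling can be sketched in a few lines of Python: services publish events to, and subscribe to, a shared broker by topic name, and neither side ever references the other directly. All names here (the broker class, the `rfid.delivery` topic) are illustrative, not taken from any particular product.

```python
from collections import defaultdict

class EventBroker:
    """Minimal publish/subscribe broker: publishers and subscribers
    never reference each other, only shared topic names."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

broker = EventBroker()
received = []

# The distribution center subscribes to RFID delivery events...
broker.subscribe("rfid.delivery", lambda e: received.append(e))

# ...and the dock-door reader publishes them without knowing who listens.
broker.publish("rfid.delivery", {"pallet": "PLT-1042", "dock": 7})

print(received)  # [{'pallet': 'PLT-1042', 'dock': 7}]
```

Adding a second subscriber (say, a billing service) requires no change to the publisher, which is exactly the decoupling the retailer/RFID example relies on.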


Why SaaS Isn’t The Real Threat To Enterprise Application Pricing

June 16, 2008

Whether subscriptions or perpetual licenses, it’s still about user-based pricing

Imagine for a moment that you are at IBM and a small components supplier from the Far East has just submitted an invoice. The supplier just shipped an order of printed circuit boards to IBM’s networking equipment division in upstate New York. IBM receives the invoice, and a clerk in its invoice-processing department enters it into the ERP system. Whether IBM bought a client/server or software-as-a-service (SaaS) ERP system doesn’t matter. The clerk has to fill out and navigate as many as 20 screens to enter the invoice so the purchase-to-payment process can move to the next step.

But go back to the distinction between client/server and SaaS applications. Conventional wisdom says that SaaS applications and their subscription licenses threaten the perpetual licenses, and the business models, of traditional client/server companies such as Oracle or SAP. Stretching payments out over multiple years, as SaaS does, makes it harder to show the profitability and growth that come from the upfront payments of perpetual licenses. The reality is somewhat different. As many know, SaaS actually takes in significantly more revenue over the product’s lifecycle. And the two pricing models have far more in common than not: both charge based on the number of users accessing the application.
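The common thread, and the lifecycle revenue gap, is easy to see with back-of-the-envelope arithmetic. The per-user prices below are invented for illustration, not any vendor’s actual rates; what both models share is the user count as the multiplier.

```python
users = 500

# Illustrative list prices only -- not any vendor's actual rates.
perpetual_license_per_user = 1000   # one-time, per user
annual_maintenance_rate = 0.20      # 20% of license value, per year
saas_per_user_per_month = 75        # subscription, per user

def perpetual_revenue(years):
    """Upfront license plus annual maintenance over the period."""
    license_fee = users * perpetual_license_per_user
    maintenance = license_fee * annual_maintenance_rate * years
    return license_fee + maintenance

def saas_revenue(years):
    """Subscription payments accumulate linearly with time."""
    return users * saas_per_user_per_month * 12 * years

for years in (1, 3, 7):
    print(years, perpetual_revenue(years), saas_revenue(years))
```

On these assumptions the perpetual license earns more in year one, but the subscription overtakes it within a few years, consistent with the lifecycle point above, and both lines scale one-for-one with the number of users.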


Roadmap to Improving IT Services Profitability

June 11, 2008

Pricing excellence can lift the profitability of services businesses by 300-500 basis points. Getting there requires a well-structured, multi-functional approach with strong executive sponsorship. The size of the prize, though, is well worth the pain.


The Professional Services (PS) business at product-led enterprise technology vendors often fails to live up to its potential. Managed properly, PS can play a key role in enabling customer loyalty, deepening account relationships, and channeling insights from the frontline back into product development. At many vendors, though, the PS business falls short of delivering on these objectives and is plagued by low overall profitability.

This post lays out an approach to improving PS profitability which we have refined over the course of working closely with several Fortune 100 technology providers.


Managers in Professional Services businesses often focus on reported utilization, i.e., volume, as the primary lever for improving overall profitability, followed by structural labor cost (e.g., on-shore vs. off-shore mix). Compounding the profitability challenge, billable utilization is often eroded by the need to remediate product quality issues in the field.


Pricing, though, holds large potential for improving aggregate profitability, yet it is often undermanaged because it sits at the intersection of services product management (strategic pricing), the services field (tactical pricing), and services operations (enabling infrastructure).


Making Advertising Work On The Other (Non-Search) Part Of The Web

June 3, 2008


Just about everyone knows advertising on the Web, outside of search, isn’t living up to expectations. Advertisers and publishers buy and sell display-ad inventory at CPMs (cost per thousand impressions) well below the offline equivalents in magazines, newspapers, and TV. Part of the reason is that outside of search, the Web hasn’t found its equivalent of the 30-second spot or the two-page spread (an insight courtesy of John Battelle). But a major part of the reason is that Web sites can’t consistently serve up relevant and engaging ads to an audience that’s no longer captive. So how do we substitute relevance for traditional reach to increase ad prices?
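For readers new to the metric, the CPM arithmetic looks like this; the rates and impression counts are made up for illustration, not drawn from any actual rate card.

```python
def cpm_revenue(impressions, cpm):
    """Revenue from display inventory at a given CPM
    (cost per thousand impressions)."""
    return impressions / 1000 * cpm

# A site serving 10M impressions a month at a $2 run-of-site CPM...
untargeted = cpm_revenue(10_000_000, 2.00)
# ...vs. the same inventory sold at a $15 CPM on relevance.
targeted = cpm_revenue(10_000_000, 15.00)

print(untargeted, targeted)  # 20000.0 150000.0
```

The gap between those two numbers on identical inventory is the prize that relevance is chasing.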

For years users have left “breadcrumbs” about their interests across the Web. Now there are increasingly sophisticated ways to connect that information into rich profiles, much of it by user choice, while still respecting privacy concerns. The consolidation and massive reach of ad networks in the hands of Google, Yahoo, and Microsoft likely means that most of the economic rewards of this shift will continue to accrue to these companies. Even if the traditional media giants accelerate their migration to the Web, they are likely too late. They won’t be able to match the reach of the tech giants’ online ad networks and their trove of online personal profiles.


HP – EDS: A good deal, actually

May 29, 2008

Is the EDS deal bad for HP shareholders? The market’s initial reaction suggests so, with HP losing several billion dollars in market cap because the deal introduces operational risk to what has been a remarkable turnaround / margin expansion story for HP investors.

A more in-depth look, though, at the likely strategic rationale, potential alternative targets, and possible financial rationale leads us to conclude that the deal may not be so bad if you take a longer-term perspective.


The Possible Paths From Today’s Virtualization To Cloud Computing

May 26, 2008 by George Gilbert

From Virtualization To Cloud Computing

Virtualization and cloud computing have been getting a ton of buzz. But there has been less discussion of how virtualization, now known mainly for its server-consolidation capability, will morph into cloud computing. For that to happen, servers, storage, and networks all have to fuse into one virtual machine from a developer’s and an administrator’s perspective. If the rumors that Cisco will buy EMC (and, by extension, its majority stake in VMware) prove true, the industry will have its first vendor with a credible shot at putting together all the pieces. This post and the one that follows attempt to lay out the different ways this transition could unfold. (Disclosure: I’m an investor in VMware.)

Cloud computing, previously known as utility computing, is where all computing resources in Internet data centers look to users, developers, and administrators like one giant computer. It offers seamless scalability and radically reduced administrative overhead. There is more than one path from today’s virtualization to tomorrow’s cloud computing, and they’re not necessarily straightforward.

Ray Ozzie highlighted the importance of the transition from virtualization to cloud computing as one of the “three core principles that we’re using to drive the reconceptualization of our software so as to embrace this world of services that we live in… Most major enterprises are, today, in the early stages of what will be a very, very significant transition from the use of dedicated application servers to the use of virtualization and commodity hardware for consolidating apps on computing grids and storage grids within their data center. This trend will accelerate as apps are progressively refactored, horizontally refactored, to make use of this new virtualization-powered utility computing model. A model that will span from the enterprise data center, and ultimately, into the cloud…”


Elaborating On The Scenarios From Virtualization To Cloud Computing

May 25, 2008 by George Gilbert

In the last post, I outlined why virtualization is morphing into cloud computing. In this post, I elaborate on the potential paths that transition could take.

1.  VMware manages compute virtualization, Cisco manages network virtualization, and another vendor such as EMC or Network Appliance manages storage virtualization:

In this scenario, VMware provides the developer and management interfaces for making all the servers look like a single machine. But customers adopt Cisco, which recently introduced its Nexus line of switches, as the network virtualization layer; this product creates virtual networks and connections between computers and storage networks out of a physical switch. There are a variety of approaches to storage virtualization, but for the sake of simplicity let’s say companies choose to deploy EMC or NetApp. Again, developers and administrators see only one logical device. The downside of this approach, relative to one vendor owning all the virtual resources end to end, is twofold. First, software developers have to write to three separate interfaces to work with the cloud. Second, administrators have to work with three consoles to make sure software can deliver on its SLAs.

2. VMware becomes the end to end infrastructure:

Today VMware offers only virtual compute infrastructure. It would still need to offer file-system virtualization and network virtualization. And of course, it would have to build the whole policy-based management infrastructure, or at least a framework other vendors could plug into to complete the platform. The challenge with this scenario is that VMware has a reputation for being somewhat closed, so the burden of the storage and network virtualization work, which is non-trivial, would fall mostly on VMware itself.

3. Microsoft manages end to end virtual and physical resources for Windows shops:

Microsoft has made a lot of noise with its Hyper-V server virtualization product and the emerging suite of management tools it has promised to go along with its management tools for physical resources. Although Hyper-V also supports SUSE and Red Hat Linux, Microsoft could spread its footprint in the Windows environment to storage and network virtualization with the proper hardware partnerships. For Windows-only shops, managing all the physical and virtual resources with one set of interfaces for developers and administrators would be ideal. I don’t know how difficult the storage and networking portions would be to accomplish.

4. Red Hat or Citrix could do the same as 1 or 2 for Red Hat Linux shops:

Although XenServer supports more Linux distributions than Red Hat does, plus Solaris on x86, it’s hard to see application developers and system administrators committing to another distribution for end-to-end deployment. The challenge with Red Hat as the platform for end-to-end virtualization is that it has lost control of its virtualization technology to Citrix. And it’s hard to see application developers committing to APIs promoted by a firm known mainly for terminal services.

5. HP OpenView or IBM Tivoli manage both virtual and physical infrastructure in multi-vendor shops:

At the beginning of the decade, this was the default assumption held by industry analysts and probably most customers. The core assumption for this scenario today is that these are the incumbent vendors for multi-vendor shops and the only ones who can bring order to the chaos. The challenge they face is that they have almost no presence in the virtualization infrastructure market right now. Their server businesses are among VMware’s biggest partners, but they are in no position to define the developer interfaces to virtualization products. A more likely scenario is that they integrate their management tools with VMware, Xen, and Microsoft’s Hyper-V.

6.  Individual vendors like Oracle and SAP write their own policy-based virtualization into their infrastructure:

Oracle has already announced support for the Xen hypervisor; SAP’s plans for multi-tenancy are less clear. In this scenario, each vendor builds virtualization support into its own products. In Oracle’s case, presumably this would be the database, the application server, and the business applications that run on top of them. In SAP’s case, it would probably mean the NetWeaver application server and the related business applications. This vertically integrated approach has challenges of its own. Customers running SAP on top of Oracle infrastructure, for example, would have conflicting policy-based administration layers trying to ensure SLAs.

7. Cloud computing vendors create their own purpose-built virtualization infrastructure:

Rather than adopt the commercially available products, the major cloud computing vendors such as Google, Microsoft, Yahoo, and Amazon build technology specifically for their own platforms. So far, all but Microsoft seem to be pursuing this path. However, only Microsoft is talking specifically about seamless scalability from software hosted on the customer’s premises to cloud-based deployment. Customers interested in seamlessly migrating their enterprise software into the cloud are going to have trouble working with any of these vendors other than Microsoft. More likely, these vendors will be the platforms for a new class of consumer-facing Web software, while Microsoft and other yet-to-emerge vendors become the cloud platforms of choice as today’s enterprise software migrates to the cloud.