This is our first set of hypotheses about how virtualization is impacting each of the layers of the IT stack. We will elaborate and refine them as we continue to collect insights from vendors and our upcoming survey of IT decision makers.
The Ultimate Objective
· It’s about more than the savings from server consolidation, and more than the greater flexibility in managing planned downtime (via VMotion) and unplanned downtime (disaster recovery, high availability).
· Ultimately, it’s about automating the data center in order to make it easier for companies to deliver online business and consumer services. The iconic example of an online service that complemented a traditional business was the Sabre travel reservation system, born in the ’60s. It ran on purpose-built infrastructure that required intense collaboration between the customer, American Airlines, and the vendor, IBM. More recent examples include FedEx package tracking and the familiar dot-com services from Amazon, eBay, and Google. To make it easier for businesses to build or assemble end-to-end services from existing assets, technology vendors have to convert “assets” into “pools of services” using virtualization at every layer of the IT stack.
Looking at the IT Stack Layer by Layer
Servers: Steady Workload Expansion Suggests Near-Term Pricing Stability
· Continuing consolidation of virtual workloads is driving demand for servers with ever more multi-core CPUs. Whereas we’ve all heard of customers consolidating 10 workloads on a single server, we’re starting to see the intent to put 50 on a single server. Although customers want to stay within the envelope of commodity x86 servers, this steady expansion of individual server capacity suggests prices and margins shouldn’t completely evaporate. (Suppliers to public cloud vendors such as Microsoft, Yahoo, or Google may find otherwise.) But this capacity expansion brings new risks and bottlenecks. For one, each server hosts so many workloads that it becomes too big to risk failure.
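The trade-off in that last point can be sketched with back-of-the-envelope arithmetic (the fleet size and consolidation ratios below are hypothetical, not from any customer data): raising the consolidation ratio shrinks the server count, but every server failure then takes down proportionally more workloads.

```python
def consolidation_impact(total_workloads: int, ratio: int) -> tuple:
    """Return (servers needed, workloads lost if one server fails)
    for a given consolidation ratio."""
    servers = -(-total_workloads // ratio)  # ceiling division
    return servers, ratio

# Hypothetical fleet of 500 workloads:
# at 10:1, one failure hits 10 workloads across 50 servers;
# at 50:1, one failure hits 50 workloads across just 10 servers.
low = consolidation_impact(500, 10)   # (50, 10)
high = consolidation_impact(500, 50)  # (10, 50)
```

The same arithmetic is why the storage discussion below emphasizes easy VM migration and quick recovery: the cost of losing any one box keeps rising.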
Storage: Accelerating Migration From Server Direct Attached Storage (DAS), Consolidation of Vendor Investments During Systems Refresh, Increasing Value of Storage System Software
· Storage should see a bump in growth from the convergence of the factors above, though ultimately “thin provisioning” of capacity on demand to applications, along with data de-duplication, should return growth rates to historical norms.
· With servers too big to risk failure, storage continues to be carved out and put on a NAS or SAN so that virtual machines can be easily migrated during planned or unplanned server downtime while still pointing to persistent storage on the network. Customers have apparently tended to segregate SAN or NAS environments by application. These environments are now being consolidated so that all the workloads converging on fewer servers can connect to one or a few pools of data.
· Customers also appear to be consolidating vendors to get closer to a single virtual pool of storage during this simultaneous refresh of the storage systems.
· Increased spend on backup, disaster recovery, and high availability is shifting additional value to the storage systems for several reasons. First, the number of workloads considered mission critical, or just important enough to warrant quick recovery, continues to grow as delivering these capabilities gets easier and cheaper. Second, backup and recovery capabilities have moved off the servers: with so many workloads consolidated onto each server, this functionality was bottlenecking its operation. Living on the storage network or the storage system itself, these capabilities operate where the data already resides without soaking up server CPU cycles. Snapshots backed up off the storage system enable continuous data protection. Replication to a storage system at another site provides the foundation for high availability. Data de-duplication, meanwhile, keeps this profusion of copies from growing geometrically.
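The de-duplication idea in the last bullet can be sketched in a few lines. This is a toy content-addressed store, not any vendor's implementation: chunks with identical content hash to the same key, so repeated snapshots of mostly-unchanged data are stored only once.

```python
import hashlib

def dedupe(chunks: list) -> dict:
    """Toy content-addressed store: identical chunks share one key,
    so repeated data (e.g. nightly snapshots of mostly-unchanged
    volumes) is physically stored only once."""
    store = {}
    for chunk in chunks:
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store

# Ten "snapshots" of the same three blocks occupy the space of three:
snapshots = [b"blockA", b"blockB", b"blockC"] * 10
store = dedupe(snapshots)
# len(snapshots) == 30, but len(store) == 3
```

Real systems chunk at block or variable-length boundaries and track reference counts, but the geometric-growth point is the same: only unique content consumes new capacity.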
Networks: Speed and Redundancy Driving a Refresh
· With the extra load on the paths between consolidated servers, and between servers and networked storage, the pipes are getting clogged. As a result, both networks appear to be in line for an upgrade: data networks are likely to move to 10GbE, and storage networks to higher-speed Fibre Channel over Ethernet or iSCSI.
· The big unanswered question is whether customers will consolidate their data and storage network equipment vendors so that it becomes easier to configure and manage end-to-end secure quality of service (QoS) bandwidth from spindle to server to client.
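Why consolidation clogs the pipes is simple arithmetic. A sketch with hypothetical per-workload traffic figures (the 40 Mbps average below is illustrative only):

```python
def link_utilization(workloads: int, mbps_per_workload: float,
                     link_gbps: float) -> float:
    """Fraction of a network link's bandwidth consumed by the
    aggregate traffic of consolidated workloads."""
    return (workloads * mbps_per_workload) / (link_gbps * 1000)

# Hypothetical: 50 consolidated workloads averaging 40 Mbps each
# would oversubscribe a 1GbE uplink two-fold...
u_1g = link_utilization(50, 40, 1)    # 2.0 -> congested
# ...but fit comfortably on 10GbE.
u_10g = link_utilization(50, 40, 10)  # 0.2
```

The same aggregation logic applies to the storage network, which is why the Fibre Channel over Ethernet and iSCSI upgrades track the server consolidation ratio.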
Operating Systems: Just Enough OS (JeOS) at Just Enough of a Price
· In a virtual world, the server operating system becomes just a library of functionality inside a bigger container of functionality. In other words, an ISV can deliver a ready-to-run virtual machine (VM) file that contains the operating system, middleware, application, and the specific configuration settings necessary to run the application; the customer deploys the VM file as a virtual server.
· In this “appliance” scenario, the ISV distributes or determines the configuration settings for the supporting software so that only the minimum required capabilities are included. Consequently, deployment of, and pricing for, any extraneous functionality is squeezed out. Today this works with Linux, but Microsoft prohibits it for Windows. We’ll see whether Microsoft can maintain that position.
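The "just enough OS" idea can be sketched as a package-selection step. Everything here is hypothetical (the package names and the nine-package "full OS" are illustrative, not any distribution's actual contents); the point is that the appliance image carries only what the application declares it needs.

```python
# Hypothetical full OS package set -- illustrative names only.
FULL_OS_PACKAGES = {"kernel", "libc", "ssh", "python", "X11",
                    "printing", "bluetooth", "sound", "desktop"}

def build_jeos_image(required: set) -> set:
    """Return the package set for the appliance image: only the
    packages the application declares, validated against what the
    OS actually offers ('just enough OS')."""
    missing = required - FULL_OS_PACKAGES
    if missing:
        raise ValueError("unavailable packages: %s" % sorted(missing))
    return set(required)

appliance = build_jeos_image({"kernel", "libc", "python"})
# The image carries 3 packages instead of 9: less to deploy,
# patch, and potentially pay for.
```

The squeezed-out functionality is exactly the difference between the two sets, which is where the "just enough of a price" pressure comes from.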
Server Software: Licensing More for Variable Usage
· Until a few weeks ago, all major vendors such as Oracle and Microsoft licensed their server software for “peak” usage. In other words, customers paid for the maximum number of cores, CPUs, or servers their software would physically touch or run on. In a virtualized data center, however, capacity fluctuates more with demand, and certainly more fluidly with maintenance and high-availability activities.
· In this more fluid environment, there is intense customer pressure to change the licensing model. Just in the last few weeks, Microsoft eased the licensing restrictions on its server software to accommodate deployment to virtual servers. This is likely the first step, for all vendors, toward an environment where software is licensed according to its usage rather than the number of servers it is deployed on. That would amount to an effective price cut for customers.
Enterprise Applications: Lower TCO
· Virtualization doesn’t seem likely to upend the economics of enterprise applications. But increasingly sophisticated management tools, starting with deployment, availability, and recovery, should help drive total cost of ownership (TCO) down to less burdensome levels.
Management and Automation: The New Data Center OS
· One major new layer in the IT stack will emerge over the next few years. Today we call it systems management, but it will go beyond the monitoring and management of traditional physical resource pools such as storage, servers, and networks. Instead, we are likely to see a management platform that automates and orchestrates the delivery of resources from virtual pools of infrastructure to deliver business application services according to policies set in service-level agreements (SLAs).
· This layer is what Microsoft, IBM, and HP among others called utility computing early in the decade and what the public Web companies call cloud computing today. For it to work, however, it appears that existing software will have to be modified to be more deeply aware of the connection between how it’s designed and how it’s deployed. Like all major platform shifts, this will take time but create great value for its owner. Microsoft and VMware are both focused on this layer but maybe we’ll see a dark horse such as Cisco emerge.
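To make the "policies set in SLAs" idea concrete, here is a toy sketch of policy-driven allocation from a shared pool. The tier names, pool size, and service names are all hypothetical; real orchestration platforms negotiate far richer policies (latency, availability, placement) than this priority ordering.

```python
POOL_CORES = 64  # hypothetical shared virtual pool

def allocate(requests: list) -> dict:
    """Grant cores to services by SLA tier -- gold before silver
    before bronze -- from one shared pool, until capacity runs out.
    Each request is (service_name, tier, cores_wanted)."""
    priority = {"gold": 0, "silver": 1, "bronze": 2}
    grants, free = {}, POOL_CORES
    for name, tier, cores in sorted(requests, key=lambda r: priority[r[1]]):
        granted = min(cores, free)
        grants[name] = granted
        free -= granted
    return grants

grants = allocate([("batch", "bronze", 40),
                   ("web", "gold", 32),
                   ("reports", "silver", 24)])
# gold gets its full 32, silver its 24, bronze the remaining 8
```

The key contrast with traditional systems management is that no request names a physical server: the policy layer decides where capacity comes from, which is exactly the abstraction the utility/cloud computing vision depends on.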
Tags: Cisco, Cloud Computing, Data Center, Data Center Automation, EMC, Enterprise Software, Microsoft, NetApp, Network Virtualization, Server Virtualization, SLA, Storage Virtualization, Virtualization, VMware