December 04, 2008
Las Vegas isn’t exactly Silicon Valley East, but it does manage to keep IT journalists living here busy with countless conferences. This week, the Gartner Data Center Conference is in town.
Now, I respect Gartner as much as the next guy, but it has to stop scheduling keynotes at 8 a.m. I have no clue how these pent-up datacenter folks get up that early after spending their nights experiencing Sin City, especially considering I was unable to make it to the MGM Grand that early -- even to hear about cloud computing and disruptive datacenter technologies. (Shocking!) Luckily, I’ve got slides.
Let’s start with the cloud. In his presentation titled “The Future of Infrastructure and Operations: The Engine of Cloud Computing,” Gartner’s Thomas Bittman presents cloud computing as an external implementation of a real-time infrastructure (RTI). RTIs are distinguished by their capabilities around automation, service orientation, dynamic provisioning, SLAs, etc. Cloud service providers themselves already have made the move to RTI, and organizations wishing to leverage cloud computing to its maximum potential should be on their way to RTI, too. His timeframe for widespread organizational RTI adoption is 2010-2020. Of course, virtualization enables all of this, and advanced organizations already are in the midst of virtualizing their operations.
(For what it's worth, Bittman seems to have warmed to the cloud since last year.)
Speaking of virtualization, Bittman breaks it into three distinct versions thus far. Virtualization 1.0 was about consolidation and cost savings, Virtualization 2.0 is about agility and speed, and Virtualization 3.0 is about alternate sourcing, or cloud computing. Initially, the latter might consist “simply” of moving VMs between physical servers to meet demand. In more advanced stages, this would include the much-hyped hybrid model -- moving workloads between the datacenter and off-site cloud resources.
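To make that hybrid model a bit more concrete, here’s a rough sketch -- my own illustration, not anything out of Bittman’s slides -- of the kind of placement decision Virtualization 3.0 implies: keep a workload in the datacenter while there’s headroom, otherwise send it to an external cloud. The names and thresholds below are purely hypothetical.

# A hypothetical Virtualization 3.0-style placement decision: keep a workload
# in the datacenter while there is headroom, otherwise burst it to an external
# cloud. All names and thresholds here are illustrative.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cpu_cores: int
    memory_gb: int

@dataclass
class LocalCapacity:
    total_cores: int
    free_cores: int
    free_memory_gb: int

def place_workload(wl: Workload, local: LocalCapacity, burst_threshold: float = 0.9) -> str:
    """Return 'local' or 'cloud' for a single workload."""
    utilization = 1 - (local.free_cores / local.total_cores)
    fits_locally = (wl.cpu_cores <= local.free_cores
                    and wl.memory_gb <= local.free_memory_gb)
    if fits_locally and utilization < burst_threshold:
        return "local"   # stay in the datacenter
    return "cloud"       # hybrid model: move the workload off-site

# Example: a 16-core batch job against a nearly full cluster gets sent off-site.
job = Workload("nightly-batch", cpu_cores=16, memory_gb=64)
print(place_workload(job, LocalCapacity(total_cores=128, free_cores=8, free_memory_gb=32)))

That’s obviously a toy decision, but it’s the shape of the call that someone -- or something -- will have to make.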
In order to handle this “cloudsourcing,” Bittman believes smaller enterprises will turn to “service brokers” -- the evolution of today’s systems integrators and VARs -- who will orchestrate cloud resources and take responsibility for service levels. Larger enterprises, Bittman suggests, will form business- and IT-savvy “dynamic sourcing teams” that will make day-to-day decisions regarding sourcing.
According to Bittman’s presentation, RTI, or internal cloud, will further be enabled by meta operating systems (see VMware’s Virtual Datacenter Operating System) and advanced blade racks featuring integrated compute, storage and memory, aggregated CPU and memory, and virtualized I/O.
In a presentation I did attend, Gartner’s Donna Scott expanded on RTI. For starters, she said that, presently, there is no such thing as a datacenter-wide RTI; they only exist in pockets. However, she added, RTI is being spurred by external forces that include grid computing/HPC/cloud computing, shared services, SOA, IT pressures (cost and performance) and datacenter concerns, like modernization or constraints. The driving force? Scott’s June 2008 survey and a quick audience poll both found “agility” to be the leader, with “cost savings” at No. 2 and rising. (Let’s not forget that much has changed since June. Cost savings could be a chart-topper very soon.)
Another audience poll found “management process maturity” and “organization/culture” as the top two inhibitors to RTI -- a result that makes sense. These are the exact same issues that plagued grid computing in the mainstream enterprise, and the same I keep hearing cited around cloud computing -- two delivery models that fall under RTI’s penumbra. Other inhibitors include infrastructure maturity, unproven technology, lock-in concerns and ROI.
It’s not all about the future, though. Scott says that RTI is entrenched in high-performance computing environments, and progress is being made in the following areas: server virtualization, J2EE and .NET apps, database virtualization, disaster recovery, shared testing environments, server HA via loosely coupled clusters, repurposing of production nodes (e.g., from batch to OLTP), and dynamic capacity expansion for specific applications.
Scott also talked about the need for service governors, the brains of RTI. Service governors take action based on SLAs, policies, application demand, etc. However, she added, you’ll likely need several governors across different tiers, applications and environments. This definitely seems true as it relates to VMware, for example, whose VDC-OS initiative seems all-encompassing, but only supports VMware environments. That said, companies like IBM might be eliminating the need for multiple governors with cloud management solutions like Blue Cloud.
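For a sense of what a service governor actually does, here’s a minimal sketch of an SLA-driven control loop. To be clear, this is my own illustration -- the class, thresholds and polling logic are hypothetical, and don’t reflect how Cassatt, VMware DRS or Tivoli actually work.

# A hypothetical SLA-driven service governor: poll an observed service level,
# compare it to the SLA target, and scale the VM pool up or down.
# The class, thresholds and polling interval are illustrative, not any vendor's API.

import time

class ServiceGovernor:
    def __init__(self, sla_response_ms, min_vms=2, max_vms=20):
        self.sla_response_ms = sla_response_ms
        self.min_vms = min_vms
        self.max_vms = max_vms
        self.current_vms = min_vms

    def decide(self, observed_response_ms):
        """Return the new VM count according to a simple SLA policy."""
        if observed_response_ms > self.sla_response_ms and self.current_vms < self.max_vms:
            self.current_vms += 1   # SLA at risk: provision another VM
        elif observed_response_ms < 0.5 * self.sla_response_ms and self.current_vms > self.min_vms:
            self.current_vms -= 1   # plenty of headroom: reclaim a VM
        return self.current_vms

def govern(governor, get_response_time_ms, set_vm_count, interval_s=60):
    """The control loop: observe, decide, act, repeat."""
    while True:
        set_vm_count(governor.decide(get_response_time_ms()))
        time.sleep(interval_s)

In practice you’d need one of these loops per tier, application or environment -- which is exactly Scott’s point about needing several governors.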
In two areas of particular concern to me, Scott named as application service governor leaders Cassatt (with Active Response), DataSynapse (with FabricServer) and IBM (with WebSphere Virtual Enterprise). For infrastructure service governor leaders, she noted Cassatt again, as well as Univa UD (with Reliance), CA (with Data Center Automation Manager), VMware (with DRS), Novell (with ZENworks) and IBM (with Tivoli Intelligent Orchestrator and Tivoli Provisioning Manager).
As for how to build an RTI, Scott suggests doing it from the bottom up. First, she says, standardize and consolidate. Next, virtualize. Step three, standardize processes. Finally, automate policy-based actions, move toward service-oriented IT, etc. -- all those cutting-edge steps that might seem like pipe dreams right now.
I’ll end with Carl Claunch’s list of the “Top 10 Disruptive Technologies Affecting the Data Center.” No surprise, cloud computing is on the list -- No. 2, to be exact. (I hope no one is upset that I’m not defining cloud here for the billionth time, and because I didn’t actually see the keynote, I can’t offer any burning insights from Claunch himself.) He also includes on the list, at No. 3, computing fabrics, which are very similar to (if not the same as) the next-generation blade racks Bittman spoke of in his cloud keynote. This makes sense to me because, as I noted on several occasions last year, IBM said its first Blue Cloud solution would be in the form of an IBM BladeCenter. Whether this actually happens, we’ll have to wait and see.
In order from 1 to 10, Claunch’s list of disruptive technologies is:
Posted by Derrick Harris - December 04, 2008 @ 4:21 PM, Pacific Standard Time
Derrick Harris is the Editor of On-Demand Enterprise