December 03, 2007
If I learned one thing from my time at last week's Gartner Data Center conference, it's that the greater IT community still isn't entirely on board with the family of technologies this publication holds near and dear -- but it will jump on board soon enough, and virtualization will spur it along.
Kicking off the event with a keynote titled “The Future of Infrastructure and Operations,” analyst Thomas Bittman predicted that virtualization will be the most “impactful” infrastructure and operations technology through 2012, and that it will help to transition IT into what Gartner is calling the real-time infrastructure (RTI) era. Virtualization will play this facilitator role, he said, because aside from simply being used to consolidate servers, virtualization technology enables the layers of abstraction necessary for alternative delivery models like grid computing, SaaS and cloud computing, all of which will touch different parts of the RTI elephant.
Until we achieve the nirvana that is RTI, however, we should prepare to see virtualization impact everything -- from costs to rollout times to shared services -- said Bittman in a later session. For example, while Gartner research finds that only 6 percent of virtualization-friendly workloads are actually running in virtualized environments, that number will climb sharply as we approach Gartner's prediction of 4 million x86-based virtual machines by 2009. We also should expect to see, among other virtualization-based innovations, an increase in software appliances (essentially, pre-packaged applications complete with their own virtual operating systems) and in employee-owned PCs that contain locked-down virtual environments for job-related applications and tasks.
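To make the software-appliance idea a bit more concrete, here's a minimal sketch in Python of what such a self-describing bundle amounts to -- the application, its own stripped-down guest OS and the virtual hardware it expects. The field names here are entirely hypothetical; no vendor's actual packaging format is implied.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoftwareAppliance:
    """A pre-packaged application shipped with its own virtual OS.

    Purely illustrative -- these fields are hypothetical and do not
    follow any real appliance or virtualization standard.
    """
    name: str
    app_version: str
    guest_os: str      # stripped-down OS baked into the image
    disk_image: str    # the appliance's virtual disk
    vcpus: int = 1
    memory_mb: int = 512
    open_ports: List[int] = field(default_factory=list)

# A vendor could ship its application as one deployable unit:
crm_appliance = SoftwareAppliance(
    name="crm-suite",
    app_version="2.1",
    guest_os="minimal-linux",
    disk_image="crm-suite-2.1.vmdk",
    vcpus=2,
    memory_mb=1024,
    open_ports=[80, 443],
)
print(crm_appliance)
```

The appeal is that the IT shop deploys (or retires) the whole unit at once, rather than installing an application onto a general-purpose OS it then has to patch and maintain.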
Bittman also predicted that virtualization will continue to have a disruptive effect on software licensing as workloads begin to move dynamically across both physical and virtual machines. Illustrating the current state of licensing as it relates to virtualization, an impromptu survey of attendees in the packed MGM Grand meeting room showed 28 percent negotiating custom licensing terms for their virtualized environments, 24 percent saying licensing issues have limited their virtualization efforts, and 21 percent saying licensing concerns around virtualization are affecting their software choices.
Gartner analyst Carl Claunch also sang the praises of virtualization in his keynote highlighting the “Top Ten Disruptive Technologies Affecting the Data Center.” Calling virtualization a “Swiss Army knife tool” for IT, Claunch said it is a “real disservice” to consider virtualization as a technology only to be used for consolidation or slicing up servers. The layers of abstraction that virtualization enables, he commented, are great for eliminating the “dense spiderwebs” of interdependencies in the datacenter, and the ability of virtualization to make hardware generic leads to on-the-fly movement of work. In addition, virtualization can accomplish the opposite of slicing up servers, instead allowing users to aggregate resources to resemble one big SMP machine.
This latter use of virtualization is not unlike the concept of server fabrics, which Claunch also cited as a disruptive technology. A step beyond blades, server fabrics are most realistically envisioned as entirely virtualized blade racks that users can partition into systems of whatever size or type they need in terms of memory, CPUs, I/O and so on. Not surprisingly, I/O virtualization plays a big role in helping server fabrics realize their full potential.
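Strip away the hardware and the fabric idea reduces to bookkeeping: a pooled rack of CPUs, memory and I/O bandwidth, carved into right-sized logical systems on demand. Here's a toy sketch of that logic in Python -- hypothetical names throughout, and a real fabric manager obviously must also handle I/O virtualization, placement and failover:

```python
class BladePool:
    """Toy model of a virtualized blade rack: one pool of CPUs,
    memory and I/O bandwidth, partitioned into logical systems.
    Illustrative only -- not any real fabric manager's API.
    """

    def __init__(self, cpus, memory_gb, io_gbps):
        self.free = {"cpus": cpus, "memory_gb": memory_gb, "io_gbps": io_gbps}
        self.partitions = {}

    def carve(self, name, cpus, memory_gb, io_gbps):
        """Allocate a logical system of the requested shape."""
        want = {"cpus": cpus, "memory_gb": memory_gb, "io_gbps": io_gbps}
        if any(self.free[k] < v for k, v in want.items()):
            raise RuntimeError(f"pool cannot satisfy request for {name}")
        for k, v in want.items():
            self.free[k] -= v
        self.partitions[name] = want
        return want

    def release(self, name):
        """Return a partition's resources to the pool."""
        for k, v in self.partitions.pop(name).items():
            self.free[k] += v

rack = BladePool(cpus=64, memory_gb=512, io_gbps=40)
rack.carve("db-tier", cpus=16, memory_gb=256, io_gbps=10)
rack.carve("web-tier", cpus=8, memory_gb=32, io_gbps=4)
print(rack.free)  # what's left for the next workload
```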
Speaking of the aforementioned cloud computing (because how could a week go by without doing so), Bittman also shared his thoughts on the technology, its current state and how it fits into the RTI concept. RTI, he explained, is all about creating a service-oriented infrastructure where IT is shared across users, divisions, applications and beyond. The ultimate realization of a cloud of computing resources -- where service requirements go in and services come out, all transparently -- fits right into what RTI is looking to accomplish, but for now, believes Bittman, cloud computing is pretty much a way to achieve scale. There is a lot of smoke, he said, but little fire, as current (and recently announced) offerings simply lack the service-level management and policy enforcement capabilities needed for a mature, enterprise-ready solution. However, he said, cloud computing certainly is “pointing toward the future,” and it is a strategy to be aware of in the years to come.
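Bittman's “requirements in, services out” framing suggests a simple contract, which this hypothetical Python sketch tries to capture -- with a crude service-level check standing in for the policy enforcement he finds missing from today's offerings:

```python
def provision(requirements, providers):
    """Match a service request against available providers.

    The fields here are made up purely to illustrate the RTI idea
    of requirements going in and a service coming out.
    """
    for p in providers:
        meets_sla = p["max_latency_ms"] <= requirements["max_latency_ms"]
        has_capacity = p["free_capacity"] >= requirements["capacity"]
        if meets_sla and has_capacity:
            p["free_capacity"] -= requirements["capacity"]
            return {"provider": p["name"], "granted": requirements}
    raise RuntimeError("no provider can meet the stated service levels")

providers = [
    {"name": "internal-grid", "max_latency_ms": 20, "free_capacity": 100},
    {"name": "external-cloud", "max_latency_ms": 80, "free_capacity": 10000},
]
# The requester never names a machine; it states needs and gets a service.
print(provision({"max_latency_ms": 100, "capacity": 500}, providers))
```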
Personally, I'm inclined to agree with Bittman about the current state of cloud computing when it comes to doing it on a grand scale -- a la Google -- but I believe it will be a reality in the datacenter sooner than some might expect. IBM, for its part, understands that baby steps are necessary, and is therefore planning for its first Blue Cloud offering to take the form of an IBM BladeCenter -- far from a Web-scale collection of servers. What's more, solutions like Appistry's Enterprise Application Fabric (or GigaSpaces XAP or DataSynapse FabricServer, to name a few) already do a good job of forming “clouds” and allowing for high availability, and they're improving every day when it comes to policy-based automation and SLAs. Obviously, there is some work to be done around education and open standards before niche solutions like these become widespread, but I believe the foundation of cloud computing already has been laid.
For those of you interested in scale, I would suggest reading our interview with ScaleOut Software founder and CEO William Bain. Distributed caching solutions are gaining in popularity, but not everybody is aware of all the players in the space; most of us tend to focus on better-known vendors like GigaSpaces, Tangosol (now part of Oracle) and GemStone. However, other companies are doing their own things to deliver massive scale and low latency, and Bain does a nice job of differentiating ScaleOut from the rest.
I also should point to our special section, featuring two articles relating to the Open Grid Forum's activities at last month's SC07 conference. OGF is doing some great work toward achieving interoperability among both enterprise and production grids, and it showcased both initiatives in Reno. As I mentioned when the original announcements were made during the show, the grid market has come a long way now that major middleware and job-scheduling solutions are able to interoperate.
As for the rest of this week's issue, make sure to check out the following items, as well as any others that might be up your alley: “HealthAlliance Hospital Implements IBM Grid Archiving”; “ObjectWave Launches Data Caching Developer Framework”; “SAS, Sun Launch Datacenter BI Initiative”; “DMTF Creates Open Standard for Virtualization Management”; “Fujitsu, Citrix Cut Cost of Datacenter Scalability”; and “NCSA to Host Workshop on Datacenter Design.”
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at email@example.com.
Posted by Derrick Harris - December 03, 2007 @ 11:26 AM, Pacific Standard Time
Derrick Harris is the Editor of On-Demand Enterprise