September 17, 2008
It took a perfectly planned schedule to maximize my time moving between VMworld at the Venetian and the Hosting Transformation Summit at the Mirage, so, needless to say, I was disappointed when I arrived at the latter to find it running late. A tough decision had to be made, and I opted to head back to the Venetian and get my fill of virtualization. The result: no insights from the hosting world today, but even more from VMworld. (Not to fret, though: I spoke with Tier1 Research cloud master Antonio Piraino for about an hour yesterday about the intersection of hosting, virtualization and cloud computing, and I'll share some of his thoughts at a later date.)
The morning began with a kickoff keynote by VMware President and CEO Paul Maritz. I didn't attend, but I understand he reiterated some of the big news the company announced yesterday, including VMware's Virtual Datacenter Operating System. As I noted yesterday, and as I firmly believe, this is a very forward-thinking strategy, and it could be poised for success if it works as planned.
However, a few red flags went up on that front as I listened to Maritz answer questions in a Q&A session following his speech. One attendee was concerned about Virtual Center being a single point of failure, particularly in light of VMware's bug problem last month. Maritz acknowledged (as he did at a later press Q&A) that this definitely is something to be concerned about, but he reiterated that VMware prides itself on building reliable software, and vowed that such a problem will not occur again. The problem is, we're not far enough removed from the bug issue to just brush it aside as an anomaly, and in a cloud datacenter running only VMware machines, the results of a similar oversight could be disastrous.
Maritz also noted that a federated Virtual Center for managing multiple datacenters won't be fully available until 2010. Granted, 2010 isn't too far away (well, January 2010, at least), but that is a long time in IT terms. And if organizations can't even manage their VMware infrastructures across multiple datacenters until 2010, when will they be able to -- or feel confident enough to -- run their geographically distributed datacenters as a cloud with VDC-OS?
But these concerns are not to be construed as indications that I am a VMware cloud hater, so to speak. Especially if the company opens up Virtual Center to work with other hypervisors, VMware's cloud vision has a lot of promise. In a late-day panel on vCloud featuring representatives from Rackspace, AT&T and T-Systems, VMware's Deepak Puri touted vCloud's ability to transport security and business policies alongside their associated applications as workloads move between the datacenter and the cloud. T-Systems' Gregory Smith added that this capability makes short-term hosting deals much easier to pull off. If, for example, a company is building out a big SAP infrastructure and needs to house certain things externally during the process, a solution like vCloud will ease the movement of data back and forth because the virtual machines will be the same on both ends, and policies won't need to be altered in the new environment.
Oh, and Cisco's Ed Bugnion announced that Cisco and VMware have teamed up to bring virtual machine awareness to the network layer. The results: the Cisco Nexus 1000V switch and VN-Link. If, as Bugnion says, the virtual machine really is the new datacenter building block, a network that inherently understands the VMs' portability and dynamism must be a good thing. Looking forward, Bugnion said the goal is to move from cluster-scale virtualization to datacenter-scale virtualization to Internet-scale virtualization, which will require this new breed of network components.
We'll have comments from VMware on its new cloud initiative by the end of the week, so make sure to stay tuned. What a week it already has been -- and it only promises to get more interesting.
Posted by Derrick Harris - September 16, 2008 @ 11:55 PM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise