August 06, 2007
With the LinuxWorld Conference & Expo and Next Generation Data Center taking place this week in San Francisco, we thought it would be a nice service to focus on the idea of the next-generation datacenter in this week’s issue. With that in mind, you will find insights from three distinct conference presenters, each of whom looks at the issue from a different perspective: Werner Vogels, Amazon.com CTO; Mark Linesch, Open Grid Forum president; and Bob Lozano, Appistry founder and chief strategist.
Vogels really needs no introduction, and his comments on Amazon's Web services offerings, as well as the datacenter of the future, should be of interest to anyone concerned with the ability to scale – especially at a high level. As for Lozano, well, Appistry has one of the more distinctive distributed datacenter solutions on the market right now, and Lozano makes clear that Appistry's Enterprise Application Fabric was developed with scalability in mind. He also previews his two presentations at NGDC, both of which should be eye-opening to any attendees unfamiliar with this market. For my money, though, the real star of our NGDC special edition is OGF's Linesch, who speaks candidly about what his standards body is doing to keep up with the ever-evolving world of distributed IT. Topics like virtualization and SOA might seem like a far cry from the grid interoperability work the organization was doing just a few years ago (under the Global Grid Forum moniker), but Linesch assured me that the OGF has its ear to the ground and is doing the work necessary to stay relevant.
As for my own thoughts on this week's conference (where I can be found Tuesday and Wednesday, so come find me if you want to chat), it looks like a good starting point for highlighting this topic, and hopefully we will see an expanded schedule of presentations at future events, perhaps focusing more heavily on distributed technologies. I realize I am biased because I cover this area for a living, but come on: is there a set of technologies better equipped to handle scale, availability, processing, data management, etc., than those that have emerged from the grid computing space? And aren't these the things around which the next-generation datacenter will revolve? Even virtualization, which does play a rather large role in the conference program, seems to be represented only in terms of server virtualization. This is unfortunate because, while server virtualization can maximize resources and help with your company's "green" initiative (I know, it's an abbreviated list), there is a lot to be said for the benefits of application virtualization, which exists almost solely to ensure your mission-critical applications always have the resources they need, whenever they need them. Don't even get me started on distributed databases, in-memory caching, utility computing, and good, old-fashioned grid computing – all of which are for real today, but still need some championing to find their way into the next generation of mainstream datacenters.
Of course, maybe any blame here doesn't lie with the conference organizers, but rather with the vendors themselves. Aside from Appistry and the OGF (which is hosting a Grid Pavilion), the combined exhibition floor plays host to only two other companies that have built their businesses around high-availability distributed software: Evergrid and Cassatt. In contrast, last year's GridWorld event had representation on some level from a slew of vendors who might actually be a better fit for NGDC. Two prime examples are United Devices and DataSynapse, both of which have abandoned the term "grid" to some extent to focus on application virtualization and other datacenter-driven solutions. Companies like GigaSpaces, GemStone Systems, Egenera and Oracle (especially since acquiring Tangosol's Coherence technology), as well as more traditional grid vendors like Platform, Digipede, and even big boys IBM, Sun Microsystems and HP, might also be wise to spread word of their distributed solutions at future events like this. By including a "Virtualization, Grid and HPC" track, the folks at IDG clearly realize the importance of these types of solutions in tomorrow's datacenters, so now maybe it's up to the leaders and innovators in their respective spaces to make their collective presence and products known.
To conclude the NGDC spiel, I just want to remind you to check back with GRIDtoday in the days to come for all the breaking news from LinuxWorld and NGDC, and be sure to take a look at next week's issue for all of this news, as well as some commentary, all nicely packaged in one place.
As for the rest of this week’s news, some highlights include: GigaSpaces seeking partners; Evergrid eyeing the enterprise datacenter; Red Lambda bringing grid to network security; Storage Resource Broker targeting HIPAA compliance; and Sun continuing its profitability.
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at email@example.com.
Posted by Derrick Harris - August 06, 2007 @ 11:16 AM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that provides end-users with the ability to aggregate heterogeneous resources to tackle large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational loads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call 'Climate in a Box,' a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types, including those with both CPU and GPU cores.
May 10, 2013
Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.