November 19, 2007
For anyone (myself included) tasked with covering cutting-edge enterprise IT news and announcements, there has rarely been a busier time than last week, which included Supercomputing 07 in Reno, Nev.; Oracle OpenWorld in San Francisco; and the Microsoft TechEd IT Forum in Barcelona, Spain. Although I usually like to comment on the big news of the week, there is no way I can even begin to touch on everything announced within the last seven days, so I'm not going to try. We will, however, do our best to bring you more in-depth coverage of many of these announcements in the coming weeks and into the new year.
Nevertheless, I was able to make it to Reno for a day of SC07, and I sat in on a couple of very interesting sessions. The first was a Birds-of-a-Feather session called “Supercomputers or Grids: That is the Question!” which was chaired by Wolfgang Gentzsch (D-Grid) and Dieter Kranzlmueller (Johannes Kepler University Linz), and featured panellists Francine Berman (San Diego Supercomputer Center), Erwin Laure (CERN and EGEE), Satoshi Matsuoka (Tokyo Institute of Technology and NAREGI) and Michael Resch (High Performance Computing Center Stuttgart and PRACE). Despite its either/or title, though, the session focused on using supercomputers and grids together, although the panellists each had slightly different takes on how the two architectures complement one another.
Berman, for example, said the focus should be on finding “the right tool for the right job,” and she presented types of applications that are better suited to one architecture than the other. In her mind, organizations looking to get important work done need not make a binding decision to use one platform over the other when the reality is that both can -- and should -- have a place in an organization’s HPC plans. EGEE’s Laure, on the other hand, believes supercomputers and grids are “two fundamentally different things living in the same ecosystem.” To support this view, he argued that while supercomputers exist to solve the most demanding computing problems, the purpose of grids is the federation of computation and data, which makes them an effective tool for collaborative research and allows for dynamic reconfiguration. The next step, he added, is to federate supercomputers and grids so that researchers have seamless access to the features of both. Resch echoed this sentiment -- to a degree -- in his intentionally provocative presentation, concluding that the actual model for the co-existence of the two platforms is “supercomputers on grids.” “Grid is the ecosystem,” said Resch, analogizing supercomputers to power plants and the grid to power grids.
In my opinion, though, the star of the show was Matsuoka, who presented his vision of grids shedding their early goal of making PCs pretend to be supercomputers and focusing instead on making supercomputers act like Internet datacenters (IDCs). According to Matsuoka, the ultimate business model for large-scale grids might well be aggregating HPC resources and granting virtual access to them to end-users, much the same way Web standards and protocols make access to IDC resources transparent. In such a model, he said, highly managed supercomputers would offer better service quality than, say, a grid of PCs, and offering access to backend resources that exceed what you can do on your laptop is added value that will keep people coming back. It sounds to me like a beefed-up Network.com, and not entirely unlike what TeraGrid is doing with its Science Gateways, but it is nonetheless a grand idea that shouldn’t be too difficult to make happen should the right people wish it so.
I also got a chance to attend a “CTO Roundtable” featuring Nancy Stewart, senior vice president and chief technology officer in the information systems division for Wal-Mart Stores Inc; Kevin Humphries, senior vice president of technology systems for FedEx Corporate Services; Reza Sadeghi, CTO of MSC Software; and Anna Ewing, executive vice president of operations and technology and chief information officer of The Nasdaq Stock Market Inc. As you might imagine, there is no shortage of valuable insights when the IT masterminds of some of the world’s largest corporations share the stage, but I want to share just a few key, if not obvious, observations.
First, and this is the obvious one, Wal-Mart is huge, gigantic, ginormous, and any other adjective indicating sheer size. Stewart made this crystal clear when discussing the company’s most pressing data problem -- its 400-billion-row table, which ultimately will top a trillion rows. Managing this data and the HPC environment necessary to process it is no small undertaking, nor is it a job for anyone but Wal-Mart. According to Stewart, the retail giant doesn’t have SLAs with any of the ISVs with whom it does business because no vendor could afford to compensate it for an outage of even an hour (on the day after Thanksgiving, for example, Wal-Mart expects to be doing business in the neighborhood of $2 billion per hour). For this reason, as well as to ensure reliability, serviceability and the ability to make dynamic changes, Wal-Mart builds about 80 percent of its software in-house.
Stewart also gave the audience a look into Wal-Mart’s overall environmental policies and efforts, which range from IT concerns like using virtualization to reduce power usage, to mandating smaller packages from product manufacturers. The latter, for what it’s worth, leads to less resource consumption across the board, from the actual materials used in production to the amount of gas used by delivery vehicles in transporting the same number of units.
Finally, and speaking of delivery, FedEx’s Humphries used a good portion of his energy bemoaning the lack of talent available to deal with his company’s increasingly fabric-like IT infrastructure. More and more, he said, and thanks to grid technologies, HPC is becoming embedded in the general IT environment of large enterprises, and the islands of skills that once sufficed are no longer cutting it. Of course, anyone in the grid world has heard this all before, as the elimination of application silos inherently presents its own problems in terms of realigning and retraining IT staff to handle a new platform. This raises the question of why FedEx -- and other companies experiencing the same issue -- don’t invest in educating university students in the technologies that make their businesses run. Given the proprietary nature of the corporate world, I don’t expect them to offer up parts of their software as Google and, most recently, Yahoo have done, but companies like FedEx could throw a little money at the problem and make sure universities have the resources to teach students how to build, maintain and manage large-scale, distributed corporate infrastructures.
As for the rest of this week’s issue, make sure to check out the features that originally ran in HPCwire’s live coverage of SC07, and please note that “cloud computing” is officially the new buzzword and buzz technology, with Yahoo following Google in taking it to universities (more on this next week), and IBM now offering its “Blue Cloud” solutions. Other items that definitely are worth checking out include: “OGF Spec Makes Grids Interoperable”; “Azul, GemStone Ally on Extreme Transaction Processing”; “Microsoft Announces New System Center Offerings”; “Majitek Licensing GridSystem for Free to Technical Community”; “Microsoft Supports SOA with Windows HPC Server 2008”; and “HP Advances Flexibility of Blades Across the Datacenter.” Oh, and did I mention that Oracle, Microsoft and Sun all announced new virtualization platforms? Something tells me we’ll be hearing more about this …
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at firstname.lastname@example.org.
Posted by Derrick Harris - November 19, 2007 @ 11:10 AM, Pacific Standard Time
Derrick Harris is the Editor of On-Demand Enterprise