November 19, 2007
For anyone (like me) tasked with covering cutting-edge enterprise IT news and announcements, there has rarely been a busier time than last week, which included Supercomputing 07 (SC07) in Reno, Nev.; Oracle OpenWorld in San Francisco; and the Microsoft TechEd IT Forum in Barcelona, Spain. And although I usually like to comment on the big news of the week, there is no way I can even begin to touch upon everything that was announced within the last seven days, so I’m not even going to try. However, we will do our best to bring you more in-depth coverage of many of these announcements in the coming weeks and into the new year.
Nevertheless, I was able to make it to Reno for a day of SC07, and I sat in on a couple of very interesting sessions. The first was a Birds-of-a-Feather session called “Supercomputers or Grids: That is the Question!” which was chaired by Wolfgang Gentzsch (D-Grid) and Dieter Kranzlmueller (Johannes Kepler University Linz), and featured panelists Francine Berman (San Diego Supercomputer Center), Erwin Laure (CERN and EGEE), Satoshi Matsuoka (Tokyo Institute of Technology and NAREGI) and Michael Resch (High Performance Computing Center Stuttgart and PRACE). Despite its either/or title, though, this session focused on the combined use of supercomputers and grids, although the panelists all had slightly different takes on how the two architectures work together.
Berman, for example, said the focus should be on finding “the right tool for the right job,” and she presented types of applications that are better suited to one architecture or the other. In her mind, organizations looking to get important work done need not make a binding decision to use one platform over the other when the reality is that both can -- and should -- have a place in an organization’s HPC plans. EGEE’s Laure, on the other hand, believes that supercomputers and grids are “two fundamentally different things living in the same ecosystem.” To support this view, he argued that while supercomputers exist to solve the most demanding computing problems, the purpose of grids is the federation of computation and data, which makes them an effective tool for collaborative research and allows for dynamic reconfiguration. The next step, he added, is to federate supercomputers and grids so that researchers have seamless access to the features of both. Resch echoed -- to a degree -- this sentiment in his intentionally provocative presentation, concluding that the actual model for the co-existence of the two platforms is “supercomputers on grids.” “Grid is the ecosystem,” said Resch, likening supercomputers to power plants and the grid to the power grid.
In my opinion, though, the star of the show was Matsuoka, who presented his vision of grids shedding their early goal of making PCs pretend to be supercomputers and focusing instead on making supercomputers act like Internet datacenters (IDCs). According to Matsuoka, the ultimate business model for large-scale grids might well be aggregating HPC resources and granting end-users virtual access to them, much the same way Web standards and protocols provide transparent access to IDC resources. In such a model, he said, highly managed supercomputers would offer better service quality than, say, a grid of PCs, and offering access to backend resources that exceed what you can do on your laptop is the added value that will keep people coming back. It sounds to me like a beefed-up Network.com, and not entirely unlike what TeraGrid is doing with its Science Gateways, but it is nonetheless a grand idea that shouldn’t be too difficult to make happen should the right people wish it so.
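To make that gateway notion a bit more concrete, here is a minimal sketch (my own illustration, not anything Matsuoka or TeraGrid has published) of what Web-style “virtual access” to an aggregated HPC backend could look like from a user’s desk. The gateway URL, endpoints and job fields are hypothetical assumptions; the point is simply that ordinary Web protocols sit between the user and whatever supercomputing resources live behind the curtain.

```python
# Hypothetical sketch: submitting work to an aggregated HPC backend through a
# Web-style gateway. The URL, endpoints and JSON fields below are invented for
# illustration; no real system's API is being described.
import json
import urllib.request

GATEWAY = "https://hpc-gateway.example.org/api"  # assumed gateway address


def submit_job(executable, args, nodes):
    """POST a job description to the gateway and return the job ID it assigns."""
    payload = json.dumps({"executable": executable, "args": args, "nodes": nodes}).encode("utf-8")
    req = urllib.request.Request(
        GATEWAY + "/jobs",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]


def job_status(job_id):
    """GET the current state of a previously submitted job."""
    with urllib.request.urlopen(GATEWAY + "/jobs/" + job_id) as resp:
        return json.load(resp)["state"]


if __name__ == "__main__":
    job_id = submit_job("climate_model", ["--years", "50"], nodes=64)
    print("submitted", job_id, "-- current state:", job_status(job_id))
```

The user never knows -- or cares -- whether the cycles come from a PC farm or a highly managed supercomputer, which is exactly the IDC-style transparency Matsuoka was describing.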
I also got a chance to attend a “CTO Roundtable” featuring Nancy Stewart, senior vice president and chief technology officer in the information systems division of Wal-Mart Stores Inc.; Kevin Humphries, senior vice president of technology systems for FedEx Corporate Services; Reza Sadeghi, CTO of MSC Software; and Anna Ewing, executive vice president of operations and technology and chief information officer of The Nasdaq Stock Market Inc. As you might imagine, there is no shortage of valuable insights when the IT masterminds of some of the world’s largest corporations share a stage, but I want to pass along just a few key, if sometimes obvious, observations.
First, and this is the obvious one: Wal-Mart is huge, gigantic, ginormous, and any other adjective indicating sheer size. Stewart made this crystal clear when discussing the company’s most-pressing data problem -- its 400-billion-row table, which ultimately will top a trillion rows. Managing this data and the HPC environment necessary to process it is no small undertaking, nor is it a job for anyone but Wal-Mart. According to Stewart, the retail giant doesn’t have SLAs with any of the ISVs with whom it does business because no ISV could afford to compensate it for an outage of even an hour (on the day after Thanksgiving, for example, Wal-Mart expects to be doing business in the neighborhood of $2 billion per hour). For this reason, as well as to ensure reliability, serviceability and the ability to make dynamic changes, Wal-Mart builds about 80 percent of its software in-house.
Stewart also gave the audience a look into Wal-Mart’s overall environmental policies and efforts, which range from IT concerns like using virtualization to reduce power usage, to mandating smaller packages from product manufacturers. The latter, for what it’s worth, leads to less resource consumption across the board, from the actual materials used in production to the amount of gas used by delivery vehicles in transporting the same number of units.
Finally, and speaking of delivery, FedEx’s Humphries spent a good portion of his energy bemoaning the lack of talent available to deal with his company’s increasingly fabric-like IT infrastructure. More and more, he said, and thanks to grid technologies, HPC is becoming embedded in the general IT environment of large enterprises, and the islands of skills that once sufficed are no longer cutting it. Of course, anyone in the grid world has heard this all before, as the elimination of application silos inherently presents its own problems in terms of realigning and retraining IT staff to handle a new platform. The question this prompts me to ask is why FedEx -- and any other company experiencing the same issue -- doesn’t invest in educating university students in the technologies that make its business run. Given the proprietary nature of the corporate world, I don’t expect these companies to offer up parts of their software the way Google and, most recently, Yahoo have, but companies like FedEx could throw a little money at the problem and make sure universities have the resources to teach students how to build, maintain and manage large-scale, distributed corporate infrastructures.
As for the rest of this week’s issue, make sure to check out the features that originally ran in HPCwire’s live coverage of SC07, and please note that “cloud computing” is officially the new buzzword and buzz technology, with Yahoo following Google in taking it to universities (more on this next week), and IBM now offering its “Blue Cloud” solutions. Other items that definitely are worth checking out include: “OGF Spec Makes Grids Interoperable”; “Azul, GemStone Ally on Extreme Transaction Processing”; “Microsoft Announces New System Center Offerings”; “Majitek Licensing GridSystem for Free to Technical Community”; “Microsoft Supports SOA with Windows HPC Server 2008”; and “HP Advances Flexibility of Blades Across the Datacenter.” Oh, and did I mention that Oracle, Microsoft and Sun all announced new virtualization platforms? Something tells me we’ll be hearing more about this …
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at firstname.lastname@example.org.
Posted by Derrick Harris - November 19, 2007 @ 11:10 AM, Pacific Standard Time
Derrick Harris is the Editor of On-Demand Enterprise