November 09, 2012
SALT LAKE CITY, Nov. 7 — Slurm is one of the most powerful and scalable workload managers in HPC, yet it is probably the least well known. This low profile is about to change.
The open source product has gained momentum, with many national laboratories around the world and dozens of universities relying on Slurm — yet few people outside these organizations know it. Now a number of Slurm backers have come together to raise the project's profile, beginning with a Slurm booth at SC12 in Salt Lake City. The initial sponsors include Slurm users CEA (French Alternative Energies and Atomic Energy Commission), CSCS (Swiss National Supercomputing Centre), and Lawrence Livermore National Laboratory (LLNL). In addition, technology providers Bright Computing, Bull Information Systems, Greenplum/EMC, Intel, NVIDIA and SchedMD are participating.
Mark Seager, Intel CTO for the HPC Ecosystem comments, "Continued innovation in the HPC ecosystem is a sign of the health, growth, and importance of the HPC market segment. It is important for HPC innovation to remain vibrant, and the Slurm activity is an exemplar."
Slurm is an open source workload manager originally developed to schedule compute jobs at LLNL. Ten years later, it runs on about 30% of TOP500 systems, possibly more than any other workload manager.
Started by just a few programmers at Livermore, the project now counts more than 100 developers from dozens of organizations around the world who have contributed to the code. Together they are adding capabilities at high speed, working on a six-month release cycle. The primary developers of Slurm, Moe Jette and Danny Auble, now run SchedMD, the company that oversees the code base, leads its further development, and offers commercial Level 3 support.
"We built Slurm to efficiently schedule resources for the biggest systems, and have proven this scalability to at least an order of magnitude higher than any currently available system," said Moe Jette, CTO of SchedMD. "It's now one of the most widely used workload managers in the TOP500, including on the Sequoia supercomputer at LLNL. As we move to exascale, Slurm is the workload manager best positioned to schedule jobs at that scale."
Matthijs van Leeuwen, CEO of Bright Computing, also sees the rising importance of Slurm Workload Manager. "We are seeing a strong increase in customer demand for Slurm. Although we have integration with all of the major workload managers as pre-configured options for Bright Cluster Manager, and are partners with most of their vendors, we are now including Slurm as our default workload manager. Further, we are about to launch commercial support for Slurm, to provide a one-stop solution for our customers. This initiative to support the growth of Slurm makes a lot of sense to us. It aligns with our belief that those who manage HPC clusters benefit from the ability to choose the workload manager that best fits their needs."
Slurm presentations scheduled for SC12 Booth #3444 include:
Introduction to Slurm Workload Manager and Roadmap (Moe Jette and Danny Auble, SchedMD)
Bull's Slurm Roadmap (Eric Monchalin, Bull)
MapReduce support in Slurm (Ralph Castain, EMC/Greenplum)
Using Slurm for data aware scheduling to the cloud (Martijn de Vries, Bright Computing)
Slurm on the Sequoia supercomputer (Don Lipari, LLNL)
Slurm at Rensselaer Polytechnic Institute (Tim Wickberg, RPI)
The Slurm BoF meeting is scheduled for 12:15 PM on 15 November, in room 155-A at the Salt Palace Convention Center in Salt Lake City.