September 21, 2012
Solutions Architect Robert Stober will demonstrate how the tightly coupled Bright Cluster Manager and PBS Professional seamlessly extend HPC clusters into the cloud
SAN JOSE, Calif., Sept. 21 — Bright Computing, the leading independent provider of cluster management software, announced it will present its data-aware cloud bursting solution at the PBS Works User Group meeting in San Jose, California. Robert Stober, Bright Computing senior solutions architect, will show how to extend on-premises clusters into the cloud and manage these resources as part of the local cluster. He will also demonstrate Bright's unique data-aware scheduling capability for the cloud, which eliminates the need to manage data movement manually. Together, Bright Cluster Manager and the PBS Professional workload manager deliver a seamless solution for provisioning, scheduling, monitoring, and managing the extended cluster and its data from a single intuitive GUI or cluster management shell, on premises or in the cloud.
"Bright Cluster Manager is delivered with PBS Professional as a pre-configured, sys admin-selectable option," said Stober. "Bright's integration of PBS Professional, and other workload managers, helps customers get the most productivity from clusters, whether on premise or in the cloud."
When the system administrator selects PBS Professional as the workload manager of choice, Bright automatically installs and configures the software. Bright continually updates configurations throughout the life of the cluster. The system administrator can then take full advantage of the workload manager's capabilities using the Bright GUI or cluster management shell, without the need to learn additional commands or procedures. Further, PBS Professional can be managed via Bright's SOAP API. Both Bright Cluster Manager and PBS Professional can be extended into the cloud to provide additional capacity, with just a few mouse clicks within the Bright GUI.
The tight integration of Bright's health-checking capabilities with PBS Professional also provides protection against "Black Hole Node Syndrome," in which otherwise undetected node issues cause jobs to crash, sometimes flushing the entire job queue. Working closely with the workload manager, Bright's pre-job health checks detect problems before nodes actually fail and sideline these nodes before the job is started.
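The pre-job health-check pattern described above can be sketched in general terms: run a battery of checks on each candidate node, and sideline any node that fails before a job lands on it. The specific checks, thresholds, and node representation below are illustrative assumptions for this sketch, not Bright Cluster Manager's or PBS Professional's actual implementation or API:

```python
# Illustrative sketch of a pre-job node health check. A node that fails
# any check is sidelined (excluded from scheduling) rather than being
# allowed to silently crash jobs as a "black hole node". The check
# functions and node fields here are hypothetical examples.

def check_disk_free(node):
    # Hypothetical check: require at least 10 GB of free scratch space.
    return node["disk_free_gb"] >= 10

def check_memory(node):
    # Hypothetical check: node must report its full expected memory
    # (a failed DIMM often shows up as reduced total memory).
    return node["mem_total_gb"] >= node["mem_expected_gb"]

HEALTH_CHECKS = [check_disk_free, check_memory]

def prejob_screen(nodes):
    """Return (healthy, sidelined) lists of node names."""
    healthy, sidelined = [], []
    for node in nodes:
        if all(check(node) for check in HEALTH_CHECKS):
            healthy.append(node["name"])
        else:
            # Sideline the node before the job starts, so the workload
            # manager schedules the job onto healthy nodes only.
            sidelined.append(node["name"])
    return healthy, sidelined
```

In a real deployment the sidelining step would mark the node offline in the workload manager (for PBS Professional, an administrator can do this with the `pbsnodes` command) so the scheduler skips it until the fault is resolved.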
As another means of maintaining high throughput, Bright Cluster Manager's automatic failover capability also manages the seamless failover of the workload manager, preventing head node crashes from interrupting productivity.
Robert Stober's presentation is at 10:00 a.m. on Tuesday, October 2. Also presenting at the conference are Jim Glidewell of Boeing, a joint Bright-PBS Professional user, and experts from Idaho National Laboratory, Nissan, Clemson University, FNMOC, NASA, and the host, Altair Engineering.
About the PBS Works User Group
PBSUG is a two-day event for PBS Works customers and partners, bringing together HPC thought leaders who focus on unique challenges facing businesses today. The event takes place October 1-2 in San Jose, California. For more information and to register, visit the event site.
Source: Bright Computing