March 10, 2008
PROVO, Utah, March 5 -- Cluster Resources Inc., the developers of the Moab family of products, today announced the first annual Moab•Con, a four-day event where industry leaders and experts will join Cluster Resources’ developers for in-depth, interactive presentations and discussions on maximizing the performance of compute infrastructures.
“The purpose of the conference is to bring Moab users together to share ideas with each other and with our engineers -- to learn from the engineers and to discuss current and future needs,” said Dave Jackson, CTO of Cluster Resources. “It’s really an opportunity to network and collaborate with some of the best minds in the industry, representing many of the world’s largest and most advanced data center and HPC organizations.”
Moab•Con 2008: Advancing Computing Intelligence, which takes place May 27-30 at the Provo Marriott Hotel and Conference Center in Provo, Utah, will feature keynote addresses by leading Linux architects on future directions in workload management, next generation compute architectures and data-driven workflows in Web 2.0. General sessions will highlight how best to take advantage of new technologies such as autonomics, adaptive data centers, green computing, virtualization and cloud computing, while panel sessions will discuss best practices, current issues and the latest HPC buzz.
Additional sessions will be presented by the foremost HPC vendors, administrators of Top 20 clusters and Moab developers, and will address topics such as Windows/Linux hybrid clusters, managing fairness and SLAs, holistic resource management and automated capacity planning. Moab users will present interactive sessions with case study presentations targeted at both the HPC and data center communities. The conference also provides forums for extensive one-on-one time with Moab and TORQUE developers.
“In all cases, sessions will cover how these technologies can be applied immediately to a compute infrastructure,” said Jackson. “This conference will be very real-world, providing discussion on real issues and what can be done today. Many people don’t realize that highly advanced solutions exist right now which can address the most pressing needs of the datacenter and HPC.”
The first day of the conference, Tuesday, May 27, is for users new to Moab and TORQUE with three in-depth tutorials on Moab Cluster Suite, Moab Grid Suite and TORQUE.
General sessions will begin on Wednesday, May 28. Seating for the breakout sessions, which run concurrently with the general sessions, is very limited. Those interested in attending these sessions should return the registration form as soon as possible; spaces will be held on a first-come, first-served basis.
Cluster Resources invites all Moab and TORQUE users, partners and those interested in learning how to leverage their compute infrastructure to attend the conference. Register at www.clusterresources.com/moabcon/moabcon.php, where you will also find a tentative agenda, cost breakdown and other logistical information.
About Cluster Resources Inc.
Cluster Resources Inc. is a leading provider of workload and resource management software and services for cluster, grid, data center and adaptive computing environments. With more than a decade of industry experience, Cluster Resources delivers software products and services that enable organizations to understand, control, and fully optimize their compute resources and related processes. For more information, visit www.clusterresources.com.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To meet them, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational loads at peak times that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges -- and opportunities -- afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.