April 08, 2010
TORONTO, April 8 -- GridCentric Inc., a Toronto-based software company, today announced the availability of Copper, a cluster management system for high-performance computing workloads. Copper combines virtualization and grid computing technologies to enable simple, efficient and flexible cluster deployment, management, and use.
The GridCentric Copper platform enables real-time, on-demand sharing of high-performance computing resources and makes cluster administration a one-click task. With Copper, organizations reduce IT expenditures through significantly lower installation and management costs, higher utilization of their physical infrastructure, and improved sharing of their data assets.
Operators of large clusters face enormous challenges due to system complexity -- compute clusters are typically composed of hundreds or even thousands of individual computers, each requiring separate software components for hardware provisioning and management, resource allocation, job scheduling, and application support. Initial setup of a compute cluster can take several months because of these challenges.
By combining technologies from cloud and grid computing, GridCentric's Copper platform provides server provisioning, resource management, job control, and high-level application support in a single integrated software package. With Copper, the time required to set up and configure a compute cluster drops from months to days. Additionally, Copper has built-in support for operating hundreds of "virtual clusters" on the same set of physical resources, scaling their compute footprints on demand in real time -- the industry's first true high-performance cloud computing platform. Copper gives cluster users the power of a supercomputer with the ease of use of a PC.
"GridCentric's approach to cluster management will enable many organizations to service users in ways that would have otherwise been impractical or impossible. Our experiences with it to date have been very positive. Copper is the way of the future for many organizations," said Professor Michael Bauer of the University of Western Ontario, and Associate Director of the SHARCNET academic supercomputing consortium.
Copper has been in limited trial on a SHARCNET compute cluster located at York University in Toronto, Canada, since late 2009, and is now open to all researchers from SHARCNET's 17 academic member organizations.
"GridCentric's Copper product represents a new class of cluster management software," said Tim Smith, co-founder and CEO of GridCentric Inc. "Systems integrators, cluster operators, and end-users will all benefit from Copper's unprecedented ease of installation, management, and use."
About GridCentric Inc.
GridCentric Inc. is a software company specializing in systems, networking, and virtualization technologies. GridCentric's mission is to make high-performance computing easy without sacrificing performance or flexibility. GridCentric is a privately held corporation, funded in part by Rogers Ventures, and was recently named one of the "Top 25 Canadian ICT Up and Comers" by the Branham Group.
SHARCNET was established in 2001. It is one of seven world-leading Compute Canada (http://www.computecanada.org/) supercomputing consortia. SHARCNET currently serves 14 universities, two colleges, and one research institute across western Ontario. For more information, visit http://www.sharcnet.ca/.
Source: GridCentric Inc.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational demand that exceeds their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where modeling the entire Earth is almost essential to attaining accurate results and making worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013
The program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.