September 25, 2012
SAN MATEO, Calif., Sept. 25 — ScaleXtreme, the leading provider of cloud-based monitoring and systems management products, released free software that lets companies build internal cloud dashboards and self-service portals for developers and other end-users. The technology makes creating a new public cloud instance as simple as using a vending machine. Existing and new customers can start building their own internal, on-premise self-service portal by downloading the open-source code here: https://github.com/scalextremeinc/scalex-portal.
Many organizations are finding that allowing developers and end-users to provision public cloud servers can dramatically drive down costs and improve satisfaction. Organizations want to balance these new capabilities with an interface that hides the complexity of cloud computing from users and provides appropriate cost controls and visibility into spending. They want to ensure cloud servers are monitored, compliant with security policies, and appropriately patched.
ScaleXtreme’s new on-premise cloud portal works directly with ScaleXtreme’s cloud service through standards-compliant REST APIs. It takes advantage of ScaleXtreme’s rich monitoring, budgeting, patch management, cloud provisioning and automation capabilities.
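A self-service portal built on such a REST API would typically POST a JSON payload to a provisioning endpoint, attaching the cost-control settings the release describes. The sketch below is illustrative only: the endpoint path, host, and field names (`templateId`, `budgetLimitUsd`) are hypothetical, since the actual ScaleXtreme API is not documented in this release.

```python
import json
import urllib.request

# Placeholder host and path -- the real ScaleXtreme REST API endpoints
# are not specified in this press release.
API_BASE = "https://api.scalextreme.example/v1"

def build_provision_request(api_key, template_id, budget_usd):
    """Assemble a REST request to provision a cloud server from a
    pre-approved template, with a spending cap for cost control.
    All field names here are assumptions for illustration."""
    payload = {
        "templateId": template_id,    # hypothetical: pre-approved server template
        "budgetLimitUsd": budget_usd, # hypothetical: spending-visibility field
    }
    return urllib.request.Request(
        f"{API_BASE}/instances",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# A portal front end would send this request and show the user the result.
req = build_provision_request("demo-key", "ubuntu-small", 50)
print(req.get_method(), req.full_url)
```

The point of the vending-machine model is that the end-user only picks a template; budgeting, patching, and monitoring hooks ride along in the request without the user ever seeing cloud-provider credentials.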
ScaleXtreme developed the self-service portal at the explicit request of several of its largest enterprise customers. The customers, leaders in the financial services and entertainment industries, have adopted public cloud as a way to save money and increase agility, but quickly found they lacked visibility and control and wanted to build a simplified dashboard and portal to enable widespread usage of the public cloud.
“Enterprises can get the speed and cost-savings from using the cloud without having to worry about cost overruns or policy violations,” said ScaleXtreme CEO Nand Mulchandani. “The ScaleXtreme on-premise developer portal simplifies enterprise adoption of cloud computing and makes it easy for end users to get access to cloud resources through an interface integrated into a look-and-feel that they’re familiar with.”
ScaleXtreme provides powerful, cloud-based server monitoring and automation products for cloud and on-premise servers. Its richly featured product set is built from the ground up to be simple, scalable and social. ScaleXtreme gives IT administrators a single unified monitoring and automation platform to build and control physical, virtual and public cloud servers. For more information, visit www.scalextreme.com.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives increasingly rely on cloud-based systems both to coordinate efforts and to absorb peak computational demands that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.