May 29, 2012
Prestigious technology analyst firm recognizes ScaleXtreme's next-generation cloud-based systems management tools for deploying, monitoring and patching servers
SAN MATEO, Calif., May 29 — ScaleXtreme, the leading provider of cloud and server management products, today announced that it has been named a "Cool Vendor" in the Cool Vendors in IT Operations Management 2012 report by Gartner Inc. The report highlights four exciting technology companies providing new thinking to the operations management sector.
The report, written by Milind Govekar, Ronni J. Colville and Ian Head, calls out ScaleXtreme's unique ability to monitor, alert, patch and control the costs of both public cloud instances and physical servers. ScaleXtreme is at the cutting edge of modern systems management, allowing customers to create server templates, deploy them across both on-premises and public cloud machines, and manage those machines with script automation. ScaleXtreme's powerful budgeting and control tools give cloud computing customers granular visibility into what they're spending and the ability to enforce budget limits.
"We're honored to be recognized for the hard work we've put into building ScaleXtreme," said CEO Nand Mulchandani. "Gartner's analysts are the best in the business and have built their reputations on their innate ability to see what's next in the future of technology. We are focused on building the next generation of IT Operations Management products and this recognition affirms our company and product direction."
Other vendors recognized in the Gartner report include AccelOps, Terma Labs and Veloxum. Download the report: www.gartner.com/DisplayDocument?doc_cd=233196
ScaleXtreme provides a single view, unifying the management of an organization's server environment – spanning private and public cloud machines, different public cloud providers running on any virtualization platform, and even physical servers. It works with a variety of operating systems and technology stacks and helps users rapidly scale server deployments using templates. ScaleXtreme recently announced a number of additional features, including the industry's only cloud cost management tools, which enable IT professionals to gain visibility into cloud provider costs, establish role-based budgets and prevent the launch of unauthorized cloud machines.
ScaleXtreme products also come equipped with cloud-based patch management automation, which gives customers the ability to schedule, deploy and automate patches across multiple machines, as well as an iPhone application that gives system administrators an on-the-go, unified view of their cloud instances through a single, simple interface.
Customers can use ScaleXtreme's free product by signing up: www.scalextreme.com/free
ScaleXtreme provides powerful, cloud-based server automation products for the modern distributed data center. Built from the ground up to be simple, scalable and social, IT gets a single unified automation platform to build and control physical, virtual and public cloud servers. For more information, visit www.scalextreme.com.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To address them, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.