November 08, 2011
SAN DIEGO, Nov. 8 — StackIQ today announced the immediate availability of Rocks+ 6, the comprehensive software suite for automating the deployment and management of Big Infrastructure. Rocks+ 6 is designed for environments with hundreds or thousands of servers supporting big data, analytics, or high-performance computing. These environments require powerful management software that turns loosely coupled commodity hardware and open-source software into tightly coupled, enterprise-grade appliances, and StackIQ has been building software to do exactly that for years. With thousands of satisfied customers using Rocks today, StackIQ is well positioned to solve the Big Infrastructure problem.
Rocks+ 6 is a major upgrade that includes a new graphical user interface and a visualization tool that lets IT managers watch the bare-metal, parallel installation process in real time. Also new in this release is support for the latest versions of Red Hat Enterprise Linux 6, CentOS 6, and Oracle Linux 6.
"We are thrilled to have the support of so many partners as we roll out this release," said Joe Markee, CEO of StackIQ. "The major focus of Rocks+ 6 is Big Infrastructure for Big Data and there is clearly tremendous momentum in this space."
Rocks+ 6 features support for Apache Hadoop, with automated provisioning and management of MapReduce, HDFS, HBase, ZooKeeper, and Hive. There is also beta support for managing the high-performance, open-source NoSQL databases Cassandra and MongoDB.
Rocks+ is available from the company's website and from StackIQ partners, including Dell, HP, and Amazon Web Services. It is free for physical systems of up to 16 nodes and priced per node for larger systems. Contact StackIQ for details.
"Dell is an industry leader in standards-based servers, and our customers appreciate the benefits of open, capable, and affordable solutions," said Donnie Bell, director, Enterprise Software Product Marketing, Dell. "By combining Dell PowerEdge and Dell PowerEdge C solutions with the open infrastructure of Rocks+, we are offering customers choice in the environment used to manage their infrastructure and the applications they run."
The Rocks+ Database-driven Design
The solution is based on a database-driven design that ensures consistent deployments without requiring a team of engineers writing custom scripts. It features optional, plug-in software modules — called "Rolls" — that tailor the system for any Big Infrastructure application.
"As the leader in Apache Hadoop-based software and services, working closely with the developer community and Hadoop ecosystem to create new choices for customers is a top priority," said Ed Albanese, head of Business Development at Cloudera. "StackIQ's release of Rocks+ 6 expands the capabilities of their proven infrastructure management software to include support for Cloudera's Distribution Including Apache Hadoop (CDH), the industry's most comprehensive and widely deployed distribution of Hadoop."
Rocks+ is the fastest way to spin up Big Infrastructure from bare metal. One hyper-scale web company used Rocks+ to deploy its custom Hadoop distribution across a large datacenter. As the metric for its proof of concept, the company used Rocks+ to upgrade and redeploy the entire cluster from bare metal in less than 30 minutes.
StackIQ's Partner Ecosystem
The StackIQ software partner program includes a wide range of solutions that rely on a Big Infrastructure foundation. By leveraging the management capability of Rocks+, partners can dedicate more resources to delivering added value to their customers.
"StackIQ is focused on providing software to provision and manage Big Infrastructure for Apache Hadoop," said Mitch Ferguson, vice president of business development for Hortonworks. "This helps us achieve our goal of making Hadoop and the Hortonworks Data Platform more robust and easier to install, manage and use."
"We're delighted that StackIQ's Big Infrastructure platform fully supports the MapR Distribution for Apache Hadoop," said John Schroeder, CEO and Co-founder of MapR Technologies. "The Rocks+ automated provisioning features for MapR are further evidence of the strong and growing commercial ecosystem around Hadoop."
By dramatically reducing the time to production, Rocks+ allows customers to begin generating a return on their infrastructure investment sooner than they could using less effective solutions.
"Big Data can require large numbers of servers, and a lot of servers can cause headaches and complexity," said Matt Pfeil, Co-Founder of DataStax. "We're excited to see StackIQ wholly focusing on the problem of provisioning and managing the type of infrastructure that Apache Cassandra runs on."
Rocks+ implements an open architecture that lets customers and partners add modules to meet their needs. Partners such as IBM, Univa, and NVIDIA have collaborated with StackIQ to develop modules that integrate Rocks+ with their solutions.
"Rocks+ is the leading solution for managing Big Infrastructure," said Gary Tyreman, CEO of Univa Corporation. "With over 10,000 data centers using Rocks, we are excited to partner with StackIQ by offering Univa Grid Engine as a modular, integrated workload management solution."
The open architecture also allows Rocks+ to deploy infrastructure equally well on physical servers in a data center and on virtual servers in the cloud. Amazon Web Services' Elastic Compute Cloud (EC2) offers a Rocks+ Amazon Machine Image, letting customers deploy Big Infrastructure in the cloud.
Rocks+ 6 Makes IT Organizations More Efficient
The wide variety of Rocks+ Rolls, the open architecture, and the database-driven design combine with the parallel installer to make IT organizations more efficient at deploying and managing their Big Infrastructure, regardless of the application.
"Shortly after Rocks+ became operational at CENPA (http://www.npl.washington.edu/), we made use of it for simulations in phase analysis and for final signal extraction," said Gary Holman, Senior Computer Specialist, NPL/CENPA, at the University of Washington. "Rocks+ proved to be some five times faster than any other infrastructure management system available, and enabled us to meet a tight deadline and present the new results at Neutrinos in New Zealand."
About StackIQ
StackIQ (formerly Clustercorp) is a leading provider of software that automates the deployment and management of Big Infrastructure. Based on the open-source Rocks cluster management software, StackIQ's Rocks+ product simplifies the installation and management of the hardware and software infrastructure for large-scale environments with hundreds or thousands of servers supporting big data, analytics, or high-performance computing. StackIQ is located in La Jolla, California, adjacent to the University of California, San Diego, where the open-source Rocks Group was founded. To learn more, visit http://www.StackIQ.com.