May 07, 2012
It's been a little over a year since Univa took over stewardship of the open source workload manager and acquired the founding Sun Grid Engine team from Oracle, and in that time, they've stabilized the product and implemented over 200 bug fixes. Last week, the company announced its third production release, Univa Grid Engine 8.1, which is scheduled for general availability in the first half of 2012.
HPC in the Cloud spoke with Univa CEO Gary Tyreman to learn more about the offering and discuss the company's strategy around cloud computing.
The latest release is targeted at decreasing the TCO of Grid Engine at scale, notes Tyreman. Univa has sought to increase availability by improving the stability of the product, a core focus over the last year. They've added features that target very high-volume clusters with a large number of jobs, in particular small jobs. Plus they've made changes to improve the performance of the cluster overall, which is also good news for those using it in large environments.
They've also been focused on streamlining tasks from the administrator's perspective: helping administrators find information faster, diagnose issues sooner, and on-board and manage new applications in the workflow in a much more seamless manner.
The team is happy with the progress they've made. "Univa Grid Engine is an evolution of a product and a path forward," Tyreman posits. "But more importantly it's a drop-in replacement, so it's really an upgrade, as opposed to a rip-and-replace. That's the first thing we're most proud of."
By focusing on product stability as well as the performance and availability of the cluster, the company has experienced record sales and substantial customer growth. In Q1 of this year, Univa added more customers than in all of 2011. Tyreman points to additional proof points, such as counting four of the top five sites as measured by core count among Univa's customers, and notes that four of the top five enterprise or commercial customers have upgraded to Univa Grid Engine rather than staying on the open source version or Oracle's release.
Not surprisingly, the majority of customer sites are still in the traditional science and engineering space, but Univa is seeing a significant uptick in big data and business applications, so-called non-traditional HPC applications. In addition to classic HPC workloads, like semiconductor, EDA, life sciences, bio-genomics, oil and gas, and digital media, Univa Grid Engine customers are using (or asking about using) the product in a Hadoop environment. Rather than dealing with the headaches and costs associated with setting up both a Hadoop cluster and a compute cluster, they are bringing the two together. The other new trend is coming from ISVs who are using Univa's Grid Engine software to run business applications.
Perhaps what makes this data point all the more telling is that it's not part of a concentrated effort. Tyreman's take on it is that the market is pulling them in this direction. Like everyone else in the industry and ecosystem, he feels that Univa is benefiting from the so-called mainstreaming of HPC. The fact that they've seen several of their customers running Univa Grid Engine in a production Hadoop environment speaks to this point. "I think it's good for us and the overall industry," says Tyreman.
Almost every customer outside of EDA is asking Univa about Hadoop and big data, the CEO tells me. From this he infers that executives are asking how these technologies can help them solve their problems. Yes, they could go with Hadoop and buy a new cluster, but doing so would require a significant capital outlay. Bringing Hadoop into an existing cluster allows them to test the waters without a huge investment.
Addressing the needs of these new markets involves a change to the Hadoop environment at an API level, notes Tyreman. It involves broadening and simplifying the API so that your Web 2.0 developer can interface with the product. "The Hadoop integration that we have has a lot of opportunity for improvement as the demand expands," he adds.
Tyreman explains that for much of 2011 and 2012 the company was highly focused on their Grid Engine offering; however, cloud, and specifically HPC cloud, is a key tenet of Univa's strategy. Univa built the first Grid Engine cloud and the first Grid Engine hybrid cloud, the CEO points out. "Both of which were used by enterprises in production. Both of which were used to solve real-world problems. And these were completed more than two years ago," he emphasizes.
Univa has Grid Engine customers that are trying to figure out how to pull in resources from a Eucalyptus cloud, and how to use those systems and then push them back into Eucalyptus. They have customers that are looking into building out hybrid infrastructures using public cloud providers like Amazon. The company has successfully integrated with Puppet to allow customers to plug into the cloud ecosystem.
"It's all about adding value to Univa Grid Engine to ensure that other IT assets that have been deployed are being fully leveraged to make the system easier to use and easier to manage on a day-to-day basis," says Tyreman.
Univa's cloud product, UniCloud, is available through RightScale and via the Amazon Marketplace that was launched last week. They have several customers who have already implemented UniCloud and others who are looking into adopting similar solutions, which Univa is working to provide.
That said, the Univa CEO does not view HPC, big data and cloud as distinct categories unto themselves. "We don't see them as three things that require three hammers; they are fundamentally similar problems that need to be solved."
Taking that one step further, Tyreman says that Hadoop environments today are basically clusters, and clusters require scheduling. As a proof point of big data and HPC coming together, Tyreman points to IBM's acquisition of Platform Computing. Hadoop has kicked off a project to build a scheduler. OpenStack, the poster child for cloud, is currently building its own scheduler. You have all these industry examples, and all these separate parties reinventing the wheel, and "I'm selling rubber," says Tyreman. "We see those things as being very tied together."
"When you take a single backplane that can run compute, big data and other types of workloads, which is what the IBM acquisition was directed at," says Tyreman, "you need something to build and manage the applications that you provision into that environment and that's what UniCloud is being designed and tailored to do."
"The fact that we took a step back and focused on Grid Engine is really a tactical step, but it's also a recognition that the industry is not exactly where we are. So by the end of this year you will see us deliver the next version of UniCloud, which will be specifically targeted at managing those applications within those broader contexts that we have been talking about."
When asked about the challenge of licensing in the cloud, Tyreman replies that Univa is working on a new offering directed toward companies who are using very expensive licenses and want to share them. "The goal with that product," he says, "is to enable very complex environments to share licenses and therefore you don't need to buy as many, specifically for EDA, for example."
"Licensing within the cloud encompasses the same problem," notes the CEO. "It will continue to take time for people to work around it. A lot of ISVs keep talking about it, but people fear the different models without understanding it. There is concern about cannibalizing an existing revenue stream.
"If you go and look at the Amazon Marketplace, the Univa Grid Engine that is available there, pricing is posted. Take the number of hours in a day, multiply by days in a year, and divide by price per core and I'm pricing it exactly the same. There are no premiums. And if you choose to go with that cloud model, we have a second price structure, which is all you can eat for a fixed price. We did that on purpose so we wouldn't become part of that licensing fear discussion. We can move past it pretty quick."
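Tyreman's back-of-the-envelope pricing parity can be sketched in a few lines of Python, reading his description as: the annual per-core license price, spread across every hour in a year, should equal the hourly per-core rate posted on the Amazon Marketplace. The dollar figures below are hypothetical placeholders, not Univa's actual posted prices.

```python
# Sketch of the pricing parity described in the quote above.
# The dollar amounts are hypothetical, not Univa's real prices.

HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365
hours_per_year = HOURS_PER_DAY * DAYS_PER_YEAR  # 8,760 hours in a year

annual_price_per_core = 87.60  # hypothetical annual license cost per core

# An annual license amortized over every hour of the year yields the
# equivalent hourly rate -- the "no premium" cloud price Tyreman cites.
hourly_cloud_rate = annual_price_per_core / hours_per_year

print(f"${hourly_cloud_rate:.4f} per core-hour")  # $0.0100 per core-hour
```

Under this reading, a customer paying the hourly Marketplace rate around the clock for a full year ends up spending exactly the annual license price, which is the point of the "no premiums" claim.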
The next version of UniCloud is scheduled to arrive in Q4, and will add a graphical interface. Univa is also preparing for a fourth Grid Engine release, scheduled to roll out in early 2013, which will add features "to drive the value at scale."
"For a small company," says Tyreman, "we have a pretty aggressive engineering roadmap and delivery mechanisms," adding "we spend a lot of time with large core-count users that have very specific problems that can trickle down and add value for the smaller sites."