November 23, 2011
It's been a little over a year since Nimbix announced the initial beta launch of its Nimbix Accelerated Compute Cloud (NACC). During the SC11 show in Seattle last week, HPC in the Cloud sat down with Nimbix Co-Founder and CEO Steve Hebert to find out where the company fits in with the small-but-growing stable of cloud providers who specialize in supporting HPC workloads.
Nimbix is a cloud-based infrastructure operator that hosts and provisions accelerator-based platforms. Right now those accelerators include GPUs and FPGAs, but the company is also keeping abreast of Intel MIC, DSPs, and others. On the FPGA side, Nimbix just announced a new partnership with Convey during the show, and on the GPU side, its solutions are Supermicro-based. The company is also considering other GPU platform partners, such as HP.
Nimbix started out with the premise that power and energy-efficiency is a big problem for the industry, one that will contribute to the rise of new architectures. These may not be intuitive to program yet, but will become more popular out of necessity as we head into the exascale era.
Not only are the problems getting bigger, Hebert notes, but the number of people who need teraflop-level computing is growing. So it makes sense that Nimbix is tuning in to the needs of the so-called "missing middle," the group of small- to medium-sized companies that have, for the most part, been under-served by the traditional HPC delivery model. As one example, Hebert cites energy companies that currently rely on workstations but will soon require traditional cluster-based, high-end computing to stay competitive. Without the deep pockets of their larger rivals, these companies need someone to turn to for support, technology and infrastructure. Nimbix is hoping to be that someone.
To build the FPGA-cloud component of their business, Nimbix is concentrating on the bioinformatics field, specifically genomics and sequence alignment, which pump out tremendous amounts of data. A lot of the organizations in this space, including clinical research outfits, research groups and Tier 1 players, just don't have enough compute power on hand to deal with the amount of data the sequencers are churning out, so there's a backlog of data that needs processing. Nimbix suggests that these companies, instead of relying solely on traditional big-memory, CPU-only machines, consider the potential of FPGA-accelerated platforms. And with a cloud solution, they won't have to provision them in house.
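To see why sequence alignment invites acceleration, consider the dynamic-programming kernel at its heart. The sketch below is a toy Smith-Waterman local alignment in plain Python, purely illustrative (the scoring values are arbitrary and nothing here reflects Nimbix's or Convey's actual implementation); the dense, regular inner loop is exactly the kind of computation FPGAs pipeline well.

```python
# Toy Smith-Waterman local alignment: the dynamic-programming pattern
# behind sequence alignment. Scoring parameters are illustrative.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local-alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment floors every cell at zero.
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

print(smith_waterman("ACGT", "TACGTA"))  # 8: "ACGT" matches exactly inside "TACGTA"
```

Each cell depends only on its three neighbors, so an FPGA can evaluate a whole anti-diagonal of cells per clock cycle, which is where the acceleration over a CPU comes from.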
What Hebert is perhaps most enthusiastic about is the ability to create a purpose-built supercomputer for a single application, which is exactly what FPGAs enable. "You can queue a bioinformatics workload, and tune the machine for that app," he says, "and then your next workload is, say, a Monte Carlo application, and you'll have a machine that is tuned for that."
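The idea Hebert describes can be sketched as a job queue in which each job type maps to an accelerator configuration loaded before the job runs. Everything below is hypothetical (the profile table, bitstream names, and job fields are invented for illustration, not Nimbix's scheduler):

```python
# A minimal sketch of "tune the machine per workload": each job type
# selects a hypothetical FPGA image to load before the job executes.

ACCELERATOR_PROFILES = {
    "bioinformatics": {"device": "fpga", "image": "seq_align.bit"},
    "monte_carlo":    {"device": "fpga", "image": "rng_pipeline.bit"},
}

def run_job(job):
    profile = ACCELERATOR_PROFILES[job["type"]]
    # A real system would reprogram the FPGA here; we just report
    # which image would be loaded for this job.
    return f"loaded {profile['image']} for {job['name']}"

queue = [
    {"name": "genome-42", "type": "bioinformatics"},
    {"name": "risk-7", "type": "monte_carlo"},
]
for job in queue:
    print(run_job(job))
```

The point of the sketch is that reconfiguration happens per job, so consecutive jobs in the same queue can each see a machine "tuned for that."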
While Nimbix primarily supports applications in finance, life sciences and oil & gas, they're also interested in the needs of the digital manufacturing and simulation communities. As Hebert explains, HPC is still an emerging space for these verticals, one that is just starting to gain traction from a cloud perspective. However, these users all share a similar problem: they're running out of compute power. They need extra resources now, but aren't yet ready or willing to invest in additional on-site hardware.
Nimbix offers a different model from the virtualized x86 setup offered by other cloud providers. While it's exciting to think about endless cycles for cloud-friendly applications, Hebert points out that when you look at the cost of provisioning, running, and tearing down, it's not cheap. Instead of simply scaling out virtual machines, the Nimbix solution is more akin to an in-house platform. In other words, says Hebert, there are no provisioning or workload headaches. You can think of it as workload-as-a-service or jobs-as-a-service – Nimbix abstracts the problem so that researchers can focus on the job at hand, and only pay for what they need.
Essentially, what Nimbix provides is an on-demand model that reflects traditional supercomputing conventions with the ability to run batch processing jobs on accelerated platforms in a shared facility. However, for users who do not want to run a shared infrastructure, Nimbix also offers a private-hosted model.
Hebert readily acknowledges that there is going to be a premium for the benefits of cloud, such as ease of use and quicker turnaround times, although some of that cost is offset by only paying for what you use. Hebert believes it's up to the user to judge how they use that capacity and whether the cost/benefit profile makes sense. "If everyone had their best option, they'd provision in-house," he explains. He concedes that in-house provisioning is cheaper, but it requires significant upfront capital. Plus, for supercomputing, says Hebert, the investment grows with the scale, whereas the cloud model allows those costs to be amortized across many customers.
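The trade-off Hebert describes is a simple break-even calculation: owning wins once usage is high enough to amortize the upfront capital. The figures below are hypothetical placeholders, not Nimbix pricing, chosen only to make the arithmetic concrete:

```python
# Back-of-the-envelope break-even between buying an in-house accelerated
# cluster and renting equivalent capacity. All figures are invented.

def break_even_hours(capex, hourly_opex, cloud_hourly_rate):
    """Hours of use at which owning becomes cheaper than renting."""
    if cloud_hourly_rate <= hourly_opex:
        return float("inf")  # renting never costs more per hour
    return capex / (cloud_hourly_rate - hourly_opex)

# e.g. a $200k cluster with $5/hr power+admin, vs a $30/hr cloud rate:
print(round(break_even_hours(200_000, 5.0, 30.0)))  # 8000
```

Below the break-even point the pay-per-use premium is the cheaper option; above it, in-house capital starts to pay off, which is exactly the judgment Hebert leaves to the user.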