October 27, 2010
Distributed data grids, also known as distributed caches, store data in memory across a pool of servers (whether an HPC grid or a web and ecommerce farm like Amazon.com's), providing a distributed cache for fluid, fast-moving data. The technology positions any company offering it to serve a number of verticals in both the traditional and non-traditional HPC space, including financial services and large-scale ecommerce organizations.
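To make the core idea concrete, here is a minimal, hypothetical sketch (not any vendor's actual API) of how a data grid spreads keys across a pool of in-memory servers by hashing; the class and method names are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

// Minimal illustration of the core idea behind a distributed data grid:
// keys are hashed to one of several in-memory "servers", so storage and
// load spread across the pool. (Hypothetical sketch, not a real product API.)
public class MiniDataGrid {
    // Each "node" stands in for the memory of one server in the pool.
    private final List<ConcurrentHashMap<String, Object>> nodes = new ArrayList<>();

    public MiniDataGrid(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) {
            nodes.add(new ConcurrentHashMap<>());
        }
    }

    // Hash partitioning: the same key always lands on the same node.
    private ConcurrentHashMap<String, Object> nodeFor(String key) {
        int idx = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(idx);
    }

    public void put(String key, Object value) { nodeFor(key).put(key, value); }

    public Object get(String key) { return nodeFor(key).get(key); }

    public static void main(String[] args) {
        MiniDataGrid grid = new MiniDataGrid(4);      // a pool of four servers
        grid.put("cart:alice", "3 items");
        grid.put("quote:IBM", 141.25);
        System.out.println(grid.get("cart:alice"));   // read back from whichever node owns the key
    }
}
```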
One company that has been particularly visible on the distributed data grid front for both ecommerce and financial services is ScaleOut Software, an eight-year-old company that has seen massive growth, due most recently to rising interest from financial institutions.
As Dr. William Bain, founder and CEO of ScaleOut Software, noted of the interest from financial services, a vertical marked by its need for near real-time results: “Distributed data grids have evolved from a basic data cache into a sophisticated analysis platform to track and process massive market volumes. The ability to quickly and efficiently perform complex analyses on historical and real-time data has become vital to top Wall Street firms seeking competitive advantage.”
The company has garnered significant market share on the financial side of the spectrum, but talk about distributed data grids is emerging again, in part because of the more widespread adoption of the cloud in this and other areas, coupled with the explosion in the sheer volume of data generated in real time that needs to be analyzed in near real time.
One reason distributed data grids have received so much attention is that traditional modes of data storage carry built-in bottlenecks that limit scalability, making them less attractive options for some. Bain notes that “bringing techniques from parallel computing that have been in the works for two or three decades to this problem” relieves some of the inherent weaknesses of traditional storage and optimizes performance through refinements in how data is stored, accessed and used.
Dr. Bain recently spent some time speaking with us about distributed data grids and typical use cases, putting the technology in context while providing a glimpse into how something that has been around for some time is now receiving an added boost from the cloud.
Let’s put it in this context: imagine you have hundreds of thousands of users accessing a popular site. The site needs to keep the data those users are storing and rapidly updating (as would happen with a shopping cart) in a scalable store, since that is important to keeping response times fast. Distributed caches have been used in this way for about seven years, and they are now becoming vital for websites to scale performance.
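As a rough illustration of that shopping-cart pattern, the sketch below keeps per-session cart data in an in-memory map standing in for the grid; the session IDs, item names and helper methods are invented for the example, and a real deployment would partition the map across servers as in the earlier sketch:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Shopping-cart use case: fast-changing session data lives in a distributed
// cache keyed by session ID, so every page view can read and update it in
// memory instead of hitting the database. The single map here stands in for
// the grid; a real deployment would spread the entries across servers.
public class CartCacheExample {
    // sessionId -> list of items in that user's cart
    static final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    static void addToCart(String sessionId, String item) {
        cache.computeIfAbsent(sessionId, id -> new CopyOnWriteArrayList<>()).add(item);
    }

    static List<String> viewCart(String sessionId) {
        return cache.getOrDefault(sessionId, List.of());
    }

    public static void main(String[] args) {
        addToCart("session-42", "wireless mouse");
        addToCart("session-42", "usb hub");
        System.out.println(viewCart("session-42"));   // [wireless mouse, usb hub]
    }
}
```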
In financial services, this technology gives the analyst a place to store data so that it is immediately ready for analysis. There are several applications written for this area that require distributed data grids to achieve the scalable performance they need.
What’s driving this is that the amount of data being analyzed is growing very rapidly, and the latency involved means you have to have a scalable platform for analyzing data in real time. This is especially the case for large companies doing financial analysis; the kinds of applications these people are running include algorithmic trading, analyzing stock histories to predict the future performance of a trading strategy, and so on, and those are a perfect fit for a scalable data store.
There are two key trends we’re seeing that make this exciting. The first is the value of storing data in memory, which can dramatically improve performance over other approaches, such as running a MapReduce-style computation against data held in a database, because in-memory storage eliminates the latency incurred transferring the data.
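A small, hypothetical sketch of that point: once the data already resides in memory, a MapReduce-style aggregation reduces to a parallel pass over the partitions, with no database transfer in the loop. The data set, its size and the timing code here are purely illustrative:

```java
import java.util.Random;
import java.util.stream.DoubleStream;

// Why in-memory analysis pays off: when the tick data already sits in the
// grid's memory, a MapReduce-style aggregation is just a parallel pass over
// the partitions, with no time spent pulling rows out of a database first.
// (Illustrative only; a real grid would compute partials on each server,
// then merge them.)
public class InMemoryAggregation {
    public static void main(String[] args) {
        // Stand-in for tick data already resident in memory across the grid.
        double[] prices = new Random(7).doubles(10_000_000, 50.0, 150.0).toArray();

        long start = System.nanoTime();
        // "Map" each chunk to a partial max, then "reduce" the partials.
        double maxPrice = DoubleStream.of(prices).parallel().max().orElse(Double.NaN);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.printf("max price %.2f computed over %d ticks in %d ms%n",
                maxPrice, prices.length, elapsedMs);
    }
}
```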
The second important part of this is the cloud, which is providing a widely available platform for hosting these applications on a large pool of servers that are rented only for the time the application is running. There is a confluence of technologies that will drive this technology area to the forefront of attention because of an opportunity we’ve been waiting on for 20 or 30 years.
The problem we had before was that it was expensive to buy a parallel computer; then, with clusters in the last decade, people could have department-level clustering for HPC, an area Microsoft has been delivering software around. But now with the cloud we have a platform that will scale not to tens of nodes but to hundreds or maybe thousands, which presents the opportunity to run scalable computations very easily and cost-effectively.
Stepping Back for the Bigger Picture
Bill Bain founded ScaleOut Software in 2003 after his experiences at Bell Labs Research, Intel and Microsoft, as well as three startup ventures, among them Valence Research, where he developed a distributed web load-balancing software product that Microsoft acquired for its Windows Server OS and dubbed Network Load Balancing. He has a Ph.D. from Rice University, where he specialized in engineering and parallel computing, and holds a number of patents in distributed computing and computer architecture.
While the conversation was initially meant to cover the core technologies behind ScaleOut Software, the interview drifted toward some “big picture” issues concerning the cloud and its place in HPC, not to mention some of the barriers preventing wider adoption and how such challenges might be overcome in the near future.
Bain reflected on where he has seen computing head during his thirty years in HPC, stating:
I think we went through a period when HPC became less popular as single processors got faster in the 90s, but with the turn of the century and the peaking out of Moore’s Law, people turned back to parallel computing, which is an area we were doing a lot of pioneering work in, and the cloud is the next big thing.
Although we understood how parallel computing could drive high performance, people didn’t have the hardware, so you were stuck with department-level clusters unless you were the government doing nuclear research and could buy a 512-node supercomputer. But most people doing bioinformatics, fluid flow analysis, financial modeling and such were stuck with small department-level computers... So the question becomes: who are the players who will make it practical to do HPC in the cloud?
I think you should think of our technology not as some arcane cul-de-sac that might be moderately interesting; it’s bringing core HPC technologies to the cloud. You’ll find that other players are bringing technologies to the cloud but aren’t bringing scalability; those doing scheduling for the cloud, for instance, are taking platform approaches that do not drive scalability. So the confluence of HPC and cloud is, I think, now occurring, and it’s bringing well-understood parallel computing techniques to this new platform and making it easy for programmers to get their applications up and running.
There’s one critical piece of the HPC cloud puzzle that’s missing, and it’s low-latency networking. If you look at the public clouds, they use standard gigabit networks, and very little can be said about the quality of service in terms of the collocation of multiple virtual servers; these are aspects of parallel computing that are vital and that people have spent decades trying to optimize. For instance, at Intel we built mesh-based supercomputers and invested heavily in technology that came out of Caltech for cut-through networks in order to drive network latency way down. That was done because programmers learned that you need low-latency networking to get scalable performance for many applications: any application that shares data across servers needs very fast networking. In the cloud we find off-the-shelf networking. Now, it is starting to look hopeful that this performance obstacle will be broken in the next couple of years as more providers offer options for low-latency networking. Until then we need to work around this limitation.