February 13, 2006
With EMC's acquisition of Acxiom's Grid computing software for $30 million last month, enterprise customers started opening their eyes to the fact that Grid is not just about raw horsepower and CPU utilization for high-performance computing environments.
So what was it that Acxiom did so well with its Grid environment that caught EMC's attention? To put it simply: data management.
Acxiom has a very popular data integration application called AbiliTec. To scale and support that application's growing transaction volume, Acxiom took the "scale-out" commodity hardware route (as Google and Amazon have done) and then built its own Grid software to manage the new environment. In an article on Acxiom's environment last year, Computerworld reported that its Grid had grown to 6,000 Linux nodes, processing more than 50 billion AbiliTec transactions per month.
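A quick back-of-envelope calculation from those reported figures (assuming an even spread of load over a 30-day month, which is an assumption, not something Computerworld stated) gives a rough sense of the per-node workload:

```python
# Back-of-envelope from the reported figures: 50 billion transactions
# per month spread across 6,000 nodes, assuming an even load and a
# 30-day month. Illustrative only.
transactions_per_month = 50e9
nodes = 6_000
seconds_per_month = 30 * 24 * 3600  # 2,592,000

per_node_per_second = transactions_per_month / nodes / seconds_per_month
print(f"~{per_node_per_second:.1f} transactions per node per second")  # ~3.2
```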
Performance and reliability have been at the heart of Acxiom's data management Grid story, but there are other very specific enterprise data challenges where Grid has already been battle-hardened and proven in research and science. Today, enterprises are increasingly evaluating the capabilities of Grid infrastructure to resolve data management issues ... above and beyond data processing horsepower.
Transport of Massive Quanta of Data
True, your typical enterprise is probably not going to be dealing with data at the petabyte (1 quadrillion bytes) level anytime soon, as particle physicists in the e-Science realm do today.
However, many commercial entities do depend on the transport of enormous files across distributed networks on a daily basis. Consider the British Broadcasting Corporation, for which one hour of pre-processed high-definition broadcast averages about 280 gigabits in sheer data size. Such organizations are working with Grid technologies today to make their data assets accessible to field reporters and users across a distributed network.
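To put that figure in perspective, here is a rough, idealized calculation of how long 280 gigabits (about 35 gigabytes) takes to move at common line rates, ignoring protocol overhead and contention:

```python
# One hour of HD content at ~280 gigabits (~35 GB): idealized
# transfer times at common line rates, ignoring overhead.
content_gigabits = 280

for link_mbps in (100, 1_000, 10_000):
    seconds = content_gigabits * 1_000 / link_mbps
    print(f"{link_mbps:>6} Mbps: {seconds / 60:.1f} minutes")
# 100 Mbps  -> ~46.7 minutes
# 1 Gbps    ->  ~4.7 minutes
# 10 Gbps   ->  ~0.5 minutes
```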
Clearly, moving large data sets at high speeds between distributed sites is a common challenge across many vertical industries today. The oil and gas industry is perhaps the poster child for moving large data sets, which it accumulates through seismic surveys and reservoir analysis. Getting the "whole picture" to make sound business decisions requires pulling large quanta of data from many different locations.
Other markets with massive data transport requirements include the automotive industry (computer-aided analysis and simulation), the semiconductor industry (mask layout based on instruction sets), and the pharmaceutical industry (molecular matching and chiral synthesis), to name just a few.
Getting the Data Out of Complex Storage Systems
Grid pros have popularized the expression that "access to the data is as important as access to the compute resources." In enterprises, the challenge with data access (beyond the sheer size of the data sets) is sometimes the complexity of the protocols associated with the storage systems.
A great deal has been accomplished within e-Science grids to overcome incompatible data storage protocols, most notably through the GridFTP standard. Implementations of GridFTP are built upon the file transfer protocol of the early Internet days, which makes it easier to pull data out of any file or storage system that uses a flat or hierarchical naming scheme and is connected to a TCP/IP network (which applies to the majority of enterprise storage systems in heavy use today).
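Because GridFTP extends classic FTP (layering security, parallel streams and partial-file transfers on top), the underlying access pattern can be illustrated with nothing more than Python's standard ftplib. The sketch below is illustrative only -- the host and paths are hypothetical, and a real GridFTP transfer would go through a client such as the Globus Toolkit's globus-url-copy with GSI credentials:

```python
from ftplib import FTP

# Minimal sketch: pull a file out of any FTP-speaking storage system
# over TCP/IP. GridFTP layers GSI security, parallel TCP streams and
# partial/striped transfers on top of this same protocol base.
# "ftp.example.org" and the paths below are hypothetical.
def fetch(host: str, remote_path: str, local_path: str) -> None:
    with FTP(host) as ftp:
        ftp.login()  # anonymous login; GridFTP would use GSI credentials
        with open(local_path, "wb") as f:
            ftp.retrbinary(f"RETR {remote_path}", f.write)

fetch("ftp.example.org", "/datasets/seismic/survey-001.dat", "survey-001.dat")
```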
End-to-End Data Coordination
The reality for a large IT organization today is an environment that includes multiple data centers, each a distinct IT island. While each data center may be well-managed as far as compute power goes -- and there's no need for better utilization (they're happy to just buy more commodity boxes) -- these organizations face a "Wild West" when it comes to managing the data between those different islands. There are common scenarios where enterprises have a distributed organization with large data sharing and distribution needs -- anywhere from replicating data between data centers and clusters, to better flow management, to improving collaboration among distributed teams, to better analysis.
Grid allows these enterprises to tie these multiple IT islands together without ripping out and replacing existing infrastructure. If a company has a group of users in the United States with one large set of data, and another group in Japan with other large sets of data, it is often not practical to move the data ... but it is possible to run jobs against that data remotely instead.

This is where Grid thrives. Organizations are no longer pressured to either move the compute or move the data. By knitting distinct IT islands together for computation as well as data, with security that overlays existing, often hairy security environments, organizations can begin to tame the Wild West and leave the rip-and-replace mentality behind.
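A hedged back-of-envelope sketch makes the tradeoff concrete. The data set size, link speed and efficiency factor below are illustrative assumptions, not measurements, but they show why dispatching a job to where the data lives so often beats shipping the data:

```python
# Back-of-envelope: move the data, or move the compute?
# All figures are illustrative assumptions.

def transfer_hours(dataset_gb: float, link_mbps: float,
                   efficiency: float = 0.7) -> float:
    """Estimated hours to ship a data set over a WAN link."""
    gigabits = dataset_gb * 8
    effective_mbps = link_mbps * efficiency  # protocol/contention overhead
    return gigabits * 1_000 / effective_mbps / 3600

# A hypothetical 2 TB data set between the US and Japan on a 155 Mbps link:
hours = transfer_hours(dataset_gb=2_000, link_mbps=155)
print(f"~{hours:.0f} hours to move the data")  # roughly 41 hours

# If the remote site can run the job in a few hours, sending the job
# to where the data lives wins by a wide margin.
```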
Understanding the Similarities Between Science and Enterprise Data
For many applications, the data environment in enterprises is increasingly similar in nature to that encountered in e-Science and research grids today.
As Grid pioneer Carl Kesselman says, "Scientific data tends to be reasonably large in size and tends to be images, numeric data or experimental data. There are many enterprise-level data and data mining types of activities that have those characteristics. For example, we're looking at large-scale data mining in inventory management and decision support types of data operations which, at the end of the day, is no different from the data environment in a company like Acxiom."
A distinguishing feature of science data is that it may be represented using a mixture of technologies, with the data itself being file-based and the supporting metadata stored in a relational database -- but in the end it is all structured data. The notion that enterprise data is inherently different from science data is a misconception, one that arises because people equate "file-based" data with "unstructured" data. As the volume of data managed by enterprise applications increases, we will see more instances of this type of "hybrid" data set, where Grid data management technologies excel.
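As a minimal sketch of that hybrid pattern, consider a metadata catalog kept in a relational store that points at file-based bulk data -- the same pattern Grid replica and metadata catalogs implement at scale. The schema and paths below are hypothetical, using Python's built-in sqlite3:

```python
import sqlite3

# Hybrid data set sketch: bulk data lives in files, searchable metadata
# lives in a relational table pointing at those files. Hypothetical schema.
conn = sqlite3.connect("catalog.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS datasets (
        logical_name TEXT PRIMARY KEY,  -- stable name users query by
        file_path    TEXT NOT NULL,     -- where the bulk data lives
        instrument   TEXT,              -- experimental provenance
        acquired     TEXT               -- ISO-8601 acquisition date
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO datasets VALUES (?, ?, ?, ?)",
    ("survey-001", "/data/seismic/survey-001.dat", "vibroseis", "2006-01-15"),
)
conn.commit()

# Query the structured metadata to locate the file-based data:
row = conn.execute(
    "SELECT file_path FROM datasets WHERE logical_name = ?", ("survey-001",)
).fetchone()
print(row[0])  # -> /data/seismic/survey-001.dat
```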