November 30, 2011
HPCC Systems from LexisNexis Risk Solutions is an alternative to Hadoop and legacy technology
NEW YORK, Nov 30 — HPCC Systems from LexisNexis Risk Solutions is now providing its Thor Data Refinery Cluster, its Big Data processing engine, on the Amazon Web Services platform. HPCC Systems is an enterprise-proven, open source Big Data analytics platform. The Thor Data Refinery Cluster ingests vast amounts of data and then transforms, links and indexes that data, with parallel processing power spread across the cluster's nodes.
Running HPCC Systems' Thor Data Refinery Cluster on the Amazon Web Services (AWS) cloud infrastructure provides a powerful combination designed to make Big Data analytics easier for developers.
Documentation can be found at http://hpccsystems.com/community/docs/aws-install-thor
In addition to the Thor Data Refinery Cluster, HPCC Systems comprises a single architecture, a consistent data-centric programming language, and the Roxie Rapid Data Delivery Cluster. The core of the platform is the Enterprise Control Language (ECL), a declarative, data-centric programming language optimized for large-scale data management and query processing. The Roxie Rapid Data Delivery Cluster provides highly scalable, high-performance online query processing and data warehousing capabilities.
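For illustration, a minimal ECL sketch of the kind of declarative, data-parallel job a Thor cluster executes; the record layout and logical file name below are hypothetical:

// Hypothetical record layout for a distributed file held on Thor
PersonRec := RECORD
    STRING20 FirstName;
    STRING20 LastName;
    STRING2  State;
END;

// Read the distributed file; each Thor node holds a slice of the data
People := DATASET('~tutorial::example::people', PersonRec, THOR);

// Declarative steps: filtering, sorting and counting run in parallel across the nodes
Floridians := People(State = 'FL');
Sorted := SORT(Floridians, LastName, FirstName);

OUTPUT(Sorted);
COUNT(Floridians);

The programmer declares what the result should be; the ECL compiler and the Thor cluster determine how the work is distributed and executed.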
"We are pleased to offer developers the Thor Data Refinery Cluster on AWS as a proven, enterprise-ready resource to give them the scale they need to either test a new idea or run a large project more cost effectively," said Armando Escalante, senior vice president and chief technology officer of LexisNexis Risk Solutions and head of HPCC Systems. "We see this offering as a way to help all types of organizations improve how they manage their IT costs and bring innovation to market."
HPCC Systems grew out of the need for LexisNexis to manage, sort, link, join and analyze billions of records within seconds. Designed by data scientists, HPCC Systems is a data-intensive supercomputing platform that has evolved over more than a decade to serve enterprise customers who need to process large volumes of data in a 24/7 environment.
"Cloud computing is driving a new wave of innovation in both the technology and business of IT. Most organizations will be affected by the changing IT landscape, as the providers and the consumers of cloud services develop new standards, best practices and business models for cloud-based and hybrid computing," said Yefim Natis, Gartner vice president and distinguished analyst. "Vendors and enterprise IT organizations that begin investments in cloud computing early will have a competitive advantage in their respective market segments."
About HPCC Systems
HPCC Systems from LexisNexis Risk Solutions offers a proven, data-intensive supercomputing platform designed for the enterprise to solve Big Data analytical problems. As an alternative to Hadoop and legacy technology, HPCC Systems offers a consistent data-centric programming language, two processing platforms and a single architecture for efficient processing. Customers such as financial institutions, insurance carriers, law enforcement agencies, federal government agencies and other enterprise-class organizations leverage the HPCC Systems technology through LexisNexis products and services. For more information, visit http://hpccsystems.com.
About LexisNexis Risk Solutions
LexisNexis Risk Solutions (www.lexisnexis.com/risk/) is a leader in providing essential information that helps customers across all industries and government predict, assess and manage risk. Combining cutting-edge technology, unique data and advanced scoring analytics, we provide products and services that address evolving client needs in the risk sector while upholding the highest standards of security and privacy. LexisNexis Risk Solutions is part of Reed Elsevier, a leading publisher and information provider that serves customers in more than 100 countries with more than 30,000 employees worldwide.
Source: LexisNexis Risk Solutions