March 23, 2011
Hadapt, Inc., which spun out of a joint research endeavor at Yale University, today fleshed out details about what it hopes to bring to the increasingly noisy sphere of big data analysis.
The company’s name provides a subtle hint about the nature of the offering (marrying the words Hadoop and Adapt). The startup plans to provide an analytical platform that will allow users to handle complex analytics workloads across what it calls a “cloud-optimized system.”
This morning the New Haven-based crew announced that it has received its first round of financing to boost efforts behind its patent-pending technologies for high-performance analytics on both structured and unstructured data. Hadapt’s technology is designed to tackle big data in both private and public cloud environments.
According to Hadapt, the company is “adapting and expanding the Hadoop architecture to bring a more complete SQL interface, a patent-pending Adaptive Query Execution capability, and a hybrid storage engine to handle structured as well as unstructured data on a single platform.”
In short, Hadapt is another company seeking to bring the power of Hadoop to enterprise users, in this case by pairing Hadoop with a relational database so that a wider range of data types can be analyzed on a single platform.
The key piece of the company’s offering is its Adaptive Query Execution technology, which dynamically load-balances queries in cloud environments and automatically splits jobs between relational database engines and Hadoop to maximize performance.
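To make the idea concrete, here is a minimal sketch of what such split execution might look like. Since Hadapt’s Adaptive Query Execution is patent-pending and its internals have not been published, the names below (QueryFragment, choose_engine) and the selectivity-based routing rule are assumptions for illustration only, not the company’s implementation.

```python
# A toy sketch of the "split execution" idea described above. Hadapt's
# actual Adaptive Query Execution is patent-pending and not public, so
# every name here is hypothetical and purely illustrative.
from dataclasses import dataclass


@dataclass
class QueryFragment:
    sql: str                 # the fragment's SQL text
    structured: bool         # does it touch relational (structured) tables?
    est_selectivity: float   # estimated fraction of rows surviving filters


def choose_engine(fragment: QueryFragment, threshold: float = 0.05) -> str:
    """Route one fragment of a larger plan to the engine likely to win."""
    if fragment.structured and fragment.est_selectivity <= threshold:
        # Selective predicates over structured data can exploit the
        # indexes and optimizer of a node-local relational engine.
        return "rdbms"
    # Large scans, or anything over unstructured data, fall back to
    # Hadoop's fault-tolerant MapReduce execution.
    return "mapreduce"


if __name__ == "__main__":
    lookup = QueryFragment("SELECT name FROM users WHERE id = 42", True, 0.0001)
    log_scan = QueryFragment("SELECT * FROM raw_logs", False, 1.0)
    print(choose_engine(lookup))    # -> rdbms
    print(choose_engine(log_scan))  # -> mapreduce
```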
According to CEO Justin Borgman, nothing like Adaptive Query Execution is available today. He claims that during testing at Yale the group “was able to document query results 50 times faster than Hadoop with Hive and 600 times faster than Hadoop with HBase.”
On the cloud front, the company’s appeal is that this is native cloud-ware. In other words, as Borgman stated, the software was born for virtualized environments: “unlike other second generation data warehousing companies, Hadapt has optimized the product architecture for big data and big analysis in virtualized environments.”
Hadapt is certainly not the only startup with an eye on the value of big data for enterprise use. Cloudera, for instance, has the same wish to “bring Hadoop to the masses” at the heart of its branding, yet Borgman said during a recent interview that it poses no threat to Hadapt. He considers the two offerings complementary, noting that “Cloudera focuses on tools and services to help ordinary IT staff run Hadoop whereas Hadapt is more fundamentally trying to make Hadoop better.”
Borgman is one of several co-founders behind the startup, which has its public “coming out” party today at the Structure event sponsored by GigaOm.
The founders of Hadapt led a research team at Yale University that created and tested the initial prototype. Dr. Daniel Abadi, Chief Scientist and Co-founder, is a database systems expert and the researcher behind the key Adaptive Analytical Platform at the heart of the company’s model. Abadi maintains a faculty position within Yale’s Computer Science department.
Also supporting the development and research for the Adaptive Analytical Platform is Kamil Bajda-Pawlikowski, who serves as chief software architect and co-founder in addition to his post at Yale. Dr. Avi Silberschatz, the team’s technical advisor (and another co-founder), was one of the visionaries behind the HadoopDB research that the company hopes to capitalize upon. Before joining the Yale computer science faculty, he was VP of the Information Sciences Research Center at Bell Labs.
The only co-founder who wasn’t directly involved in the research and testing is Justin Borgman, who serves as CEO. He comes from software development positions at MIT Lincoln Laboratory and Raytheon and, more recently, from a post as head of product development at Covectra, a counterfeit-prevention company. His Yale connection, however, extends to the university’s School of Management.
Hadapt hopes to wrap up its software for a formal release later in the year.
Posted by Nicole Hemsoth - March 23, 2011 @ 9:24 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in her posts.