October 30, 2012
AUSTIN, Texas, Oct. 30 — Infochimps, the leading Big Data Platform-as-a-Service provider for the enterprise, today announced the addition of the open source stream processing technologies Storm and Kafka to its managed, hosted service. Infochimps is the first in the industry to leverage these technologies in a fault-tolerant, linearly scalable platform offering.
Storm and Kafka now power the Infochimps Data Delivery Service (DDS), the platform component that performs data collection, transport, and complex in-stream processing.
Storm is the same technology that enables Twitter to process 400 million tweets per day. Kafka is the foundation of LinkedIn's activity streams and its operational data processing pipeline. Together, Storm and Kafka form the best enterprise-grade real-time streaming analytics solution on the market.
"These are two of the most exciting open source technologies available today," said Jim Kaskade, CEO of Infochimps. "But who can manage and deploy them for the enterprise? Our customers want to leverage the best technologies available, but they want to do so confidently. We're bringing the next generation of complex event processing and real-time capability to literally any enterprise application. Now the business can analyze and personalize to great advantage. That's what real-time is all about."
Storm and Kafka are already in use at a number of high-profile companies like Twitter and Groupon. Now, customers of the Infochimps Big Data Cloud Service can follow suit with stream processing at massive scale, assured that every message is processed reliably and in real time, at any velocity.
"Storm and Kafka are excellent platforms for scalable real-time data processing. We are very pleased that Infochimps has embraced Storm and Kafka for DDS. This new offering gives us the opportunity to supplement our listening and analytics products with Infochimps' data sources, to integrate capabilities seamlessly with our partners who also use Storm, and to retain Infochimps' unique technical team to support and optimize our data pipelines," said Steve Blackmon, Director of Data Sciences at W2O Group.
"The combination of Storm and Kafka is the future of stream processing for Big Data. We're proud to be the first-to-market with an enterprise-grade Storm and Kafka offering," said Flip Kromer, co-founder and CTO of Infochimps. "We have a track record of embracing the bleeding edge of open source, but also making the latest and greatest Big Data innovations enterprise-ready. Storm is doing to real-time processing what Hadoop did to batch processing. We were one of the first companies to embrace Hadoop, and now we're the first to embrace (and commercialize) Storm and Kafka, too."
Stream processing solutions like Storm and Kafka have caught the attention of many enterprises due to their superior approach to ETL and data integration, in-memory analytics, and real-time decision support. Companies are fast realizing that batch processing in Hadoop does not support real-time business needs. As Hadoop adoption continues to expand, Storm and Kafka follow as big data essentials.
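To illustrate the batch-versus-stream distinction the paragraph above draws, consider a simple word-count aggregate. A batch job (Hadoop-style) recomputes the result over the full dataset after the fact, while a stream processor (Storm-style) updates a running aggregate as each message arrives, so an answer is available in real time. The following is a minimal, self-contained Python sketch of that idea; the function names are illustrative only and do not correspond to any Infochimps, Storm, or Kafka API:

```python
from collections import Counter

def batch_count(events):
    """Batch style: one pass over the complete, already-collected dataset."""
    return Counter(word for event in events for word in event.split())

def stream_count(events):
    """Stream style: update the running aggregate as each event arrives,
    exposing an up-to-date snapshot after every message."""
    counts = Counter()
    for event in events:
        counts.update(event.split())
        yield dict(counts)  # current answer, available immediately

events = ["storm kafka", "storm hadoop", "kafka"]
snapshots = list(stream_count(events))

# Both approaches agree on the final totals, but only the stream
# version produced intermediate results while data was still arriving.
assert snapshots[-1] == dict(batch_count(events))
```

In a real deployment the event source would be a durable queue such as Kafka and the incremental update would run inside a distributed topology such as Storm, which is what gives the approach its fault tolerance and linear scalability.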
"Storm has gained an enormous amount of traction in the past year due to its simplicity, robustness, and high performance," said Nathan Marz, Storm creator and senior Twitter engineer. "Storm's tight integration with the queuing and database technologies that companies already use have made it easy to adopt for their stream computing needs."
New as well as existing Infochimps customers will immediately begin taking advantage of the new Storm/Kafka framework.
Infochimps helps businesses unlock the value of their data with unprecedented speed, scale and flexibility. The Infochimps Big Data Platform is a managed cloud service that streamlines building and managing complex Big Data environments, and distills analytics to deliver actionable intelligence faster. With Infochimps, companies can feel confident that they have the fastest way to deploy Big Data environments in public, virtual private, or private clouds. Infochimps is a privately held, venture-backed company with offices in Austin, Texas, and Silicon Valley. For more information, visit http://www.infochimps.com and follow @infochimps on Twitter.
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
Financial institutions are the industry least likely to adopt public cloud services for data storage. Holding one of the most sensitive and heavily regulated data types, personal financial information, banks and similar institutions are mostly moving toward private cloud services, and doing so at great cost.
In this week's hand-picked assortment, researchers explore the path to more energy-efficient cloud datacenters, investigate new frameworks and runtime environments that are compatible with Windows Azure, and design a unified programming model for diverse data-intensive cloud computing paradigms.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
May 08, 2013
For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.