August 07, 2012
MOUNTAIN VIEW, Calif., Aug. 7 — Nimbula, the Cloud Operating System Company, today introduced its elastic Hadoop solution with MapR Technologies, the provider of the open, enterprise-grade distribution for Apache Hadoop, to help customers get their Hadoop cluster running on a Nimbula Director-based cloud in minutes. The solution combines Nimbula Director with MapR's Hadoop distribution and includes templates, recipes and verification tests for running Hadoop on Nimbula Director.
Nimbula Director, Nimbula's private cloud platform, takes bare metal servers with local disks and turns them into a large multi-tenant pool of compute with a self-service provisioning interface enabling the repeated provisioning and deprovisioning of Hadoop and non-Hadoop workloads. The automation of the Hadoop services from MapR and the automation of the underlying instances from Nimbula work together to maintain a fully-functional and highly-available Hadoop cluster.
The elasticity and multi-tenancy of Nimbula Director complement the dependability and security of the MapR Hadoop Distribution, allowing clusters to grow and shrink over time and to be completely isolated from one another – all on a single pool of infrastructure. Customers benefit from:
"Nimbula is excited to be working with MapR to deliver the industry's first turnkey Hadoop solution for private cloud," said Jay Judkowitz, director of product marketing at Nimbula. "With this solution, customers can have the best of two worlds. They can have big data processing from Hadoop with private cloud's ability to deliver low cost shared infrastructure that manages elastic demand between multiple tenants."
The main use cases for elastic Hadoop include:
"MapR is always looking to bring its leading Hadoop technology to more environments to serve more customers and use cases," said Jack Norris, vice president of marketing at MapR. "With the Nimbula elastic Hadoop solution, MapR has a streamlined and simplified way to deliver Hadoop onto private clouds. Nimbula's private cloud, with its focus on security, multi-tenancy and high availability, made it a perfect target for MapR Hadoop deployment."
Nimbula Director is available free of charge for deployments of up to 40 cores. Customers can download a packaged VM template and application definition, load it into their Nimbula Director cloud, launch it, use it for as long as they need, and remove it once their jobs are complete. The complete solution can be downloaded from nimbula.com/solutions/hadoop/mapr.
About MapR Technologies
MapR delivers on the promise of Hadoop, making managing and analyzing Big Data a reality for more business users. The award-winning MapR Distribution brings unprecedented dependability, speed and ease-of-use to Hadoop. Combined with data protection and business continuity, MapR enables customers to harness the power of Big Data analytics. Leading companies including Amazon, Cisco, EMC and Google partner with MapR to deliver an enterprise-grade Hadoop solution. Investors include Lightspeed Venture Partners, NEA and Redpoint Ventures. Connect with MapR on Facebook, LinkedIn, and Twitter.
About Nimbula
Founded by the team that developed the industry-leading Amazon EC2, Nimbula delivers a comprehensive cloud operating system that uniquely combines the scalability and operational efficiencies of the public cloud with the control, security and trust of today's most advanced data centers. Nimbula was named one of the most promising startups in The Wall Street Journal and was dubbed "one of three cloud properties ready to burst" in Fortune. Nimbula is headquartered in Mountain View, California. For more information, visit: http://nimbula.com.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To address them, we present a novel federation model that enables end-users to aggregate heterogeneous resources and apply them to large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational loads that cannot be handled by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.