October 15, 2012
A scalable, resilient, and secure OpenStack powered cloud for entrepreneurs and developers
LOS ANGELES, Oct. 15 — DreamHost, a global leader in Web hosting and cloud services, today announced DreamCompute, a highly scalable and cost-effective cloud computing platform for Internet entrepreneurs and developers. With DreamCompute's open source, OpenStack Powered infrastructure-as-a-service platform, customers can build and grow in the cloud, powering Web and mobile applications, digital media and e-commerce websites, big data analytics, and test and development environments.
DreamCompute is an exciting new addition to DreamHost's growing portfolio of services, which includes shared, virtual private, and dedicated Web hosting, as well as cloud object storage with DreamObjects. DreamHost sets itself apart from the cloud crowd by developing innovative and flexible Web and cloud services using open source software. New cloud capabilities are continually created, dynamically deployed, and expertly supported. DreamCompute is engineered for scale and efficiency using best-of-breed open source solutions, including the OpenStack cloud platform, scalable Ceph block storage, and Nicira network virtualization. As a champion of the open source community, DreamHost makes major code contributions to open source projects – for example, by working to integrate Ceph into OpenStack.
The newly launched DreamCompute cloud service is highly scalable, quick to provision, resilient, and secure. It is ready for any size workload, with compute instances ranging from 1GB to 64GB of RAM for ultimate scalability. All of this power comes with no capital outlay or lengthy procurement approval process; it can be billed to a credit card, making it easy to expense. Pricing for the services will be released publicly in the coming weeks and, in the tradition of DreamHost, will be very attractive to Internet entrepreneurs and developers. In addition, DreamCompute customers will benefit from DreamHost's long-standing focus on industry-leading customer support.
DreamCompute block storage is powered by Ceph, the open source, massively scalable, distributed storage software created by Sage Weil, founder of DreamHost. Ceph is now developed and supported by Inktank and a growing community of developers worldwide. The software is capable of auto-scaling to the exabyte level and beyond, it runs on off-the-shelf hardware, it is self-healing and self-managing, and it has no single point of failure.
DreamCompute works seamlessly with DreamObjects, DreamHost's low-cost, object-based public storage service built on Ceph. Both DreamCompute and DreamObjects leverage advanced Ceph technology, which is highly scalable and reliable, cost effective, and fully open source. DreamHost customers can now both process and store virtually limitless data in their quest to write and test web-scale applications.
"DreamCompute has been engineered from the operating system up to deliver the next generation cloud compute service that developers are craving," said Simon Anderson, CEO of DreamHost. "With OpenStack virtual machine management, reliable and resilient Ceph block storage, and software-defined networking that truly isolates each instance in the infrastructure, DreamCompute sets a new standard for compute-as-a-service. We're very proud to be a part of delivering the future of the open and scalable cloud."
The OpenStack Foundation
"It's exciting to see DreamHost launch an OpenStack Powered Cloud, expanding the footprint of OpenStack service providers and offering even more choice for our users," said Jonathan Bryce, Executive Director of the OpenStack Foundation. "DreamHost and its engineers have been significant contributors to OpenStack since joining the effort in early 2011, especially in the areas of block storage and secure, flexible networking capabilities. We look forward to their continued participation as we deliver on our mission of creating the most widely adopted open cloud platform."
"With DreamCompute, DreamHost takes service provider innovation in the cloud to a new level," said Steve Mullaney, Vice President, Networking Business Unit, VMware, and formerly CEO of Nicira, Inc. "DreamHost leverages best-of-breed cloud infrastructure solutions, including the Nicira Network Virtualization Platform (NVP), to deliver the true value of cloud. On-demand network provisioning moves the network out of the way and allows cloud customers to increase their business velocity."
"DreamHost is answering the entrepreneur and developer appetite for massively scalable cloud computing with a differentiated service combining deep hosting expertise with cloud scale and data center reliability," said Jay Wampold, VP of Marketing, Opscode. "By using Opscode Private Chef to automate the infrastructure behind DreamCompute, DreamHost is making unlimited computing power available to the masses, delivering a platform for everything from early stage entrepreneurial endeavors to highly sophisticated application development."
"Dell and DreamHost are a winning combination in cloud computing. The new DreamCompute public cloud computing service, based on OpenStack and running Dell's innovative PowerEdge C servers at the core, sets a new standard for open and resilient cloud delivery," said Forrest Norrod, Vice President and General Manager, Dell Server Solutions. "The future of cloud will be paved by companies such as Dell and DreamHost that can deliver exceptional SLAs at very attractive prices. Dell is very pleased to be working with a company on the leading edge of open source innovation to make this future a reality."
"Ubuntu is the reference operating system for OpenStack and is already powering most of the world's largest public clouds. We're excited that DreamHost has selected OpenStack plus Ubuntu to power its new DreamCompute public cloud infrastructure. Both Canonical and DreamHost have been very active in the OpenStack community for some time, and have together integrated innovations such as Ceph object storage into Ubuntu 12.04 LTS, released in April 2012," said Kyle MacDonald, VP Cloud at Canonical. "This is just another example of how we work together to forge the future of the open cloud."
"I originally created Ceph to solve one of the biggest challenges in computing – highly scalable storage. Today it is gratifying to see innovative service providers around the world leveraging Ceph storage in many different ways," said Sage Weil, CEO and Chief Architect at Inktank. "DreamHost funded and supported the Ceph project for many years, so it is very exciting to see DreamHost offering this powerful, low-cost cloud computing platform with Ceph to their customers. Ceph levels the playing field for cloud providers, enabling them to quickly and cost-effectively create cloud offerings that compete with those of the largest vendors."
For more information about the DreamCompute public cloud compute service, please visit the following site: http://dreamhost.com/cloud/dreamcompute
DreamHost is a global Web and cloud hosting provider with over 330,000 customers and 1.2 million blogs, websites and apps hosted. The company offers a wide spectrum of Web and cloud hosting solutions including Shared Hosting, Virtual Private Server (VPS) and Dedicated Server Hosting, Domain Name Registration, the cloud storage service, DreamObjects, and the cloud compute service DreamCompute. Visit http://DreamHost.com for more information.
Ceph is a massively scalable, open source, distributed storage system. It comprises an object store, a block store, and a POSIX-compatible distributed file system. The software platform is capable of auto-scaling to the exabyte level and beyond; it runs on commodity hardware, it is self-healing and self-managing, and it has no single point of failure. Ceph is in the Linux kernel and is currently integrated with OpenStack and other open source cloud stacks. Visit www.ceph.com and www.inktank.com for more information.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational loads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types, using both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.