June 28, 2012
SEATTLE, Wash., June 28 — Opscode, the leader in cloud infrastructure automation, today announced that the Opscode Open Source Chef, Hosted Chef and Private Chef solutions fully integrate with Google Compute Engine, delivering full-stack infrastructure automation, from server provisioning and configuration management to continuous delivery of infrastructure and applications. Using Opscode's knife plugin for Google Compute Engine, businesses of all sizes can rapidly create, bootstrap and manage Google cloud resources.
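The knife-google plugin exposes Compute Engine operations through Chef's standard knife command line. The session below is an illustrative sketch only: the zone, machine type, image, and server names are examples, and exact flag names vary across plugin versions.

```shell
# List zones and existing instances in the configured project
knife google zone list
knife google server list --gce-zone us-central1-a

# Create and bootstrap an instance in one step; the new node is
# registered with the Chef server and the given run list is applied
# (names and flags here are illustrative)
knife google server create web-frontend-1 \
  --gce-zone us-central1-a \
  --gce-machine-type n1-standard-1 \
  --run-list 'recipe[apache2]'

# Tear the instance down when it is no longer needed
knife google server delete web-frontend-1 --gce-zone us-central1-a --purge
```

Because provisioning and bootstrapping happen in a single command, the same invocation pattern works against other knife cloud plugins, which is what makes migration between virtual machine services a matter of swapping the plugin rather than rewriting automation.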
"The public cloud holds the potential for true workload mobility, where users can move data, applications and operations between clouds as needed, ensuring the right mix of price and performance," said Christopher Brown, CTO, Opscode. "To realize this potential, public cloud providers need to eliminate barriers to mobility, and Google Compute Engine is doing just that by creating an open, easily accessible cloud platform. By leveraging Opscode Chef to build their applications, customers are able to easily migrate their applications from other virtual machine services to Google Compute Engine with a single command, or vice versa."
Google Compute Engine provides a highly reliable, open, cost-effective compute infrastructure that enables any developer or business to run large scale computing workloads on the same infrastructure that runs Google search, Gmail, and ads. Google's world-class data centers and infrastructure technology provide unparalleled performance and value, and its architecture is optimized for predictability, scalability, security, and flexibility.
Opscode Chef combines with Google Compute Engine to enable the creation and management of cloud resources directly from the command line, ensuring infrastructure is consistent and easily scalable. With Chef, Google Compute Engine users can automate the full compute stack, from server provisioning all the way through application deployment. Opscode Chef provides recipes, reusable configuration templates, for everything from rebuilding environments to configuring resources, ensuring a seamless, enterprise-grade experience. In addition, the open source Chef community features 13,000 registered users, 750 individual contributors, 135 corporate contributors and 500 cookbooks, providing a rich ecosystem of support for Google Compute Engine customers looking to make the most of their investment in the cloud.
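A recipe is plain Ruby written in Chef's declarative resource DSL. The sketch below shows the style in which a node's configuration would be expressed; the package, service, and template names are illustrative examples, not taken from an actual Opscode cookbook.

```ruby
# Illustrative Chef recipe: install Apache, keep it running, and
# render a templated site configuration.

package 'apache2' do
  action :install
end

service 'apache2' do
  action [:enable, :start]
end

# Templates live in the cookbook and are rendered on the node;
# changing the rendered file triggers a service reload.
template '/etc/apache2/sites-available/default' do
  source 'default-site.erb'
  owner 'root'
  group 'root'
  mode '0644'
  notifies :reload, 'service[apache2]'
end
```

Because the recipe describes desired state rather than imperative steps, running it repeatedly converges a node to the same configuration, which is what makes environments rebuildable on Compute Engine or any other provider.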
"In order to help our customers get the most out of our cloud platform products," explains Shailesh Rao, Director of New Products and Solutions for Google Enterprise, "we have worked closely with technology partners to integrate complementary offerings and with services firms to enable them to build powerful new cloud-based solutions that help customers accelerate their success and innovation."
Google Compute Engine partners include services firms, such as systems integrators, developers, and IT consultants, and technology firms, such as software vendors, platform companies, and management and tools vendors. These partners offer complementary services, solutions, and technologies that have been integrated to provide customers with powerful new solutions using Google Compute Engine. For more information on Opscode and its solutions integrated with Google Compute Engine, please visit http://wiki.opscode.com/display/chef/Community+Plugins, and to learn more about the Google Compute Engine, go to http://cloud.google.com.
About Opscode Chef
Opscode's pioneering software, Chef, is an open-source systems integration framework built specifically for automating the cloud. No matter how complex the realities of business, Chef makes it easy to deploy servers and scale applications throughout an entire infrastructure. Through a combination of configuration management and service-oriented architectures, Chef, Hosted Chef and Private Chef make it easy to create an elegant, fully automated infrastructure while simplifying systems management.
Opscode is the leader in cloud infrastructure automation. Opscode helps companies of all sizes develop fully automated server infrastructures that scale easily and predictably; can be quickly rebuilt in any environment; and save developers and systems engineers time and money. Opscode's team is comprised of web infrastructure experts responsible for building and operating some of the world's largest websites and cloud computing platforms. More information can be found at www.opscode.com.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India, developed a job scheduling system, which they call Service Level Agreement (SLA) scheduling, intended to deliver resource provisioning comparable to that of in-house systems. They combined it with an on-demand resource provisioner to optimize virtual machine utilization.
Experimental scientific HPC applications are continually being moved to the cloud, as covered here in several capacities over the last couple of weeks. Among those items, CloudSigma co-founder and CEO Robert Jenkins penned an article for HPC in the Cloud discussing the emergence of cloud technologies to supplement the research capabilities of big scientific initiatives like CERN and the European Space Agency (ESA)...
When considering moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St. Andrews produced an intriguing report on the state of cloud computing, paying significant attention to the problems facing cloud computing.
Jun 17, 2013
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, and to that end has partnered with Verne Global, whose Icelandic datacenter is known for its green computing credentials.
Jun 12, 2013
Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. To support that movement, IBM released a set of its Redbooks publications aimed in part at helping institutions move high performance computing applications to the cloud.
Jun 06, 2013
The San Diego Supercomputer Center launched a public cloud system for universities in the area, designed specifically to run on commodity hardware with high performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California system.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.