October 15, 2012
SAN DIEGO, Oct. 15 — From the OpenStack Summit, Opscode, the leader in cloud infrastructure automation, today announced Chef for OpenStack, a centralized, curated collection of code and best practices for using Chef both to build and automate OpenStack infrastructure and to deploy and manage application stacks on top of it.
Chef for OpenStack combines a centralized repository of cookbooks with key features of Hosted Chef and Private Chef to deliver a comprehensive reference framework for improving business agility and operating efficiency with OpenStack. The new solution is supported by Opscode's enterprise services practice and a broad ecosystem of technology partners, including Rackspace, Dell, DreamHost, HP and Intel. Opscode also today announced Knife command-line integration with OpenStack Folsom, enabling users to rapidly create, bootstrap and manage OpenStack Folsom compute instances.
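The Knife integration described above is delivered through an OpenStack plugin for Knife. As a rough illustration only — the exact configuration keys can vary by plugin version, and every value below is a placeholder rather than a real credential or endpoint — a `knife.rb` might carry the cloud connection settings like this:

```ruby
# knife.rb -- illustrative settings for Knife's OpenStack plugin.
# All values are placeholders; consult the plugin's documentation
# for the keys your installed version actually expects.
knife[:openstack_username] = "demo-user"
knife[:openstack_password] = ENV["OS_PASSWORD"]   # keep secrets out of the file
knife[:openstack_auth_url] = "http://keystone.example.com:5000/v2.0/tokens"
knife[:openstack_tenant]   = "demo-tenant"
```

With settings like these in place, creating and bootstrapping a compute instance would be a single command along the lines of `knife openstack server create` with image and flavor options (flag names here are assumptions, not taken from the announcement).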
"OpenStack continues to gain traction as the primary open source platform for deploying private, hybrid and public clouds, with a number of large public cloud providers deploying OpenStack to power innovative service offerings," said Jay Wampold, VP of Marketing, Opscode. "Chef for OpenStack leverages our broad partner ecosystem, deep technical expertise and vibrant community to deliver a comprehensive tool kit for getting the most from OpenStack in the least amount of time."
Featuring contributions from OpenStack participants including Rackspace, Dell, DreamHost and HP, Chef for OpenStack includes five cookbooks – collections of reusable code – enabling rapid deployment and automation of compute, object storage, image, dashboard and identity services.
Versions of the Chef for OpenStack cookbooks will be published to the Intel CloudBuilders community, a cross-industry initiative aimed at making it easier to build, enhance, and operate cloud infrastructure. An open developer and operator community, including a mailing list and an IRC channel, supports Chef for OpenStack. To access the code, collaborate, or learn more about Chef for OpenStack, visit www.opscode.com/openstack.
OpenStack Folsom, the sixth version of OpenStack, automates pools of compute, storage and networking resources so users across the globe can build efficient, automated private and public cloud infrastructures without vendor lock-in. Written by more than 330 contributors, the Folsom release features a continued focus on stability and extensibility.
Opscode is the leader in cloud infrastructure automation. Opscode helps companies of all sizes develop fully automated server infrastructures that scale easily and predictably; can be quickly rebuilt in any environment; and save developers and systems engineers time and money. Opscode's team is comprised of web infrastructure experts responsible for building and operating some of the world's largest websites and cloud computing platforms. More information can be found at www.opscode.com.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of those obstacles.
Financial institutions are the private-sector industry least likely to adopt public cloud services for data storage. Because they hold one of the most sensitive and heavily regulated data types – personal financial information – banks and similar institutions are mostly moving toward private cloud services, and doing so at great cost.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types featuring both CPU and GPU cores.
May 10, 2013
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so with technologies that deliver affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.