February 20, 2013
SEATTLE, Wash., Feb. 20 – Opscode, the leader in cloud infrastructure automation, today announced that Amazon Web Services' (AWS) new OpsWorks application management solution uses Opscode Chef as the framework to automate everything from configuring server instances to deploying applications in AWS cloud environments. AWS OpsWorks comes preconfigured with a default collection of Chef cookbooks for automating standard infrastructure operations, ensuring maximum consistency and control throughout the application development and deployment process. Using Opscode Chef, AWS OpsWorks delivers a flexible, automated, end-to-end solution for simplifying application management and speeding the path to innovation.
"As enterprise organizations look to cloud computing for leverage in the race to market, the scale and complexity of cloud deployments can quickly outpace the skills available to manage them. Automation solves this skills gap by giving IT the tools necessary to effectively build and manage large-scale cloud infrastructure," said Adam Jacob, Chief Customer Officer, Opscode. "Opscode already has hundreds of customers using our Chef-based products to automate AWS clouds, so OpsWorks is not only a logical extension of Chef's benefits for AWS users, but further validation of Chef as a key component in any cloud-based application management tool chain."
AWS OpsWorks uses Opscode Chef code recipes to automate, simplify, and accelerate the entire application lifecycle – from configuration to staging, deployment to shutdown – all with secure control. Using the Chef framework, OpsWorks provides an easy, step-by-step, highly repeatable and consistent process for building, managing, and deploying applications in AWS clouds. OpsWorks comes with a default set of Chef cookbooks that contain recipes to handle standard operations, including setting up and configuring application servers and deploying applications, across a wide range of scenarios. Because OpsWorks uses Chef, users can easily leverage hundreds of community-built configurations such as PostgreSQL, Nginx, and Solr. OpsWorks also makes it easy to upload and implement custom Chef recipes to perform additional tasks, including specialized configurations and application builds.
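Chef recipes of the kind OpsWorks runs are short, declarative Ruby resources. The following is a minimal illustrative sketch of that style, not code from the actual OpsWorks cookbooks; the template name and file paths are hypothetical:

```ruby
# Illustrative Chef-style recipe: install Nginx, lay down its configuration,
# and ensure the service is enabled and running.
# 'nginx.conf.erb' is a hypothetical template, not from the OpsWorks cookbooks.

package 'nginx' do
  action :install
end

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  owner  'root'
  group  'root'
  mode   '0644'
  # Reload Nginx whenever the rendered configuration changes.
  notifies :reload, 'service[nginx]'
end

service 'nginx' do
  action [:enable, :start]
end
```

Because each resource declares desired state rather than imperative steps, chef-client can apply the same recipe repeatedly and converge the node to that state, which is what makes the process repeatable across instances.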
The open source Chef community features tens of thousands of active users, more than 1,000 individual contributors, 175 corporate contributors, and 800 cookbooks, providing a rich ecosystem of support for AWS customers looking to make the most of their investment in the cloud.
Opscode's pioneering software, Chef, is an open-source systems integration framework built specifically for automating at scale. No matter how complex the realities of business, Chef makes it easy to deploy servers and scale applications throughout an entire infrastructure. Through a combination of configuration management and service-oriented architectures, Chef makes it easy to create an elegant, fully automated infrastructure while simplifying systems management. Chef is available as an open source download, a SaaS subscription, or as software installed behind the user's firewall.
Opscode is the leader in infrastructure automation. Opscode helps companies of all sizes develop fully automated server infrastructures that scale easily and predictably; can be quickly rebuilt in any environment; and save developers and systems engineers time and money. Opscode's team is comprised of web infrastructure experts responsible for building and operating some of the world's largest websites and cloud computing platforms.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and apply them to large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types featuring both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.