January 24, 2012
Event brings IT professionals together to develop Chef-based infrastructure automation skills as cloud computing rises to top priority in the enterprise
SEATTLE, Jan. 24 — Opscode today announced the inaugural #ChefConf 2012 User Conference, taking place May 15-17 at the San Francisco Airport Marriott Waterfront in Burlingame, Calif. Presented by the leader in cloud infrastructure automation, #ChefConf will deliver three days of technical sessions, workshops, training and keynotes designed to help businesses maximize the value of their IT investment and accelerate the speed of business. Registration for #ChefConf 2012 is open at chefconf.opscode.com.
"Tens of thousands of people and thousands of companies use Opscode Chef to automate, manage and scale their infrastructure," said Jesse Robbins, co-chair of #ChefConf, and co-founder and chief community officer of Opscode. "This ability is now a critical skill for every software developer, systems engineer and IT professional who must manage ever-increasing scale and complexity."
As noted by InformationWeek, "2011 was the year that cloud computing knocked virtualization off its perch to become the No. 1 strategy for CIOs to deliver business value, according to Gartner." Just two years prior, the cloud ranked 16th on the CIO priority list. According to a study conducted by CSC, there are three main drivers for this move to the cloud: the increasing speed of business, the desire to cut costs, and an increasingly mobile workforce in a global economy, which requires access to information through multiple devices. Companies must be able to respond to customer demand for 24-hour access to data and resources. As a result, IT operations must enable dynamic scalability and provide rapid-response provisioning and deployment of new infrastructure and applications.
"A growing number of organizations, and increasingly enterprises in financial services, media, telecommunications and other key verticals, are working diligently to simplify server configuration and automate application and infrastructure deployment," said Jay Lyman, senior analyst at 451 Research. "Traditional approaches and skills are no longer sufficient to effectively leverage new tools for virtualization, automation and cloud computing. #ChefConf 2012 is one of the places where enterprise IT professionals from companies of all sizes can get hands-on training and insight into overcoming these new technology challenges."
Today, there are more than 1,400 job postings on sites such as Indeed.com and TechCareers.com that require skills for solutions such as Chef. This signals an increasing demand for specialized training.
"Opscode Chef is more than an automation solution; it’s a foundational skillset required by a growing number of companies," said Robbins. "#ChefConf will help IT professionals learn and expand their understanding of Chef so they can lead this movement to the cloud and be rock stars for their companies."
#ChefConf 2012 will build on the collaborative nature of its open-source community and partner ecosystem. In just three years, Opscode's community has grown to more than 11,000 registered users and 380 community cookbooks, supporting everything from Apache and Zabbix to Windows. More than 500 people and 100 organizations are contributing code to Chef, and helping others successfully develop new skills for the growing movement in the cloud. The conference will provide in-depth discussions on the latest trends in IT infrastructure management, DevOps and cloud configuration, as well as engaging panel discussions highlighting customer use cases.
Registration is $1,200 for the May 16-17 conference, which includes the plenary sessions and two full tracks of deeply technical content. Early bird pricing is $800 and is available for those registering by Feb. 29. Registration for the workshops on Tuesday, May 15, costs $400. Rooms are available at the San Francisco Airport Marriott Waterfront for a discounted rate.
Opscode Chef is an open source systems integration framework built for automating the cloud. It allows software developers, engineers, and architects to easily deploy thousands of servers and scale applications throughout an entire infrastructure. Through a combination of configuration management and service-oriented architectures, Chef, Hosted Chef and Private Chef make it easy to create an elegant, fully automated infrastructure while simplifying systems management. For more information, visit www.opscode.com.
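To illustrate the kind of configuration management described above, here is a minimal sketch of a Chef recipe in the Ruby-based Chef DSL. The package, service and template names are hypothetical examples, not taken from the release; a real cookbook would supply the referenced template file.

```ruby
# Hypothetical Chef recipe: install a web server package,
# keep its service running, and manage a templated config file.
package "apache2" do
  action :install
end

service "apache2" do
  action [:enable, :start]
end

template "/etc/apache2/sites-available/default" do
  source "default-site.erb"   # assumed template shipped in the cookbook
  mode "0644"
  # Reload the service whenever the rendered config changes.
  notifies :reload, "service[apache2]"
end
```

Because recipes declare the desired end state rather than imperative steps, the same recipe can be applied repeatedly across thousands of servers, which is what makes the approach scale.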
About Jesse Robbins
Jesse Robbins, co-chair of #ChefConf, is chief community officer of Opscode and served as its founding CEO. Robbins is an award-winning innovator and expert in infrastructure, Web operations and emergency management. He was co-creator and chair of the Velocity Web Performance & Operations Conference, editor of the book "Web Operations: Keeping the Data on Time" and a contributor to O'Reilly Radar. Prior to co-founding Opscode, he worked at Amazon.com under the title "Master of Disaster," where he was responsible for website availability for every property bearing the Amazon brand.
About Opscode
Opscode is the leader in cloud infrastructure automation. Opscode helps companies of all sizes develop fully automated server infrastructures that scale easily and predictably, can be quickly rebuilt in any environment, and save developers and systems engineers time and money. Opscode's team is composed of Web infrastructure experts responsible for building and operating some of the world's largest websites and cloud computing platforms. Opscode is headquartered in Seattle. More information can be found at www.opscode.com.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. We therefore present a novel federation model that enables end users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model was demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where modeling the entire Earth is almost essential to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.