October 09, 2012
FRAMINGHAM, Mass., Oct. 9 — The total number of datacenters (of all types) in the United States declined for the first time in 2009, falling by 0.7%. The decline was triggered by the economic crisis of 2008 and the resultant closing of thousands of remote locations housing server closets and rooms. At the same time, total datacenter capacity grew by slightly more than 1% as larger datacenter environments continued to expand despite the economic slowdown. According to new research from International Data Corporation (IDC), these trends have continued in the years since 2009 and reflect a major change in datacenter and IT asset deployment that will accelerate further in coming years.
The dynamics driving these changes in the U.S. datacenter market center on the fast-growing array of applications and devices used to communicate and conduct business, the rapid digitization of vast amounts of unstructured data, and the desire to collect, store, and analyze this information in ever-greater volume and detail. These dynamics have had a significant impact on how businesses build, organize, and invest in datacenter facilities and assets.
"CIOs are increasingly being asked to improve business agility while reducing the cost of doing business through aggressive use of technologies in the datacenter," said Rick Villars, vice president, Datacenter and Cloud Research at IDC. "At the same time, they have to ensure the integrity of the business and its information assets in the face of natural disasters, datacenter disruptions, or local system failures. To achieve both sets of objectives, IT decision makers have had to rethink their approach to the datacenter."
The most notable factor reshaping datacenter dynamics was the dramatic increase in the use of server virtualization to consolidate server assets. Virtualization and server consolidation drove significant declines in physical datacenter size and eliminated the need for many smaller datacenters as applications were moved to larger central datacenters. It also made investments in power and energy management that much more critical for datacenter managers.
While the aggressive use of virtualization has reduced the rate of growth in server deployments in datacenters, the creation, organization, and distribution of files and rich content are creating a rapid and sustained increase in storage deployments. One of the key characteristics of the content explosion is data centralization, driven by performance, compliance, and scale requirements. As a result, midsize and large datacenters are the main segments where the content explosion is having a major impact.
A third factor shaping the datacenter dynamic has been the shift toward a cloud model for application, platform, and infrastructure delivery. Here the focus is on extending the value and scale of virtualization by boosting operational efficiency and improving IT agility. Along with the content explosion, the buildout of public cloud offerings is driving major growth in the number and size of larger datacenters.
Combined, these factors will continue to drive a slow but steady decline in the number and size of smaller internal datacenters. For similar reasons, large internal datacenters will not grow at anywhere near the same rate as very large datacenters operated by service providers. IDC expects the total number of datacenters in the U.S. to decline from 2.94 million in 2012 to 2.89 million in 2016. This decline will be concentrated in internal server rooms and closets, with a very small decline in mid-sized local datacenters. Despite the slight decline in total datacenters, total datacenter space will increase significantly, growing from 611.4 million square feet in 2012 to more than 700 million square feet in 2016. By the end of the forecast period, IDC expects service providers will account for more than a quarter of all large datacenter capacity in place in the United States.
The IDC report, U.S. Datacenter 2012-2016 Forecast (Doc #237070), provides a census of U.S. datacenters by size, sophistication, and ownership. The report forecasts datacenter investment plans through 2016 and assesses the impact of changing industry business models, as well as IT and network developments, on datacenter design, build, and management. The report also includes a new datacenter taxonomy based on a multitude of factors, including scope of IT personnel control, physical location, types of applications supported, power and cooling, downtime, floor area, and staff skill sets.
International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. IDC helps IT professionals, business executives, and the investment community to make fact-based decisions on technology purchases and business strategy. More than 1,000 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. For more than 48 years, IDC has provided strategic insights to help our clients achieve their key business objectives. IDC is a subsidiary of IDG, the world's leading technology media, research, and events company. You can learn more about IDC by visiting www.idc.com.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India, developed a job scheduling system, which they call Service Level Agreement (SLA) scheduling, intended to deliver resource provisioning comparable to that of in-house systems. They combined it with an on-demand resource provisioner to optimize virtual machine utilization.
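The summary above gives no implementation details, so the following is only an illustrative sketch of the general idea (admit jobs against SLA deadlines on existing virtual machines, and provision a new VM on demand when no existing one can meet a deadline). All names and the earliest-deadline-first policy here are assumptions, not the researchers' actual design:

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    free_at: float = 0.0  # time when this VM finishes its queued work

@dataclass(order=True)
class Job:
    deadline: float                      # SLA deadline (sort key)
    runtime: float = field(compare=False)
    name: str = field(compare=False, default="")

def schedule(jobs, vms, now=0.0):
    """Place jobs earliest-deadline-first on the soonest-free VM;
    provision a new VM on demand when the SLA cannot be met."""
    placements = {}
    for job in sorted(jobs):
        idx, best = min(enumerate(vms), key=lambda p: p[1].free_at)
        start = max(now, best.free_at)
        if start + job.runtime > job.deadline:
            best = VM(free_at=now)       # on-demand provisioning
            idx = len(vms)
            vms.append(best)
            start = now
        best.free_at = start + job.runtime
        placements[job.name] = (idx, start)
    return placements

vms = [VM()]
jobs = [Job(10, 4, "a"), Job(5, 4, "b"), Job(6, 4, "c")]
placements = schedule(jobs, vms)
# Job "c" cannot finish by its deadline behind "b", so a second VM
# is provisioned; "a" then reuses the first VM, keeping both busy.
```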
Experimental scientific HPC applications are continually being moved to the cloud, as covered here in several capacities over the past few weeks. Among those stories, CloudSigma co-founder and CEO Robert Jenkins penned an article for HPC in the Cloud discussing the emergence of cloud technologies to supplement the research capabilities of big scientific initiatives like CERN and the European Space Agency (ESA)...
When considering moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St Andrews produced an intriguing report on the state of cloud computing, paying significant attention to the problems facing it.
Jun 19, 2013 |
Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, trained at San Francisco's Hack Reactor, an institute designed for intensive, fast-paced programming instruction, put together a program based on the N-Queens algorithm by the University of Cambridge's Martin Richards and modified it to run in parallel across multiple machines.
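The article does not describe their implementation, but the standard way to parallelize N-Queens is to partition the search by the first queen's column: each resulting subtree is independent, so partitions can run on separate processes or machines with no coordination. A minimal single-machine sketch (using Richards-style bitmask backtracking and a process pool; the function names are illustrative):

```python
from multiprocessing import Pool

def count_solutions(n, row=0, cols=0, diag1=0, diag2=0):
    """Count N-Queens solutions by bitmask backtracking."""
    if row == n:
        return 1
    total = 0
    free = ~(cols | diag1 | diag2) & ((1 << n) - 1)
    while free:
        bit = free & -free          # lowest available column
        free -= bit
        total += count_solutions(n, row + 1,
                                 cols | bit,
                                 (diag1 | bit) << 1,
                                 (diag2 | bit) >> 1)
    return total

def count_from_first_col(args):
    """Count solutions with the first-row queen fixed in column c."""
    n, c = args
    bit = 1 << c
    return count_solutions(n, 1, bit, bit << 1, bit >> 1)

def parallel_nqueens(n, workers=4):
    # Each first-column placement is an independent subproblem,
    # so the subtotals can simply be summed at the end.
    with Pool(workers) as pool:
        return sum(pool.map(count_from_first_col,
                            [(n, c) for c in range(n)]))

if __name__ == "__main__":
    print(parallel_nqueens(8))  # 92 solutions on an 8x8 board
```

Spreading the same partitions across multiple machines, as the Hack Reactor team did, would replace the process pool with a distribution layer, but the decomposition is identical.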
Jun 17, 2013 |
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service and has partnered with Verne Global, whose Icelandic datacenter is known for its green computing credentials.
Jun 12, 2013 |
Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. In response, IBM released a set of Redbooks, in part to assist institutions in moving high performance computing applications to the cloud.
Jun 06, 2013 |
The San Diego Supercomputer Center launched a public cloud system for universities in the area designed specifically to run on commodity hardware with high performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California system.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.