April 01, 2011
If you look back at the commentary that began when the possibilities of the cloud were just becoming clear, one of the first alarm bells sounded was the question of what it would mean for mainframes. While there is still no telling what the future holds for the data center, some organizations are trying to put their finger on the pulse of computing to see what IT managers are planning.
AFCOM, an association for data center management professionals, released a report this week entitled “The State of the Data Center” to better understand how data centers are adapting to a number of changes in their industry, including the growing rates of cloud adoption.
In addition to providing some insights about disaster recovery, space, energy and security, the report, which is based on survey results from 358 data center managers, concluded that there are threats on the horizon for the trusty mainframe. While the mainframe isn't likely to go down without a long fight, and for some uses may never be displaced at all, those in the mainframe business might find work a little harder to come by in the next several years if AFCOM's crystal ball is correct.
We asked Jill Yaoz, CEO of AFCOM, how cloud computing is shaping this shift away from mainframes, and to what extent it is really happening versus merely being noted as a possibility. Based on their results, she says that “last year only 14.9 percent of data centers had implemented the technology but today that percentage has grown to 36.6 percent, with another 35.1 percent seriously considering it.”
As the AFCOM report indicates, “While historically one of the most critical elements of any data center, today mainframe usage continues to shrink. While we predict mainframes will exist forever in some capacity, their prevalence has been severely diminished.”
In the organization’s view, “cloud computing will continue on this trajectory for the next five years, with 80 to 90 percent of all data centers adopting some form of the cloud during that period.”
In some cases cloud computing is replacing the mainframe because of price concerns. As Yaoz stated, “companies are starting to move certain applications off the mainframe and onto servers, especially because of server virtualization that can save companies significant money.”
She notes, however, that there are “other applications that absolutely require the capability of a mainframe and its high level of processing and computing power. So in that regard, cloud computing is not affecting the decline of mainframe usage because the applications that run on the cloud are more server-based.”
In her opinion, in order to move high performance computing applications to the cloud, “the cloud provider would [need] to have a mainframe with that level of processing power, which is not really possible to do effectively or efficiently.”
The AFCOM figures differ from a report from CA Technologies last year, which suggested that 79 percent of IT organizations considered mainframes to be a key part of their cloud computing strategy. In that report, 82 percent of respondents said they planned to use their mainframe as much as or more than they currently do.
In the CA survey, 55 percent of respondents said they kept mission-critical systems on the mainframe for reliability reasons. Additionally, just under half of those surveyed felt that staying on the legacy platform was the most cost-effective option. Remember, however, that this survey was published by CA Technologies, which only a couple of years earlier had set forth a major push for its Mainframe 2.0 strategy to modernize mainframes.
The debate about mainframes and the role of cloud computing extends to questions about what the real difference is and what makes each attractive. Many of those in the mainframe game might contend that there is nothing new about clouds, and really nothing clouds are capable of that mainframes can't do.
Jon Toigo, CEO of Toigo Partners International, a mainframe consulting company, told ComputerWorld this week that “a mainframe is a cloud” because it is “allocated and de-allocated on demand and made available within a company with security and management controls…all of that already exists in a mainframe.”
However, this brings us back to the question of definitions. If we consider cloud computing's value proposition to lie in dynamic self-service provisioning and easy on-and-off access based on the end user's whims, then mainframes really don't have the advantage, at least for users who can make good, quick use of the resources for their particular applications.

Most mainframe systems are kept behind lock and key, with dedicated guardians keeping track of their operations. While self-provisioning is absolutely possible with some custom tweaks, it is not something that generally happens.
While some companies are still pushing their mainframe strategies forward to include cloud computing (IBM and its zEnterprise, for example, which allows for a “hybrid” approach to mainframes and can also be configured via Tivoli to allow user self-provisioning), there could be other barriers that go beyond hardware or software functionality.
For instance, mainframe licensing (and computing licensing in general until recently) has tied costs to the physical hardware for the duration of the contract. Additionally, distributed software licensing costs can be very high, especially for companies whose IT policy is to buy enough capacity up front to ensure peak needs are met, rather than scaling dynamically based on actual demand.
The release of the CA survey caused a stir and reawakened the debate about mainframe health, just as the AFCOM survey did this week. Surveys like these tend to put folks on either side on edge and invigorate fresh questions about true capabilities.