December 01, 2010
In 2008, Andi Baritchi wrote that cloud computing is nothing but a clever detour back to the era of the mainframe.
“I see this whole cloud computing movement as nothing more than a reincarnation of the classic mainframe client-server model. People want painless access to their data and applications from wherever they are, from whatever electronic gizmo they happen to be using,” he wrote, adding that cloud is delivering on this same promise with different marketing behind it and a few variations on services.
This argument has been echoed elsewhere around the web for a few years now (see “Cloud Computing: Is It Old Mainframe Bess in a New Dress?” circa 2008), but the rebuttal that most often nips these questions in the bud is not about access; it is tied to economies of scale, in short, to economics versus productivity.
There have been numerous discussions about how cloud computing is the next great paradigm shift in computing, following the initial mainframe explosion, then on to client/server.
Others have put forth the argument that clouds are, in fact, the culmination of these two separate movements—if not the very revolution capable of maximizing the “best of both worlds.”
A vast number of conversations about cloud computing as a technological and business model have revolved around the various components and innovations that make it viable rather than the core economic rationale. Yet it is in the economic model that the economies of scale and efficiencies are actually realized, and far too often these conversations are overlooked in favor of tech talk.
This week Microsoft published a paper entitled “The Economics of the Cloud for the EU Public Sector” which, while delivering on its title’s specific emphasis on European government clouds, also does a rather thorough job of placing the pre-cloud movements in their broader technological and economic contexts. While it is easy to stay general in an exploration like this, it is worth looking back before we look forward to cloud computing and what it will deliver in the future.
In Microsoft’s view, what is happening now in terms of adoption is similar to the process IT underwent when the client/server movement initially began to take hold. The authors remind us that, “During the mainframe era, client/server was initially viewed as a ‘toy’ technology, not viable as a mainframe replacement. Yet, over time the client/server technology found its way into organizations of all types.”
Similarly, when virtualization technology was first proposed, application compatibility concerns and potential vendor lock-in were cited as barriers to adoption; yet these concerns, too, are being addressed as more organizations prove willing to work within the new paradigm.
Part of what separates these movements and positions cloud as the “best of both worlds” is the economies of scale that neither mainframes nor client/server models could deliver, whether measured by utilization and efficiency, infrastructure investment, or otherwise.
On the mainframe side, the Microsoft authors note that this era was marked “by significant economies of scale due to high up-front costs of mainframes and the need to hire sophisticated personnel to manage the systems. As required computing power increased, cost declined rapidly at first, but only large central IT organizations had the resources and the aggregate demand to justify the investment. Due to the high cost, resource utilization was prioritized over end-user agility; users’ requests were put in a queue and processed only when needed resources were available.”
In the personal computer age and subsequent client/server transition, “the minimum unit of purchase was greatly reduced and the resources became easier to operate and maintain. This modularization significantly lowered the entry barriers to providing IT services, radically improving end-user agility. However, there was a significant utilization tradeoff, resulting in the current state of affairs: datacenters sprawling with servers purchased for whatever need existed at the time, but running at just 5% to 10% utilization.”
Cloud “offers users economies of scale and efficiency that exceed those of a mainframe, coupled with modularity and agility beyond what client/server technology offered, thus eliminating the tradeoff.” These benefits show up in power and server utilization/efficiency and in staffing requirements, as well as in the refinement of large datacenter operation, as the quick sketch below illustrates.
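To make the utilization point concrete, here is a minimal back-of-the-envelope sketch. The dollar figures and rates below are illustrative assumptions for the sake of the example, not numbers taken from the Microsoft paper; only the 5% to 10% utilization range comes from the text above.

```python
# Illustrative comparison of effective cost per *useful* server-hour
# for owned hardware vs. pay-per-use cloud capacity. The monthly cost
# and hourly rate are assumptions, not figures from the cited papers.

HOURS_PER_MONTH = 730

def effective_cost_per_useful_hour(monthly_cost, utilization):
    """Cost of one hour of actually-used capacity, given that owned
    hardware is paid for around the clock but busy only part of the time."""
    return monthly_cost / (HOURS_PER_MONTH * utilization)

owned_monthly_cost = 200.0   # assumed all-in monthly cost of one owned server
cloud_hourly_rate = 0.40     # assumed on-demand price per instance-hour

for utilization in (0.05, 0.10, 0.50):
    owned = effective_cost_per_useful_hour(owned_monthly_cost, utilization)
    print(f"utilization {utilization:>4.0%}: owned ${owned:5.2f}/useful hr "
          f"vs. cloud ${cloud_hourly_rate:.2f}/hr")

# At the 5-10% utilization cited above, each *used* hour of an owned
# server costs many times its nominal rate, because idle hours are paid
# for anyway; a pay-per-use model charges only for the busy hours.
```

At 5% utilization the owned server works out to roughly $5.48 per useful hour under these assumptions, which is the gap that a provider operating at high aggregate utilization can close.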
As far as economies of scale are concerned (and with the other major movements in context), we can go back to a paper published in 2009, “Above the Clouds: A Berkeley View of Cloud Computing,” wherein some of the core issues are explained very clearly and remain just as valid today. All of these points highlight the difference between the mainframe and client/server models and tend to show that cloud computing does indeed offer the best of both worlds.
• In deciding whether hosting a service in the cloud makes sense over the long term, we argue that the fine-grained economic models enabled by cloud computing make tradeoff decisions more fluid; in particular, the elasticity offered by clouds serves to transfer risk (see the sketch after this list).
• Moreover, although hardware resource costs continue to decline, they do so at variable rates; for example, computing and storage costs are falling faster than WAN costs. Cloud computing can track these changes, and potentially pass them through to the customer, more effectively than building one’s own datacenter, resulting in a closer match of expenditure to actual resource usage.
• In making the decision about whether to move an existing service to the cloud, one must additionally examine the expected average and peak resource utilization, especially if the application may have highly variable spikes in resource demand; the practical limits on real-world utilization of purchased equipment; and various operational costs that vary depending on the type of cloud environment being considered.
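The elasticity point in the first bullet can also be sketched numerically. The demand curve and per-server-hour price below are hypothetical, chosen only in the spirit of the Berkeley paper’s peak-versus-average provisioning examples:

```python
# Sketch of the elasticity argument: a fixed datacenter must be
# provisioned for peak demand, while elastic cloud capacity tracks
# actual demand hour by hour. All numbers here are hypothetical.

peak_servers = 500   # capacity a datacenter must buy up front
hourly_demand = [300, 250, 200, 350, 500, 450, 300, 250]  # servers needed
cost_per_server_hour = 1.0  # assume owned and cloud hourly costs are equal

datacenter_cost = peak_servers * len(hourly_demand) * cost_per_server_hour
cloud_cost = sum(hourly_demand) * cost_per_server_hour

print(f"provision-for-peak cost: {datacenter_cost:7.0f}")
print(f"elastic cloud cost:      {cloud_cost:7.0f}")
print(f"average utilization of peak capacity: "
      f"{sum(hourly_demand) / (peak_servers * len(hourly_demand)):.0%}")

# Even when the per-server-hour price is identical, paying only for
# served demand transfers the risk of over-provisioning (and of
# under-provisioning during unexpected spikes) to the provider.
```

Even with identical per-hour pricing, the fixed deployment in this toy example pays for 4,000 server-hours to serve 2,600, which is exactly the risk the bullet describes being transferred to the cloud provider.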
As cloud computing becomes more pervasive and the underlying technology matures, parallels with previous major shifts will be less simple to draw. For now, however, opening this topic to discussion seemed relevant given Microsoft’s piece on the matter.