January 21, 2011
When I was very young, my mother told me to consider the source—to never take authoritative pronouncements at face value. I hadn’t thought much about this back in those more innocent days, but as my cynicism has matured, I’ve learned that behind such declarations often lies an agenda that’s hidden only in vain.
The worst offenders are often the impossibly pithy—slogans that are a little too packaged and pristine to represent any authentic truth. If it sounds too neat and convenient, chances are it’s an agenda-backed instrument of manipulation.
A good example of this is the cautionary refrain from Marc Benioff, CEO of Salesforce.com, and Werner Vogels, CTO of Amazon Web Services:
“Beware of the false cloud!”
These gentlemen are visionaries and revolutionaries in their own right; they have a great deal of credibility, and they’ve made contributions to IT that will be remembered for many decades. But on this particular topic, they’re anything but credible. As the titans of the public cloud, they have an obvious axe to grind.
As Public Enemy protested: “Don’t believe the hype!”
As their argument goes, if you own the hardware, it’s not a cloud.
I’m not buying it. The private cloud is anything but illegitimate.
Cloud is About Agility
IT used to be your cable company. It held the local monopoly for IT services. The wait you experienced was a frustrating but necessary part of your relationship with IT. You had no choice. But now you do. Public cloud has ended the wait.
Why wait three months when three minutes will do?
That question has captured the attention of IT leadership, which realizes that the public cloud has dramatically changed performance expectations for IT.
Traditionally, the CIO was expected to improve performance incrementally year over year; last year’s metrics were next year’s benchmarks and your goal was to ensure the curve was moving in the right direction. Today, the expectation is for a radical transformation in agility and responsiveness—from months to minutes.
Amazon can do it. Why can’t you?
But this question isn’t unleashing a wholesale migration of enterprise workloads to the public cloud; it’s the impetus for the private cloud transformation.
Private Cloud is the Entry Point for Enterprise IT
I’ve yet to see an analyst projection that doesn’t point to the private cloud as the beachhead for enterprise IT organizations making this transformation.
There are three reasons this is the case:
Private Cloud Was the Entry Point for Amazon EC2...
That’s right: Before Amazon EC2 was a public cloud it was a private cloud!
Why? For the same reasons enterprise IT organizations are building private clouds today: Flexibility and agility.
So, to call the private cloud illegitimate isn’t just illogical; it’s a little hypocritical.
It’s important to acknowledge that all of this hullabaloo around clouds may be an issue of semantics. Ultimately, it doesn’t much matter whether you call this internal elastic infrastructure a cloud, a grid or some other such thing.
What matters is the acknowledgement that enterprise IT organizations must become self-service on-demand providers of infrastructure, platforms and applications. In the future, IT must look like a public cloud in its own right.
It’s also important to acknowledge that private clouds are the starting point on this journey and not necessarily the final destination. Most enterprises will want to blend together a variety of internal and external resources to create an integrated “hybrid cloud” that allows workloads to be dynamically retargeted to optimize for price, policy, performance and various service level characteristics.
Of course, this argument deserves one final acknowledgement: I, too, have my own biases and my own axe to grind. I’m pretty sure we all do.
So, don’t take my pronouncements at face value. Consider them as one perspective in forming your own version of the truth.
Jake is a seasoned software marketing executive with a strong product strategy and communications background. Previously, he was SVP of marketing and business development for JustSystems, the largest ISV in Japan and a leader in XML technologies. Before that, Jake was VP of product marketing with Mercury Interactive (now part of HP Software), where he was responsible for the Systinet product line. He joined Mercury through Mercury's $105 million acquisition of Systinet Corporation. Before Mercury, Jake led marketing for two WebSphere products at IBM Software Group, which he joined through the acquisition of Venetica. Prior to Venetica, Jake was director of product marketing with Documentum, Inc. (now part of EMC), which he joined through the acquisition of eRoom Technology.
Jake has a BA in English and political science from the University of New Hampshire and an MBA from the McCallum Graduate School of Business at Bentley College, where he was an American Marketing Association George Hay Brown Scholar.
Posted by Jake Sorofman - January 21, 2011 @ 7:38 AM, Pacific Standard Time
Jake is a software executive, writer and blogger. Based in Raleigh, North Carolina, he is currently the chief marketing officer for rPath. Feel free to contact Jake via email at firstname.lastname@example.org