August 11, 2008
Did anyone actually think a conference called Next Generation Data Center (NGDC) would come and go without addressing "the cloud?" In 2008 -- a year destined to go down in the IT annals as the "Year of the Cloud" -- that's not even a possibility. However, cloud computing wasn't the only topic discussed at the show, and even when it was the paradigm du jour, its presentation ranged from "this is what it is" to "this is how it looks" to "this is how we're using it -- today." (And I didn't even attend all of the sessions dedicated to cloud computing.)
The whole NGDC/LinuxWorld show (held last week in San Francisco) kicked off with a keynote by Merrill Lynch Chief Technology Architect Jeffrey Birnbaum, who outlined the investment bank's move to "stateless computing." Actually, he explained, it's not so much about being stateless as it is about where the state is. Merrill Lynch is moving from a dedicated server network to a shared server network, functioning essentially as a cloud that allows Merrill Lynch to provision capacity rather than machines.
Aside from architectural change, Birnbaum says another key element of Merrill Lynch's stateless infrastructure is its enterprise file system, which he believes really should be called an "application deployment system." In a namespace environment like the Web, all the components needed for an application to run are referenceable through the file system, thus negating the need for heavy-duty software stacks and golden images. The file system works via a combination of push and pull, or of replication and caching, said Birnbaum. The strategy also works for virtual desktops, he said, with all applications -- including the operating system -- being streamed to the thin client.
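The push/pull idea Birnbaum described can be sketched in a few lines. This is an illustrative toy, not Merrill Lynch's actual system: components are referenced by name in a shared namespace, hot components are pushed (replicated) to a node ahead of demand, and everything else is pulled and cached on first reference.

```python
class ComponentStore:
    """Toy model of a deployment file system: push = replication, pull = caching."""

    def __init__(self, central):
        self.central = central      # authoritative copies, keyed by name
        self.cache = {}             # this node's local cache

    def push(self, name):
        """Replicate a hot component to this node ahead of demand."""
        self.cache[name] = self.central[name]

    def resolve(self, name):
        """Pull-on-miss: fetch and cache a component the first time it is referenced."""
        if name not in self.cache:
            self.cache[name] = self.central[name]   # stands in for a network fetch
        return self.cache[name]

# Hypothetical component names, for illustration only.
central = {"libpricing.so": b"<binary>", "riskapp": b"<binary>"}
node = ComponentStore(central)
node.push("libpricing.so")          # replication (push)
app = node.resolve("riskapp")       # caching (pull)
```

Because every node resolves components the same way, a failed node carries no unique state -- a replacement simply re-pulls what it needs.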
But keeping things lightweight and flexible is only part of the challenge; workload management also is important. Birnbaum says widespread virtualization is a key to this type of infrastructure, but some applications can't handle the performance overhead imposed by running in a virtual environment. For these types of applications, a stateless computing platform needs the ability to host applications either physically or virtually. Additionally, says Birnbaum, everything has to be policy-based so primary applications get their resources when they need them. On the workload management front, Merrill Lynch is working with Evergrid, Platform Computing and SoftModule.
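A minimal sketch of that kind of policy-based placement, with names and rules invented for illustration: workloads that can't tolerate virtualization overhead go to physical hosts, everything else goes to virtual capacity, and higher-priority workloads are placed first so primary applications get resources when they need them.

```python
def place(workloads, physical_slots, virtual_slots):
    """Assign each workload to physical or virtual capacity under a simple policy."""
    placements = {}
    # Policy: primary (high-priority) workloads are considered first.
    for w in sorted(workloads, key=lambda w: w["priority"], reverse=True):
        if w["needs_bare_metal"] and physical_slots > 0:
            placements[w["name"]] = "physical"
            physical_slots -= 1
        elif not w["needs_bare_metal"] and virtual_slots > 0:
            placements[w["name"]] = "virtual"
            virtual_slots -= 1
        else:
            placements[w["name"]] = "queued"    # no capacity under policy
    return placements

# Hypothetical workloads, for illustration only.
demo = place(
    [{"name": "pricing-engine", "priority": 9, "needs_bare_metal": True},
     {"name": "batch-report",   "priority": 2, "needs_bare_metal": False}],
    physical_slots=1, virtual_slots=1)
```

A production scheduler would weigh far more dimensions (memory, locality, licensing), but the shape is the same: declarative policy in, placements out.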
For the folks concerned about capital expenditure, the best part about Merrill Lynch's stateless vision is that it can be done on mostly (if not entirely) commodity hardware. Because the state is in the architecture instead of an individual machine, Birnbaum says you can buy cheaper, less redundant and less specialized hardware, ditching failed machines and putting the work elsewhere without worry.
One of the big business benefits of stateless computing at Merrill Lynch is that it lets the financial services leader maximize utilization of existing resources. If someone needs 2,000 servers for an exotic derivatives grid and the company is only at 31 percent utilization, it has that spare capacity and doesn't have to buy those additional servers, Birnbaum explained. Offering some insight into the financial mindset, Birnbaum added that Merrill Lynch buys new servers when it reaches 80 percent utilization, therefore ensuring a capacity cushion in case there is a spike.
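The arithmetic behind that rule is worth making explicit. Here is a back-of-the-envelope version, with an assumed fleet size (the article gives only the utilization figures): new demand is absorbed from spare capacity, and a hardware buy is triggered once utilization crosses 80 percent.

```python
def can_absorb(fleet_size, utilization, requested):
    """Can 'requested' servers' worth of work fit in the existing fleet?"""
    in_use = fleet_size * utilization
    return in_use + requested <= fleet_size

def buy_trigger(utilization, threshold=0.80):
    """Merrill Lynch's stated rule: buy new servers at 80 percent utilization."""
    return utilization >= threshold

fleet = 10_000                          # hypothetical shared-pool size
print(can_absorb(fleet, 0.31, 2_000))   # 3,100 in use + 2,000 requested <= 10,000
print(buy_trigger(0.31))                # well under the 80 percent cushion
```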
Speaking less about a real-world internal cloud deployment and more about the building blocks of cloud computing was Appistry's Sam Charrington. One of his key takeaways was that while virtualization is among cloud computing's driving technologies, a bunch of VMs does not equal a cloud. It's great to be able to pull resources or machines from the air, Charrington explained, but the platform needs to know how to do it automatically.
Beyond getting comfortable with underlying technologies and paradigms like virtualization and SOA, Charrington also advised would-be cloud users to get familiar with public clouds like Amazon EC2, GoGrid and Google App Engine; inventory their applications to see what will work well in the cloud; and to get a small team together to plan for and figure out the migration.
Looking forward, Charrington says the cloud landscape will consist not only of the oft-discussed public clouds like EC2, but also will include virtual private clouds for specific types of applications/industries (like a HIPAA cloud for the medical field) and private, inside-the-firewall clouds. Citing The 451 Group's Rachel Chalmers, Charrington said the best CIOs will be the ones who can best place applications within and leverage this variety of cloud options.
The cloud also was the focus of grid computing veteran Ravi Subramaniam, principal engineer in the Digital Enterprise Group at Intel. Subramaniam opened his presentation by noting that cloud computing is not "computing in the clouds," mainly because whether it is done externally or internally, cloud computing is inherently organized, and users know the provider -- be it Amazon, Google or your own IT department. Illustrating a sort of cloud version of Newton's third law, Subramaniam pointed out that for every one of cloud computing's cons, there is an equally compelling pro: security issues exist, but CAPEX and OPEX savings can be drastic; end-users might have limited control of the resources, but those resources are simple to use by design; and so on.
Subramaniam focused a good portion of his talk on the relationship between grid computing and cloud computing, positing that the two aren't as different as many believe. However, he noted, coming to this conclusion requires viewing grid as a broad, service-oriented solution rather than something narrow and application-specific. In their ideal form, he explained, grids are about managing workloads and infrastructure in the same framework, as well as about matching workloads to resources and vice versa.
For all of its strengths, though, grid computing does have its weaknesses, among them, Subramaniam said, that it is not straightforward to apply and that it is of limited use in small-scale environments. Cloud computing attempts to simplify grid from the user level, he said, which means utilizing a uniform application model, using the Web for access, using virtualization to mask complexity and using a "declarative" paradigm to simplify interaction. Essentially, Subramaniam summarized, the cloud is where grid wanted to go.
If users approach both cloud computing and grid computing with an open mind and apply broad definitions, they will see that the synergies between the two paradigms are quite strong. The combination of grid and cloud technologies, Subramaniam says, means virtualization, aggregation and partitioning as needed, a pool of resources that can flex and adapt as needed, and even the ability to leverage external clouds to augment existing resources.
Of course, cloud computing wasn't the only topic being discussed at NGDC, and one of particular interest to me was the concept of "virtualization 2.0." In a discussion moderated by analyst Dan Kuznetsky, the panelists -- Greg O'Connor of Trident Systems, Larry Stein of Scalent Systems, Jonah Paransky of StackSafe and Albert Lee of Xkoto -- all seemed to agree that Virtualization 2.0 is about moving production jobs into virtual environments, moving beyond the hypervisor and delivering real business solutions to real business problems.
But the real discussion revolved around what is driving advances in virtualization. Xkoto is a provider of database virtualization, and Lee said he has noticed that the first round of virtualization raised expectations around provisioning, failover and consolidation, and now users want more. Even in the usually grounded database space, he noted, DBAs are demanding results like those their comrades in other tiers have seen.
Another area where expectations have increased is availability, said StackSafe's Paransky. While it used to be only transaction-processing systems at big banks that demanded continuous availability, Paransky quipped (although not without an element of truth) that it's now considered a disaster if e-mail goes down for five minutes -- and God forbid Twitter should go down. People just expect their systems and applications will always be available, and they're expecting virtualization to help them get there.
Lee added that once you jump in, you have to swim, and users want to continue to invest in virtualization technologies.
However, there are inhibitors. Lee contends that adopters of server virtualization solely for the sake of consolidation risk backing themselves into a corner by relying on fewer boxes to run the same number of applications. If one box goes down, he noted, the effect is that much greater.
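Lee's point can be put in numbers. With an even spread of applications across hosts, the share of applications lost when a single host fails is simply one over the host count -- so a 5:1 consolidation multiplies the blast radius fivefold. The counts below are illustrative:

```python
def blast_radius(apps, hosts):
    """Fraction of applications affected when one host fails (even spread assumed)."""
    apps_per_host = apps / hosts
    return apps_per_host / apps      # simplifies to 1 / hosts

before = blast_radius(apps=100, hosts=50)   # pre-consolidation
after = blast_radius(apps=100, hosts=10)    # after 5:1 consolidation
```

This is why consolidation-only adoption tends to force the next investment -- in failover and live migration -- that the panelists described.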
Fear of change also seems to be inhibiting further virtualization adoption. Scalent's Stein said companies see the value of virtualization, but getting them to overcome legacy policies around new technology can be difficult. What's more, he added, is that it's not as easy as just ripping and replacing -- virtualization needs to work with existing datacenters. Paransky echoed this concern, noting that virtualization can mean uncontrolled change, which is especially scary to organizations with solid change management systems.
Also, he noted, Virtualization 1.0 isn't exactly past tense, as 70 to 80 percent of IT dollars are spent on what already is there. Although mainframes aren't sexy, Paransky told the room, people still have them because of this compulsion to improve and maintain existing systems rather than move to new ones.
Moderator Kuznetsky was not oblivious to these obstacles, asking the panel what will drive organizations to actually make the leap to Virtualization 2.0, especially considering the general rule that organizations hate to change anything or adopt new technologies. Xkoto's Lee commented that the IT world responds to pain, resisting change for the sake of change and holding out until there are real pain points.
Paransky took a more forceful stance, stating that organizations no longer have the luxury of resisting change the way they did in the past. Customers pay the bills, he says, and they don't like a turtle-like pace of change -- they want dynamism. He noted, however, that organizations don't hate change because they think it is bad, but because it brings risk. The trick is balancing the benefits that virtualization can bring with the need to keep things up and running.