September 29, 2010
If we're to listen to all the hype and buzz about cloud computing, it seems that everything is covered, that we do not need to worry, or even think, about anything. It's just going to happen in the cloud, right?
It is hard to ignore the barrage of emails on 10 ways to fix this cloud issue, 8 ways to solve that cloud issue, or the 7 deadly sins of cloud security. One area that is simply not discussed is how to write next-generation applications that can exploit the underlying server architecture in a manner that is simple, intuitive, and sustainable.
Today's cloud platforms are nothing more than racks of multicore x86 processors, network switches and storage. But how do you program them so that you can get multiple cores, processors, or even racks of servers to crunch the application and utilize the underlying parallelism?
Parallel processing, specifically large-scale or massively parallel processing, has been one of the grand challenges from a programmer's perspective. There is no question that multicore, multicluster architectures are here to stay and, compared to the mid-eighties, are very affordable.
In the eighties, several architectures vied for the performance crown: Massively Parallel Processing (MPP), vector architectures, and Symmetric Multi-Processing (SMP). The winner was SMP. The main reason, in my mind, is that SMP offered an easily programmable environment compared to the other architectures of the time. Compilers were developed that let programmers just write code while the compiler took care of the rest. It is true that the more you knew about the dataset and its characteristics, the better the outcome; nevertheless, it was far simpler than trying to decompose the application and figure out how to get it to run on 100 or 1,000 processors. The downside of SMP is that it ran out of steam for most applications at about 12 processors, whereas MPP could go big, really big. The problem with MPP, on the other hand, was that you had to re-engineer the software every time the underlying hardware changed.
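To make that contrast concrete, here is a minimal sketch of the shared-memory style that made SMP so approachable. It uses OpenMP, which postdates the eighties but captures the same model the auto-parallelizing compilers aimed for: write ordinary sequential code, add one hint, and let the compiler and runtime spread the loop across the cores of a single shared address space. (The code is my illustration, not anything from MPT.)

    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {
            a[i] = i;
            b[i] = 2.0 * i;
        }

        /* The only "parallel programming" here is this one hint; the
           data sits in a single shared address space, so no manual
           decomposition is needed. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[42] = %f\n", c[42]);
        return 0;
    }

Build it with something like gcc -fopenmp vecadd.c and the same source runs on one core or many; leave the flag off and the pragma is simply ignored, so the sequential program still works.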
Over the past twenty-five years, not much has changed. Yes, there have been advances in tools, MPI, PVM, etc., but you still have to roll up your sleeves and mess with the code, and there is nothing worse than trying to get old code written by someone else to work on an MPP architecture. Where's the documentation?
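For comparison, here is the same vector addition sketched with MPI for a distributed-memory, MPP-style machine (again my illustration, not MPT's code). The data decomposition, the global-to-local index bookkeeping, and any communication are all the programmer's problem, and all of it is wired to the machine's layout, which is exactly why re-hosting someone else's old MPP code is so painful:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define N 1000000

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Manual decomposition: each rank owns one contiguous slice.
           (Assumes N divides evenly by the number of ranks.) */
        int chunk = N / nprocs;
        double *a = malloc(chunk * sizeof *a);
        double *b = malloc(chunk * sizeof *b);
        double *c = malloc(chunk * sizeof *c);

        for (int i = 0; i < chunk; i++) {
            int gi = rank * chunk + i;   /* global index this rank owns */
            a[i] = gi;
            b[i] = 2.0 * gi;
        }

        for (int i = 0; i < chunk; i++)
            c[i] = a[i] + b[i];

        /* Touching any element outside this rank's slice would require
           explicit message passing, which is the re-engineering burden
           described above. */
        if (rank == 0)
            printf("rank 0 computed %d of %d elements\n", chunk, N);

        free(a); free(b); free(c);
        MPI_Finalize();
        return 0;
    }

Run it under an MPI launcher, e.g. mpirun -np 8 ./vecadd_mpi, and notice that changing the number of ranks changes every rank's slice of the data. That coupling between code and machine is the burden the tools never fully removed.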
Today's x86 architecture is the foundation, the building block, for the next generation of software development for cloud computing. Imagine a software development environment, built specifically for cloud computing, that would automatically decompose the problem, create the code, document the code, create the process to solve the problem, and create the control or automation of that process. The code would run on a single core or exploit the underlying multi-core, multi-server architecture of the available cloud resources without any manual intervention.
Add to this development environment an ecosystem that would track the software that had been developed, who was using it and where, and issue a license payment without the developer even being aware that the code was being used.
This is similar to today's music business: a song is played on radio stations across the country, the artist knows nothing about each individual play, yet the value chain is compensated for the air time. That's what I am talking about here.
This week Massively Parallel Technologies (MPT) announced Blue Cheetah, an application development ecosystem for cloud computing and the industry's first application ecosystem software for cloud computing environments. Blue Cheetah suits a wide variety of cloud computing applications, such as massively multiplayer gaming, numerically intensive applications, and business analytics.
"The Blue Cheetah application ecosystem is first to provide a single environment for both creation and monetization of highly optimized modular applications," said Bobbi Hazard, CEO of MPT. "Cloud computing and multi-core processors provide an immense potential for high performance and operational efficiency, but they create new problems for application development and commerce. MPT solves these problems with a new holistic solution that goes well beyond existing products."
Key to this development environment is the ability to monetize what you develop; enter iCode, in effect the app store for cloud computing.
MPT has a rich pedigree of developers and, with it, a large IP portfolio. The company has also surrounded itself with industry luminaries: John Gustafson, Ph.D., a noted figure in the high performance computing market, sits on the Board of Directors, and perhaps the biggest name of all, Dr. Gene Amdahl, serves on the Scientific Board of Advisors.
One of the icons of the computing industry, Dr. Amdahl, known for Amdahl's law, is a founder of four companies and one of the original architects of the business mainframe computer. Gene was featured in the company's launch event.
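For readers who don't know it, Amdahl's law is the simple observation that bounds every parallel machine mentioned above: if a fraction p of a program can be parallelized across n processors, the overall speedup is at most 1 / ((1 - p) + p/n). Even with unlimited processors, a program that is 95 percent parallel can never run more than 20 times faster, which is why squeezing more of the application into the parallel fraction, as MPT promises to do automatically, matters so much.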
It's clear that someone is finally paying attention to the developer community and taking a critical look at how to transition to cloud computing solutions. MPT draws on its rich technological foundation to significantly improve developer productivity and enhance the user experience.
Finally, MPT takes this development ecosystem to a higher level by offering cloud computing services for the developer: a one-stop shop for developing, testing, managing, and monetizing software. Nirvana.
Posted by Steve Campbell - September 29, 2010 @ 11:35 AM, Pacific Daylight Time
An HPC industry consultant and cloud evangelist, Steve Campbell is a seasoned senior HPC executive.