September 29, 2010
If we're to listen to all the hype and buzz about cloud computing, it seems that everything is covered, that we do not need to worry or even think about anything. It's just going to happen in the cloud, right?
It is hard to ignore the barrage of email on 10 ways to fix this cloud issue, 8 ways to solve that one, or the 7 deadly sins of cloud security. One area that is simply not discussed is how to write next-generation applications that can exploit the underlying server architecture in a manner that is simple, intuitive, and sustainable.
Today's cloud platforms are nothing more than racks of multicore x86 processors, network switches and storage. But how do you program them so that you can get multiple cores, processors, or even racks of servers to crunch the application and utilize the underlying parallelism?
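To make concrete what "exploiting the underlying parallelism" actually demands of a programmer today, here is a minimal sketch, in plain Python and unrelated to any vendor's product, of what hand-parallelizing even a trivial sum across cores looks like: the developer must partition the work, spawn the workers, and combine the results explicitly.

```python
# Minimal sketch of manual work partitioning across cores with a process pool.
# Illustrative only; the function names here are this example's, not any product's.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [lo, hi) -- one worker's share of the job."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split [0, n) into one chunk per worker and combine the partial results."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1_000_000))  # same answer as the serial sum
```

Every decision in that sketch, chunk sizes, worker counts, how results are merged, is the programmer's burden, and it all has to be revisited when the hardware changes; that is precisely the gap the rest of this post is about.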
Parallel processing, specifically large-scale or massively parallel processing, has been one of the grand challenges from a programmer's perspective. There is no question that multicore, multicluster architectures are here to stay and, compared to the mid-eighties, are very affordable.
In the eighties, several architectures vied for the performance crown: Massively Parallel Processing (MPP), vector architectures, and Symmetric Multi-Processing (SMP). The winner was SMP. The main reason, in my mind, is that SMP offered an easily programmable environment compared to the other architectures of that time. Compilers were developed that let programmers just write code while the compiler took care of the rest. It is true that the more you knew about the dataset and its characteristics, the better the outcome; nevertheless, it was far simpler than trying to decompose the application and figure out how to get it to run on 100 or 1,000 processors. The downside of SMP is that it ran out of steam for most applications at about 12 processors, whereas MPP could go big, really big. The problem with MPP, on the other hand, was that you had to re-engineer the software every time the underlying hardware changed.
Over these past twenty-five years, not much has changed. Yes, there have been advances in tools, MPI, PVM, etc., but you still have to roll up your sleeves and mess with the code, and there is nothing worse than trying to get old code written by someone else to work on an MPP architecture. Where's the documentation?
Today's x86 architecture is the foundation, the building block for the next generation software development for cloud computing. Imagine a software development environment specifically for cloud computing that would automatically decompose the problem, create the code, document the code, create the process to solve the problem and create the control or automation of the process. The code would run on single core or exploit the underlying, multi-core and multi-server architecture of the available cloud resources without any manual intervention.
Add to this development environment an ecosystem that would track the software that had been developed, who was using it and where, and issue license payments without the developer even needing to know when the code was being used.
This is similar to the music business today: a song is played on radio stations across the country, the artist knows nothing about it, yet the value chain is compensated for the air time. That's what I am talking about here.
This week Massively Parallel Technologies announced Blue Cheetah, an application development ecosystem for cloud computing, the industry's first application ecosystem software for cloud computing environments. The Blue Cheetah application ecosystem is suitable for a wide variety of cloud computing applications such as massively multiplayer gaming, numerically intensive applications, or even business analytics.
"The Blue Cheetah application ecosystem is first to provide a single environment for both creation and monetization of highly optimized modular applications," said Bobbi Hazard, CEO of MPT. "Cloud computing and multi-core processors provide an immense potential for high performance and operational efficiency, but they create new problems for application development and commerce. MPT solves these problems with a new holistic solution that goes well beyond existing products."
Key to this development environment is the ability to monetize development: enter iCode, the iApp store for cloud computing.
MPT has a rich pedigree of developers and a large IP portfolio to match. In addition, the company has surrounded itself with industry luminaries: John Gustafson, Ph.D., a member of the Board of Directors, is a noted figure in the High Performance Computing market. Perhaps the biggest name of all is Dr. Gene Amdahl, a member of the Scientific Board of Advisors.
One of the icons of the computing industry, Dr. Amdahl, known for Amdahl's Law, is a founder of four companies and one of the original architects of the business mainframe computer. Gene was featured in the company's launch event.
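Amdahl's Law is worth spelling out here, since it explains why SMP "ran out of steam" at modest processor counts: if only a fraction p of a program can be parallelized, the speedup on n processors is 1 / ((1 - p) + p/n), so the serial remainder caps the gain no matter how many processors you add. A quick sketch of the arithmetic:

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: speedup on n processors when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 1,000 processors yield under a 20x speedup,
# because the 5% serial portion dominates.
print(round(amdahl_speedup(0.95, 1000), 1))  # -> 19.6
```

This is exactly the wall the automatic-decomposition environment described above would have to contend with: the tooling can distribute the parallel fraction, but the serial fraction still sets the ceiling.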
Finally, it's clear that someone is paying attention to the developer community and taking a critical look at how to transition to cloud computing solutions. MPT draws on its rich technological foundation to significantly improve developer productivity and enhance the user experience.
MPT also takes this development ecosystem to a higher level by offering cloud-computing services for the developer: a one-stop shop for developing, testing, managing, and monetizing software. Nirvana.
Posted by Steve Campbell - September 29, 2010 @ 11:35 AM, Pacific Daylight Time
An HPC industry consultant and cloud evangelist, Steve Campbell is a seasoned senior HPC executive.