July 02, 2007
What a week in the world of grid computing: while the HPC folks were busy in Dresden, Germany, discussing whose supercomputer is the fastest, the grid world continued steadily down the path toward enterprise ubiquity, moving even further from its research roots (although I’m not saying there isn’t a lot of important work being done on that side of the aisle).
One shining example of this is the news of Sun’s Network.com application catalog getting its first financial services application online. I’ve been told that Sun actually expected high demand for such applications, but the early push from partners and users turned out to be around biomedical applications instead. Now, however, Sun feels it can gain some momentum and really deliver a good number of useful applications to the ever-important financial services market. What’s more, CDO2, the ISV that put its app online, has nothing but positive (well, mostly) things to say about its experiences with grid computing, and utility computing specifically. In fact, CDO2 believes there is a big market for this type of resource access among mid-tier banking institutions, and the company isn’t counting out putting more applications on Network.com; it already has begun a grid project with England’s University of Surrey.
In related news, Callidus, provider of several popular sales performance management solutions, has expanded its partnership with Sun to include a Europe-based datacenter, ensuring European customers have access to the Callidus On-Demand suite of applications. Callidus has offered a grid-enabled version of its TrueComp applications for a few years now, and it was happy enough with that success to begin hosting, last year in the United States, on-demand versions of its applications for customers who want high performance without managing a grid infrastructure. Considering that the company already has more than 8,500 customers for its on-demand solutions, expanding to Europe probably wasn’t such a bad idea.
Moving away from Sun, last week also brought the release of DataSynapse’s GridServer 5.0, which offers a slew of new capabilities designed to enable, and ease, management of what the company is calling “mega-grids.” This focus, CTO Jamie Bernardin told me, is a result of the more than a dozen DataSynapse customers currently running connected grids of 3,000-plus nodes, with some managing well over 5,000 nodes. As you can see from reading the announcement, the company placed a lot of emphasis on providing management and security capabilities, which Bernardin believes are necessary in order to mitigate the complexity issues inherent in large, geographically distributed datacenters.
And while it is his belief that “we’ve kind of written the book on large-scale, enterprise, financial grids, particularly within a business setting,” Bernardin readily admits that 5.0 is significantly more functional than previous releases of GridServer, which tended (perhaps necessarily) to focus on scalability from a performance perspective rather than from a management or data access perspective. Along these lines, Bernardin says beta customers have been “totally blown away” by the global grid tracking, searching, etc., functions of version 5.0. As for data access, the new release is better suited to work with distributed caching technologies, actually allowing the scheduler to route workloads to machines that already have the necessary data cached on them. This will become increasingly important, said Bernardin (and I wholeheartedly agree), because data-aware grids are the “next step in terms of on-boarding new applications that require much more state and data.”
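To make the data-aware scheduling idea concrete, here is a minimal sketch of a scheduler that prefers nodes already holding a task’s input data, falling back to the least-loaded node otherwise. This is not DataSynapse’s actual implementation; the class and method names are invented for illustration.

```python
from collections import defaultdict

class DataAwareScheduler:
    """Toy scheduler that routes work to nodes that already cache the needed data."""

    def __init__(self, nodes):
        self.load = {node: 0 for node in nodes}  # number of tasks assigned per node
        self.cache = defaultdict(set)            # node -> datasets cached on that node

    def submit(self, task_id, dataset):
        # Prefer nodes that already cache the dataset, avoiding a data transfer.
        candidates = [n for n in self.load if dataset in self.cache[n]]
        if not candidates:
            candidates = list(self.load)         # no cache hit: any node will do
        node = min(candidates, key=self.load.get)  # pick the least-loaded candidate
        self.load[node] += 1
        self.cache[node].add(dataset)            # the dataset is now cached there
        return node
```

Under this policy, repeated jobs against the same dataset naturally gravitate to the same machines, which is the behavior that makes on-boarding stateful, data-heavy applications practical on a large grid.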
In other news, I want to offer my best wishes to former Evergrid CEO Dave Anderson, who left the company on his own terms after helping to get it off the ground. Anderson was the primary source for my recent article on the company, and, in my humble opinion, he did a fantastic job of getting the company’s name out there and explaining just what makes Evergrid’s software so unique. I also want to note that, as you might have seen, The 451 Group just released the latest report from its Grid Adoption Research Services, this one tackling the issue of grid adoption among health care institutions. While I haven’t yet had a chance to take a look at it, I hope to do so relatively soon and offer my thoughts on it in the weeks to come. I’ll analyze the analysts, I guess.
Finally, be sure to check the announcements in this week’s issue, many of which highlight even more advances in grid management software and grid-enabled ISV applications, and even more of which hit on SOA and virtualization – two themes that will continue to be omnipresent in the market, even when I don’t discuss them. Some of the more noteworthy ones include: “Microsoft's Burton Smith Keynotes on Reinvention of Computing”; “eXludus Debuts Grid Optimizer Enterprise Edition”; “Evident Brings Service Level Reporting to Grids”; “Mimosa Systems Intros Grid-Based Email Archiving”; “Composite Software Advances SOA Data Virtualization”; “W3C Completes Work on Critical Web Services Standard”; and “Catbird Launches Security Solution for Virtual Networks.”
Posted by Derrick Harris - July 02, 2007 @ 11:09 AM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise