June 25, 2007
After being drained of energy by the Las Vegas heat this weekend (for some reason, we found it necessary to make yet another trip to the Hoover Dam, among other outdoor locales), I don't know that my brain is functioning well enough to give our readers a properly considered lowdown on this week's happenings in the world of distributed computing. Luckily for me, this week's issue pretty much speaks for itself, and what it's saying is nothing we haven't heard before: "Virtualization is here to stay, and financial services firms love high-performance technologies!"
For starters, we have the article on Evergrid that I've been promising for a couple of weeks now. While anyone closely following this market has likely read about Evergrid in several places by now (including in last week's GRIDtoday), this article adds a little more insight than you might have seen, especially in terms of where the company sees itself and what kinds of results its customers have seen thus far. In my opinion, the most noteworthy aspect on the latter front is the "top-five Wall Street firm" that Evergrid CEO David Anderson told me saw an 85 percent reduction in TCO by migrating to the Evergrid solution. Regardless of how one spins statistics, 85 percent is a huge number, and if Evergrid can build a stable of customers with similar results, who knows what the future will hold for the company. Well, actually, Forrester vice president and analyst Jean-Pierre Gardani thinks he might know. Read the article to find out what he has to say, as well as for more insight from Evergrid's Anderson.
Last week also gave us the 2007 Securities Industry and Financial Markets Association (SIFMA) Technology Management Conference, which brought with it no shortage of news from the (you guessed it!) financial services market. Among these is news of a Microsoft-sponsored survey that reveals strong customer demand for high-performance computing, with 63 percent of respondents deploying their HPC environments in a centralized or shared utility model. Considering the traction we have seen for grid software in the financial market, this certainly shouldn't come as a surprise -- and considering how many times financial services CIOs have flat-out stated the competitive importance of cutting-edge IT platforms, no news of this sort should ever again come as a surprise. In other financial news surrounding the conference, we have: "GigaSpaces, Microsoft Deliver Solutions for Capital Markets"; "Voltaire, HP, Intel Bring Low Latency to Advanced Trading"; "Wombat Data Fabric Tops Million Messages per Second"; "Novell, Voltaire Partner on Real-Time Trading Apps"; and "Digipede Partner Delivers Solution to Financial Customer." Speaking of Digipede, president and CEO John Powers has commented on the conference in his latest blog post.
Finally, we also have two articles (“Virtualization for Business Continuity” and “Orchestrating Virtualization”) authored by vendor representatives this week, both of which focus on the usefulness of virtualization and effective ways of deploying the technology. Written by leaders in their fields, DataSynapse and Novell, respectively, these two articles give some good insights into how these companies view the increasingly important and ever-more-mainstream virtualization technologies.
In other news, be sure to check out the following items while I attempt to recuperate: "RENCI Taps MIT Scientist to Lead Infrastructure Development"; "3Tera Enables Largest Virtual Private Datacenter"; "WS-ReliableMessaging Approved as OASIS Standard"; "IBM Unveils New Virtualization Software"; and -- even more from Big Blue -- "IBM Extends Deep Computing on Demand Offering."
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at firstname.lastname@example.org.
Posted by Derrick Harris - June 25, 2007 @ 11:11 AM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
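The federation idea described above can be sketched as a simple aggregator that pools capacity from heterogeneous sites until a job's core requirement is met. This is a minimal illustration only; the site names, capacities, and greedy selection policy below are invented for the example and are not taken from the UberCloud model.

```python
# Minimal sketch of resource federation: pool heterogeneous sites
# until a job's core requirement is satisfied.

def federate(sites, cores_needed):
    """Greedily select sites (largest first) to cover the requested cores.

    sites: dict mapping site name -> available cores
    Returns the list of chosen site names, or None if demand can't be met.
    """
    chosen, total = [], 0
    for name, cores in sorted(sites.items(), key=lambda kv: -kv[1]):
        if total >= cores_needed:
            break
        chosen.append(name)
        total += cores
    return chosen if total >= cores_needed else None

# Hypothetical federation of in-house and cloud resources:
sites = {"campus_cluster": 512, "partner_hpc": 2048, "cloud_burst": 4096}
print(federate(sites, 5000))  # pools the two largest sites
```

A real federation layer would of course also account for data movement, queue wait times, and software availability at each site; the sketch captures only the aggregation step.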
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb computational demand at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 | The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls 'Climate in a Box,' a system it says acts as a desktop supercomputer.
May 16, 2013 | When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types, including both CPU and GPU cores.
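Why long distances hurt tightly coupled workloads like CFD can be seen with simple arithmetic: if every solver iteration requires a synchronization across the wide-area link, the round-trip latency is paid on each step. The figures below are purely illustrative, not taken from the Bonn study:

```python
def wall_time(iterations, compute_s, rtt_s):
    """Total time when each iteration pays one synchronization round trip."""
    return iterations * (compute_s + rtt_s)

# 100,000 solver iterations, 2 ms of compute per iteration
local = wall_time(100_000, 0.002, 0.0001)   # 0.1 ms LAN round trip
remote = wall_time(100_000, 0.002, 0.080)   # 80 ms wide-area round trip
print(f"LAN: {local:.0f} s, WAN: {remote:.0f} s")  # latency dominates the WAN run
```

With these assumed numbers, the wide-area run spends roughly 97 percent of its wall time waiting on the network, which is why latency, not bandwidth, is usually the deal-breaker for tightly coupled HPC in the cloud.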
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
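At its core, the heterogeneous model the article describes comes down to routing each task to the processor best suited for it. A toy dispatcher sketches the idea; the device names and size threshold are purely illustrative (real OpenCL code would query actual platforms and devices rather than use a fixed cutoff):

```python
def dispatch(task_sizes, gpu_threshold=10_000):
    """Route large, data-parallel tasks to the GPU queue, small ones to the CPU.

    Returns a dict of device -> list of task sizes; a stand-in for real
    heterogeneous device selection.
    """
    queues = {"cpu": [], "gpu": []}
    for size in task_sizes:
        queues["gpu" if size >= gpu_threshold else "cpu"].append(size)
    return queues

print(dispatch([500, 250_000, 8_000, 1_000_000]))
```

The design point is that the decision is made per task, not per application: small, branchy work stays on the CPU, while large data-parallel kernels go to the accelerator.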