July 30, 2007
Have we made it clear over the past year that grid computing is huge in the financial services market? Have we exposed you, our loyal readers, to enough quotes from financial end-users and vendors touting just how important they are to one another? Are you punch-drunk from absorbing too many blows along this front? I certainly hope not because, if so, this week might put you down for the count, as we whack you in the head once more with our “grid in financial services” two-by-four.
However, rather than hear the too-sunny take from a vendor or the carefully crafted company line from a user, this week we have a column from someone who bridges the gap between the two. Marc Jacobs, who spends his days with Lab49 building “advanced applications for the financial services industry,” shares with us his insights into how distributed computing is affecting the financial services industry -- and it shouldn’t be surprising to find out that the technology is having a big impact. Taking a look at the past, present and future of grid computing technologies in the market, Jacobs illustrates how far we’ve come and how far we’ve yet to go. While he makes it rather clear that technology to develop high-performance distributed apps currently exists, there are still hurdles to clear. Anyhow, I don’t want to give the whole thing away here, so please take a look for yourself.
This week’s issue also includes an interesting announcement from United Devices that shows just how the company is looking to redefine “grid computing” with its line of solutions. Rather than simply connecting PCs, the new UD is all about optimizing the datacenter and bringing to it new opportunities for innovation, and its work with Japan’s NTT West is a prime example of this. Essentially, by deploying UD’s managed services offering, the telecommunications provider is able to offer its broadband customers the ability to sell CPU cycles into NTT’s Hikari Grid, which, in turn, provides computing resources to the company’s business customers. There has been plenty of talk over the past few years about how badly telcos want to offer grid services to customers, and in this case the telco doesn’t even have to maintain the entirety of the hardware resources it makes available. It’s like volunteer computing meets utility computing, and I, for one, love the idea.
Elsewhere on the Web, in the blogosphere, to be exact, there were a couple of items that piqued my interest last week. The first came from GRIDtoday favorite John Powers (Digipede CEO), who couldn’t help but let slip a few details of the upcoming Digipede Network v.2.0. I have yet to speak with someone who was anything but supremely impressed with the 1.x versions of Digipede’s product, so if the improvements to which John alludes are as great as promised, I expect uptake of 2.0 will be even greater – and I expect to hear from even happier customers, if that’s possible.
The other blog that caught my eye comes from someone a little less known (and by “less known” I mean not known at all) in the grid community, software engineer Corey Goldberg. Corey pointed me to an overview of Google’s distributed infrastructure on http://highscalability.com, a site dedicated to building Web-scale architectures. What Google and its Web cohorts are doing with distributed technologies never ceases to amaze, although it probably shouldn’t if I were to stop and consider what it takes to offer a seemingly endless list of services while serving billions upon billions of requests every day. Anyhow, when the site is up and running (which it wasn’t when I wrote this), you can see Google’s specs here: http://highscalability.com/google-architecture.
In other news, be sure to look at this week’s entire table of contents, as we have a good number of big announcements from across the distributed computing spectrum, including ones from Cisco, Sun Microsystems and Univa, TIBCO, Oracle, CANARIE, VMware and 3Tera. The Cisco news, especially, is worth reading if you haven’t already, as the networking giant is looking to make waves in the virtualization marketplace.
Finally, as I noted last week, be sure to tune back in over the next couple of weeks as we present special coverage around the Next Generation Data Center event taking place Aug. 7-9 in San Francisco. We’ll have some pre-show features next week, and I’ll be sharing my thoughts about and experiences at the conference in the following weeks.
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at firstname.lastname@example.org.
Posted by Derrick Harris - July 30, 2007 @ 11:16 AM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise