May 21, 2007
If you're a sports fan, there aren't many better times of year than right now, as spring winds down and summer officially kicks off. Baseball is in full swing (no pun intended, I swear), the NBA and NHL playoffs are underway, the U.S. Open is approaching for the PGA and, perhaps most importantly for this discussion, the NFL off-season is going strong, with teams strengthening their rosters through free agency and the recent draft. It is with the latter in mind that I looked at last week's announced partnership between GemStone Systems and Rogue Wave Software -- a move that could significantly increase the pair's competitiveness as datacenters around the globe continue to seek out solutions with exactly these capabilities.
While neither company alone is large enough to strike much fear into the hearts of the IT world, a technological and business collaboration between the two is definitely greater than the sum of its parts. After all (and as I mention in my article on the move), the grid-based application platform space is taking off (at least from my perspective), and any successful solution in this space is going to need to combine the ever-important performance factor with the increasingly important data access factor. After seeing GemStone's chief rival, Tangosol, get bought by Oracle, and already knowing that competitors like GigaSpaces and Appistry address distributed data access and CPU performance in their respective offerings, I began to wonder what moves GemStone would make to level the playing field. Well, the Rogue Wave partnership begins to answer that question.
Although, as GemStone Chief Architect Jags Ramnarayan told me, the partnership is just in its initial stages, it could be a big deal for all parties involved. Let's face it: distributed data caching is available in some form in a variety of applications, and there is no shortage of solutions addressing grid from a service-oriented point of view. However, a tandem of two companies each specializing in one of these components could lead to customers getting one heck of a complete solution. I understand that it's hardly the first strategic partnership in the IT universe, but in this emerging market, it's one I consider worth watching.
Which brings me to a blog that I read this week. Written by Nikita Ivanov, a research adviser for Java-based grid computing vendor GridGain, this particular entry ("What is Grid Computing?") focused on the various types of grids currently available -- a list that didn't include any of the various datacenter-driven grids we've recently seen touted by companies of all shapes and sizes. Perhaps I'm surprised by this only because I've been paying altogether too much attention to this market, which isn't yet big enough for anyone to care about. Of course, the explanation for its omission simply could be that vendors in this space, which now include DataSynapse, United Devices and Platform, don't necessarily use the word "grid" when referring to their "distributed," "virtualized" and "service-oriented" platforms for running business-critical applications. Call them transactional grids, call them grid-based application platforms (like Gartner does), or don't even call them grids (like DataSynapse), but don't underestimate what they'll mean to the future of IT. Other than that, however, I found Ivanov's blog, which is updated regularly, both interesting and informative, especially as it relates to his comments on OGSA, which he calls "nothing more than a fiction at this time."
Coincidentally, GridGain's inaugural product announcement is in this week's issue. The solution is an open-source, Java-based platform targeted at small- and medium-sized businesses.
Moving away from the world of start-ups and emerging uses of grid technology, I feel obligated to point out that the world of big IT vendors and highly specialized, highly proven grid solutions is still turning. In this case, IBM just announced the latest in a series of vertical market-oriented grid solutions, this one for the health care industry. Dubbed the Grid Medical Archive Solution, the offering includes storage, servers, software and services that provide "hospitals, clinics, research institutions and pharmaceutical companies with a multi-tier, multi-application and multi-site enterprise storage archive for delivering medical images, patient records and other critical health care reference information on demand." While it's fun to spend some time looking at new vendors and new technologies, we must not forget that companies like IBM still know a thing or two about grid computing, and while some vendors are struggling to get noticed, IBM is pumping huge resources into its grid R&D and pushing into the marketplace with one specialized offering after another.
As for the rest of this issue, there are a lot of announcements worth checking out, some of which include: "GigaSpaces to Develop Advanced Electronic Trading System"; "Appistry Announces Support for Fair Isaac's Blaze Advisor"; "Report: Insurance Market Needs SOA Standards"; "Gear6 Unveils Terabyte-Scale Caching Appliances"; "I-CHASS Names Kevin Franklin Executive Director"; and two from Altair (here and here).
For our readers in the United States, have a great Memorial Day weekend!
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at firstname.lastname@example.org.
Posted by Derrick Harris - May 21, 2007 @ 11:03 AM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise