September 18, 2006
The 451 Group has found that companies that extract, make a market in, and retail natural resources are increasingly inclined to run risk management applications on Grid computing to balance supply and demand. Understanding risk and asset positions, and being able to respond quickly to price fluctuations, especially in today's volatile oil market, are executive-level requirements that demand robust computing resources. These findings appear in a report released last week by The 451 Group.
"Oil and gas companies were early adopters of grids, but their deployments had progressed little beyond those applications for which they were originally procured," said William Fellows, principal analyst at The 451 Group. "We now find good reason to be more optimistic about extended Grid use among oil and gas companies. There appears to be more business pressure to make better use of resources, and the application of new techniques to ever-increasing data sets provides a growing opportunity for all kinds of vendors."
The 451 Group found that seismic data processing is one area that has dominated high-performance computing (HPC) and Grid use across the energy sector. Seismic processing is an embarrassingly parallel HPC application, and is therefore well suited to running on grids. When it can cost anywhere from $40 million to $100 million just to drill a hole in the ground, there are clear business benefits to improving the analysis of seismic data before drilling. The oil and gas industry has another application that can be readily processed on grids -- reservoir modeling. This is the analysis of subterranean oil reserves, and it has become ever more important as oil prices rise and oil companies seek to maximize the benefits of reserves. Grid computing enables users to extend conventional data modeling and 3-D visualization techniques with other attributes, such as examining the effects of time (for 4-D representation) or determining how different operational conditions (more or less pipe pressure, etc.) will affect oil delivery.
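The reason seismic work maps so cleanly onto grids is that each trace (or shot gather) can be processed independently, with no communication between tasks. A minimal sketch of that pattern, with a toy smoothing filter standing in for a real seismic kernel (all names and numbers here are illustrative, not any vendor's actual API):

```python
from multiprocessing import Pool

def bandpass(trace):
    """Toy stand-in for a per-trace filter: simple 3-point smoothing."""
    n = len(trace)
    return [
        (trace[max(i - 1, 0)] + trace[i] + trace[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]

def process_survey(traces, workers=4):
    # Each trace is an independent task with no inter-task communication,
    # which is what makes the workload "embarrassingly parallel" and lets
    # a scheduler scatter it across as many grid nodes as are available.
    with Pool(workers) as pool:
        return pool.map(bandpass, traces)

if __name__ == "__main__":
    survey = [[0.0, 1.0, 0.0, -1.0, 0.0]] * 8  # 8 identical toy traces
    results = process_survey(survey)
    print(len(results))
```

On a real grid the `Pool` would be replaced by a batch scheduler farming traces out to cluster nodes, but the structure of the workload is the same.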
451 analysts also found that many oil and gas companies own their own grids or clusters but also have experience in outsourcing to meet some additional capacity requirements. Some companies are awaiting the ability to seamlessly overflow Grid queues to an outsourced service provider, which would bill them only for actual use. The issue for vendors is that oil and gas companies can be difficult to support since they typically want to use thousands of CPUs, but only for a very short time. Other sectors mostly purchase longer contracts.
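The "seamless overflow" model described above amounts to a simple scheduling policy: run jobs in-house while capacity lasts, then burst the remainder to an outsourced provider that bills only for what is consumed. A hypothetical sketch of that decision logic (the class, its parameters, and the prices are invented for illustration, not any provider's real interface):

```python
class OverflowScheduler:
    """Illustrative overflow policy: local grid first, pay-per-use burst second."""

    def __init__(self, local_cpus, provider_rate_per_cpu_hour):
        self.local_free = local_cpus
        self.rate = provider_rate_per_cpu_hour
        self.outsourced_bill = 0.0

    def submit(self, cpus, hours):
        """Place a job on the in-house grid if it fits; otherwise burst it out."""
        if cpus <= self.local_free:
            self.local_free -= cpus
            return "local"
        # Pay-per-use: billed only for the CPU-hours this job actually consumes.
        self.outsourced_bill += cpus * hours * self.rate
        return "outsourced"

sched = OverflowScheduler(local_cpus=512, provider_rate_per_cpu_hour=0.10)
print(sched.submit(400, 2))   # fits on the in-house grid
print(sched.submit(2000, 1))  # typical oil-and-gas burst: many CPUs, briefly
print(sched.outsourced_bill)
```

The short, spiky usage pattern in the second job is exactly what the report says makes these customers hard to serve with conventional long-term contracts.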
"Given the natural resources sector's appetite for outsourced capacity to meet peak workloads and project requirements, ISVs and vendors should enable enterprise IT managers to access these resources as seamlessly as possible. More-flexible license mechanisms, automatic 'overflow' to additional resources and billing for actual use should be the focus for utility-model vendors and ISVs," said Fellows.
This report, "Grid Computing -- Adoption in the Energy Sector," is the 12th report in the 451 Grid Adoption Research Service (GARS) -- an investigation into user experiences and vendor strategies. The 61-page report was written by William Fellows, principal analyst, and Steve Wallage, director of research. In this report, The 451 Group has examined a range of organizations representing global economic interests, from the 'oil majors' to national energy and IT utility providers. The report analyzes the status of Grid activity at leading oil and gas companies, and assesses when and how they could move beyond HPC applications. In addition, it compares the deployment experience in the pharmaceutical industry with that of the oil and gas companies.
The report includes in-depth competitive assessments of the following vendor companies (although this is not a complete list of companies covered in various sections of the report): Altair Engineering, DataSynapse, Hewlett-Packard, IBM, Platform Computing, Sun Microsystems, SunGard and United Devices.
User case studies include the following early-adopter companies: 3DGeo, BP, ConocoPhillips, Duke Energy, EDF, Halliburton, Norsk Hydro, Offshore Hydrocarbon Mapping, ParadigmGeo, Schlumberger, Total and Virtual Computer Corp.