December 11, 2006
Fritz Ferstl is the director of Grid Computing Engineering at Sun Microsystems. He manages a multi-national team of development engineers and project and product managers located in Prague, Czech Republic; Regensburg, Germany; and Menlo Park, California. Ferstl is recognized as a world expert in productizing Grid solutions for robust enterprise and industrial services. Recently GRIDtoday had the opportunity to ask Ferstl about Grid Engine, the Grid computing software product that he architected.
GRIDtoday: There are still many notions of what Grid computing is. How do you define Grid?
FRITZ FERSTL: A Grid to me is the combination of distributed resources with a corresponding management infrastructure hosting at least one type of service or workload. This is a very generic definition, but we see every day that the same Grid technologies are utilized in extremely diverse application scenarios. So I see no reason to limit the scope of Grid computing artificially.
I also would not use the crossing of organizational or geographical boundaries as an identifier for Grids. Whether a Grid needs to orchestrate resources across such boundaries is largely a question of the application scenario. Grids that are local to an organization today may develop a need to span organizations or geographies over time. In both cases, the same set of basic technologies is being used. Why should we refer to those cases differently?
Gt: What are the most important challenges to universal Grid adoption? Standards? Interoperability?
FERSTL: Crawling comes before walking. I think this simple fact has been neglected a bit in the early days of Grid computing. And to some extent this is still the case. Standards have been driven forward without making sure they really address the most pressing needs of Grid adopters in commerce and research. There was also a lack of stability, both in the standard efforts themselves and in the infrastructures delivering them. The dangers are frustration of early adopters and bifurcation of efforts. Both effects have been observable very clearly.
I actually would argue that there is no shortage of standards and interoperability at all. And if there is one in some specific areas, growing demand will highlight those cases and market forces will ensure there is a resolution.
In my opinion, the most pressing need is execution. We need to fulfill the promises of Grid computing. We need to provide dependable solutions. Components which need to work in concert for such solutions need to be open and we have to integrate them into reliable infrastructure. If a standard is required in such a context then let's identify and drive it but let's stay focused on the problem.
Grid computing has become mainstream and the hype days are over. Maybe that's less exciting for some people. But for others, like me, there's nothing more exciting than seeing next-generation airplanes, cars and chips being designed, spectacular advances in pharmaceutical research and bio-technology being made, or completely new approaches in finance, commerce, energy and telecommunications being developed using Grid technology!
Gt: How is Sun facilitating the adoption of Grid?
FERSTL: We are building industry-leading, dependable and open products as well as solutions in the Grid space and in the adjacent technology areas. Examples are OpenSolaris and Solaris, OpenSparc and our Sparc- and x64-based servers, OpenJava and Java, Grid Engine and Sun N1 Grid Engine, Identity Management and more. We adhere to and drive applicable standards, such as those around Web services, identity management and DRMAA.
Gt: Can you give us some background on the Sun N1 Grid Engine and Grid Engine? What do you think is unique about these offerings?
FERSTL: Sun N1 Grid Engine is an industry-leading workload management solution with unmatched functionality and scalability. Full 24x7 support for it is available from Sun on all major platforms, including Solaris on Sparc and x64, various flavors of Linux, Microsoft Windows, Mac OS X, IBM AIX, HP-UX and SGI IRIX.
But what differentiates it most from similar products is its openness. It's available for free unlimited trial from the Sun web site. Moreover, it is developed in the Grid Engine open source project http://gridengine.sunsource.net/ under a flexible and open license. This openness has led to huge adoption, with many thousands of sites in basically all market areas.
We are further emphasizing this openness by adding new as well as previously proprietary technology to the Grid Engine open source project by mid-December. The new technology is called Grid Engine Service Domain Management. It is an entirely new paradigm that provides policy- and demand-based re-allocation of arbitrary resources across service domains. Service domains are fully autonomous Grids controlled by a workload management facility, such as Grid Engine, or by other service infrastructures such as application servers or web servers. The Grid Engine Service Domain Manager allows the resource allocation to each of these services to be controlled in an automated fashion while preserving their full autonomy.
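To make the idea of policy- and demand-based re-allocation concrete, here is a deliberately simplified toy sketch. It is not the Grid Engine Service Domain Manager implementation; the function, the data layout and the policy (guaranteed minimum per domain, then remaining hosts to the domains with the largest unmet demand) are all hypothetical, chosen only to illustrate the paradigm.

```python
# Toy sketch of demand-based resource re-allocation across service domains.
# NOT the actual Grid Engine Service Domain Manager; all names and the
# policy below are hypothetical, for illustration only.

def reallocate(domains, total_hosts):
    """Distribute hosts to domains by reported demand, while
    guaranteeing each domain its configured minimum share."""
    # Start by granting every domain its minimum.
    alloc = {name: d["min"] for name, d in domains.items()}
    spare = total_hosts - sum(alloc.values())
    # Hand out remaining hosts one at a time to the domain with the
    # largest unmet demand.
    while spare > 0:
        needy = [(d["demand"] - alloc[n], n)
                 for n, d in domains.items() if d["demand"] > alloc[n]]
        if not needy:
            break
        _, winner = max(needy)
        alloc[winner] += 1
        spare -= 1
    return alloc

domains = {
    "batch":  {"min": 2, "demand": 6},   # e.g. a Grid Engine cluster
    "web":    {"min": 1, "demand": 2},   # e.g. a web server farm
    "appsrv": {"min": 1, "demand": 1},   # e.g. an application server
}
print(reallocate(domains, 8))  # {'batch': 5, 'web': 2, 'appsrv': 1}
```

The key property the sketch tries to capture is that each domain stays autonomous: the allocator only moves resources between domains, it never manages the workload inside them.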
The second addition we are making to the Grid Engine open source project is the Grid Engine Accounting and Reporting Console. It was previously a closed-source part of the Sun N1 Grid Engine product and provides web-based accounting, reporting and diagnostics. The Grid Engine Accounting and Reporting Console stores accounting data in a standard SQL database and thus features an open interface for integration.
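The value of that open interface is that any SQL-capable tool can query the accounting data directly. The following minimal sketch illustrates the idea; the table and column names are hypothetical, not the actual Accounting and Reporting Console schema, and SQLite stands in here for whatever SQL database a site would use.

```python
# Hedged sketch: what "accounting data in a standard SQL database" enables.
# Schema is hypothetical; sqlite3 stands in for any SQL database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE job_accounting (
    job_id   INTEGER,
    owner    TEXT,
    queue    TEXT,
    cpu_secs REAL
)""")
conn.executemany(
    "INSERT INTO job_accounting VALUES (?, ?, ?, ?)",
    [(1, "alice", "batch", 120.0),
     (2, "bob",   "batch",  30.0),
     (3, "alice", "gpu",    45.5)],
)
# Any reporting tool can now aggregate usage, e.g. CPU time per user:
rows = conn.execute(
    "SELECT owner, SUM(cpu_secs) FROM job_accounting "
    "GROUP BY owner ORDER BY owner"
).fetchall()
print(rows)  # [('alice', 165.5), ('bob', 30.0)]
```

Because the data sits behind plain SQL rather than a proprietary log format, integration with third-party billing or reporting systems reduces to writing queries like the one above.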
Gt: What are Sun's plans for Grid computing in 2007? What are you most excited about?
FERSTL: We'll be further driving adoption of our technology through our open source efforts. On the Grid Engine side, the Grid Engine Service Domain Management will provide a new and exciting platform for integration and contributions plus it will open up completely new Grid application opportunities.
One area I'm particularly keen to see evolving is the combination of our Service Domain Management with virtualization technologies at all levels, be it server, network or storage virtualization. Virtualization turns application frameworks and infrastructure components into commodity appliances. The Grid Engine Service Domain Management will make it possible to have as many of those appliances as needed by a service and to have them equipped with the appropriate physical resources.
Gt: What is your overall sense of the popularity of utility computing? Do you see its role expanding in the next few years?
FERSTL: The adoption of utility computing is largely dependent upon trust, security and legislation in and around utility grids. It is not so much a technical issue. Given enough legal freedom to operate and utilize utility grids, plus sufficient trust and security, there is no doubt in my mind that utility computing will be a thriving business.
But even if there are issues which restrict the applicability of public utility grids, I'm convinced that the operational models and the corresponding technologies required for utility grids will become an important part of the next generation Grid architectures.
Gt: Would you like to make any additional comments?
FERSTL: I've been in what's now the Grid market for over 13 years. Some people ask me whether it's not getting boring. Quite the contrary is true! I've never seen as much potential in this space as I do today. And I'm talking about very tangible business potential. Even so, I still feel like we are just at the beginning! How could I get bored? I'm excited to be part of this movement and am proud to be with a team and a company that makes a difference.