January 31, 2005
ETSI and INRIA organized a three-day Grid Plugtest, which started on Oct. 18. The objective was to learn, through user experience and open discussions, about the features needed in future versions of the ProActive Grid middleware, and to gather important feedback on the deployment and interoperability of Grid applications based on the ProActive library when distributed across various Grid platforms. Over the three days, the event drew 80 participants from 10 countries: France, Chile, the United States, England, the Netherlands, Switzerland, Spain, Italy, Japan and Korea. All of them met to share their views on ProActive, the Grid middleware developed by the OASIS team at INRIA. The event was organized under the supervision of UNSA (University of Nice), I3S and CNRS, and was sponsored by IBM, Sun and ObjectWeb.
The event consisted of three parts. On the first day, ProActive talks were held. In the morning, the general features offered by the middleware were presented, with talks covering its main aspects: the programming model, group communications, mobility, Grid deployment capabilities, the Grid component model and security. In the afternoon session, users were invited to speak about their use of the middleware. In the evening, future work was presented, and a panel of experts discussed current problems in the Grid domain under the title "Stateful vs. Stateless Web Services for the Grid: how to get both scalability and interoperability?", with Denis Caromel (UNSA), Tony Kay (Sun Microsystems), Jean-Pierre Prost (IBM EMEA Grid Computing), Vladimir Getov (University of Westminster), Marco Danelutto (University of Pisa) and Christophe Ney (ObjectWeb).
ProActive itself is an LGPL Java library for parallel, distributed and concurrent computing, also featuring mobility and security in a uniform framework. With a reduced set of simple primitives, ProActive provides a comprehensive API that simplifies the programming and deployment of applications on a Local Area Network (LAN), on clusters of workstations, or on Internet Grids. Its deployment infrastructure, based on XML descriptor files, provides a level of abstraction that removes any reference to software or hardware configuration from the application's source code. It includes an integrated mechanism for specifying which external processes must be launched and how to launch them. The goal is to be able to deploy an application anywhere without changing its source code, all the necessary information being stored in an XML descriptor file. ProActive also features a well-defined Grid component programming model.
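The asynchronous, future-based programming style at the heart of ProActive's active objects can be illustrated in spirit with plain `java.util.concurrent` (this is a minimal sketch, not ProActive API code): a method call returns immediately with a future, and the caller blocks only when the value is actually needed, which ProActive calls "wait-by-necessity."

```java
import java.util.concurrent.*;

// Plain-Java sketch (NOT ProActive code): java.util.concurrent futures
// illustrate the asynchronous-call-with-future style that ProActive's
// active objects generalize to remote, migratable objects.
public class FutureSketch {
    // Compute the sum 1..n asynchronously and return it via a future.
    static long asyncSum(int n) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // The submit() call returns immediately; the result is a future.
        Future<Long> result = pool.submit(() -> {
            long s = 0;
            for (int i = 1; i <= n; i++) s += i;
            return s;
        });
        try {
            // The caller blocks only here, when the value is needed
            // ("wait-by-necessity" in ProActive terminology).
            return result.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(asyncSum(1000)); // prints 500500
    }
}
```

ProActive extends this pattern transparently across the network: the worker object may live on a remote node named in the XML deployment descriptor, with no change to the calling code.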
The second day was dedicated to a contest between six teams: AlgoBar, Tournant and INRIA from France, the University of Chile, NTU from Taiwan, and the University of Southern California. The aim was to solve the embarrassingly parallel N-queens counting problem for N as large as possible: count the number of ways to place N non-threatening queens on an N x N board, within a limited amount of time. The world record stands at N=24, with 227,514,171,973,736 solutions, computed on 64 CPUs (Pentium 4 Xeon 2.8 GHz in a FireCore cluster) with 75,516 tasks using MPI (standard parallel programming) in 22 days. Notably, the INRIA team equaled this world record in the offline challenge (the qualification round for the event itself), in 17 days on a P2P desktop Grid of more than 300 heterogeneous machines using the ProActive middleware.
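The core of the problem needs nothing more than a backtracking search; a minimal sequential bitmask counter looks like the following (an illustrative sketch, not any contestant's actual code — the difficulty at record scale lies entirely in splitting and distributing the search):

```java
// Minimal sequential N-queens solution counter using bitmask backtracking.
// Each level of recursion places one queen per row; the three masks track
// attacked columns and the two diagonal directions.
public class NQueens {
    static long solve(int n, int row, long cols, long d1, long d2) {
        if (row == n) return 1; // all rows filled: one valid placement
        long total = 0;
        for (int c = 0; c < n; c++) {
            long col = 1L << c;
            long a = 1L << (row + c);         // "/" diagonal index
            long b = 1L << (row - c + n - 1); // "\" diagonal index
            if ((cols & col) == 0 && (d1 & a) == 0 && (d2 & b) == 0)
                total += solve(n, row + 1, cols | col, d1 | a, d2 | b);
        }
        return total;
    }

    public static long count(int n) {
        return solve(n, 0, 0, 0, 0);
    }

    public static void main(String[] args) {
        System.out.println(count(8)); // prints 92, the classic 8-queens count
    }
}
```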
To be able to run such a contest, a Grid was built with the help of our various partners on 20 sites in 12 countries. We gathered a total of 473 machines, bearing 800 processors and totaling 100 Gigaflops (measured with the SciMark 2.0 pure Java benchmark). One very important and interesting aspect was that the resources used to build the Grid were heterogeneous in terms of OS (Linux, Windows XP, MacOS, SGI Irix and Solaris), access protocols (ssh, gsissh), Grid middleware (Globus), job schedulers (PBS, LSF, Sun Grid Engine, OAR, Prun), security policies (firewalls, NAT, private IP addresses, ...) and Java Virtual Machines (Sun, BEA, SGI). The deployment and interoperability across all resources and sites were achieved using ProActive.
Most of our concerns during the setup were about the different security policies we encountered at each site. The challenge was to access each site according to its security policy, for which we defined four levels of friendliness.
We also added new features to ProActive to cope with some internal site configurations (missing DNS entries, machines with two network interfaces, ...).
All contestants were asked to use ProActive as their middleware, and could freely use the power of the 800-plus processors during their one-hour time slot, dispatched around the world (Australia, Europe, North and South America, India). This was strictly an engineering event, neither a conference nor a workshop. As such, active participation was required from contestants, who had to provide their own implementation of the N-queens algorithm and, if needed, modify the existing XML deployment files to fit their strategy.
There was no compulsory programming language, but all teams wrote their code in Java, except the NTU team, which hid some native routines inside a Java wrapper. This scheme led to a faster algorithm, but sacrificed Java's portability: the sites had to be updated with the native code, which would be hard to do on a larger scale. The all-Java approach, on the other hand, allowed transparent migration of code to distant nodes, with no manual code distribution.
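The decomposition that makes N-queens embarrassingly parallel can be sketched as follows: fixing the column of the queen in the first row yields N independent subtrees whose counts are simply summed. On the Grid each subtree (or a finer split, fixing the first two or three rows) becomes one remote task; in this self-contained sketch, plain Java parallel streams stand in for the distributed ProActive workers.

```java
import java.util.stream.IntStream;

// Sketch of the embarrassingly parallel N-queens decomposition: each
// choice of first-row column is an independent subtask, so the subtree
// counts can be computed concurrently and summed with no coordination.
public class ParallelNQueens {
    // Count completions of a partial placement (bitmask backtracking).
    static long subtree(int n, int row, long cols, long d1, long d2) {
        if (row == n) return 1;
        long total = 0;
        for (int c = 0; c < n; c++) {
            long col = 1L << c;
            long a = 1L << (row + c);         // "/" diagonal
            long b = 1L << (row - c + n - 1); // "\" diagonal
            if ((cols & col) == 0 && (d1 & a) == 0 && (d2 & b) == 0)
                total += subtree(n, row + 1, cols | col, d1 | a, d2 | b);
        }
        return total;
    }

    public static long count(int n) {
        // One independent task per first-row column; results are summed.
        return IntStream.range(0, n).parallel()
            .mapToLong(c -> subtree(n, 1, 1L << c, 1L << c, 1L << (n - 1 - c)))
            .sum();
    }

    public static void main(String[] args) {
        System.out.println(count(12)); // prints 14200, the 12-queens count
    }
}
```

Because the subtasks share nothing and only their counts are merged, the same split maps directly onto hundreds of heterogeneous Grid nodes, which is what made the contest runs scale.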
The winners were decided based on the number of solutions found in one hour, the number of nodes used, and the speed of the algorithm.
The Chilean team got ahead of the other five participants. Within their hour they found the number of solutions for 18 queens once, 19 queens twice, 20 queens four times and 21 queens once. They ranked first on the number of solutions found in one hour (800 billion), the number of nodes used (560), and the speed of the algorithm (21 queens in 24 minutes, 38 seconds).
The Plugtest, co-organized by INRIA and ETSI, pleased all the participants. It was useful both for the users, who received help from the ProActive team, and for the OASIS team, who received feedback from the users. Preparing for the Plugtest forced us to add functionality to the middleware and to stabilize the system; certain aspects that had previously been left aside, due to time constraints and competing priorities, turned out to be of primary importance. We are also very satisfied with the results of the N-queens contest, which showed that applications can take advantage of the Grid in a simple way. Another happy discovery was the number of different scientific domains that could use our middleware in their applications. This is a direct effect of the generic programming model underneath, which can be reused for biology, physics and evolving phenomena.
As mentioned earlier, we did have trouble getting the Grid set up, but once this configuration was achieved, the work for the users was simple. Indeed, deployment on the different sites was not a source of problems, which indicates how well ProActive is suited to real usage: users were not bothered by system configuration and could instead focus on the internals of their applications.
Pressed by general demand, INRIA and ETSI will be organizing another Plugtest on Oct. 10-14, 2005. The event is planned to be larger on all scales: we expect more people (over 150), a longer time span (five days), a larger Grid, the use of other middleware, and an even wider panel of domains. This future event will involve several European projects; indeed, two workshops will be held during these five days: a GridCoord workshop, "Open middleware for the Grid," and a CoreGrid workshop, "Programming Models and Components for the Grid." The application for the contest and interoperability Plugtest is not yet fixed, but we have been considering the traveling salesman problem, which requires many more communications and will be even more interesting and demanding to supervise.