October 23, 2006
In this Q&A from the Globus Consortium Journal, Ravi Subramaniam, principal engineer in Intel's Digital Enterprise Group and Grid veteran, talks about his strategy for defining how Intel's products can deliver "industry-leading" value with technologies that include Grid computing.
GLOBUS CONSORTIUM JOURNAL: Can you tell us what you do, and how long you've been doing it?
RAVI SUBRAMANIAM: I'm a principal engineer in the Digital Enterprise Group, which is our group that designs the products (e.g. CPUs, chipsets, storage and communications) for the enterprise segments. I've been working on grid for about ten years, although that depends somewhat on how you define the basic technology. Most recently, I've been working on defining how Intel's products can and may deliver "industry leading" value in the enterprise by understanding and defining compelling usages and architectures, including grids.
GCJ: What can you tell us about Intel's use of grids?
SUBRAMANIAM: So as far as the production environment is concerned, our compute engines are shared through pools of various sizes all over the globe. And of course we have software that uses pretty sophisticated policies to manage scheduling. The machines themselves are heterogeneous; we support pretty much any kind of OS, including handheld devices. These resources are used mainly for activity related to chip design and software development.
GCJ: How large, from a number of machines and compute cycle perspective, is the grid today?
SUBRAMANIAM: I can't give you the exact number, but I can tell you that it exceeds 60,000 machines worldwide. Most of our grid-enabled work is focused on EDA, electronic design automation, and a lot of that activity is focused on what's called validation, which is a very compute-intensive activity that occurs early in the design process.
GCJ: So you mentioned that the grid has been in place for about ten years or so. Any idea what size it was initially?
SUBRAMANIAM: I don't know exactly, but it grew naturally as Intel grew. Initially, Intel only had sites located in the U.S. and one in Israel, but as Intel began to grow into other countries, the software also grew to hook up the machines at these different locations. So whenever Intel has needed resources to design chips, the software has been there.
GCJ: What specifically about the design of chips and the semiconductor industry leads you to use grid computing?
SUBRAMANIAM: One of the main issues is that the number of requisite cycles and data sets to be processed keeps increasing from one generation of chips to the next, and we really cannot afford to scale up our environment without first maximizing the utilization and throughput of every single machine we already have. By putting all our resources, including machines in data centers and those sitting at engineers' desks, into these pools, we've maximized the amount of computing that we can provide to engineers.
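The pooling idea above can be illustrated with a minimal sketch. This is not Intel's actual software, which is homegrown and far more sophisticated; it is a hypothetical toy scheduler showing how data-center servers and idle desktops can contribute capacity to one pool, with jobs greedily placed on whichever machine has the most free slots:

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    free_slots: int  # idle CPU slots this machine offers to the pool

@dataclass
class Pool:
    machines: list = field(default_factory=list)

    def dispatch(self, jobs):
        """Greedily place each job on the machine with the most free slots."""
        placements = {}
        for job in jobs:
            best = max(self.machines, key=lambda m: m.free_slots, default=None)
            if best is None or best.free_slots == 0:
                break  # pool saturated; remaining jobs would wait in a queue
            best.free_slots -= 1
            placements[job] = best.name
        return placements

# Both data-center servers and an idle desktop contribute slots to the pool.
pool = Pool([Machine("dc-001", 4), Machine("dc-002", 2), Machine("desk-042", 1)])
print(pool.dispatch(["sim-a", "sim-b", "sim-c"]))
```

The machine names, slot counts, and greedy policy are all illustrative assumptions; the point is only that pooling lets every idle slot, wherever it sits, absorb work.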
GCJ: So you save mainly on infrastructure by using grid?
SUBRAMANIAM: Well, grid also helps us use our human resources more efficiently. We don't want our engineers to be watching every one of their jobs and waiting for them to get done. With grid-enabled workstations they can "fire and forget," which allows them to get a lot more done in a given amount of time. Additionally, the highly collaborative nature of the projects across the sites makes it necessary to bring the resources of these distributed teams together so that they can work seamlessly.
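The "fire and forget" pattern described above can be sketched with Python's standard `concurrent.futures` API, where a thread pool stands in for the grid. The `run_validation` function and test-case names are placeholders invented for illustration, not anything from Intel's system:

```python
import concurrent.futures

def run_validation(testcase):
    # Placeholder for a compute-intensive EDA validation job.
    return f"{testcase}: PASS"

# submit() returns immediately with a handle, so the engineer can
# "fire and forget" a batch of jobs and get on with other work.
executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)
handles = {tc: executor.submit(run_validation, tc) for tc in ["tc1", "tc2", "tc3"]}

# ... later, gather the results whenever it is convenient ...
results = {tc: h.result() for tc, h in handles.items()}
print(results)
executor.shutdown()
```

The key property is that submission and collection are decoupled: the engineer's time is not held hostage by the job's runtime.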
GCJ: What kind of software does Intel use for its grid deployment?
SUBRAMANIAM: The software is mostly homegrown, stuff that we have developed for maybe fifteen years. It's very similar to the well-known models available out there from key vendors. It leverages some open source code, and we also have vendor-supplied back ends, such as databases and storage solutions. So it derives from a variety of sources, but the majority of it has been written in-house.
GCJ: During the time you oversaw grid, were there any surprises?
SUBRAMANIAM: Nothing really shocking, but one of the main issues that we had to deal with was managing data. And that was a little bit outside the purview of the particular product that we developed, so we were forced to make significant software architecture changes to increase scale.
There are actually very few products out there that support the kind of pool sizes we do. Currently we have between 8,000 and 10,000 machines in our largest pools, which is very high compared to even the best products.
GCJ: Have you managed to reduce your data management overhead?
SUBRAMANIAM: No, actually the data has always been a problem, because it has kept exploding through the years. It's continually a problem, and that factors into how we handle it. I would not say that we have licked the problem.
It is definitely one of our goals, but that's been true for a while, to be quite honest.
GCJ: Have you built reporting into what you've done?
SUBRAMANIAM: We have. We have some very sophisticated reporting within the system. Managers can review data about utilization and allotments and things like that by looking at reports on the Web.
We also have a lot of monitoring and discovery capabilities built into the tool itself. Any administrator who has access and authorization can actually query and find out a lot of details about the resources that are being pooled together, not only within a single pool, but across pools, too.
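The cross-pool query capability described above can be illustrated with a toy in-memory registry. The pool names, host attributes, and authorization check here are all hypothetical, invented for the sketch; they only show the shape of letting an authorized administrator filter machines across every pool at once:

```python
# Toy registry: each pool reports its machines and their attributes.
pools = {
    "oregon": [{"host": "or-01", "os": "linux",   "busy": True},
               {"host": "or-02", "os": "linux",   "busy": False}],
    "israel": [{"host": "il-01", "os": "windows", "busy": False}],
}

def query(predicate, authorized=True):
    """Return matching machines from every pool, tagged with their pool name."""
    if not authorized:
        raise PermissionError("admin authorization required")
    return [dict(m, pool=name)
            for name, machines in pools.items()
            for m in machines if predicate(m)]

# Find every idle Linux machine, regardless of which pool it sits in.
idle_linux = query(lambda m: m["os"] == "linux" and not m["busy"])
print(idle_linux)
```

A real deployment would query live monitoring data over the network rather than a static dictionary, but the aggregation-across-pools pattern is the same.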
GCJ: Has the team looked at the Globus Toolkit at all?
SUBRAMANIAM: Yes, we have. Actually, I was largely behind the focus on Globus at Intel because I was thinking about service-oriented grid as a successor to the current grid implementation. I was looking for code streams that we could use to help us showcase the full service-oriented grid implementation, and that's when I started looking at Globus and started using some of the features. We initially did not use Globus as it was originally intended, as an enabler to hook up local clusters, for instance. But we investigated and have actually prototyped a system with Globus code to enable the basic WS/Service infrastructure, primarily at the node level, and to enable consistent patterns and standard interfaces throughout the hierarchies that define a federated Grid environment. The definition of Globus as a toolkit was helpful in this regard.
GCJ: And is Globus used today at all, or was that an experiment?
SUBRAMANIAM: Yes and no. There was significant progress made towards realizing a service-oriented grid, but that hasn't gone into production yet. A lot of that experience is being used in the forward-looking effort that I just mentioned to you. And there is active work to take parts of Globus into production where it has been used in an inter-cluster mode for helping manage data. But most of the other parts of Globus have not hit production yet.