In this interview, Kurt Ziegler, ASPEED Software's executive vice
president of marketing and product management, speaks with GRIDtoday
editor Derrick Harris about Ziegler's recent whitepaper, "Designing
Programs for Performance," and about the Grid market in general --
especially from ASPEED's position as a provider of application acceleration software.
How long have you been at ASPEED?
I have worked with ASPEED since July 2004.
What is your background in Grid computing?
I have worked with distributed computing since the late
'70s. My book, "Distributed Computing and the Mainframe," was published by John Wiley & Sons in 1991, and I have had numerous papers on distributed systems implementations published dating back to the late '70s. I have worked with various Grid approaches and implementations since early 2000.
From ASPEED's perspective, how does the Grid market look?
It is very exciting. The biggest challenge to more
rapid acceptance is that the applications need to be "touched" with
varying degrees of skill and re-engineering effort.
What factors are driving business?
The business is being driven by the need for more runs,
shorter run times, more transactions, faster response times and the
need for more voluminous input -- doing all of this at significantly lower cost.
What obstacles need to be overcome?
The primary obstacles are: knowledge/comfort level with
the applicable technologies/approaches; concern about the skills and
time required to adapt existing applications to Grid environments; and
concern that adapting to the Grid for complex applications will require
major re-engineering and involve rethinking the application flow and
potentially changing computational results.
How do you think the market looks for Grid computing overall?
The market continues to grow, bolstered by the business
demand for capacity and the underlying economic pressures. More and
more successful pilot efforts are completing. I see 2005 as a major turning point, driven by the convergence of more pervasive acceptance of Linux, cluster and commodity systems computing with the emergence of higher-level tools to adapt and manage distributed applications.
I see ASPEED is a member of the GGF. What kind of work have you done with them as a result of that membership?
Our participation has been more as an observer; we offer feedback when asked, since the GGF focuses more on the infrastructure components than on the application per se.
Has ASPEED given any thought to becoming a member of the EGA? Why or why not?
I track the results of the standards, but our focus is more on the application and on masking the aspects of distribution. Our emphasis is to provide a sufficiently high-level application interface that, once included in the application, will leverage the underlying infrastructure. Obviously, the more companies that embrace the standards, the easier it is for us, since we must support a plethora of environments.
If there were a "Distributable Applications" group, I would be very interested in participating. The reason I used the word "distributable" rather than "distributed" is that, ideally, the application should be designed to run equally well on a single CPU or an n-way CPU system, or distributed across a cluster or Grid. The value add of such a focus on the API and services would be to ensure that the application receives optimum performance based on the configuration specifics and the run-time consumption choices. I like to refer to such an approach as "future-proofing." The biggest challenge is that the solution should not require application re-write or re-engineering.
So much of the negative reaction to Grid seems to center
around its perceived complexity and the difficulty of Grid-enabling
applications. Yet, ASPEED's recent whitepaper discusses how companies
can "quickly upgrade" existing apps. How is this possible? Why all the
confusion over upgrading, or Grid-enabling, applications?
Yes, the challenge is Grid-enabling applications if you don't use the right tools. ASPEED provides a high-level application program API, the tools to prepare the application for distribution, and the run-time library that enables a programmer to either annotate (sometimes referred to as "instrument") the source code or wrap a binary. The first piece of the solution is that the program is not functionally re-engineered; it is simply annotated to identify the parallelizable portion(s) of it that are to be run across multiple CPUs or systems. The decision of how many copies to run is made when the program is launched. The run-time libraries take care of the data movement and range distribution among the allocated copies. This is done dynamically and adaptively to balance the completion times of the executing copies. The ASPEED run-time functionality also detects stalls or environmental failures and seamlessly redistributes the work.
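To make that annotate-and-distribute idea concrete, here is a minimal analogue sketch using OpenMP rather than ASPEED's product (the actual ACCELLERANT API is not public). The pragma plays the role of the annotation, and the dynamic schedule approximates the run time's range distribution and completion-time balancing; unlike the ACCELLERANT behavior described above, plain OpenMP adds no stall detection or failure recovery.

    /* Analogue sketch using OpenMP, not the ACCELLERANT API (which is not
       public). The pragma annotates the parallelizable loop; the loop body
       and the surrounding program flow are left untouched.
       Build with: cc -fopenmp example.c */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double result[N];
        double sum = 0.0;

        /* The "annotation": mark the loop and let the run time split the
           index range into chunks, handing them to workers as each frees
           up (dynamic scheduling approximates adaptive balancing). */
        #pragma omp parallel for schedule(dynamic, 1024) reduction(+:sum)
        for (int i = 0; i < N; i++) {
            result[i] = (double)i * i;  /* stand-in for the real calculation */
            sum += result[i];
        }

        printf("sum = %.3e\n", sum);
        return 0;
    }

Note that the number of workers is chosen at launch (for OpenMP, via OMP_NUM_THREADS), which mirrors the point above that the decision of how many copies to run is made when the program is launched.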
In other words, the ASPEED ACCELLERANT run-time software takes care of all the very tedious functionality required -- data serialization and deserialization, bringing up the required copies, chunking and coordination -- without involving the programmer. Because this is all masked from the application, the application retains its original structure.
What I mean here is that the application is neither sliced up into functional components, which are subsequently scheduled, nor are any distribution mechanics added to the application logic. Instead, the entire application is scheduled across multiple workers, and the ASPEED ACCELLERANT run-time functionality passes the flows to the appropriate portions of the distributed copies and coordinates the progress of the copies while they are executing. What this means is that the only special consideration application programmers must concern themselves with is identifying the loops to be parallelized to the ACCELLERANT pre-processor -- this is what I referred to as "annotation." One would simply apply the API syntax around the code to be parallelized. The pre-processor would identify the affected data and insert the appropriate includes to provide the distributed functionality. The resultant linked ACCELLERATED code can then easily be launched on a Grid fabric or simply run across multiple systems without Grid middleware.
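As an illustration of what "applying the API syntax around the code" might look like, here is a hypothetical annotated fragment. The DISTRIBUTE_BEGIN/DISTRIBUTE_END markers are invented for this sketch and are not ASPEED's actual syntax; they are defined as no-ops so the fragment compiles, whereas a real pre-processor would recognize them, identify the affected data and insert the includes that drive the distributed run time.

    /* Hypothetical sketch: DISTRIBUTE_BEGIN/END are invented markers, not
       ASPEED's actual API. Here they expand to nothing; a pre-processor in
       the style described above would rewrite the marked region instead. */
    #include <stdio.h>

    #define DISTRIBUTE_BEGIN(index, lo, hi) /* marks a distributable range */
    #define DISTRIBUTE_END()                /* marks the end of the range */

    #define PATHS 100000

    int main(void) {
        static double payoff[PATHS];

        DISTRIBUTE_BEGIN(i, 0, PATHS)   /* annotation only, not a code change */
        for (int i = 0; i < PATHS; i++) {
            payoff[i] = (double)i * 0.01;  /* original calculation, untouched */
        }
        DISTRIBUTE_END()

        printf("last payoff: %f\n", payoff[PATHS - 1]);
        return 0;
    }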
If that sounded too simple, it is because it is: there is more than a simple API and pre-processor needed to effectively parallelize some loops. This is why there is so much confusion about what can be parallelized and what can't be easily parallelized. For example, a Monte Carlo algorithm lends itself to being split across numerous copies because each iteration is independent. But what happens if the algorithm has a non-linear or geometric relationship? Let's take the geometric algorithm first. The problem with atomizing or transactionalizing it is that as the number of branches expands, so do the required resources and the network chatter, leading to terrible performance characteristics. ASPEED offers a breakthrough in dealing with this and other tough algorithms. ACCELLERANT includes algorithm-aware APIs that treat the distribution and flows differently based on the algorithm and the input characteristics. This enables many applications that were labeled undistributable to be parallelized to use the Grid.
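A small sketch (my illustration in C, not ASPEED code) of why the Monte Carlo case splits so naturally: each sample below depends only on its own random draws, so any range of iterations could run on a separate copy with its own seed. A geometric or path-dependent recursion, where step i needs the result of step i-1, offers no such independent ranges, which is the hard case the algorithm-aware APIs are aimed at.

    /* Why Monte Carlo parallelizes easily: samples are independent, so the
       index range can be split across any number of copies, each running
       with its own seed. (Illustration only, not ASPEED code.)
       Estimates pi by sampling points in the unit square. */
    #include <stdio.h>

    #define SAMPLES 10000000L

    /* Tiny xorshift generator so the sketch is self-contained. */
    static unsigned long next_rand(unsigned long *s) {
        *s ^= *s << 13; *s ^= *s >> 7; *s ^= *s << 17;
        return *s;
    }

    int main(void) {
        long hits = 0;
        unsigned long seed = 88172645463325252UL;

        for (long i = 0; i < SAMPLES; i++) {
            /* Each sample uses only its own draws; in a distributed run,
               each range [lo, hi) would get an independent seed. */
            double x = (double)(next_rand(&seed) % 1000001) / 1000000.0;
            double y = (double)(next_rand(&seed) % 1000001) / 1000000.0;
            if (x * x + y * y <= 1.0) hits++;
        }
        /* Contrast: a recurrence such as v[i] = f(v[i-1]) chains every
           step to the previous one and cannot be range-split this way. */
        printf("pi is approximately %f\n", 4.0 * (double)hits / SAMPLES);
        return 0;
    }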
Speaking of the whitepaper, it also discusses the "six
myths" of multi- processing programs. Can you speak a little about some
of the major myths and why they still exist?
The five-second answer is that many of the practitioners are coming into this space without the benefit of experience or having suffered through the consequences of some of the
design decisions. There is really not much new conceptually, but there
is very little advice available. This is what prompted me to write the
white paper in hopes of helping some folks skip the land mines, or at
least know the options as they build or adapt or re-engineer
applications to run on a Grid.
The longer-winded answer ...
The real reason such myths continue is because they are sometimes
validated. For example, if you have a very bright and highly skilled
programmer, it is quite possible to gather a collection of tools and
middleware and compilers and, in a relatively short time, create a
distributed application that meets the performance design points. The
problem, however, is that the more optimal the solution, the greater the likelihood that very low-level interfaces, specific target systems and input-specific criteria were used for the implementation. The result is that you now have a distributed application that is sensitive to change, is dependent on a very skilled individual to maintain, and typically doesn't include failure or recovery services. The best way to detect these myths is to ask some targeted questions (the list is not in any specific order):
- What skill is required to maintain the code? Multithreading is not easy.
- Is the resultant performance predictable? Can you model and capacity plan for the resultant run time?
- Is the solution portable across operating systems (e.g., Windows, Linux, UNIX)?
- Is the solution dependent on specific data inputs? Does it scale? What happens if the user or business changes some input?
- Is the solution dependent on a specific configuration (e.g., memory, shared memory, etc.)?
- Is the solution dependent on a specific proximity (e.g., connectivity, can it be distributed geographically)?
- Can additional functionality be added without major surgery or unique skills?
What industry sectors (e.g., financial services, pharma, manufacturing, etc.) does ASPEED do the most business with?
We started in the financial services sector because that is where one of our founders came up with the idea of adapting applications rather than re-engineering them. Since then, we have been doing business in the pharma area, where we have parallelized some non-linear models which were heretofore unparallelizable, at least at the fine-grain level, and in the government sector, where we see that conditioning the calculation dynamically and then balancing it promises to provide much better scalability than the hard-coded MPI-based models. We are also starting to see engineering applications dealing with sparse matrices, etc.
Are there any sectors that you see as being ahead of the curve in
regard to developing Grid apps, or Grid adoption in general? Are there
any sectors that you see lagging behind?
This is an excellent question, but it appears to me that the challenge (the lagging) is more horizontal than vertical. What I mean by that is that certain portions of a business could leverage Grid/cluster technologies while other portions can't see how their applications apply. For example, in the financial services area, modeling, simulation and analysis, while being the best candidates, are slow to move because re-engineering the model means significant re-investment in validating the results. This is why ASPEED's design point was not to require any change to the calculations or the program flow. This concern about validation is even more prevalent in the pharma space, where hundreds of runs are required to validate the results prior to submitting a drug to the FDA. Applications like payroll, which calculate individual payments and deductions, are rather simple to parallelize because the individual calculations are independent and the results are easily matched to parallel runs. I think that the challenge to adoption is that the notion of spreading applications across the enterprise's computers is being looked at as an "all or nothing" proposition by some.
Having said that, I see another consideration that is probably an even bigger factor and that often gets overlooked: elapsed time, run time or response time. When applications are parallelized intuitively, the time to run the application is potentially greatly reduced and the workload can be spread to smaller systems, even idle desktops. The problem is what happens when one of the parallel pieces of the application is stuck behind other components. The law of large numbers says that this won't be a problem given enough participating systems. Unfortunately, during pilots, the number of available processors is finite and tends to belong to a population in the same time zone doing the same work. If this is the case, the resultant workload utilization may be very appealing, but the elapsed times of some time-critical jobs may suffer. This leads to dedicated clusters rather than Grid solutions as the first step. ASPEED helps this situation in that, once the workers are allocated, the application is managed to optimum completion time, eliminating any one stuck process from holding up timely completion.
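A small numeric sketch (my illustration, not ASPEED code) of that stuck-worker effect: with a static equal split, the job finishes only when the slowest copy does, while handing out small chunks on demand lets the fast workers absorb most of a slow worker's share, which is the completion-time management described above.

    /* Sketch (not ASPEED code): one slow worker dominates a static split,
       while demand-driven chunking balances the completion times. */
    #include <stdio.h>

    #define WORKERS 4
    #define UNITS   1200
    #define CHUNK   10

    int main(void) {
        /* Time cost per work unit; worker 0 is three times slower. */
        double cost[WORKERS] = {3.0, 1.0, 1.0, 1.0};

        /* Static split: each worker receives an equal fixed range up front,
           so the job ends when the slowest worker finishes its range. */
        double static_makespan = 0.0;
        for (int w = 0; w < WORKERS; w++) {
            double t = (UNITS / WORKERS) * cost[w];
            if (t > static_makespan) static_makespan = t;
        }

        /* Dynamic chunking: each small chunk goes to whichever worker would
           finish it first, i.e., workers pull work as they free up. */
        double busy_until[WORKERS] = {0.0};
        for (int done = 0; done < UNITS; done += CHUNK) {
            int w = 0;
            for (int i = 1; i < WORKERS; i++)
                if (busy_until[i] + CHUNK * cost[i] <
                    busy_until[w] + CHUNK * cost[w]) w = i;
            busy_until[w] += CHUNK * cost[w];
        }
        double dynamic_makespan = 0.0;
        for (int w = 0; w < WORKERS; w++)
            if (busy_until[w] > dynamic_makespan) dynamic_makespan = busy_until[w];

        printf("static split finishes at t=%.0f; dynamic chunking at t=%.0f\n",
               static_makespan, dynamic_makespan);
        return 0;
    }

With these numbers the static split finishes at t=900, while dynamic chunking finishes around t=360, simply because no single slow worker is left holding a fixed quarter of the work.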
Finally, there has been a lot of talk lately of Grid computing
being overhyped, and I'm wondering where you, as a vendor
representative, stand on this? If the technology is as world-changing
as many vendors would have us believe, why are companies not jumping on
board en masse?
I think I have answered some of this question in my earlier commentary. I believe that the dramatic changes Grid computing offers are real:
- enabling businesses to do things they couldn't do before, by enabling applications that couldn't run on a single system because of prohibitive cost and physical constraints;
- the potential to more effectively exploit the existing capacity;
- the ability to use less expensive commodity hardware and open operating systems;
- the ability to leverage the Grid to achieve predictable response-time reductions.
The realization is a function of the applications that can effectively
exploit the Grid infrastructure. The challenge here is that I think
that from both a vendor and IT standpoint, the ideal solution would be
bottom-up and, unfortunately, it isn't that easy. By bottom-up, I mean
an easy knob or appliance that could be applied as an operating system
or configuration additive to magically distribute the workload and be
sensitive to the response-time implications and the result
consequences. The fact of the matter is that some applications can very
easily be distributed with a little effort and sometimes even
auto-magically, but the second reality is that there are many more
business critical applications with extensive programming and testing
investments which require changes to the application itself. ASPEED has
embraced this challenge and has developed a methodology, best practices
and software that bring the ideals closer together by providing a way
to adapt the tougher-to-distribute applications. Our initial focus has been on computationally intensive applications, and this focus is now extending to work with data management vendors.