December 12, 2005
Azul Systems and GigaSpaces Technologies,
a provider of innovative Grid-based solutions for
data-intensive, business-critical applications, announced test
results showing phenomenal enterprise application performance with a 60
percent reduction in hardware costs using Azul Compute Appliances
with the GigaSpaces Enterprise Application Grid (EAG). The test
demonstrated the ability of Azul and GigaSpaces to improve transaction processing efficiency across an enterprise service
bus (ESB), which is typically at the heart of a service-oriented
architecture (SOA). With the combined Azul and GigaSpaces solution,
organizations can lower total cost of ownership for
their IT infrastructure by increasing throughput,
consolidating hardware and reducing costs for real estate, electricity and cooling.
The benchmark test, conducted at the Azul Center for Unbound
Compute, simulated a typical financial services SOA workload that
correlates disparate order events into a trade order lifecycle. This
correlation requires significant computation and generates large
amounts of traffic to the database. Performing 100 million computations
took 14 hours on six 4-way x86-based servers for a baseline of
approximately 120,000 computations per minute. GigaSpaces EAG was
then used to create a cached shared object store of 16 million objects,
thus replacing the database requests that slow down application performance
with lookups in the application memory heap. The test compared the baseline to the
total throughput achieved using GigaSpaces on the provided servers,
then to a configuration of only one server attached to an Azul Compute Appliance.
By leveraging GigaSpaces EAG with
the same six 4-way x86-based servers, the application was able to
handle more than 1 million computations per minute (up from 120,000 per minute), cutting the 14-hour job down to approximately one
hour and 40 minutes.
With 90 percent of the server resources removed, a
single 2-way x86 server was then authorized to tap into an Azul Compute
Appliance 1920B, and was able to double the throughput again to more
than 2 million computations per minute. With the combination of
GigaSpaces and Azul, the 14-hour job was accomplished in just 50
minutes. Furthermore, the Azul approach reduced the total
infrastructure acquisition cost by more than 60 percent, from
approximately $480,000 to $165,000. Fewer hardware systems and lower
management, maintenance, real estate, electricity, and cooling costs
would result in significant additional savings.
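The figures above can be sanity-checked with simple arithmetic. The sketch below is illustrative only: the throughput and cost numbers are those quoted in the announcement, and the class and variable names are invented for the example.

    // Illustrative back-of-the-envelope check of the quoted benchmark figures.
    public class BenchmarkFigures {
        public static void main(String[] args) {
            double computations     = 100000000;  // 100 million computations in the test workload
            double baselinePerMin   = 120000;     // six 4-way x86 servers, no grid
            double gigaspacesPerMin = 1000000;    // same six servers with GigaSpaces EAG
            double azulPerMin       = 2000000;    // one 2-way server plus an Azul Compute Appliance

            System.out.printf("Baseline:   %.1f hours%n", computations / baselinePerMin / 60);  // ~13.9 hours
            System.out.printf("GigaSpaces: %.0f minutes%n", computations / gigaspacesPerMin);   // ~100 minutes
            System.out.printf("Azul:       %.0f minutes%n", computations / azulPerMin);         // ~50 minutes

            double baselineCost = 480000, azulCost = 165000;                                    // USD
            System.out.printf("Cost cut:   %.0f%%%n", 100 * (1 - azulCost / baselineCost));     // ~66%
        }
    }

The computed values, roughly 14 hours, 100 minutes, 50 minutes and a 66 percent reduction in acquisition cost, line up with the announced results.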
"This test reflects the power of Azul and GigaSpaces to address the
cost and complexity our customers are facing as they scale their
mission-critical applications," said Yaron Benvenisti, CEO of
GigaSpaces. "Building and scaling these high-throughput, low-latency
applications is no easy task and the impressive results of this test
demonstrate that our joint solution delivers the dynamic scalability
and breakthrough performance our customers need."
As more and more companies move to SOAs, they face a number of
critical business issues such as ensuring service availability,
maintaining adequate response times at peak periods and minimizing
excess capacity. Current servers alone are weak in these areas,
especially when running virtual machine-based applications, such as the
Java-based applications that are the basis of many SOA deployments.
Azul technology is optimized for such workloads and is transparently
delivered, as a shared resource, to enterprise applications running
on existing servers.
GigaSpaces EAG lets application developers rapidly build and deploy
high-performance, highly-reliable, business-critical applications that
run on a distributed set of IT resources. It enables an organization to
dynamically distribute data and achieve in-memory levels of performance
with significantly less administration and lower costs than dedicated
hardware solutions. The EAG is based on a leading commercial
implementation of clustered JavaSpaces that provides a rich set of
tightly-integrated application services (such as JMS, Clustering,
Caching, JDBC and parallel processing) on top of a federated shared space.
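As an illustration of the space-based model, the minimal sketch below uses the standard JavaSpaces interface (net.jini.space.JavaSpace), which clustered implementations such as the EAG build on, to write and read shared objects. The OrderEvent entry type and the way the space reference is obtained are assumptions made for this example, not details of the GigaSpaces API or of the benchmark.

    // Illustrative only: a minimal JavaSpaces-style shared object store using the
    // standard net.jini.space.JavaSpace interface. The OrderEvent type and the
    // source of the space reference are assumptions for this sketch.
    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // A JavaSpaces entry must have public fields and a public no-arg constructor.
    class OrderEvent implements Entry {
        public String orderId;
        public String eventType;
        public OrderEvent() {}
        public OrderEvent(String orderId, String eventType) {
            this.orderId = orderId;
            this.eventType = eventType;
        }
    }

    class OrderEventStore {
        private final JavaSpace space;

        OrderEventStore(JavaSpace space) {
            this.space = space;
        }

        // Keep the event in the shared space rather than issuing a database write.
        void publish(OrderEvent event) throws Exception {
            space.write(event, null, Lease.FOREVER);
        }

        // Correlate by matching a template against cached events; null fields act as wildcards.
        OrderEvent findByOrder(String orderId) throws Exception {
            OrderEvent template = new OrderEvent(orderId, null);
            return (OrderEvent) space.read(template, null, JavaSpace.NO_WAIT);
        }
    }

In a clustered deployment, the same read and write calls would operate against the shared space partitioned across the grid, which is how database round-trips are replaced by in-memory lookups in the scenario described above.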