November 05, 2007
BEAVERTON, Ore., Oct. 30 -- GemStone Systems, the leading provider of distributed data management and virtualization solutions, today announced GemFire Enterprise 5.1, a core component of its high-performance enterprise data fabric (EDF). The new GemFire Enterprise 5.1 release serves as a distributed operational data management infrastructure that sits between clustered application processes and back-end data sources to provide very low-latency, predictable, high-throughput data sharing and event distribution. By managing data in memory, GemFire Enterprise 5.1 enables extremely high-speed data sharing that turns a network of machines into a single, logical data management unit, or data fabric.
GemFire Enterprise 5.1 introduces an advanced set of technical features that deliver end-to-end scalability and performance improvements. The release augments the product's native C++ and C# caching capabilities with highly available asynchronous cache update notifications, ensuring that clients remain protected against server failures.
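To make the notification mechanism concrete, the following is a minimal sketch of a highly available subscribing client. It uses the Java client API of Apache Geode, the open-source successor to GemFire, as a stand-in for the 5.1-era native C++/C# APIs, which were configured differently; the locator address and the "prices" region name are hypothetical.

// Sketch only: modern Apache Geode client API standing in for GemFire 5.1.
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class HaSubscriberSketch {
  public static void main(String[] args) {
    // Subscription redundancy > 0 keeps a backup copy of this client's
    // event queue on a second server, so queued update notifications
    // survive the failure of the primary server.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)   // assumed locator address
        .setPoolSubscriptionEnabled(true)
        .setPoolSubscriptionRedundancy(1)
        .create();

    Region<String, Double> prices = cache
        .<String, Double>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new CacheListenerAdapter<String, Double>() {
          @Override
          public void afterUpdate(EntryEvent<String, Double> event) {
            // Asynchronous, server-pushed cache update notification.
            System.out.println(event.getKey() + " -> " + event.getNewValue());
          }
        })
        .create("prices");                    // hypothetical region name

    // Register interest so the servers push updates for all keys.
    prices.registerInterest("ALL_KEYS");
  }
}

If the primary server fails, the pool transparently promotes a redundant server to primary, and the listener continues to receive events without the client missing notifications.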
“As enterprises seek to move from a typical disaster recovery scenario to a resilient architecture, companies need a dynamic distributed cache to support next-generation enterprise utilities, especially for compute-intensive, fault-tolerant applications,” said Chris Wolf, senior analyst with Burton Group.
“There are a large number of variables in a distributed system that significantly increase the possibility of an error, such as loss of data consistency, missed event notifications, or failure conditions arising from applications, resource limitations or machine failures,” said Jags Ramnarayan, chief architect at GemStone Systems. “With this release, GemFire Enterprise 5.1 minimizes application risk under such conditions and lets users specify any level of redundancy when partitioning data across the cluster. GemFire Enterprise 5.1 also controls how concurrent load is handled on any server through a configurable set of worker threads, and it ensures that events enqueued for delivery to clients can survive server failures.”
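As an illustration of the partitioned-data redundancy Ramnarayan describes, the sketch below configures a server-side partitioned region with one redundant copy of each data bucket. It again uses the Apache Geode API in place of GemFire 5.1's own configuration, and the "orders" region name is invented for the example.

// Sketch only: Apache Geode server API standing in for GemFire 5.1.
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.PartitionAttributesFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class RedundantPartitionSketch {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();

    // redundantCopies(1) keeps one backup of every bucket on another
    // cluster member, so the partitioned data survives the loss of a
    // single server; higher values tolerate more concurrent failures.
    Region<String, byte[]> orders = cache
        .<String, byte[]>createRegionFactory(RegionShortcut.PARTITION)
        .setPartitionAttributes(
            new PartitionAttributesFactory<String, byte[]>()
                .setRedundantCopies(1)
                .create())
        .create("orders");                    // hypothetical region name
  }
}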
The combination of distributed data caching with reliable message delivery gives customers the tools to build next-generation, high-performance, real-time applications. For grid users, GemFire Enterprise 5.1 offers predictable, near-linear scalability as additional resources become available to the data fabric.
“As more and more organizations turn to distributed data grids to improve application performance, minimize latency and reduce operating expenses, they must address the growing reliability and scalability challenges,” continued Ramnarayan. “GemFire Enterprise 5.1 allows users to leverage native client cache enhancements, configure more than one level of redundancy and optimize for high concurrency to guarantee data availability and integrity. This release reinforces our commitment to delivering reliable solutions that improve and simplify our clients’ most critical IT processes and deliver best-in-class scalability for distributed data grids with sub-millisecond latency.”
New features of GemFire Enterprise 5.1 include:

- Highly available asynchronous cache update notifications for native C++ and C# clients, with client event queues that survive server failures
- Configurable levels of redundancy for data partitioned across the cluster
- A configurable pool of worker threads for controlling concurrent load on each server
About GemStone Systems Inc.
GemStone Systems is a privately held infrastructure software company that provides data services solutions for enterprise business architects and data infrastructure managers who are building, enhancing or simplifying the access, distribution, integration and management of information within and across the enterprise. Founded in 1982, and with over 200 installed customers, GemStone is recognized worldwide for its unique competency and patented technology in object management, virtual memory architectures, high-performance caching and data distribution.