November 05, 2007
To Keep It Running, Keep It Simple
Today’s electronic world has driven a major shift in how organizations think about resiliency. Firms of all sizes are faced not only with determining how resilient their mission-critical systems need to be, but also with how to architect a “resilient” system efficiently and cost-effectively. With millions of dollars per minute running through electronic channels 24x7, traditional high-availability and disaster-recovery notions are no longer good enough.
For the past five to 10 years, high availability has meant the ability to recover from a server outage within about 15 minutes. Solutions like N+1 clustering and storage area network replication were perfectly acceptable. Today, however, the recovery time associated with these high-availability schemes can cause millions of dollars in lost revenue.
To avoid a potentially massive loss of revenue and efficiency in today’s fast-moving markets, firms must significantly improve their enterprise resiliency. Continuous availability is now the acceptable level of resiliency, and it is quite common in Web-based or other electronic channels to use load-balanced, hot/hot clusters of servers to serve up the business logic. These servers are typically stateless in design, so it is easy to add or remove servers and rebalance the workload. The difficult part is designing a resiliency architecture that makes the data behind those business services hot/hot.
Meeting the Resiliency Challenge: An EDF Approach
The best way to provide nearly 100 percent uptime for data and deliver maximum resiliency is by using data management middleware to ensure there are multiple consistent copies of the active business objects in-memory at all times. As firms strive to get ever closer to 100 percent uptime and ensure resiliency, distributed data caching is gaining in popularity.
Solutions such as an enterprise data fabric (EDF) are ideal for meeting those demands. Presented as a simple HashMap API, the EDF programming model is familiar and straightforward yet delivers maximum value behind the scenes: You simply “put” your state into the HashMap and, under the covers, the middleware takes care of replicating that business object to multiple additional servers.
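A minimal sketch of that programming model, in Java, is below. The Trade class and the map it goes into are illustrative placeholders, not any specific product's API; in a real EDF the Map implementation would be supplied by the fabric, and its put() would perform the replication under the covers.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the HashMap-style programming model described above.
public class OrderCaptureExample {

    // A simple business object kept in the cache (illustrative only).
    record Trade(String id, String symbol, int quantity, double price) {}

    public static void main(String[] args) {
        // In a real deployment this Map would come from the data fabric,
        // e.g. something like fabric.getRegion("trades"); here a local
        // ConcurrentHashMap stands in for it.
        Map<String, Trade> trades = new ConcurrentHashMap<>();

        Trade t = new Trade("T-1001", "IBM", 500, 112.35);

        // The application just "puts" its state; the middleware (not shown)
        // is responsible for copying it synchronously to at least one peer.
        trades.put(t.id(), t);

        // Reads come straight from memory on any member holding a copy.
        System.out.println("Cached: " + trades.get("T-1001"));
    }
}
```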
Sounds easy, right? It is -- until you start to think about the various failure modes, guarantees around zero data loss, low latency and scalability. That’s what makes a product like an EDF worth its weight in gold. The most difficult parts of data management are resiliency, scalability, throughput, latency and dataset size -- and you have to get it right. Every time.
By deploying an EDF, firms will benefit from a very fast, highly scalable distributed caching system. An EDF is designed for use in many diverse data management situations, but is especially useful for high-volume, latency-sensitive, mission-critical, transactional systems. There are several critical features to consider when evaluating an EDF; the most important of them are discussed below.
So how does it work? As soon as an application puts data into the cache, it is replicated synchronously to at least one additional member of the cache. It can also be replicated to additional members or written to a persistent store, but this can be done on a low-priority, asynchronous thread so it doesn’t hold up mainstream processing.
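The sketch below illustrates that split between the synchronous hot path and the low-priority write-behind thread. It is a simplified illustration under assumed names (replicateToPeer, writeBehindQueue), not the actual internals of any EDF product.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: synchronous in-memory replication on the caller's
// thread, with writes to the persistent store drained by a separate
// low-priority background thread so the hot path never waits on disk.
public class WriteBehindSketch {

    private final BlockingQueue<String> writeBehindQueue = new LinkedBlockingQueue<>();

    public WriteBehindSketch() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    String value = writeBehindQueue.take();
                    // Placeholder for the JDBC/file write to the store of record.
                    System.out.println("Persisted asynchronously: " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "write-behind");
        writer.setPriority(Thread.MIN_PRIORITY);
        writer.setDaemon(true);
        writer.start();
    }

    public void put(String key, String value) {
        // 1. Synchronous step: copy to at least one other member (stubbed here).
        replicateToPeer(key, value);
        // 2. Asynchronous step: queue for eventual write-through to disk.
        writeBehindQueue.offer(value);
    }

    private void replicateToPeer(String key, String value) {
        System.out.println("Replicated " + key + " to a peer in-memory");
    }

    public static void main(String[] args) throws InterruptedException {
        WriteBehindSketch cache = new WriteBehindSketch();
        cache.put("T-1001", "IBM,500,112.35");
        Thread.sleep(200); // give the background writer a moment before exit
    }
}
```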
Leveraging Multiple Topologies to Deliver Maximum Value
A true EDF should use three topologies in order to achieve the highest levels of reliability, scalability and speed. The first -- and the backbone of the system -- is the peer-to-peer topology. In this configuration, everybody knows about everybody else. If a new node joins the distributed system, everybody gets notified, and if a node leaves, everybody gets notified. This allows the distributed system to grow and shrink dynamically. There is no notion of a “broker” and no single point of failure; fault tolerance is designed right in.
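To make those membership notifications concrete, here is a hypothetical listener interface; the method and class names are assumptions for illustration, not a particular product's API.

```java
// Hypothetical membership-event sketch for the peer-to-peer topology.
public class MembershipSketch {

    interface MembershipListener {
        void memberJoined(String memberId);
        void memberLeft(String memberId);
    }

    // Every member hears about every join and departure, so each one can
    // rebalance its share of the cached data when the view changes.
    static class RebalancingListener implements MembershipListener {
        public void memberJoined(String memberId) {
            System.out.println(memberId + " joined; shedding part of the workload to it");
        }
        public void memberLeft(String memberId) {
            System.out.println(memberId + " left; promoting backup copies of its data on the survivors");
        }
    }

    public static void main(String[] args) {
        MembershipListener listener = new RebalancingListener();
        listener.memberJoined("node-42");
        listener.memberLeft("node-17");
    }
}
```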
The trouble with peer-to-peer architectures is that the volume of membership metadata flying around limits their scale. In most cases, this topology should only be scaled up to about 100 or 200 nodes.
Scalability can be improved by using a second type of topology -- client-server -- where we elect some of the peers from the peer-to-peer backbone to be servers for client applications (your business logic servers). Each server should be able to manage as many as 100 clients. As there is much less metadata overhead in this topology, it can scale to thousands of nodes.
The third topology is a WAN gateway topology, which can glue together multiple client-server distributed systems. This is an ideal way of creating an enterprise data grid that is globally distributed and appears as one large distributed system, even though it is really many distributed systems glued together.
Appropriate use of these three topologies will enable you to achieve your business requirements around recovery point objective and recovery time objective. Data is replicated across the entire distributed cache, and replication is transactional and performed at the in-memory object level. As soon as an object is put into the cache, it is replicated in-memory to at least one additional node. The data can be replicated to additional nodes either synchronously or asynchronously depending on sensitivity to latency and tolerance for data loss in the event of a catastrophic failure. Write-through to a database or other persistent store is done asynchronously as time permits. In essence, the distributed cache behaves much like RAID for the enterprise.
Additionally, the data can be actively used in both the primary and secondary sites. In fact, the only thing that typically drives the notion of one site even being primary is the external connectivity to the exchanges or ECNs.
Another factor to consider when evaluating an EDF is what we’ll term a "shared nothing" architecture. Because the data in an EDF can be mirrored across multiple nodes in a distributed cache, it eliminates the need for any type of fancy shared storage. In fact, the local disks that are on the blades themselves are often sufficient. In the event that a disk fails, only one node is taken down in the distributed system and there are other nodes alive and ready to take over that workload. Finally, the workload itself is distributed across all the nodes in the distributed system. Exchanges may be split between the two sites and clients will likely be distributed across the two sites, as everything except external connectivity is in a hot/hot configuration.
Let's walk through the simple H/A recovery process for a single node failure: detect the failure; reconnect the clients; recovery is complete. In total, there is less than 1 second from detection of the failure to complete recovery. A little better than 15 minutes! Because the data is all in-memory in the form of business objects all the time, there is no re-booting, no re-fetching of data and no re-creation of objects.
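A hedged sketch of what that looks like from a client's point of view follows. The endpoint names and the isAlive check are invented for the example; the takeaway is that recovery is just detect-and-reconnect, with no reload of data.

```java
import java.util.List;

// Client-side view of single-node failover (illustrative names only):
// notice the dead member, reconnect to a survivor, carry on. No reboot and
// no re-fetch, because surviving members already hold the data in memory.
public class FailoverSketch {

    private final List<String> servers = List.of("cache-a:40404", "cache-b:40404");
    private String current;

    public void connect() {
        for (String server : servers) {
            if (isAlive(server)) {            // in practice: a heartbeat/ping timeout
                current = server;
                System.out.println("Connected to " + current);
                return;
            }
        }
        throw new IllegalStateException("No cache members reachable");
    }

    public void onMemberFailure() {
        // Sub-second recovery: simply pick the next live member.
        connect();
    }

    private boolean isAlive(String server) {
        return !server.startsWith("cache-a"); // pretend cache-a just failed
    }

    public static void main(String[] args) {
        FailoverSketch client = new FailoverSketch();
        client.onMemberFailure();
    }
}
```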
But what about a catastrophic failure? EDF clusters are virtual, so the nodes needn't be located close together within the datacenter -- they can be on separate subnets, using separate routers, power sources, etc. In fact, some of the nodes actually can be physically located in a different site. Therefore, the notion of losing a “cluster” is non-existent; we’re actually talking about loss of an entire datacenter.
If a disaster occurs and the entire primary datacenter fails, the recovery process goes like this: detect the failure; reconnect the exchange at the alternate site; reconnect the clients; recovery is complete. The typical time to recover from the point of detection is around 1 second. That's a huge improvement over the 1-4 hour disaster recovery time common in business today!
As distributed computing deployments become the norm rather than the exception, resiliency will become one of the most critical issues facing global corporations. By using an EDF, firms can achieve nearly instantaneous recovery from outages -- real business continuity -- while simultaneously simplifying their architectures. This one product takes the place of an H/A solution, a shared-storage environment, storage-level replication and wide-area data distribution, removing the need to design a data resiliency architecture for mission-critical systems.
About Mike Stolz
Mike Stolz is vice president of architecture and strategy for financial services at GemStone Systems. In this role, Stolz leverages his expertise in targeting, developing and delivering innovative technology solutions to expand GemStone's global financial services offering and cultivate its growing capital markets division. Previously, Stolz served for nine years as director and chief architect of Merrill Lynch’s global markets and investment banking debt division, where he was responsible for the design and development of trading systems and trading support systems for interest rate, credit and asset-backed derivatives, as well as FX, repos and fixed-income products.