
7.1 Replicator Overview

This section gives an overview of the Replicator and its main components.

Transactional changes in the local grid ("Grid1") are replicated to a remote grid ("Grid2"). Configuration options determine whether a deployment runs in Active-Passive or Active-Active mode. In an Active-Passive deployment, the local replicator sends transactions to the remote replicator, which then commits them. In Active-Active mode, each replicator sends transactions to the other, and both commit incoming transactions.

To enable replication, you must set the ct.replicator.enabled config property to true. The default is false, i.e. replication is off.
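
How this property is supplied depends on your deployment's configuration mechanism. The snippet below is only a sketch, assuming the property is set either in a CloudTran properties file or as a Java system property on the node's command line; check your deployment's configuration setup for the actual location.

    # In the CloudTran properties file (path depends on your deployment)
    ct.replicator.enabled=true

    # ...or as a system property when starting the JVM
    -Dct.replicator.enabled=true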

 7.1.1  Overriding the Replicator
 7.1.2  Replicator Election
 7.1.3  Replication Processing

7.1.1  Overriding the Replicator
CloudTran's replicator can be replaced by implementing the committing() method in your own ManagerEventListener class. If this method returns true, the transaction manager does not invoke the CloudTran replicator - even if it is enabled.
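
A minimal sketch of such a listener is shown below. The exact committing() signature, and the package containing ManagerEventListener, are assumptions here and should be taken from the CloudTran javadocs; the sketch assumes committing() receives the ManagerEvent for the transaction being committed.

    // Sketch only: the committing() signature is assumed, not confirmed -
    // check the ManagerEventListener javadoc for the exact form and package.
    import com.cloudtran.coherence.ManagerEvent;

    public class CustomReplicationListener implements ManagerEventListener {

        public boolean committing(ManagerEvent event) {
            // Hand the transactional changes to your own replication
            // mechanism here (hypothetical hook).
            forwardToCustomReplicator(event);

            // Returning true tells the transaction manager NOT to invoke
            // CloudTran's replicator, even if ct.replicator.enabled is true.
            return true;
        }

        private void forwardToCustomReplicator(ManagerEvent event) {
            // Custom replication logic goes here.
        }
    }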


7.1.2  Replicator Election
The replicator service in CloudTran requires a node on each Grid to be elected as the 'Replicator'. The replicator is chosen from the members of the Isolator service, because replicator calls use the Isolator service. One of the Isolator nodes is elected at start of day, as the Isolator nodes start up, or again after the current Replicator leaves the cluster. Only one node runs the replicator at any given time.

All the Isolator nodes are eligible to become the (primary) 'Replicator' node. However, it is best to have the primary Isolator and the replicator running on different nodes. This is the reason for having an election, rather than just using the primary Isolator also as the replicator.

There is a default electer, DefaultReplicatorElecter. You can implement your own electer class, which must implement ReplicatorElecter.
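
The sketch below illustrates the intent of a custom electer - prefer an Isolator member other than the primary Isolator - but the method name and parameters are hypothetical; the real ReplicatorElecter interface is defined in the CloudTran javadocs and should be followed instead.

    // Hypothetical sketch: ReplicatorElecter's actual methods may differ -
    // consult the javadocs for the real interface.
    import java.util.List;

    public class PreferNonPrimaryElecter implements ReplicatorElecter {

        // Pick a replicator from the Isolator members, avoiding the
        // primary Isolator whenever another member is available.
        public String electReplicator(List<String> isolatorMembers, String primaryIsolator) {
            for (String member : isolatorMembers) {
                if (!member.equals(primaryIsolator)) {
                    return member;
                }
            }
            // Only one Isolator member: fall back to the primary.
            return primaryIsolator;
        }
    }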


7.1.3  Replication Processing
During the start-up phase, the Replicators on both Grids start communicating with each other. This link between the replicators is maintained by the replicator service.

The replication of transactional information starts on the manager in the commit phase of a transaction, from the Committing event, by a call to the Replicator's Api.replicate(ManagerEvent, NamedSemaphore) method. Each manager aggregates this information and passes it to the local Replicator.

The first task of the local Replicator is to store the transaction locally. In production this should be a durable local store: an SSD is preferable; hard disks can be used, but with a significant performance penalty. There is also a non-durable store provided which uses the Coherence cache; this should be used for development or comparison testing only.

You can implement your own LocalStore too, as described on the next page.

If this call returns successfully to the manager, the replication is guaranteed to complete, sooner or later. The replicator returns as soon as it has stored the transaction; the local commit does not wait for the remote replicator to receive the message or finish committing. If either the local or the remote replicator goes down, or the replication process is paused, replication completes once the channel is re-established.

Once the Local Store has stored the transaction, the information is sent to the remote replicator. The remote replicator's job is to pass the transactions to the appropriate manager on the remote Grid.

Copyright (c) 2008-2013 CloudTran Inc.