
5.1 Planning

 5.1.1  Deployment Styles
 5.1.1.1      Extend Clients
 5.1.1.2      Clients
 5.1.1.3      Managers
 5.1.1.4      Isolators
 5.1.2  Enabling Application Cache Storage

5.1.1  Deployment Styles

5.1.1.1  Extend Clients
CloudTran does not directly support Extend clients. In other words, the application server or standalone JVM that runs the client must be part of the grid.
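For example, in the cache configuration a supported client maps its caches to a clustered scheme, whereas an Extend client would map them to a <remote-cache-scheme> - the style CloudTran does not support. The cache and scheme names below are placeholders, not taken from a CloudTran configuration:

<cache-mapping>
  <cache-name>order-cache</cache-name>
  <!-- supported: the client JVM joins the cluster as a member -->
  <scheme-name>distributed-scheme</scheme-name>
  <!-- not supported: mapping to a remote-cache-scheme, i.e. an Extend client -->
</cache-mapping>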

5.1.1.2  Clients
A "client" is both a role and a deployment unit.

In theory, any JVM in the cluster can act in the role of a CloudTran-Coherence client and make ORM calls through TopLink and CloudTran. This means you could have manager or isolator nodes run programs that act as clients. Specific situations may require this, but they are rare. This sort of deployment is not supported as standard and requires a specialised initialization sequence to succeed.

In most architectures, clients are deployed as standalone clients - separately from managers and isolators - which means they do not store application entity data or CloudTran control information.
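As a sketch, a standalone client JVM can be started storage-disabled on all counts. The two CloudTran properties are covered in section 5.1.2 and default to false anyway, but are shown explicitly here; the main class is a placeholder:

java -Dtangosol.coherence.distributed.localstorage=false \
     -Dcloudtran.manager.storage.enabled=false \
     -Dcloudtran.isolator.storage.enabled=false \
     com.mycompany.ClientMain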


5.1.1.3  Managers
There are two standard deployment styles for CloudTran-Coherence managers.

With uniform deployment, there is a single manager node configuration, so the CloudTran transaction manager runs alongside the data caches. This spreads the transaction processing load across all manager/data cache nodes. It is the easiest style to scale because there is no need to balance the grid between data and manager nodes.
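As an illustration, every node in a uniform deployment can be started identically, with the CloudTran manager flag switched on (see section 5.1.2, including the <local-storage> technique for tying the entity caches to the same flag). The use of Coherence's DefaultCacheServer as the launch class is an assumption here:

java -Dcloudtran.manager.storage.enabled=true \
     com.tangosol.net.DefaultCacheServer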

With mixed deployment, the manager functionality is limited to a subset of nodes. Although it is possible to have manager nodes also holding application cache data, normally they don't - manager nodes are dedicated to processing transactions and holding transaction data.

For large grids - of the order of one hundred nodes or more - mixed deployment (a startup sketch follows this list)

  • reduces the number of database connections required
  • makes it easier to manage transaction logs after a complete failure.
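The sketch below illustrates the split between manager and data nodes. How storage for the entity caches is actually switched depends on your cache configuration (see section 5.1.2); the global Coherence local-storage property and the launch class are assumptions for this sketch:

# dedicated manager node: transaction storage on, entity storage off
java -Dcloudtran.manager.storage.enabled=true \
     -Dtangosol.coherence.distributed.localstorage=false \
     com.tangosol.net.DefaultCacheServer

# data node: entity storage on, no transaction storage
java -Dcloudtran.manager.storage.enabled=false \
     -Dtangosol.coherence.distributed.localstorage=true \
     com.tangosol.net.DefaultCacheServer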

5.1.1.4  Isolators
The isolator node provides the central point for ordering transactions across all the managers. This prevents an update from one manager containing a given row from being persisted after a later transaction containing the same row.
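To illustrate the ordering guarantee - this is a minimal sketch, not CloudTran's implementation, and all names are hypothetical - an isolator can remember the sequence number of the latest transaction admitted for each row, and refuse an update that arrives with an older sequence number:

// Illustrative sketch only; not CloudTran code.
import java.util.HashMap;
import java.util.Map;

public class RowOrderingCheck {
    // sequence number of the last transaction admitted for each row
    private final Map<Object, Long> lastSeq = new HashMap<Object, Long>();

    // admit a transaction's rows only if none has already been
    // claimed by a later (higher-sequence) transaction
    public synchronized boolean admit(long txnSeq, Iterable<Object> rowKeys) {
        for (Object key : rowKeys) {
            Long prev = lastSeq.get(key);
            if (prev != null && prev > txnSeq) {
                return false;  // a later transaction owns this row: reject
            }
        }
        for (Object key : rowKeys) {
            lastSeq.put(key, txnSeq);
        }
        return true;
    }
}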

The isolator is a very low-cost node because

  • it has no cache storage - all data is held in memory - so there is no storage or processing for cache backups
  • the representation of the rows is highly optimised (typically the data is only 12 bytes long).
  • the isolator algorithm is highly tuned - CPU utilisation on the isolator is around 5% at 10,000 5-row transactions per second.

Although only one 'primary' isolator node is active at any given time, there must be a backup isolator too, to ensure uninterrupted service if the primary goes down.

There are two options for deployment:

  • given the low utilisation of the isolator, in small grids it makes perfect sense to allocate a small isolator node alongside a manager node on one machine.
  • for large or very high performance applications, place the isolator on its own machine. This avoids delays at the isolator caused by other nodes using the network and CPU. These delays are part of the critical path of committing a transaction; the call to the isolator is not overlapped with any other calls. This means that, if the manager-isolator request has to wait an extra millisecond to reach the isolator, the time to commit the transaction increases by the same amount.

5.1.2  Enabling Application Cache Storage
CloudTran defines private caches to hold transaction manager and isolation information. It enables its cache storage with a pair of system properties, shown here with their defaults:
cloudtran.isolator.storage.enabled=false
cloudtran.manager.storage.enabled=false
The isolator flag above is switched on when the isolator is started.
The manager flag is switched on when the manager is started.
Neither flag is switched on when a client is started.
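For example, an isolator node is started with its flag switched on; the launch class and any further settings are placeholders here:

java -Dcloudtran.isolator.storage.enabled=true <isolator-launch-class>

A manager node is started the same way with cloudtran.manager.storage.enabled=true, and a client leaves both properties at their default of false.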

If you are using the uniform deployment described above, you will want to locate the main application data alongside the managers. To do this, you can use the manager flag on the definition of the entity data caches by putting this line in the cache scheme:
<local-storage system-property="cloudtran.manager.storage.enabled">false</local-storage>
This technique is used in the coherence-cache-config.xml file in the ChildActivities example project.
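In outline, the line sits inside the distributed scheme backing the entity caches. The scheme and service names below are placeholders rather than extracts from that file:

<distributed-scheme>
  <scheme-name>entity-scheme</scheme-name>
  <service-name>EntityCacheService</service-name>
  <!-- storage follows the manager flag: nodes started with
       -Dcloudtran.manager.storage.enabled=true also store entity data -->
  <local-storage system-property="cloudtran.manager.storage.enabled">false</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>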

Copyright (c) 2008-2013 CloudTran Inc.