4.  Developing with CloudTran and TopLink Grid

4.3 Caches And Threads

 4.3.1  Configuring Threads
 4.3.1.1      Manager Threads
 4.3.1.2      Application Threads
 4.3.2  Application Caches

4.3.1  Configuring Threads

4.3.1.1  Manager Threads
CloudTran protects itself against excessive demand by restricting the number of threads that can be used inside the manager.

Each call into the manager to begin or commit a transaction, triggered by JPA's transaction.begin() and transaction.commit(), holds a manager cache service worker thread. There must be enough of these threads to service your maximum expected load. The number is set by the ct.txb.coherence.managerCacheServiceThreadCount property. To calculate it, start with the number of JPA transactions you expect to be active at any one time - from the start of 'begin' to the end of 'commit' - across the whole application, then divide by the number of manager nodes:
managerCacheServiceThreadCount = nJpaTransactions / nManagerNodes
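The formula above can be sketched as a small helper. The class, method name, example numbers, and the rounding-up are illustrative additions, not part of CloudTran's API:

```java
// Sketch of the manager thread sizing calculation described above.
public class ManagerThreadSizing {

    // managerCacheServiceThreadCount = nJpaTransactions / nManagerNodes,
    // rounded up here (our assumption) so no manager node is left short.
    static int threadsPerManager(int nJpaTransactions, int nManagerNodes) {
        return (nJpaTransactions + nManagerNodes - 1) / nManagerNodes;
    }

    public static void main(String[] args) {
        // e.g. 200 concurrently active JPA transactions across 5 manager nodes
        System.out.println(threadsPerManager(200, 5)); // prints 40
    }
}
```

For instance, 200 concurrently active JPA transactions spread across 5 manager nodes gives 40 worker threads per manager.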
The downside of creating too many manager threads is reduced performance, partly because the sheer number of threads on a machine may add operating system-level overhead, but more importantly because threads can back up when there are excessive simultaneous cache writes or network traffic. On a Gigabit network using four-core server machines, with the optimal number of manager threads, CloudTran can commit in approximately 10 ms under moderate traffic and 20 ms at maximum load. When too many manager threads are configured and the transaction managers become overloaded, overall performance decreases significantly - by as much as a factor of 10.

The downside of creating too few manager threads is more backoffs at high load. Backoffs are handled within CloudTran and do not cause the transaction to fail; the only cost is the minor performance hit of an unsuccessful call from one manager node to another. Because this impact is small, it is better to allocate slightly too few manager threads than too many. You will know there are too few threads when the manager reports

Invoke on cache ManagerControlCache delayed for 20 ms.
where '20' is replaced by the actual delay. If you see more than one or two of these messages in the manager console and you are looking to improve aggregate throughput, try increasing managerCacheServiceThreadCount and see if it helps.

4.3.1.2  Application Threads
The 'Manager Threads' of the previous section are the worker threads on the ManagerControlCache, which is defined by CloudTran.

The 'Application Threads' described here are the worker threads on the Application cache. The application caches (also referred to as entity caches because they cache the application's entity objects) are not defined by CloudTran; they are defined in the application configuration.

However, you must take note of the following CloudTran requirements when defining application caches:

  • the number of worker threads on the application caches must be set via the property
    ct.txb.coherence.applicationCacheServiceThreadCount

    In the coherence-cache-config.xml file, this looks like:
    <thread-count system-property="ct.txb.coherence.applicationCacheServiceThreadCount">
        20
    </thread-count>
    
    This is necessary so CloudTran knows how many worker threads are available and can avoid deadlock. See the configuration property ct.txb.coherence.applicationCacheServiceThreadCount for further information.

  • the enabling of local storage. This is an issue in "uniform" deployment mode, as described in the Managers deployment description, where the goal is to put the application data on the same nodes as the managers and their caches. To achieve this, use the manager storage flag to enable the storage of application cache data; the <local-storage> element in the <distributed-scheme> then looks like this
    <local-storage system-property="cloudtran.manager.storage.enabled">false</local-storage>
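Taken together, these two requirements mean an application cache's <distributed-scheme> might look like the following sketch. The scheme name, service name, and thread count are illustrative assumptions; only the two system-property names come from CloudTran:

```xml
<distributed-scheme>
  <scheme-name>application-entity-scheme</scheme-name>
  <service-name>ApplicationCacheService</service-name>
  <!-- Worker threads, reported to CloudTran so it can avoid deadlock -->
  <thread-count system-property="ct.txb.coherence.applicationCacheServiceThreadCount">20</thread-count>
  <!-- In "uniform" deployment, storage follows the manager storage flag -->
  <local-storage system-property="cloudtran.manager.storage.enabled">false</local-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```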
    

4.3.2  Application Caches
By default, TopLink Grid creates Coherence caches for entity data using the name of the entity as the cache name.

You can change the cache names for the entities and you can even share caches. This would make sense if you are using the same table to generate IDs. An example of doing both is below, where the Child uses the Parent table to allocate IDs and shares the Parent cache.
@Entity(name = "Child")
@Table(name="CHILD")
@Customizer(CTReadWriteCustomizer.class)
@Property(name=COHERENCE_CACHE_NAME, value="Parent")
public class Child {
    // entity fields and mappings as usual
}
CloudTran includes an example application cache definition, which in many cases will be sufficient for live applications, but using it is not required. In other words, CloudTran finds the entity caches using TopLink Grid's APIs, so you do not need fixed names for the application caches and services. (This is in contrast with the CloudTran-specific caches, whose names must be defined in coherence-cache-config.xml.)

Copyright (c) 2008-2013 CloudTran Inc.