
4.5 Default 'config.properties' File

This section describes the configuration properties file, config.properties. It uses the standard Java properties file format, giving configuration property names and default values.

There are four types of values you can specify:

  • boolean - "true" or "false"

  • classes - the fully-qualified class name of an available class. e.g. com.cloudtran.persist.JdbcLoaderFramework

  • int - the simple format is a number. You can also specify "p/d", where 'p' and 'd' are both numeric values: 'p' is used in production (i.e. when trace.debug is *not* set) and 'd' in DEBUG mode (i.e. when trace.debug is set). For example, "800/200" means the default value is 800 in production but 200 in DEBUG mode.

  • a list of int's, separated by commas. As for 'int' above, each value can be a single number or 'p/d' for production/debug. e.g. "30/300, 60/600, 100/1000, 250/2500, 500/5000"
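
For example, a config.properties fragment using these formats (the property names are real properties described below; the values are illustrative):

     ct.logger.compress = true
     ct.manager.event.listener = com.cloudtran.coherence.DefaultManagerEventListener
     ct.manager.maxTransactions = 12000/600
     ct.manager.startBackoffMS = 100/500, 300/800, 500/1250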

 4.5.1  'config.propertiesReaderClass' Property
 4.5.2  'ct.client.cacheWrite.threads' Property
 4.5.3  'ct.client.primaryKeyPreallocationSize' Property
 4.5.4  'ct.client.threads' Property
 4.5.5  'ct.coherence.decoration' Property
 4.5.6  'ct.coherence.mvcc' Property
 4.5.7  'ct.coherence.mvcc.readRetryCount' Property
 4.5.8  'ct.coherence.mvcc.readRetryDelayMillis' Property
 4.5.9  'ct.coherence.prepareFromManager' Property
 4.5.10  'ct.coherence.toplinkgrid.wrapperClasses' Property
 4.5.11  'ct.coherence.useClientCache' Property
 4.5.12  'ct.config.overrides' Property
 4.5.13  'ct.init.user.class' Property
 4.5.14  'ct.isolator.itemsPerBatch' Property
 4.5.15  'ct.isolator.maxMicrosWaitPerBatch' Property
 4.5.16  'ct.jmx.rmi.port' Property
 4.5.17  'ct.logger.compress' Property
 4.5.18  'ct.logger.directory' Property
 4.5.19  'ct.logger.diskBlockSize' Property
 4.5.20  'ct.logger.diskSpaceLowBusyCount' Property
 4.5.21  'ct.logger.diskSpaceLowWarningCount' Property
 4.5.22  'ct.logger.disposer' Property
 4.5.23  'ct.logger.fileBaseName' Property
 4.5.24  'ct.logger.fileDatePattern' Property
 4.5.25  'ct.logger.fileExtension' Property
 4.5.26  'ct.logger.fileSizeMB' Property
 4.5.27  'ct.logger.logAfterCommit' Property
 4.5.28  'ct.logger.logBeforeCommit' Property
 4.5.29  'ct.logger.maxWriteSize' Property
 4.5.30  'ct.logger.threadCount' Property
 4.5.31  'ct.logger.zapLogFiles' Property
 4.5.32  'ct.manager.coherence.applicationCacheServiceThreadCount' Property
 4.5.33  'ct.manager.coherence.invocationServiceThreadCount' Property
 4.5.34  'ct.manager.coherence.isolatorControlCacheServiceThreadCount' Property
 4.5.35  'ct.manager.coherence.manager2IsolatorThreadCount' Property
 4.5.36  'ct.manager.coherence.managerCacheServiceThreadCount' Property
 4.5.37  'ct.manager.coherence.nTooBusyServiceThreads' Property
 4.5.38  'ct.manager.coherence.primaryKeyCacheServiceThreadCount' Property
 4.5.39  'ct.manager.defaultTimeoutSeconds' Property
 4.5.40  'ct.manager.distributedCommit.threads' Property
 4.5.41  'ct.manager.event.listener' Property
 4.5.42  'ct.manager.llapi.persisterClass' Property
 4.5.43  'ct.manager.logwriteBufferTime' Property
 4.5.44  'ct.manager.maxTransactions' Property
 4.5.45  'ct.manager.microsPerLogWrite' Property
 4.5.46  'ct.manager.mvccTimestampGenerator' Property
 4.5.47  'ct.manager.persistLingerTimeMillis' Property
 4.5.48  'ct.manager.primaryKeyAllocationChunkSize' Property
 4.5.49  'ct.manager.startBackoffMS' Property
 4.5.50  'ct.manager.txStatusLingerTimeMinutes' Property
 4.5.51  'ct.persist.maxDatasourceThreadCount' Property
 4.5.52  'ct.persist.maxRowsForDataSourceTransaction' Property
 4.5.53  'ct.persist.threadsPerDatasource' Property
 4.5.54  'ct.production' Property
 4.5.55  'ct.replayer.continueAfterError' Property
 4.5.56  'ct.replayer.retryCount' Property
 4.5.57  'ct.replicator.canReplicate' Property
 4.5.58  'ct.replicator.dataCenter' Property
 4.5.59  'ct.replicator.electer.class' Property
 4.5.60  'ct.replicator.enabled' Property
 4.5.61  'ct.replicator.inbound.remoteDcName' Property
 4.5.62  'ct.replicator.initial.state' Property
 4.5.63  'ct.replicator.link.initializer' Property
 4.5.64  'ct.replicator.localStore.class' Property
 4.5.65  'ct.replicator.max.unAckedPackets' Property
 4.5.66  'ct.replicator.maxObjectSizeKb' Property
 4.5.67  'ct.replicator.outbound.remoteDcName' Property
 4.5.68  'ct.replicator.passive' Property
 4.5.69  'ct.replicator.put.direct' Property
 4.5.70  'ct.replicator.ssd.blockSize' Property
 4.5.71  'ct.replicator.ssd.directory1' Property
 4.5.72  'ct.replicator.ssd.directory2' Property
 4.5.73  'ct.replicator.ssd.fileBasename' Property
 4.5.74  'ct.replicator.ssd.fileExtension' Property
 4.5.75  'ct.replicator.ssd.fileSizeMB' Property
 4.5.76  'ct.replicator.ssd.init' Property
 4.5.77  'ct.replicator.ssd.threadCount' Property
 4.5.78  'ct.replicator.ssd.totalSizeMB' Property
 4.5.79  'ct.replicator.toManagerThreadCount' Property
 4.5.80  'ct.replicator.txToBuffer' Property
 4.5.81  'ct.timesync.callsPerBlast' Property
 4.5.82  'trace.*' Property
 4.5.83  'trace.adjustNanoTimer' Property
 4.5.84  'trace.assert' Property
 4.5.85  'trace.cacheEvents' Property
 4.5.86  'trace.eventHistory' Property
 4.5.87  'trace.eventHistory.waitTime' Property
 4.5.88  'trace.file' Property
 4.5.89  'trace.fileSizeInMegaBytes' Property
 4.5.90  'trace.formatForDiff' Property
 4.5.91  'trace.messages' Property
 4.5.92  'trace.module' Property
 4.5.93  'trace.operation.timer' Property
 4.5.94  'trace.wireshark' Property

4.5.1  'config.propertiesReaderClass' Property
config.propertiesReaderClass = com.cloudtran.util.ConfigPropertiesReader
This is the name of the class that reads the configuration properties. You may want to override the default value to pull in values from somewhere other than 'config.properties'. For example, the class could prompt operators to type in passwords manually.

This property is the very first property used. If you want to override it, it *must* be entered as a system property (java -Dconfig.propertiesReaderClass=x.y.z). Once this property is read, CloudTran
    • resolves the class name
    • attempts to load the class and instantiate an instance of it
    • calls its getConfigProperties() method to get the overriding properties.
If you only want to override a few properties, you can:
    • extend the built-in reader, so that all the normal config.properties processing occurs
    • then override the result for the special properties you want to change or protect:
     class MyPropertiesReader extends    com.cloudtran.util.ConfigPropertiesReader
                              implements IConfigPropertiesReader
     {
         public Properties getConfigProperties()
         {
             Properties rv = super.getConfigProperties();
             //
             // now tweak 'rv' as you need
             //
             return rv;
         }
     }
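
To use the custom reader, name it as a system property on the Java command line of each node (the package name here is illustrative):

     java -Dconfig.propertiesReaderClass=com.mycompany.MyPropertiesReader ...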

4.5.2  'ct.client.cacheWrite.threads' Property
The number of threads the client uses to send transaction data changes to the data caches. These threads are needed so all the data caches can be updated in parallel by the client. The default value is 1 in debug mode and 100 in production.
4.5.3  'ct.client.primaryKeyPreallocationSize' Property
ct.client.primaryKeyPreallocationSize=1000
The number of primary keys that should be allocated per client request, which is made when a client's cache of primary keys for a given entity runs out.

The higher this number, the less allocation overhead there is, but the larger the gaps between allocated ranges on the database will be. To avoid latency from ID allocation requests, CloudTran always keeps one extra batch of ID numbers on hand, and lazily refreshes it in parallel with the client threads.

Each client request for a new block of numbers results in a write to a backed-up cache.
4.5.4  'ct.client.threads' Property
ct.client.threads = 75
ct.client.threads is the number of threads for parallel execution in the client.

The default is 75.

Parallel execution is used to put transactional entries into the grid in parallel and to retrieve blocks of sequence numbers.

This property is not acted on by CloudTran itself - but you can use it as a standard place to configure the number of client threads in your application.
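As a sketch of that usage, an application could read config.properties itself and size a thread pool from the value (how your application obtains the merged CloudTran configuration may differ; this sketch reads the file directly):

     // Sketch: read the configured client thread count and size a worker pool from it.
     // (Uses java.util.Properties, java.io.FileInputStream and java.util.concurrent.*.)
     Properties props = new Properties();
     try( InputStream in = new FileInputStream( "config.properties" ) )
     {
         props.load( in );
     }
     int nThreads = Integer.parseInt( props.getProperty( "ct.client.threads", "75" ).trim() );
     ExecutorService pool = Executors.newFixedThreadPool( nThreads );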
4.5.5  'ct.coherence.decoration' Property
ct.coherence.decoration=10
ct.coherence.decoration is the decorator number that CloudTran uses. You don't need to worry about this unless your application itself uses Coherence decorations on cache entries, or there is a conflict with another application, or you wish to use Coherence 3.6.

These decoration numbers are defined in com.tangosol.util.ExternalizableHelper.

The default '10' is ExternalizableHelper.DECO_APP_1.
4.5.6  'ct.coherence.mvcc' Property
ct.coherence.mvcc defines that a cache should use entries capable of holding MVCC information. The default is that a cache can only operate in PL (Pessimistic Locking) mode.

The format of this property is

     ct.coherence.mvcc.<namespec>=[true | false]

where namespec defines the names of one or more caches.

If the value is true, the specified caches use MVCC; otherwise, the caches use Pessimistic Locking. As the default is false, '...mvcc=false' properties, while useful as documentation, are not strictly necessary: a cache whose name does not match one of these properties is assumed not to be MVCC.

The namespec can be an exact name ("myCache"). It can also end in '*' - e.g. "base*" - which means that all caches whose names begin with "base", including "base" itself, will operate in MVCC mode. The no-base specification "ct.coherence.mvcc.*" means that all caches will operate in MVCC mode.
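
For example (the cache names here are illustrative):

     ct.coherence.mvcc.orderCache=true
     ct.coherence.mvcc.base*=true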

The reason this flag is needed is that in PL mode a cache only stores a single value whereas in MVCC mode a cache entry can hold multiple committed and dirty values.
4.5.7  'ct.coherence.mvcc.readRetryCount' Property
ct.coherence.mvcc.readRetryCount=10
When trying to resolve a consistent read in MVCC (e.g. in NamedCache.get() within an MVCC transaction), CloudTran may find an uncommitted update in the cache. If so, it retries the read, because it cannot be sure if the uncommitted update should affect the value to be returned.

ct.coherence.mvcc.readRetryCount is the number of retries that will be made. The default value is 10.
4.5.8  'ct.coherence.mvcc.readRetryDelayMillis' Property
ct.coherence.mvcc.readRetryDelayMillis=5
When trying to resolve a consistent read in MVCC (e.g. in NamedCache.get() within an MVCC transaction), CloudTran may find an uncommitted update in the cache. If so, it retries the read, because it cannot be sure if the uncommitted update should affect the value to be returned.

ct.coherence.mvcc.readRetryDelayMillis is the delay between these retries, in milliseconds. The default value is 5 (milliseconds).
4.5.9  'ct.coherence.prepareFromManager' Property
ct.coherence.prepareFromManager = false
ct.coherence.prepareFromManager defines where the Prepare instruction is issued from. By default, it is issued from the client and so is carried along with the update into the grid. If you set "prepareFromManager" on a transaction, the Prepare instruction is delayed until the transaction manager starts committing; it then sends a separate Prepare instruction to each cache entry. For more details, see the manual section on prepareFromManager.

You can change the default for prepareFromManager by setting the ct.coherence.prepareFromManager config property true.
4.5.10  'ct.coherence.toplinkgrid.wrapperClasses' Property
ct.coherence.toplinkgrid.wrapperClasses tells CloudTran whether to write wrapper classes - and if so, to which directory. This is used by CloudTran for TopLink Grid only, not by the low-level API.

Wrapper classes are a standard part of TopLink Grid, which CloudTran also uses. These wrapper classes are generated at runtime by TopLink Grid and used immediately. The reason you might want to write these classes out to a directory is to be able to use a monitor like RTView OCM, which requires the classes to be available at load time - because TopLink Grid itself is not run there.

By default, this property is empty, which means the wrapper classes will not be written. If a non-empty string is specified here, it is the absolute or relative directory to write the wrapper classes to.

After you initialize CloudTran, this directory will contain a number of ".class" files: these are the compiled Java classes for the TopLink Grid wrapper files. If you are using OCM to view the grid, you will need to add this directory to your class path.

The recommended usage for this property is:
  • set it in the Isolator start job (and, if you have a specific one, the primary Isolator start job)
  • do *not* specify it in the manager and client
  • but make sure that the wrapperClass directory is on the classpath of the nodes that don't have this property specified.

4.5.11  'ct.coherence.useClientCache' Property
ct.coherence.useClientCache=false
ct.coherence.useClientCache defines the default for client caching - see the manual section on useClientCache.
4.5.12  'ct.config.overrides' Property
This property allows you to define a file for override properties. The value of the property must be a valid file name (not a 'jar' resource). It is a fatal error if this property is specified and the named file does not exist.

This is intended to make it easy to run variants of tests without a complete rebuild and redeployment.

Properties defined in the override properties file override values from config.properties; they in turn can be overridden by system properties (-D... on the command line).

This property's value can be specified either as a system property (-D on the Java command line) or as an entry in config.properties. One way to use this in testing is to specify the name of a shared file that is accessible from all deployments. For example, you could put the filename in the Application model object's 'configProperties' - the override would then be available in all deployments.
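
For example (the file path is illustrative):

     java -Dct.config.overrides=/shared/test/overrides.properties ...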

This is the second property read, immediately after the 'config.propertiesReaderClass' property.
4.5.13  'ct.init.user.class' Property
This is the name of a class to do user application initialization.

The application developer can provide a class to do application-specific initialization across all three CloudTran environments - Client, Manager and Isolator.

If this property is specified, the initialization class must have a no-arg constructor and must implement java.lang.Runnable. The class's run() method will be called before the CloudTran module's main initialization.

However, CloudTran does some pre-initialization so that CTConfig's methods, such as isCloudTranClient() or isManager(), give correct values by the time the run() method is called.

The run() method must not make any calls into CloudTran or the underlying cache (Coherence); the intent is that it initializes external links or other subsystems used by the application.

There is no default for this property; by default, there is no user initialization.

Here is an example of an initialization class:
    public class InitTest implements Runnable
    {
        @Override
        public void run()
        {
            System.out.println( "isCloudTranClient()  =" + CTConfig.isCloudTranClient() );
            System.out.println( "isManager()          =" + CTConfig.isManager() );
        }
    }
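To register the class as the initializer (this assumes InitTest is in the default package; use the fully-qualified name otherwise):

     ct.init.user.class=InitTest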
If you need to do work on only the Isolator or Manager instances, it is preferable to do it in the code that calls StartIsolator or StartManager: do pre-start-up initialization before you make the call, and initialization that depends on CloudTran after the StartManager/StartIsolator call.
4.5.14  'ct.isolator.itemsPerBatch' Property
ct.isolator.itemsPerBatch=70
The maximum number of "items" that a manager can send to the isolator in one batch. This controls aggregation at the manager; CloudTran uses its own aggregation here (rather than Coherence's) because this is a very high-speed link.

An "item" roughly equates to a JPA object, or a data row in a database. So if the average transaction has 2 rows, halve the number specified here to get number of transactions in a batch.

The encoding of one row occupies 16 bytes, so a standard MTU of 1500 bytes can hold about 90 rows if it were all data and a jumbo frame of 9000 bytes can hold over 500 rows. The default is 70 rows, allowing space for Coherence overhead in a standard MTU.
4.5.15  'ct.isolator.maxMicrosWaitPerBatch' Property
ct.isolator.maxMicrosWaitPerBatch=2000
This is the number of *microseconds* to wait to batch up requests going from a manager to the isolator. The default is 2000, i.e. 2 milliseconds.

Because this is done as the very first thing when a transaction commits, and is not overlapped with any other processing, about half this time is added to each transaction. For the default value, this means an overhead of 1 millisecond on average. If you have very demanding performance requirements, reduce this value.
4.5.16  'ct.jmx.rmi.port' Property
ct.jmx.rmi.port=1099
Defines the port to be used for the RMI registry of RMI servers on a node (which CloudTran uses for JMX reporting). Port 1099 is the default port for an RMI registry. You may need to change this if
  • another application (e.g. Firefox) grabs this port
  • you want to run two distinct clusters on the same physical machine

4.5.17  'ct.logger.compress' Property
ct.logger.compress=false
ct.logger.compress is a flag that tells CloudTran whether to compress transactions before writing to the log file. By default this is false, so the data is not compressed. Set it to "true" to compress the log file. Using compressed logging is indicated if your transaction data is large or the logging disk is a 'disk as a service'. Compression saves at least 50% in most cases, and sometimes as much as 80-90% of the space if the data is repetitive.
4.5.18  'ct.logger.directory' Property
ct.logger.directory=transactionLogs
The log directory is where the transaction logs go, for transactions sent through CloudTran. (It has nothing to do with logging and tracing of program events.) The path is relative to the manager's current directory at run time.

If there is a crash of a complete processing unit, do not change this value - this is where any un-committed transactions are stored. If you reconfigure after a crash and before log replay has completed, CloudTran will not be able to find this information.

The log directory holds a number of subdirectories, which are where the information files are stored. The number of subdirectories is defined by the 'logSubDirectoryCount' property. In CloudTran GigaSpaces, the Transaction Log directories are put in the directory "<MainProject>_CoordinatorPU/.transactionLogs". In CloudTran Coherence, the Transaction Log directories are put in the .transactionLogs subdirectory of the Manager's current directory.

The information stored in these logs is for crash recovery. It is removed when a transaction is committed.

Note that the default setting of 'ct.logger.directory' should not be used in Amazon EC2 or similar public cloud, where the local disk is no longer accessible after a crash. In EC2 for example, you should map the log directory to an EBS (Elastic Block Storage) volume.
4.5.19  'ct.logger.diskBlockSize' Property
ct.logger.diskBlockSize = 4096
The block size that writes are rounded up to. Every write to the transaction log files starts on this boundary - extra bytes at the end of a transaction's write are ignored. This is to minimise the chance that a power failure while writing one transaction affects the previous transaction.

The standard block size on modern drives is 4096 - change this at your peril.
4.5.20  'ct.logger.diskSpaceLowBusyCount' Property
ct.logger.diskSpaceLowBusyCount = 3
This is the number of full files-worth of free space that must remain on the log disk partition for processing to continue. If there is less than this amount, the TxBufferManager.start() method throws TransactionExceptionTxbTooBusy. This does not affect transactions that have already started. The default is 3, which, at a ct.logger.fileSizeMB value of 100MB, gives 300MB of disk space. For typical transaction sizes, this allows on the order of 1 million in-flight transactions to complete before disk space runs out.
4.5.21  'ct.logger.diskSpaceLowWarningCount' Property
ct.logger.diskSpaceLowWarningCount = 10
This is the number of full files-worth of free space that must remain on the log disk partition. If there is less than that, a "low log space" warning is issued. If not specified, the default value is 10 - giving space for 10 log files from this point on.
4.5.22  'ct.logger.disposer' Property
ct.logger.disposer = com.cloudtran.log.LogDefaultDisposer
This names the class of the log disposer, which is responsible for disposing of log files. This class must implement ILogDisposer. The default implementation deletes a log file once all transactions listed in it as 'persisting' have been successfully persisted.
4.5.23  'ct.logger.fileBaseName' Property
ct.logger.fileBaseName = Logfile_
Logfile base name. By default this names log files like "Logfile_yyyy.MM.dd.HH.mm.ss.SS"
4.5.24  'ct.logger.fileDatePattern' Property
ct.logger.fileDatePattern = yyyyMMdd_HHmmss.SSS
The date pattern that should be included into the logfile name. Ends up looking like 20100211_173000.500 (11 Feb 2010, 5:30 pm, plus 500 milliseconds).
4.5.25  'ct.logger.fileExtension' Property
ct.logger.fileExtension = log
Extension for the log file.
4.5.26  'ct.logger.fileSizeMB' Property
ct.logger.fileSizeMB = 20/5
The file size of each log file in megabytes. The considerations on this are

  • increase to reduce disk allocation/switching time
  • reduce to (1) improve convenience for operators (2) reduce risk of files being lost.
The default log file size is 20MB in production, 5MB in DEBUG mode.
4.5.27  'ct.logger.logAfterCommit' Property
ct.logger.logAfterCommit=false
ct.logger.logAfterCommit enables lazy logging of the transaction at the Manager - after the commit has returned. This flag is not relevant if 'ct.logger.logBeforeCommit' is set. Only one of ct.logger.logBeforeCommit and ct.logger.logAfterCommit can be set true.
4.5.28  'ct.logger.logBeforeCommit' Property
ct.logger.logBeforeCommit=true
'ct.logger.logBeforeCommit' requires the Manager to log a transaction before returning from (i.e. positively acknowledging) a 'commit()' method. This flag is true by default, which means a transaction is guaranteed to be recorded in the transaction log before returning to the application program. If you set this false, in the case of complete lights-out or grid failure, some transactions are likely to be lost. Only one of ct.logger.logBeforeCommit and ct.logger.logAfterCommit can be set true.
4.5.29  'ct.logger.maxWriteSize' Property
ct.logger.maxWriteSize = 5242880
The amount of data, in bytes, below which the log thread will keep processing outstanding committed transactions. Once the amount of data to be logged exceeds this number, the buffered data is written out. Default is 5242880, which is 5 MB.
4.5.30  'ct.logger.threadCount' Property
ct.logger.threadCount=2
If ct.logger.logAfterCommit or ct.logger.logBeforeCommit is true, this number of threads is created to do the logging and the deletion of logs after persistence. The default is 2, which is sufficient for transaction logs that use hard disks. Transaction logs that use disks as a service may benefit from more threads.
4.5.31  'ct.logger.zapLogFiles' Property
ct.logger.zapLogFiles=false
This is an option for developers and must be used with care!! It zaps the log files at start of day, without attempting to recover the transactions they contain. This is useful after a fast recovery or during unit testing.

Do not set this flag in UAT (user acceptance test) or PROD (production) deployments. In those environments, if there are log files with un-persisted transactions, the normal production run will not start and the operator will need to use the Log File Replay utility.
4.5.32  'ct.manager.coherence.applicationCacheServiceThreadCount' Property
ct.manager.coherence.applicationCacheServiceThreadCount=20/4
ct.manager.coherence.applicationCacheServiceThreadCount is the number of worker threads on the application cache service.

This property must also be specified in the coherence-cache-config on the scheme for application caches, so that CloudTran can correctly back off when too many threads are in use, and so avoid deadlock under high load, e.g.

<thread-count system-property="ct.manager.coherence.applicationCacheServiceThreadCount">20</thread-count>

You then define the number of threads by setting this system property (just like other configuration properties). If not specified, the default number used is 20 in production, 4 in DEBUG. Note that these values override the value in the coherence-cache-config, which is an indicative placeholder.
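
For example:

     java -Dct.manager.coherence.applicationCacheServiceThreadCount=30 ...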
4.5.33  'ct.manager.coherence.invocationServiceThreadCount' Property
ct.manager.coherence.invocationServiceThreadCount=10
ct.manager.coherence.invocationServiceThreadCount is the number of worker threads on the CloudTran Manager Invocation service.

If not specified, the default number used is 4. The invocation service is only used during cache repartitioning.
4.5.34  'ct.manager.coherence.isolatorControlCacheServiceThreadCount' Property
ct.manager.coherence.isolatorControlCacheServiceThreadCount=4
ct.manager.coherence.isolatorControlCacheServiceThreadCount is the number of worker threads on the Isolator Control cache service. This is only used at start of day.

If not specified, the default number used is 4.
4.5.35  'ct.manager.coherence.manager2IsolatorThreadCount' Property
ct.manager.coherence.manager2IsolatorThreadCount=3
ct.manager.coherence.manager2IsolatorThreadCount is the number of worker threads sending on the Isolator Control cache service. This is only used at start of day.

If not specified, the default number used is 3.
4.5.36  'ct.manager.coherence.managerCacheServiceThreadCount' Property
ct.manager.coherence.managerCacheServiceThreadCount=10
ct.manager.coherence.managerCacheServiceThreadCount is the number of worker threads on the CloudTran Manager Cache.

If not specified, the default number used is 10.
4.5.37  'ct.manager.coherence.nTooBusyServiceThreads' Property
ct.manager.coherence.nTooBusyServiceThreads=2/1
ct.manager.coherence.nTooBusyServiceThreads is the number of threads that will be kept unused on a cache service. For each cache service, CloudTran keeps track of the number of service threads in use (servicing Coherence entry processors). If it is about to use a thread and there would no longer be nTooBusyServiceThreads free, it does not continue with the entry processor. Instead, it returns an internal "too busy" indication, which causes the caller of the entry processor to retry after a short delay (10ms).

If not specified, the default number used is 2 in production and 1 in debug mode.

The reason for backing off in this way is to avoid deadlock. By not blocking the cache's service thread, it can still do puts and gets on caches in the service. Without this, there is a possibility of all the service worker threads being in use with another invoke() request waiting - this would wait on the service queue and then block any further reads and writes on the service's caches.
4.5.38  'ct.manager.coherence.primaryKeyCacheServiceThreadCount' Property
ct.manager.coherence.primaryKeyCacheServiceThreadCount=10
ct.manager.coherence.primaryKeyCacheServiceThreadCount is the number of worker threads on the CloudTran Isolator's Primary Key Cache.

If not specified, the default number used is 10, which should be ample in most configurations.

The primary key cache service runs on the isolator and is called when the client needs to acquire more primary keys for a given entity. Primary keys are allocated in batches of 1000 at the isolator and by default returned in batches of 20 to the client application.
4.5.39  'ct.manager.defaultTimeoutSeconds' Property
ct.manager.defaultTimeoutSeconds = 10/300
The default timeout, in seconds. The default value is 10 seconds in production, 300 in debug mode. If this is too small, transactions will time out; if it is too large, deadlocked threads will hang for that amount of time.
4.5.40  'ct.manager.distributedCommit.threads' Property
ct.manager.distributedCommit.threads=20/10
The number of threads the manager uses to send COMMITTED or ABORTED instructions to the data caches. These threads are needed so all the data caches can be updated in parallel. The default value is 10 in debug mode and, in production, the 'ct.manager.coherence.applicationCacheServiceThreadCount' value.
4.5.41  'ct.manager.event.listener' Property
ct.manager.event.listener=com.cloudtran.coherence.DefaultManagerEventListener
The LLAPI manager event listener configuration specifies the class used to process LLAPI transaction events at the manager. Only one class can be specified here. An instance of this class is created in each manager at start of day (during StartManager). The class must have a public no-args constructor and must implement ManagerEventListener. The default is DefaultManagerEventListener.
4.5.42  'ct.manager.llapi.persisterClass' Property
This is the name of the user-provided class that deals with aspects of object persistence. It is only used in LLAPI (the Low-Level API). A singleton of the class is instantiated by CloudTran if specified. The specified class must have a public no-arg constructor and implement com.cloudtran.coherence.llapi.Persister.

If you do not specify this property, CloudTran will not persist any objects - the data in the cache will not be persisted at all.
4.5.43  'ct.manager.logwriteBufferTime' Property
ct.manager.logwriteBufferTime = 500
This is the number of microseconds to leave for the log write itself to complete. The default is half a millisecond (500 microseconds). The actual transaction write-buffering time is therefore microsPerLogWrite minus logwriteBufferTime. In practice, you don't need to tune this value: just tune the microsPerLogWrite value.
4.5.44  'ct.manager.maxTransactions' Property
ct.manager.maxTransactions = 12000/600
ct.manager.maxTransactions is the maximum number of *distributed* transactions that can be started at the Transaction Manager. These are the ones that are in play - started by a client and not yet completed at the Transaction Manager. A transaction is completed when it is committed by the client, and then logged, persisted and committed to the cohort(s); when a transaction is completed, its data can be purged from the space, reducing the data requirements at the server. Note that Cohort.commit() only flushes subtransactions at the cohort; it does not commit the transaction at the Transaction Manager. Note also that the count does *not* include transactions that are waiting to be completed (to log, cohort or persist).

The default is 12000 in production and 600 in debug.

The 12000 number is a starting point: it will probably be too low if the transaction manager JVM has many gigabytes of memory available.

Once this limit is breached, the Transaction Manager returns a TransactionExceptionTxbTooBusy to the transaction start() request.
4.5.45  'ct.manager.microsPerLogWrite' Property
ct.manager.microsPerLogWrite = 4000
ct.manager.microsPerLogWrite determines how long CloudTran waits between each log write. This number is in microseconds.

For physical disks, the idea is that this is the same as the disk's minimum access time - i.e. the time to do one rotation of the disk. For a 7200 rpm disk, which does 120 revolutions per second - one revolution in 8.3 ms - this should be set to 8300 (8.3ms expressed in microseconds).

If this is a disk service in a cloud (i.e. limited by disk access time), the time should be set so as to minimise wasted I/Os (which seem to be the limiting factor, for example, on EC2/EBS). Aim for an average write at a 4K boundary. For example, 5 rows/transaction at 100 bytes each (serialised) = 500 bytes. At a maximum of 2000 transactions/second, this is 1MB/sec of logging, or 1000 bytes/msec. In that case, 4ms between writes (the default) gives 4KB per write every 4 ms.
4.5.46  'ct.manager.mvccTimestampGenerator' Property
ct.manager.mvccTimestampGenerator=com.cloudtran.coherence.txb.DefaultMvccTimestampGenerator
Use this to plug in your own transaction timestamp generator for MVCC. An instance of this class is created in each manager at start of day (during StartManager). The class must have a public no-args constructor and must implement MvccTimestampGenerator. The default is com.cloudtran.coherence.txb.DefaultMvccTimestampGenerator, which uses calls to the primary isolator to get transaction timestamps.

There is a timestamp generator which is faster and more scalable - com.cloudtran.coherence.txb.DistributedMvccTimestampGenerator.
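
To select it:

     ct.manager.mvccTimestampGenerator=com.cloudtran.coherence.txb.DistributedMvccTimestampGenerator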
4.5.47  'ct.manager.persistLingerTimeMillis' Property
ct.manager.persistLingerTimeMillis=0
ct.manager.persistLingerTimeMillis is the default amount of time that the commit method should wait for the persist to complete.

By default, this is 0, which means there is never any persistLingerTime and therefore no waiting for persistence to complete before returning from the transaction. A value of 0 gives better response time than waiting, but at the expense of writing the persistence data into the cache - i.e. an extra 2 network hops (one to write to the backup node and another to remove it).

A non-zero number here may avoid that - but only if the persist is quick enough. If the database write does not return within the number of milliseconds shown here, CloudTran saves the persistence data in the cache and returns. For large transactions, this optimisation can improve overall performance significantly, because writing the backup may take many packets.

This value is used to initialize the persistLingerTimeMillis field in DefaultCTxDefinition. You can change the field before starting a transaction that uses it.
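
As an illustrative sketch of a per-transaction override (this assumes the persistLingerTimeMillis field named above is directly settable; the exact accessor in your DefaultCTxDefinition version may differ):

     DefaultCTxDefinition txDef = new DefaultCTxDefinition();
     txDef.persistLingerTimeMillis = 50;  // assumption: directly-settable field; wait up to 50ms for persistence
     // ... then start the transaction using 'txDef'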
4.5.48  'ct.manager.primaryKeyAllocationChunkSize' Property
ct.manager.primaryKeyAllocationChunkSize=1000
ct.manager.primaryKeyAllocationChunkSize is the number of keys to allocate per request to the KeyGenerationService. The value must balance the number of primary keys a client may leave unused after each interaction or run against the cost of the calls to allocate each chunk.
4.5.49  'ct.manager.startBackoffMS' Property
ct.manager.startBackoffMS = 100/500, 300/800, 500/1250, 500/1250, 1000/2000, 2000/5000
This is the set of backoff times, in milliseconds, that the Transaction Manager's start() method will wait for if ct.manager.maxTransactions is exceeded at a given cohort. There is no default - in other words, by default, CloudTran fails immediately. If you do want automatic backoff in the Transaction Manager, the values shown above are a possible set.
4.5.50  'ct.manager.txStatusLingerTimeMinutes' Property
ct.manager.txStatusLingerTimeMinutes=2
ct.manager.txStatusLingerTimeMinutes is the number of minutes that a distributed transaction record should stay in the Transaction Manager's memory after it has completed (i.e. it has been aborted, or committed+logged+persisted).

Leaving the distributed transaction record to linger has two purposes:
  • if a client crashes and wants to check the status of a transaction, it can call the Transaction Manager's whatHappenedTo() method. The linger time determines how long after the transaction completes this call can give an answer: when the transaction record is deleted, this method returns null
  • if a client tries to start a transaction with the same business transaction ID as one that has already committed or aborted, the Transaction Manager detects this situation and fails the start(). This also depends on the distributed transaction record being in the Transaction Manager's memory.
Note that the memory required to keep 'lingering' distributed transaction records can be very high, so take that into account if you increase this number. For example, at 3,000 transactions per second, a linger time of 10 minutes means around 1.8 million lingering records, which can add on the order of 1-2 GB to the memory used.

4.5.51  'ct.persist.maxDatasourceThreadCount' Property
ct.persist.maxDatasourceThreadCount=20
This property defines the maximum number of datasource store threads that will be started for each datasource. Although this number of threads is started, the number used at any given instant is determined by the ct.persist.threadsPerDatasource property (which can be set by an MBean).
4.5.52  'ct.persist.maxRowsForDataSourceTransaction' Property
ct.persist.maxRowsForDataSourceTransaction=100/4
When collecting objects to send to the databases, CloudTran aggregates a number of transactions together. It is possible to do this because CloudTran knows the object identity and detects when two transactions share an object (because it is of the same class and primary key).

During the aggregation process, CloudTran checks the number of rows already collected for the next database transaction. If this number is >= to 'ct.persist.maxRowsForDataSourceTransaction', then no more rows are added.

The default for this value in production is 100, 4 in DEBUG mode.

Tests show that it is more effective to use smaller chunks (i.e. not 512) and more threads (e.g. 15) when the database is MySQL. For other databases, it is worth experimenting with this parameter and with the threadsPerDatasource property, described next.
4.5.53  'ct.persist.threadsPerDatasource' Property
ct.persist.threadsPerDatasource=15/2
ct.persist.threadsPerDatasource specifies the number of threads that will forward transactions to each datasource. The initial value must be between 1 and the value of "ct.persist.maxDatasourceThreadCount". The default is 15 (or 2 in debug mode), which is appropriate for remote databases. For a local database (e.g. in testing - this deployment doesn't make a lot of sense in production), start with 2.

Note that this value takes effect immediately at run time - for example, when you change it via JMX or a management tool. The threads that have already been started (based on the value of "ct.persist.maxDatasourceThreadCount") monitor the threadsPerDatasource value; any threads not needed to provide this number of live threads are paused.
4.5.54  'ct.production' Property
ct.production=false
ct.production can be set true for running in production. It changes the action of the DefaultManagerEventListener after a hospital event is logged: in non-production, processing continues; in production, the listener exits the JVM via System.exit(), which stops the node running. When ct.production=false, asserts (see trace.assert) in the code are enabled by default; they are disabled when ct.production=true.
4.5.55  'ct.replayer.continueAfterError' Property
ct.replayer.continueAfterError = false
This property determines whether the replayer continues processing after an unrecoverable error. This applies only to the first error seen; the replayer always stops at the second unrecoverable error.
4.5.56  'ct.replayer.retryCount' Property
ct.replayer.retryCount = 5
The retry count in case of SQLRecoverableException or SQLTransientException when replaying transactions after a lights-out event.
4.5.57  'ct.replicator.canReplicate' Property
ct.replicator.canReplicate=true
'ct.replicator.canReplicate' tells whether a node in the Isolator service can act as a replicator.

To indicate that an Isolator node cannot replicate, set this property to false. When true, this property normally also indicates that the node has special communications hardware to support fast WAN links.
4.5.58  'ct.replicator.dataCenter' Property
ct.replicator.dataCenter=localDataCenterName
The 'ct.replicator.dataCenter' property defines the name of the local data center.

If the replicator is enabled, this property is required. Furthermore, it cannot be the same as the name of any of the data centers its replicator connects to.

For example, ct.replicator.dataCenter=East
4.5.59  'ct.replicator.electer.class' Property
ct.replicator.electer.class=com.cloudtran.replicator.isolator.DefaultReplicatorElecter
Use 'ct.replicator.electer.class' to plug in your own class to elect the primary replicator. The replicator service runs on all Isolator nodes; all Isolator nodes are candidates to be the primary replicator.

A single instance of this class will be created at start of day.

It must implement com.cloudtran.replicate.ReplicatorElecter.

If you do not specify this property, the DefaultReplicatorElecter is used, which chooses a replicator by
  • first, trying to avoid the primary isolator
  • second, not moving the primary replicator (if it is running on a given node, don't move it unless that node is now the primary isolator).

4.5.60  'ct.replicator.enabled' Property
ct.replicator.enabled=false
To enable the CloudTran replicator, you must set this property true. By default, the CloudTran replicator is not enabled.

If you implement your own replicator, the CloudTran replicator won't be used (and won't reference the 'ct.replicator.*' config properties) and it is up to you what config properties to use.

(You can replace the CloudTran replicator by implementing a ManagerEventListener that handles the committing() method.)

This flag must be set to the same value on manager and isolator members of the grid. It is different from ct.replicator.canReplicate.

4.5.61  'ct.replicator.inbound.remoteDcName' Property
ct.replicator.inbound.remoteDcName=port
To replicate between two data centers, one data center must define an inbound server connection using the 'ct.replicator.inbound.<remoteDcName>' property. The other data center must define an outbound client connection using 'ct.replicator.outbound.<remoteDcName>'.

The pair of ct.replicator.inbound/outbound configuration properties defines a potential replicator connection between two data centers. If each data center has, for example, 3 nodes that can be replicators, then there are 9 possible inbound-outbound replicator connections. At run time, only one node is the primary replicator in a given data center, and it is the primary replicator that creates the server connection (for inbound) or tries to connect as a client (for outbound). So at any given time, there will only be one connection between the data centers.

The connection between the data centers is two-way: both ends can send and receive data on the connection. In particular, if the data centers split the responsibility for being primary data center for some of the data and backup for others, the connection will be replicating cache entries in both directions simultaneously.

It is possible for one data center to connect to more than one remote data center. The semantics of having multiple connections is that the data changed on this data center as primary is replicated to all the other data centers. (In future, this may be enhanced to allow the application to choose which remote data center to send changed data to.)

There can be many 'ct.replicator.inbound.<remoteDcName>' properties, one for each data center that connects to this node as a server. The <remoteDcName> is replaced by the remote data center name, such as 'West'.

The value of the 'inbound' property is the port number - the port the replicators listen on. Nodes running the replicator service will listen on this port for connections from remote replicator instances. For example,

ct.replicator.inbound.West=9990

allocates a server socket on port 9990 on the local replicator nodes (10.1.1.120 and 10.1.1.121 in this example), which allows connections from data center 'West'.

If a connection is received from a data center not named in the value, the replicator will refuse the connection.

Here is another example of inbound/outbound properties connecting Boston and Seattle:

In Boston:  ct.replicator.inbound.Seattle=5700
In Seattle: ct.replicator.outbound.Boston=5700,156.74.250.21,156.74.250.22,156.74.250.23

  • The primary replicator in Boston listens as a server on port 5700. It only accepts connections from the 'Seattle' data center.
  • The primary replicator in Seattle tries to connect to Boston via the nodes 156.74.250.21/22/23 listening on port 5700. Only one of these nodes can be the primary replicator; when it has started listening as a server, Seattle can establish the data connection.

4.5.62  'ct.replicator.initial.state' Property
ct.replicator.initial.state=dataflow
When the replicator first starts up, it normally (a) resynchronizes the two ends of the replicator and (b) starts normal operations - a state called 'dataflow'. In dataflow, the transactions from an active grid are replicated to the remote data center. The other two options here are
  • 'stop', which exits the CloudTran executable. This is useful for resynchronizing and then taking time to evaluate the new state in the data centers.
  • 'pause', which means this replicator will pause once the resynch is complete. The system can be un-paused via JMX.

4.5.63  'ct.replicator.link.initializer' Property
ct.replicator.link.initializer=com.cloudtran.replicator.link.LoopbackInitializer
The LinkInitializer turns the configuration of the link into a LinkProvider. The default is not very useful - it is just a loopback tester, which only makes sense for unit tests. For production, the simplest connection is com.cloudtran.replicator.link.SocketLinkInitializer, which uses single-connection standard TCP/IP sockets.
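
For example, to use the socket-based link named above:

     ct.replicator.link.initializer=com.cloudtran.replicator.link.SocketLinkInitializer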
4.5.64  'ct.replicator.localStore.class' Property
ct.replicator.localStore.class=com.cloudtran.replicator.ssdStore.SSDLocalStoreImpl
Defines the class that implements the LocalStore for the replicator. The "local store" is the persistent storage for transactions being replicated. So that a committing transaction is not held up by the latency to the other data center (which may be 100ms or more), the cache data being replicated for the transaction is also held in persistent storage by the replicator.

A single instance of this class will be created at start of day.

It must implement com.cloudtran.replicator.localStore.LocalStore.

The default is SSDLocalStoreImpl, which uses a file as the local storage. For faster debug testing, there is also com.cloudtran.replicator.localStore.CacheLocalStoreImpl.
4.5.65  'ct.replicator.max.unAckedPackets' Property
ct.replicator.max.unAckedPackets=10000
This is the maximum number of unacknowledged transactions that can be sent from the local replicator to the remote replicator. Once this number of unacknowledged transactions is reached, the local replicator waits to receive acknowledgements from the remote replicator. The remote replicator only acknowledges transactions that have been committed at the remote data center.

The default value is 10,000.

When the local replicator pauses transmission, the transactions to be replicated are durably stored in the local store, and then sent automatically when more transaction acknowledgements are received at the local replicator.
4.5.66  'ct.replicator.maxObjectSizeKb' Property
ct.replicator.maxObjectSizeKb=1024
This is the maximum serialized size of an object across the wire, in kilobytes. The default is 1 megabyte. If an object involved in a transaction is bigger than this size and replication is enabled, a fatal configuration error results. This is intended as a failsafe, to prevent a program error causing unbounded memory allocation. The memory is not all allocated at start of day - increasing amounts are allocated on demand.
4.5.67  'ct.replicator.outbound.remoteDcName' Property
ct.replicator.outbound.remoteDcName=port,ipAddresses
See ct.replicator.inbound.remoteDcName for background on the inbound/outbound pair of configuration properties.

The value of the 'outbound' property is a comma-separated list. The first item in the list is the server port number at the remote data center's replicators. The remaining items are the IP addresses (e.g. 23.61.254.162) or hostnames of the remote data center's replicators that are listening for replication from this data center. There should be at least two IP addresses in the list, to provide failover for the replicator service.

For example, ct.replicator.outbound.West=9990,replicator1.mydc.myCorp.com,replicator2.mydc.myCorp.com will connect to data center West as a client on port 9990, to the hosts named 'replicator1.mydc.myCorp.com' and 'replicator2.mydc.myCorp.com'.

IP addresses are also possible: ct.replicator.outbound.West=9990,18.9.22.69,18.9.22.70

The outbound property must match on the dataCenter name: if the specified IP addresses are in a data center with a different name, the connection will be refused.
4.5.68  'ct.replicator.passive' Property
ct.replicator.passive=true
If this data center is passive, it means that transactions won't be generated in the data center.

This is an advisory value at present. It controls how ACKs are sent back from the remote data center to the local data center: if ct.replicator.passive=true, ACKs are sent immediately; if ct.replicator.passive=false, ACKs are normally sent on outbound transactions. This only affects a replicator receiving transactions.
4.5.69  'ct.replicator.put.direct' Property
ct.replicator.put.direct=false
This tells whether the replicator uses a simple put to write into the remote grid, or uses a transaction. The reasons for using a transaction are when, on the remote grid, you want: (1) CloudTran to handle persistence; (2) transaction logging - although this should normally not be necessary, because the replicator has the same functionality; (3) CloudTran to respect transactions in flight on the remote grid - but in that case there would be conflicts of ownership of a particular cache/key value, and this feature would not solve the wider problem of having two active writers of the entry. The default is false - i.e. to use a transaction.
4.5.70  'ct.replicator.ssd.blockSize' Property
ct.replicator.ssd.blockSize = 4096
The block size that writes are rounded up to. Every write to the replicator's store files starts on this boundary - extra bytes at the end of a write of a list of transactions are ignored.

The default is 4096, which is good for most SSDs and hard disks. Some SSDs may have a larger native block size.
4.5.71  'ct.replicator.ssd.directory1' Property
ct.replicator.ssd.directory1 = replicator/directory1
This gives the directory location of the files that replicator uses. By default, this is 'replicator/directory1', which is a path below the current directory of the CloudTran isolator application. The default is appropriate for development. In production, the directory should be specified.

Although this is an "SSD" property, there is no check that this is actually on an SSD, so it can be on a normal directory.

The directories can optionally end in '/' or '\'. On Windows, drives, such as "C:", are allowed.

If replication is enabled, you must specify a valid directory name for this property. For initialization (ct.replicator.ssd.init=true), the directory will be created if it doesn't already exist.

In production, this directory should ideally be on an SSD for maximum performance. Performance is severely degraded on Linux if this directory is on the system drive.
4.5.72  'ct.replicator.ssd.directory2' Property
ct.replicator.ssd.directory2 = replicator/directory2
It is highly recommended that you use a second directory on a separate disk from 'directory1' in production and performance testing; this is specified in 'ct.replicator.ssd.directory2'. In development, CloudTran will work with a single directory.

See 'ct.replicator.ssd.directory1' for more information.
4.5.73  'ct.replicator.ssd.fileBasename' Property
ct.replicator.ssd.fileBasename = transactionData
This property is the base filename of all the store files. '_' plus a number (1, 2, etc.) plus the file extension is added to this base to give the full file name.
4.5.74  'ct.replicator.ssd.fileExtension' Property
ct.replicator.ssd.fileExtension = rep
This property sets the file extension for the store files. The default is 'rep', for replicator. Avoid standard names like 'log' that may get deleted by mistake.
4.5.75  'ct.replicator.ssd.fileSizeMB' Property
ct.replicator.ssd.fileSizeMB	= 50/4
This is the size in MB of each file on the disk. It is used in conjunction with 'ct.replicator.ssd.totalSizeMB' to determine the number of files. The production default is 50MB; the debug default is 4MB. When combined with the default 'totalSizeMB' value, this gives 2,000 files in production and 10 files in debug.
4.5.76  'ct.replicator.ssd.init' Property
ct.replicator.ssd.init = false
If the 'init' property is set to 'true', the SSDs will be initialised for use by CloudTran - which will then stop.

To run normally, and make use of the replicator information after a cluster reboot, this property must be false.

To prevent accidental deletion of the replicator information, when this property is true, there must be no files in the directories specified in the ct.replicator.ssd.directory1/2 properties.

In the preferred configuration, with two nodes providing the replicator service, the 'init' run must be done on both nodes.

Best practice is to omit this property from config.properties file in production: specify it as a system property ('-Dct.replicator.ssd.init=true' on the command line) for an initialization run.
4.5.77  'ct.replicator.ssd.threadCount' Property
ct.replicator.ssd.threadCount = 2
This is the number of threads to use for each 'SSD' drive used as the backing store for the replicator.
4.5.78  'ct.replicator.ssd.totalSizeMB' Property
ct.replicator.ssd.totalSizeMB = 100000/40
This sets the amount of storage allocated for the storage of the replicated packets, in MB. This is per directory (i.e. per drive, normally). The production default is 100,000 or 100GB; the debug default is 40MB.
4.5.79  'ct.replicator.toManagerThreadCount' Property
ct.replicator.toManagerThreadCount=30/3
'ct.replicator.toManagerThreadCount' is the number of threads in the remote replicator for forwarding transactions to the managers at the remote grid. This is on a per-manager basis.

The default is 30 in production and 3 in debug. However, this parameter is very sensitive to the size of the grid: this number is multiplied by the number of Managers in the grid.

This parameter should be the same on the replicator and the managers, because the manager also allocates worker threads to handle the load from the replicator.

This number must not be too small, otherwise the number of packets going to a manager will cause the response time to lengthen (because the manager request does not return until all transactions have committed).

On the other hand, if there are a lot of managers, then it should not be too large, otherwise there could be a huge number of threads at the manager (e.g. if you have 50 managers, by default you get 30*50=1500 threads sending from the replicator to managers).
4.5.80  'ct.replicator.txToBuffer' Property
ct.replicator.txToBuffer=10000/100
'ct.replicator.txToBuffer' is the number of outbound transactions to buffer in the primary replicator.

When more transactions are backed up than can be buffered, the transactions are written to SSD (as normal) but not buffered in the replicator. When it comes time to send a transaction to the remote replicator to be backed up there, it can be retrieved directly from memory if it is buffered; otherwise it has to be re-read from the SSD. The default is 10,000 in production and 100 in debug. The absolute minimum is 20. There is no maximum.

The number given here is a 'high water mark' - the replicator does not exceed it. Once buffering stops, the replicator does not start buffering again until the number of transactions buffered falls to the 'low water mark', which is 95% of the high water mark - e.g. 9,500 with the production default of 10,000.

The cost of re-reading from the disk is that it puts more pressure on the SSD(s) and on the node's memory bus.

If a transaction uses 2KB in its memory form, the default of 10,000 here puts a limit of 20MB on the size of transactions buffered.

The downward pressure on this number is the additional pressure on memory and garbage collection: memory held in the buffers for too long will be promoted and require a full GC to release.

If this value is specified too large, the system may get OutOfMemory errors.
4.5.81  'ct.timesync.callsPerBlast' Property
ct.timesync.callsPerBlast = 4
The manager makes multiple calls to get the time from the isolator, and then chooses the best of them. ct.timesync.callsPerBlast gives the number of calls in each blast. The default is 4, which is the optimum balance between getting a good reading and minimising the overhead of timesync calls.
4.5.82  'trace.*' Property
The trace.warn/info/debug/trace properties define whether or not to trace at different priority levels. trace.warn is the highest priority; trace.trace is the lowest priority, i.e. the most detailed. By default, only info and warning traces are written: trace.warn=true and trace.info=true, while trace.debug=false and trace.trace=false.

The level of trace is normally overridden by a '-D...' property on the command line. If trace.trace is set true, .debug and .info are automatically switched on. If trace.debug is set true, .info is automatically switched on.

Default values are
  • trace.warn = true
  • trace.info = true
  • trace.debug = false
  • trace.trace = false
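
For example, to switch on debug tracing for a single run without editing config.properties:

     java -Dtrace.debug=true ...

(trace.info is then switched on automatically.)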

4.5.83  'trace.adjustNanoTimer' Property
trace.adjustNanoTimer = true
This tells cohorts and other nodes to adjust the nanotimer to allow for round-trip delay. This is useful in debugging, to cross-reference log and trace files on different machines.
4.5.84  'trace.assert' Property
trace.assert defines whether assertions should be enabled. "trace.assert=true" will process asserts, which are special trace statements rather than the standard Java facility. The "trace.assert" config property is set into the Trace.ASSERT field, which is used as a filter on the Trace.Assert() method; the method takes an asserted condition. For example:

if( Trace.ASSERT) Trace.Assert( driverClass != null );

This asserts that driverClass is not null; it fails if driverClass is null.

You can set this property true by 'trace.assert=true' in the config.properties file or on the command line (-Dtrace.assert=true).

If trace.assert is not specified, it is enabled if either
  • not in production (ct.production=false), or
  • in debug mode (trace.debug=true).

4.5.85  'trace.cacheEvents' Property
trace.cacheEvents = false
Trace all cache events. This traces every event on all caches known to CloudTran. It obviously produces a huge amount of output. This is intended for internal use by CloudTran to debug failover sequences.
4.5.86  'trace.eventHistory' Property
trace.eventHistory = false
Add event history objects into transaction statuses. This enables tracing of transaction execution in the manager. This can only be set at start of day and should not normally be used in production.
4.5.87  'trace.eventHistory.waitTime' Property
trace.eventHistory.waitTime=300
This is the number of seconds to wait between dumps of the transactions that are in the process of committing but not yet complete (held in 'nowCommittingOrAbortingMap'). The default is 300 seconds, or 5 minutes. This only has an effect when trace.eventHistory is set true.
4.5.88  'trace.file' Property
This is the trace file name for this processing unit. If you do not specify a file (path) here, it will default to the console. In GigaSpaces, the console log will additionally go to a file like [[GIGASPACES_HOME]]/logs/YYYY-MM-DD~hh.mm-gigaspaces-gsc_#-[[hostname]]-nnnn.log

You can do multiple runs (even simultaneously) with the same target file name. Say you start with trace.file=MyApp.log. The second run will use the file name MyApp.log.A, the third MyApp.log.B... then MyApp.log.A2 etc.
4.5.89  'trace.fileSizeInMegaBytes' Property
trace.fileSizeInMegaBytes=20
The default file size for the trace file is 20 Megabytes. After that it will roll over to the specified trace file name suffixed by "_n", where n=2, 3, etc.
4.5.90  'trace.formatForDiff' Property
trace.formatForDiff=false
If trace.formatForDiff is set true, timers and line numbers are omitted from the trace, to make it easier to use comparison tools, like WinDiff, on the log.
4.5.91  'trace.messages' Property
trace.messages = false
Trace 'messages' - that is, any non-grid communications between machines, such as between replicators. Messages are also traced if trace.debug is true.

By default, this is false. This property allows you to switch on message tracing without getting all the rest of the debug output.
4.5.92  'trace.module' Property
The module name. Try to keep this to 7 characters or less. It comes out on each trace line generated by this module. After merging the trace lines, this will help you distinguish which module did what. If this is left unspecified, the defaults are ISOLTR for Isolators, MANAGR for Managers and CLIENT for clients.

When working with the replicator trace (in trace.debug mode), the module name governs which is the "left" isolator. An isolator is the left one if the module name ends in "1" or "A".
4.5.93  'trace.operation.timer' Property
trace.operation.timer = false
trace.operation.timer records the elapsed time of individual operations, like space calls, service calls or locks. This is for internal performance testing of the CloudTran system. You should not normally need to set it.

The usage pattern is to create an operation timer if this value is set; the value is mapped into the Java variable Trace.OPERATION_TIMER. The OperationTimer is instantiated with a message, and the elapsed time is then logged by the mark() call.
     OperationTimer ot = null;
     if( Trace.OPERATION_TIMER ) ot = new OperationTimer( "enrol() " + internalTxId );
     ...
     if( Trace.OPERATION_TIMER ) ot.mark();

4.5.94  'trace.wireshark' Property
trace.wireshark = false
Add markers to transmitted messages to make it easier to interpret Wireshark traces. This can only be set at start of day and should not normally be used in production.

Copyright (c) 2008-2013 CloudTran Inc.