2.1 Application Description
2.1.1 Data Structure
The application reads and writes through Coherence caches, one per entity, each mapping to a table in the database. The entities are as follows:
- Parent - Represents a human parent. One Parent can have many children and many grandchildren.
- Child - Represents a human child. One Child undertakes many activities.
- GrandChild - Represents a human grandchild, linked directly to the Parent.
- ActivityType - Represents something the Child might do, e.g. participate in a sport.
- Activity - Represents an occurrence of an activity for the Child - e.g. attending a sports club.
- Library - Represents a library. A library contains many books.
- Book - Represents a book contained in a Library. A book is 'borrowed' from the Library.
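The relationships above can be sketched as plain Java records. This is a hypothetical, simplified model for illustration only; the real ActivityTest classes and field names may differ.

```java
import java.util.List;

// Hypothetical, simplified entity model illustrating the relationships
// described above; the real ActivityTest classes may differ.
public class EntityModelSketch {
    record Parent(long id, List<Long> childIds, List<Long> grandChildIds) {}
    record Child(long id, long parentId, List<Long> activityIds) {}
    record GrandChild(long id, long parentId) {}           // linked directly to the Parent
    record ActivityType(long id, String name) {}           // e.g. a sport
    record Activity(long id, long childId, long activityTypeId) {}
    record Library(long id, List<Long> bookIds) {}
    record Book(long id, long libraryId, boolean borrowed) {}

    public static void main(String[] args) {
        // One Parent with 4 Children and one GrandChild, keyed by id
        Parent p = new Parent(1, List.of(10L, 11L, 12L, 13L), List.of(20L));
        System.out.println(p.childIds().size()); // prints 4
    }
}
```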
2.1.2 The Client
The client node runs the main method of an ActivityTest instance.
2.1.3 Client Inputs
There are four arguments that can be specified on the command line; they are all optional:
- The number of iterations per thread. The default is 100.
- The number of threads. The default is 5.
- The number of master objects of standing data per client thread. The default is 20, so by default 20 * 5 = 100 Parent objects of standing data will be created, and the same number of Library objects.
- The number of iterations between interim status reports. The default is 100.
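A minimal sketch of reading these four positional arguments with the documented defaults follows. The argument order and field names are assumptions for illustration, not taken from the real ActivityTest.

```java
// Sketch of parsing the four optional command-line arguments with the
// documented defaults; the real ActivityTest's parsing may differ.
public class ClientArgs {
    final int iterationsPerThread;
    final int threads;
    final int masterObjectsPerThread;
    final int iterationsPerReport;

    ClientArgs(String[] args) {
        iterationsPerThread    = argOrDefault(args, 0, 100);
        threads                = argOrDefault(args, 1, 5);
        masterObjectsPerThread = argOrDefault(args, 2, 20);
        iterationsPerReport    = argOrDefault(args, 3, 100);
    }

    private static int argOrDefault(String[] args, int index, int def) {
        return args.length > index ? Integer.parseInt(args[index]) : def;
    }

    public static void main(String[] args) {
        ClientArgs a = new ClientArgs(args);
        // With no arguments: 5 threads * 20 master objects = 100 Parents
        // (and the same number of Libraries)
        System.out.println(a.threads * a.masterObjectsPerThread);
    }
}
```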
2.1.4 The Transactions
Each client thread in ActivityTest iterates a number of times. Each iteration consists of two transactions:
The first transaction deals with the Parents, Children and Activities and consists of 4 reads, 2 updates, 1 insert and 1 delete.
The insert and delete are both conducted on an Activity and are balanced over time, so the ActivityTest can be run over a long period.
The program flow for transaction 1 is:
- read a Parent from the cache by id
- read the Parent's 4 Child records
- find which Child was the last to do an Activity
- pick an ActivityType and read it by id from the cache (this is deliberately inefficient, to add another read!)
- create a new Activity for the Child and save it, which also updates the Child
- once enough Activities have been created, delete one per iteration. This allows the test to run for a long time without reaching resource limits.
The second transaction deals with the Libraries and Books and consists of 4 reads and 2 updates.
The program flow for transaction 2 is:
- read a Library from the cache by id
- read the Library's 4 Book records
- find which Book was last borrowed
- pick the next Book to be borrowed
- update the Library and the Book
This application is designed to put pressure on the re-use of objects and therefore to stress the logic that ensures isolation between transactions.
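The two flows above can be sketched as follows. Plain HashMaps stand in for the Coherence caches, and the names, structure, and the cap on live Activities are all illustrative assumptions rather than the real ActivityTest implementation.

```java
import java.util.*;

// Illustrative sketch of the two transaction flows; HashMaps stand in for
// the Coherence caches. All names and thresholds are assumptions.
public class IterationSketch {
    static final int MAX_ACTIVITIES = 10;                  // assumed cap before deletes begin

    final Map<Long, List<Long>> childrenByParent = new HashMap<>();
    final Map<Long, Deque<Long>> activitiesByChild = new HashMap<>();
    final Map<Long, List<Long>> booksByLibrary = new HashMap<>();
    final Map<Long, Integer> lastBorrowedIndex = new HashMap<>();
    long nextActivityId = 1;

    // Transaction 1: reads, then 1 insert and - once enough Activities
    // exist - 1 delete, so the data stays bounded over a long run.
    void transaction1(long parentId) {
        List<Long> children = childrenByParent.get(parentId);      // read Parent's Child records
        long child = children.get(0);                              // "last to do an Activity", simplified
        Deque<Long> acts = activitiesByChild.computeIfAbsent(child, k -> new ArrayDeque<>());
        acts.addLast(nextActivityId++);                            // insert a new Activity (updates the Child)
        if (acts.size() > MAX_ACTIVITIES) {
            acts.removeFirst();                                    // balanced delete: one per iteration
        }
    }

    // Transaction 2: read the Library and its Books, then borrow the
    // "next" Book after the last one borrowed, updating Library and Book.
    void transaction2(long libraryId) {
        List<Long> books = booksByLibrary.get(libraryId);          // read Library's Book records
        int next = (lastBorrowedIndex.getOrDefault(libraryId, -1) + 1) % books.size();
        lastBorrowedIndex.put(libraryId, next);                    // update Library and Book
    }

    public static void main(String[] args) {
        IterationSketch s = new IterationSketch();
        s.childrenByParent.put(1L, List.of(10L, 11L, 12L, 13L));   // one Parent, 4 Children
        s.booksByLibrary.put(1L, List.of(100L, 101L, 102L, 103L)); // one Library, 4 Books
        for (int i = 0; i < 100; i++) {                            // many iterations, bounded growth
            s.transaction1(1L);
            s.transaction2(1L);
        }
        System.out.println(s.activitiesByChild.get(10L).size());  // prints 10 (stays at the cap)
    }
}
```

The insert/delete balancing in transaction 1 is what lets the test run indefinitely without exhausting cache or database resources.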
2.1.5 Output Statistics
The first few lines of the sample below are the intermediate timings. These are useful in performance testing; with 1000 iterations between reports, on a server-class node you can quickly see the average performance per client. The end of the console output looks like this:
17:36:11 970 0.110
17:36:11 980 0.109
17:36:11 990 0.156
17:36:11 1,000 0.172
Number of threads: 5
Number of iterations: 100
Target transaction count (#threads * #iterations * transactions/iteration): 1000
Actual transaction count: 1000
Number of transaction failed: 0
Run finished in: 13,875 ms
StartToCommitTime: 21,334 ms
StartToCommitTimeTxOnly: 14,831 ms
Average start and commit time : 21 ms
Average start and commit time Tx Only: 14 ms
The "Run finished" time includes setting up the standing data.
There are two overall timing measurements for the application:
- StartToCommitTime is the total program runtime (including creating the standing data)
- StartToCommitTimeTxOnly is the iteration time (after the standing data is created)
Put another way, StartToCommitTime less StartToCommitTimeTxOnly is the time taken to do the initial reads.
The measure of transactions per second (Tx/sec) is based on the iteration time (excluding standing data setup).
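The two average lines printed above are simply the raw totals divided by the actual transaction count (integer division, matching the printed values):

```java
// Reproducing the printed averages from the raw totals in the output above.
public class Averages {
    public static void main(String[] args) {
        long startToCommitMs = 21_334;  // StartToCommitTime from the output
        long txOnlyMs        = 14_831;  // StartToCommitTimeTxOnly from the output
        long transactions    = 1_000;   // actual transaction count

        System.out.println(startToCommitMs / transactions); // prints 21 (ms), as in the output
        System.out.println(txOnlyMs / transactions);        // prints 14 (ms), as in the output
    }
}
```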
The above figures were taken on a single cache-node/manager set-up running on a desktop machine under Eclipse, persisting to a local MySQL database.
On server-class machines running these transactions, we see anywhere from 100 to 1000 transactions per second on commodity hardware,
depending on the configuration.