At the End is at the Beginning

I’ve realized that the index in the schedule has to remain the same until a thread is done with a synchronization point (or rather the block following it). Only then, i.e. at the end of the block, may the index be incremented. This way, if a thread is currently executing its code and a second thread enters compactWait, it won’t proceed but will wait to be woken up. Theoretically, I could create a compactAdvance method that gets executed at the end of each block.

But the end of one block is the beginning of the next block, so both pieces of code can be put into compactWait. There is an exception for the first block of a thread, though: the beginning of the first block is not the end of a previous block, so I need a conditional. The new, redesigned algorithm looks like this:

compactWait:

  1. Repeat this… (“RETRY” loop)
    1. monitorenter _waitAlgoObj (beginning of main synchronized block)
      1. If this is NOT the first sync point for this thread…
        1. Advance the indices.
      2. If the buffer has never been loaded, or if the index is at the end of the buffer…
        1. Load the buffer and reset the indices.
        2. Wake up all threads waiting for a buffer update.
      3. If the current sync point marks the end of the schedule…
        1. Disable scheduled replay.
        2. Wake up all threads waiting for a buffer update.
        3. monitorexit _waitAlgoObj
        4. Break out of the “RETRY” loop.
      4. If there’s a thread waiting for this wait array entry…
        1. Notify it
        2. Set a flag to scan ahead for the current sync point (“SCAN”).
        3. Do not advance indices.
      5. Otherwise…
        1. If the current sync point in the array is the right one…
          1. Do not advance indices.
          2. Allow the thread to exit the “RETRY” loop.
        2. Otherwise…
          1. Set a flag to scan ahead (“SCAN”).
      6. If the “SCAN” flag is set (i.e. either a thread was woken up or the current sync point did not match)…
        1. Look for the sync point in the remaining part of the array.
        2. If it can be found…
          1. Insert a new Object() into the corresponding slot of the wait array.
          2. Set that Object() as “wait object” for this method.
          3. Allow the thread to exit the “RETRY” loop.
        3. Otherwise…
          1. Set the new buffer wait object as “wait object” for this method.
          2. Do not allow the thread to exit the “RETRY” loop.
        4. monitorenter waitObj (beginning of wait object synchronized block)
      7. monitorexit _waitAlgoObj (end of main synchronized block)
    2. If a “wait object” has been set for this method…
      1. Call wait() on that object (make sure to continue waiting if interrupted).
      2. monitorexit waitObj (end of wait object synchronized block)
      3. Do not advance indices.
  2. …while the thread is not allowed to exit the loop and scheduled replay is enabled (end “RETRY” loop).
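As a structural sketch, the happy path of the loop above (steps 1–2 and 5.1: advance unless it’s the first sync point, reload the buffer if needed, exit if the current sync point matches) might look like this in Java. All names and fields are hypothetical, and the “SCAN”/wait branches are omitted:

```java
// Sketch of the compactWait retry loop, happy path only (hypothetical names).
public class CompactWaitSketch {
    static final Object _waitAlgoObj = new Object(); // main lock
    static int[] buffer = null;    // schedule buffer: thread ids per sync point
    static int index;              // current position in the buffer
    static boolean replayEnabled = true;

    static void loadBuffer() {     // stand-in for the real buffer refill
        buffer = new int[] {1, 1}; // two sync points, both for thread 1
        index = 0;
    }

    static void compactWait(int threadId, boolean firstSyncPoint) {
        boolean exitLoop = false;
        while (!exitLoop && replayEnabled) {          // "RETRY" loop
            synchronized (_waitAlgoObj) {             // main synchronized block
                if (!firstSyncPoint) {
                    index++;      // this entry is the end of the previous block
                }
                if (buffer == null || index >= buffer.length) {
                    loadBuffer();
                    _waitAlgoObj.notifyAll();         // wake buffer waiters
                }
                if (buffer[index] == threadId) {      // right sync point: go on
                    exitLoop = true;
                }
                // else: set the "SCAN" flag and park on a wait object (omitted)
            }
        }
    }

    public static void main(String[] args) {
        compactWait(1, true);   // first sync point: index stays at 0
        System.out.println("index after first sync point: " + index);
        compactWait(1, false);  // subsequent sync point: advance on entry
        System.out.println("index after second sync point: " + index);
    }
}
```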

compactThreadExit:

  1. monitorenter _waitAlgoObj (beginning of main synchronized block)
    1. If this is NOT the first sync point for this thread… (Note: Conditional probably not necessary)
      1. Advance the indices.
    2. If the index is at the end of the buffer…
      1. Load the buffer and reset the indices.
      2. Wake up all threads waiting for a buffer update.
    3. Otherwise…
      1. monitorenter _waitArray[index] (beginning of wait object synchronized block)
      2. If there’s a thread waiting for this wait array entry…
        1. Notify it.
      3. Otherwise, if the current sync point marks the end of the schedule…
        1. Disable scheduled replay.
        2. Wake up all threads waiting for a buffer update.
      4. Otherwise…
        1. Do nothing, because there’s still a thread awake that will reach this sync point.
      5. monitorexit _waitArray[index] (end of wait object synchronized block)
  2. monitorexit _waitAlgoObj (end of main synchronized block)
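The crucial interplay between the two methods is the wait-array handoff: a thread that arrives too early parks on a per-slot wait object, and whichever thread later passes that slot notifies it. A minimal two-thread sketch of just that handoff, with hypothetical names and Java’s wait/notify standing in for the raw monitor instructions:

```java
// Minimal sketch of the wait-array handoff (hypothetical names; the real
// methods also manage the schedule buffer and indices).
public class WaitArrayDemo {
    static final Object[] waitArray = new Object[4]; // one slot per sync point
    static final Object _waitAlgoObj = new Object(); // main lock
    static final StringBuilder log = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        // Thread B arrives too early: its sync point is slot 2, so it scans
        // ahead, inserts a wait object, and blocks on it.
        Thread b = new Thread(() -> {
            Object waitObj = new Object();
            synchronized (_waitAlgoObj) {
                waitArray[2] = waitObj;     // register as a waiter
            }
            synchronized (waitObj) {
                // keep waiting through spurious wakeups and interrupts
                while (waitArray[2] != null) {
                    try { waitObj.wait(); } catch (InterruptedException e) { }
                }
            }
            log.append("B woken");
        });
        b.start();

        // The main thread plays the notifier: spin until B has registered
        // (busy-wait is for this demo only), then notify under the wait
        // object's monitor so the wakeup can't be missed.
        Object waiter = null;
        while (waiter == null) {
            synchronized (_waitAlgoObj) { waiter = waitArray[2]; }
        }
        synchronized (waiter) {
            waitArray[2] = null;            // clear the slot before notifying
            waiter.notify();
        }
        b.join();
        System.out.println(log);            // prints "B woken"
    }
}
```

The while-loop around wait() is what makes the “continue waiting if interrupted” note above safe, and clearing the slot before notify() means a notification that arrives before B actually calls wait() is not lost.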

The first good news is that the modified algorithm passes the first simple test without violated assertions or invalid end states:

(Spin Version 4.2.3 -- 5 February 2005)
	+ Partial Order Reduction

Full statespace search for:
	never claim         	- (not selected)
	assertion violations	+
	cycle checks       	- (disabled by -DSAFETY)
	invalid end states	+

State-vector 196 byte, depth reached 140, errors: 0
    1486 states, stored
     725 states, matched
    2211 transitions (= stored+matched)
       0 atomic steps
hash conflicts: 0 (resolved)

Stats on memory usage (in Megabytes):
0.309 	equivalent memory usage for states (stored*(State-vector + overhead))
0.490 	actual memory usage for states (unsuccessful compression: 158.38%)
	State-vector as stored = 317 byte + 12 byte overhead
2.097 	memory used for hash table (-w19)
0.320 	memory used for DFS stack (-m10000)
0.159 	other (proc and chan stacks)
0.081 	memory lost to fragmentation
2.827 	total actual memory usage

I’m actually pretty sure compactThreadExit can always advance the indices without checking. There should never be a compactThreadExit without a previous compactWait. But I’ll have to check that.

The conditional for “If this is NOT the first sync point for this thread…” means that I have to add another variable to the java.lang.Thread class, and I’ll also have to get and set this variable in compactWait. That’s a little bit difficult since the variable doesn’t exist in the normal uninstrumented class.

So I think what I’ll do is add two dummy methods, public static void setOldThread() { } and public static boolean isOldThread() { return false; } (the latter can’t be truly empty, since it has to return something), to SyncPointBuffer and then call them from within compactWait. The calls, however, only serve as markers for an instrumentation strategy which inserts the code to set the flag to true or get its value in place of the call.
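A sketch of what those markers might look like (the class shape beyond the two methods is hypothetical):

```java
// Dummy marker methods in SyncPointBuffer (sketch). The bodies never run in
// instrumented code: the instrumentation strategy replaces each call site
// with a get or set of the flag added to java.lang.Thread.
public class SyncPointBuffer {
    public static void setOldThread() { }
    public static boolean isOldThread() { return false; } // dummy return value

    public static void main(String[] args) {
        // Call sites as they would appear in compactWait, pre-instrumentation:
        if (!isOldThread()) {
            System.out.println("first sync point for this thread");
        }
        setOldThread();
    }
}
```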

I plan to use the same technique for embedding monitorenter and monitorexit instructions that don’t correspond to synchronized blocks.
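The same marker trick could look like this for the monitor case (names are hypothetical): the instrumentor would swap each call for a raw monitorenter or monitorexit instruction, which Java source alone cannot emit outside a synchronized block.

```java
// Hypothetical no-op markers for unpaired monitor instructions (sketch).
public class MonitorMarkers {
    public static void enterMarker(Object lock) { } // instrumented: monitorenter
    public static void exitMarker(Object lock) { }  // instrumented: monitorexit

    static String run() {
        Object lock = new Object();
        enterMarker(lock);  // bytecode rewriter substitutes monitorenter here
        String result = "critical section";
        exitMarker(lock);   // ...and monitorexit here
        return result;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```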

Now I have to do laundry, though. I wish I could do that and coding concurrently…

About Mathias

Software development engineer. Principal developer of DrJava. Recent Ph.D. graduate from the Department of Computer Science at Rice University.
This entry was posted in Concurrent Unit Testing.