I discussed earlier how synchronized blocks are not going to cut it anymore, because I have to interleave the regions in which the locks are held, and that makes the code significantly more complex. Therefore, I decided to dig out SPIN and Promela again. Fortunately, I had taken a course on them, COMP 607: Automated Program Verification.
This project is a lot more complex than the toy examples I had to work with before. One thing that I found particularly annoying is that Promela doesn’t have a notion of procedures. The code in replayWait and replayThreadExit is going to be executed in many places, and to get it correct, I really want to avoid code duplication. And that’s what procedures should be there for… In Promela, I could have spawned a new process and waited for its completion, but I wanted to have only one Promela process per simulated thread.
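Just to illustrate the alternative I rejected: a “procedure call” can be faked in Promela by spawning a helper process and blocking on a rendezvous channel until the helper signals that it is done. A minimal sketch (all of the names here are made up for illustration; this is not code from the project):

chan done = [0] of { bit };       /* rendezvous channel used as the "return" */

proctype fakeProcedure(int tid) {
    /* ...the body that would otherwise be duplicated... */
    done ! 1                      /* signal completion to the caller */
}

proctype caller() {
    bit ack;
    run fakeProcedure(1);         /* the "call" */
    done ? ack                    /* block until the callee signals completion */
}

init {
    run caller()
}

Each such “call” adds another process to the model, which is exactly what I wanted to avoid.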
Fortunately, SPIN runs the C preprocessor over the Promela source code; that is mainly used for #define directives, but it also provides me with the #include directive. So I put the different functions into different include files and assume that the parameters are passed in a number of local variables, namely tid and possibly code. That’s rather ugly (especially if you stick to the C mantra of never actually including code), but it does work.
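To make that “calling convention” concrete, here is a toy sketch of the pattern; the file name and its body are invented, the real include files contain the replay algorithm described earlier:

/* toy "callee", e.g. printcode.pml -- expects the including proctype
 * to have set up tid and code beforehand */
printf("thread %d at sync point, code %d\n", tid, code);

/* "caller" */
proctype worker(int tid) {
    int code = 42;                /* "parameter" for the included code */
#include "printcode.pml"
}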
Now I have 384 lines of Promela code that model the algorithm described earlier pretty much to a T. I’ve written one test file so far; it just has three threads with one sync point each:
/* Concutest - Concurrent Unit Testing
 *
 * Test 1
 *
 * Written by Mathias Ricken, Rice University.
 */
#define SCHEDULE_SIZE 8
#define REPLAY_SIZE 4
#define REPLAY_SIZE_X2 8
#include "vars.pml"
proctype thread(int tid) {
    local int code = 1;
#include "waitalgo.pml"
#include "exitalgo.pml"
}

init {
    schedule[0] = 1; schedule[1] = 1;
    schedule[2] = 1; schedule[3] = 2;
    schedule[4] = 1; schedule[5] = 3;
    schedule[6] = 3; schedule[7] = 0;
    run thread(1);
    run thread(2);
    run thread(3);
}
Ugly, huh? As you can see, I use #include directives to include common global variables (vars.pml) and make “calls” to waitalgo.pml (compactWait) and exitalgo.pml (compactThreadExit). There’s another file, loadbuffer.pml, that factors out loading the next sync points from the provided schedule, resetting the indices, and notifying the threads waiting for a buffer update.
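I won’t reproduce loadbuffer.pml here, but structurally it boils down to something like the following hypothetical sketch (the variable names schedulePos, buffer, bufferPos, and updateCount are mine, not necessarily the ones in vars.pml):

/* included inside a proctype; i is a local variable of that proctype */
atomic {
    i = 0;
    do
    :: i < REPLAY_SIZE ->
        buffer[i] = schedule[schedulePos + i];   /* copy the next sync points */
        i++
    :: else -> break
    od;
    schedulePos = schedulePos + REPLAY_SIZE;     /* advance in the schedule */
    bufferPos = 0;                               /* reset the index into the buffer */
    updateCount++                                /* waiting threads watch this counter */
}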
So far, it passes the SPIN “invalid end states” verification:
(Spin Version 4.2.3 -- 5 February 2005)

Full statespace search for:
        never claim             - (not selected)
        assertion violations    - (disabled by -A flag)
        cycle checks            - (disabled by -DSAFETY)
        invalid end states      +

State-vector 188 byte, depth reached 135, errors: 0
   11377 states, stored
   15019 states, matched
   26396 transitions (= stored+matched)
       0 atomic steps
hash conflicts: 90 (resolved)

Stats on memory usage (in Megabytes):
2.184   equivalent memory usage for states (stored*(State-vector + overhead))
2.336   actual memory usage for states (unsuccessful compression: 106.92%)
        State-vector as stored = 201 byte + 4 byte overhead
2.097   memory used for hash table (-w19)
0.280   memory used for DFS stack (-m10000)
0.113   other (proc and chan stacks)
0.084   memory lost to fragmentation
4.630   total actual memory usage
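For the record, that output comes from SPIN’s generated verifier; the run is produced with something along these lines (the file name is just a placeholder):

spin -a test1.pml            # generate the verifier source, pan.c
cc -DSAFETY -o pan pan.c     # -DSAFETY turns off the cycle checks, as shown above
./pan -A                     # check for invalid end states; -A ignores assertion violations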
This is only a very basic test case, but the fact that it passes SPIN’s verification means that, at least with this schedule, there is no interleaving of the threads that can lead to an error. Now I’ll have to create more complicated schedules and threads and simulate them. Once I’ve verified that the algorithm as implemented in Promela is correct, I still have to make a careful translation to Java.