Like most of the ad-hoc dynamic tools I’m developing, the “forced joins” check does not guarantee that a test is actually correctly written: threads can be spawned without the main thread ever joining them, and this goes undetected if, every time the tests are run, the spawned threads coincidentally happen to terminate before the test ends. The check only catches a badly written test that fails to join with spawned threads if this situation actually occurs.
How can I improve the framework’s ability to detect bad tests? How can I detect missing joins even if the spawned threads always end before the test ends? I think I need to make the check bidirectional: in addition to having the main thread check whether spawned threads are still alive, the spawned threads need to check that the main thread joined with them. The problem is that a spawned thread may die before the main thread reaches the join statement, which then just acts as a no-op, so I can’t perform this check at the time the thread dies; it may be a premature indictment. Too bad, that would have been easy to do…
I guess I should change Thread.start so that it registers the thread in a list, change Thread.join so that it removes the thread from the list, and then check that the list is empty at the time the test ends. This, of course, doesn’t guarantee that all bad tests are spotted either: it could just coincidentally happen that in the schedule that is executed, the threads that don’t have a join statement are never started.
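A minimal sketch of that registry idea, written as a helper class rather than as actual modifications to java.lang.Thread (the class and method names here are my own, not part of any framework):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical registry: tracks threads that were started but never joined.
// In the real approach, this bookkeeping would live inside modified
// Thread.start and Thread.join methods instead of wrapper calls.
public class TestThreadRegistry {
    private static final Set<Thread> pending =
        Collections.synchronizedSet(new HashSet<Thread>());

    // Register the thread, then start it.
    public static void start(Thread t) {
        pending.add(t);
        t.start();
    }

    // Join with the thread; only unregister once the join has completed.
    public static void join(Thread t) throws InterruptedException {
        t.join();
        pending.remove(t);
    }

    // Called when the test ends: fails if any thread was started but never
    // joined -- even if that thread has long since terminated on its own.
    public static void checkAllJoined() {
        int leaked = pending.size();
        pending.clear(); // reset for the next test
        if (leaked > 0) {
            throw new AssertionError(leaked + " spawned thread(s) were never joined");
        }
    }
}
```

Because the check looks at the registry rather than at Thread.isAlive, a thread that terminated before the test ended is still flagged if no join was ever executed for it.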
Another low-tech solution, not quite as thorough, would be to delay thread death by a second or so; this is similar to the idea of inserting random delays at synchronization points. If a thread dies later than usual and there is no join statement, it becomes more likely that the test ends before the thread is dead, and my current check would report the thread as still alive.
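The delay could be sketched as a Runnable wrapper (again, the class name and the use of a wrapper rather than a modified Thread class are my own simplifications):

```java
// Hypothetical wrapper: keeps a thread alive for a while after its work is
// done, so that an un-joined thread is more likely to still be alive when
// the end-of-test "are all spawned threads dead?" check runs.
public class DelayedDeathRunnable implements Runnable {
    private final Runnable body;
    private final long delayMillis;

    public DelayedDeathRunnable(Runnable body, long delayMillis) {
        this.body = body;
        this.delayMillis = delayMillis;
    }

    @Override
    public void run() {
        try {
            body.run(); // the test's actual work
        } finally {
            try {
                // Artificially delay this thread's death.
                Thread.sleep(delayMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

A test that does join with the thread pays only the extra delay during the join; a test that doesn’t join now has a much larger window in which the existing liveness check can catch it.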
Without exhaustive static analysis or execution of all possible schedules (our original scheduled-replay approach), I can’t deterministically conclude that a test is successful and correctly written. I can only increase the likelihood that bad tests are caught.
I still believe doing this is worthwhile. The problem with both of these approaches is that I have to modify methods in the java.lang.Thread class. That’s easy for me to do, but it also means that the rt.jar file has to be changed (I may get away with just putting the changed Thread class file in a jar file and placing it at the front of the boot classpath, but there were some limitations to that; I’ll have to check whether this is indeed possible), and Corky and I both wanted to limit the changes to JUnit to regular Java, without messing with the runtime. Since instrumenting the rt.jar file consumes both time and disk space, these improved “forced join” checks will probably be an optional enhancement.
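For reference, on JVMs of that era (up to Java 8; the option was removed in Java 9), prepending a jar to the boot classpath would look roughly like this. The jar and class names here are made up for illustration:

```shell
# Hypothetical invocation: place a jar containing the modified Thread.class
# ahead of rt.jar on the boot classpath (pre-Java 9 JVMs only).
java -Xbootclasspath/p:modified-thread.jar -cp junit.jar:tests.jar MyTestRunner
```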