Summer School Wrap-Up

The 12 days of summer school are over already. I had a great time: I learned a few new techniques, realized several times that I’m on the right track, and met many interesting people. Some of them, I’m sure, will accompany my career for a while. A long while, hopefully.

What is there to report since my last posting? I took a day off to fix a number of DrJava bugs. There are still one or two bigger ones left, but I think we can release a new version within a week. Just today, I found what I think is a rather serious timing bug in the debugger code.

I (kind of) figured out what was going on with the zero-filled sync point logs: For some reason, calling Event.toString() on a JDI event sometimes breaks the system. So I took that call out; it was only there for debug logging anyway. At least I hope this was the actual cause and not just a weird hack, because I don’t see why calling toString() should break the system in such a way in the first place.
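
Here is a sketch of what the replacement logging might look like: a description built only from pieces that don’t require Event.toString(). The helper class and method names are made up for illustration, not the actual code.

```java
import com.sun.jdi.event.Event;

/** Hypothetical helper: describe a JDI event without calling Event.toString(). */
public final class SafeEventLog {
    private SafeEventLog() {}

    /** Uses only the event's class name and (if present) its request. */
    public static String describe(Event event) {
        StringBuilder sb = new StringBuilder(event.getClass().getSimpleName());
        if (event.request() != null) {
            sb.append(" [request: ")
              .append(event.request().getClass().getSimpleName())
              .append("]");
        }
        return sb.toString();
    }
}
```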

I’m done creating headless versions of the FileInstrumentor, of Record and the RecordLauncher, and of Replay and the ReplayLauncher. The headless versions are a lot faster.

I didn’t continue my attempts to figure out exactly what the problem is with assigning object IDs on the Mac. I’m pretty sure that adding the $$$objectId$$$ field to some class is the problem, though.

I have realized that there is a fundamental problem with my approach: Whenever I put sync points in the buffer, I introduce a new reference to a class. That means that at some point this class will have to be loaded by the class loader, and this introduces sync points that previously weren’t there. This is a chicken-and-egg problem and even resembles the Uncertainty Principle a little: In order to observe the sequence of synchronization points, I am forced to change the sequence, which prevents me from observing the unaltered sequence. Not putting in my own classes preserves the sequence of sync points, but then I don’t have the means to observe it.
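
To make the problem concrete, here is a hypothetical picture of what an instrumented method conceptually looks like; SyncPointBuffer and its record method are made-up names, not my actual code. The very first use of the recording class forces it to be loaded, and class loading itself synchronizes, which is exactly the disturbance described above.

```java
/** Hypothetical recording class; the name and method are illustrative only. */
final class SyncPointBuffer {
    static void record(int kind, long objectId) {
        // ... append the sync point to an in-memory buffer ...
    }
}

class Worker {
    private final Object lock = new Object();

    void doWork() {
        synchronized (lock) {   // the monitorenter we actually want to observe
            // The instrumentation would conceptually insert a call like
            //     SyncPointBuffer.record(MONITOR_ENTER, idOf(lock));
            // right here. But the first such call makes the class loader load
            // SyncPointBuffer, and class loading performs synchronization of
            // its own, introducing sync points that weren't there before.
        }
    }
}
```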

I don’t think this problem will be as bad as it sounds, though, because in practice, my new classes should be loaded very early on. That means that 1) the sync points occur during VM initialization, and we don’t consider the sync points in that phase important anyway, and 2) it should be relatively easy to recognize the additional sync points so they can be filtered out. I’m a little more worried that the class loader that does the instrumentation will also introduce similar sync points throughout the program’s execution, though.
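
One way such a filter could look, assuming each recorded sync point carries the name of the class whose code produced it; SyncPoint, its fields, and the package prefix are all made up for this sketch.

```java
import java.util.List;
import java.util.stream.Collectors;

/** Hypothetical sync point record; the fields are illustrative. */
class SyncPoint {
    final String className;  // class whose code produced the sync point
    final long time;
    SyncPoint(String className, long time) { this.className = className; this.time = time; }
}

/** Drops sync points that originate from the instrumentation's own classes. */
class SyncPointFilter {
    // Assumed prefix for the instrumentation classes; not the real package name.
    private static final String INSTRUMENTATION_PREFIX = "instrumentation.";

    static List<SyncPoint> withoutInstrumentation(List<SyncPoint> points) {
        return points.stream()
                     .filter(p -> !p.className.startsWith(INSTRUMENTATION_PREFIX))
                     .collect(Collectors.toList());
    }
}
```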

I have also realized that the monitorenter and monitorexit instructions in the code that assigns object and thread IDs do need to be recorded and replayed in the correct order (otherwise we end up with different IDs, and then there’s no point in having IDs at all), but since these instructions were added by the instrumentation, they should not be treated the same as other monitorenter and monitorexit sync points. I’ll introduce special codes. (Update: I have also just realized that the program tries to pass the object and thread IDs for these monitorenter and monitorexit instructions even though they clearly have not been assigned yet, since the synchronization is protecting just those assignments! I should definitely use special codes.)
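
Here is roughly what I have in mind for the special codes; the constant names and values are placeholders, not the real ones. The point is that the monitorenter/monitorexit pairs added around the ID-assignment code get their own codes, so they can still be recorded and replayed in order even though no object or thread ID exists yet.

```java
/**
 * Hypothetical sync point codes. The ID_ASSIGN_* values mark the
 * monitorenter/monitorexit instructions that the instrumentation itself adds
 * around the ID-assignment code; they are recorded without object or thread
 * IDs, since those IDs don't exist yet at that point.
 */
final class SyncPointCodes {
    static final int MONITOR_ENTER           = 1;   // from the original program
    static final int MONITOR_EXIT            = 2;   // from the original program
    static final int ID_ASSIGN_MONITOR_ENTER = -1;  // added by the instrumentation
    static final int ID_ASSIGN_MONITOR_EXIT  = -2;  // added by the instrumentation

    private SyncPointCodes() {}
}
```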

I have thought a little more about how to use the identity hashcode as a fallback. That value is a 32-bit int, and at least so far it always seems to be non-negative. The easiest way to put both an object ID and the hashcode in the same long is probably to use the unmodified identity hashcode (0, 1, 2, …) and negative object IDs (-1, -2, -3, …). That gives me 2^63 unique object IDs.
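
A minimal sketch of that encoding, with made-up method names: non-negative long values carry the unmodified identity hashcode, and assigned object IDs count down through the negative values.

```java
/**
 * Sketch of the combined encoding: identity hashcodes (so far always observed
 * to be non-negative) are stored as-is in the non-negative half of the long
 * range, and assigned object IDs use the negative half (-1, -2, -3, ...),
 * which gives 2^63 distinct IDs. Names are illustrative.
 */
final class CompactId {
    private static long nextObjectId = -1;

    /** Fallback: store the identity hashcode unmodified (0, 1, 2, ...). */
    static long fromIdentityHashCode(Object o) {
        return System.identityHashCode(o);
    }

    /** Assigned IDs: -1, -2, -3, ... */
    static synchronized long newObjectId() {
        return nextObjectId--;
    }

    static boolean isAssignedId(long id) {
        return id < 0;
    }
}
```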

It’s not too bad if the object IDs wrap around and I start getting the same ones again (even though they should probably wrap from -(2^63) back to -1): The deadlock detector may just issue more false positives, but those can be discerned as such later. The program will just run slower.
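
For completeness, a tiny sketch of a counter that wraps from -(2^63) back to -1 instead of spilling into the non-negative hashcode range; again, the class name is made up.

```java
/** Hypothetical ID counter that wraps from Long.MIN_VALUE (-2^63) back to -1. */
final class WrappingObjectIdCounter {
    private long next = -1;

    synchronized long nextId() {
        long id = next;
        next = (next == Long.MIN_VALUE) ? -1 : next - 1;
        return id;
    }
}
```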

I still haven’t finished tying the deadlock detector to the compact scheme. I think I’ll have to redo more of it than I thought. Right now, it’s very heavily dependent on the “heavy” object scheme that I initially used. I’ll still do a detailed analysis at the object level using ThreadReferences, but only to remove false positives when the compact scheme has already detected a deadlock.
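
The division of labor I’m aiming for could look something like this sketch: the compact scheme flags a suspected deadlock cheaply, and only then does the detailed object-level pass walk the wait-for chain via JDI ThreadReferences to weed out false positives. The sketch assumes the target VM’s threads are suspended and the relevant JDI capabilities are available; the class and method names are made up.

```java
import com.sun.jdi.IncompatibleThreadStateException;
import com.sun.jdi.ObjectReference;
import com.sun.jdi.ThreadReference;
import java.util.HashSet;
import java.util.Set;

/** Hypothetical second-stage check: confirm a deadlock reported by the compact scheme. */
class DeadlockConfirmer {
    /** Follows thread -> contended monitor -> owning thread until a cycle or a dead end. */
    static boolean confirmCycle(ThreadReference suspect) throws IncompatibleThreadStateException {
        Set<ThreadReference> seen = new HashSet<>();
        ThreadReference current = suspect;
        while (current != null && seen.add(current)) {
            ObjectReference monitor = current.currentContendedMonitor();
            if (monitor == null) {
                return false;                  // not blocked at all: a false positive
            }
            current = monitor.owningThread();  // next thread in the wait-for chain
        }
        return current != null;                // revisited a thread: a real cycle
    }
}
```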

I think that’s my recap for tonight. Good night.
