
Ah, so I’m currently debugging what we think is a memory leak in our Java application. In Java, performance issues and memory leaks primarily surface through bad coding habits. Yes, the garbage collector helps us, but it can only take out trash you’ve actually let go of; if your code keeps holding references to objects it no longer needs, your JVM heap will continue to reek even after it’s been emptied.
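
To make that concrete, here’s a minimal sketch of the kind of coding habit I mean (the class and field names are made up for illustration, not from our codebase): a long-lived static collection that only ever grows, so everything it holds stays reachable and the collector can never reclaim it.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical illustration of a leak: a static, ever-growing collection.
    public class SessionRegistry {

        // Lives as long as the JVM does, so everything it references does too.
        private static final List<Session> ACTIVE = new ArrayList<>();

        public static void register(Session s) {
            ACTIVE.add(s); // added on every request...
        }

        // ...but nothing ever removes entries, so the heap keeps growing
        // no matter how often the garbage collector runs.

        static class Session {
            private final byte[] payload = new byte[1024]; // per-session state
        }
    }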

The interesting caveat here is that I’m not seeing the memory issues in the QA environment, even though we stress our system with a much higher load in QA than we do in production. In production, however, our scope of use cases is much more varied and likely exercises many more of our code paths. Herein, most likely, lies the memory leak.

Tools and approach:
1. jmap to create a dump file of the heap (jmap -dump:format=b,file=heap.bin [pid])
2. jhat to view a histogram of the heap. In particular, I’m paying close attention to the ‘histogram’ page, as it gives telling information about suspect classes: a high instance count and high memory usage in bytes.
(jhat -J-Xmx2560m heap.bin)
3. code: override the finalize() method in suspect classes to print out debug lines. The finalize() method is called by the Java runtime just before it garbage collects an object. This would help us see which instances of the class are being garbage collected; perhaps the instances that do get collected go through a certain code path, and those use cases are likely similar to what we exercise in our QA environment (a rough sketch follows after this list).
4. jconsole, to watch heap usage and garbage collection behavior over time while the application runs.
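
As a rough sketch of the finalize() idea from step 3 (Widget and its field are hypothetical stand-ins for one of our suspect classes): log when instances actually get collected, and the instances that never show up in the output are the ones being retained.

    // Sketch only: Widget stands in for a suspect class from the histogram.
    public class Widget {

        private final String origin; // which code path created this instance

        public Widget(String origin) {
            this.origin = origin;
        }

        // Called by the JVM just before this instance is garbage collected.
        @Override
        protected void finalize() throws Throwable {
            try {
                System.out.println("GC'd Widget created via: " + origin);
            } finally {
                super.finalize();
            }
        }
    }

This is strictly a temporary debugging aid; finalize() adds collection overhead, so it comes back out once the leak is found.
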
The hunt shall commence…