
The Eclipse Memory Analyzer Tool helped us find two memory leaks. The first one, which we thought was the main course, turned out to be only the appetizer. The meat and potatoes (sincere apologies to the vegetarians, but I’m trying to make a point) were only found after running the first fix back through the Memory Analyzer Tool (MAT).

We had initially run the Histogram and Leak Suspects reports and found that the Apache TagHandlerPool was holding on to our JSP Tag classes. That isn’t a big deal if your custom Tag classes aren’t holding on to anything, or, to put it another way, if your custom Tag classes correctly release objects that no longer need to be referenced. Ours weren’t releasing anything, because we had never overridden doEndTag() to perform any cleanup. We were sending our old objects off to the train station without a valid boarding pass, simply by failing to set those references to null in the doEndTag() method.

Easy fix:

public int doEndTag() throws JspException {
    objectNotNeeded = null; // now becomes a candidate for garbage collection
    return super.doEndTag();
}

Potential issues:
1. The objectNotNeeded reference may be used again by some code path that we’re not aware of. What if the user hits the back button on our page?
Reply: There is a release() method that is called at the very end of the JSP tag lifecycle, after doEndTag(). That is a safer place to perform cleanup, since it is the absolute last breath of life in the lifecycle (see the sketch below). In my testing, though, the container called this method only 12 times throughout the duration of the server run, so we wouldn’t see a big improvement in memory utilization by putting the cleanup code there.
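To make that concrete, here is a minimal sketch of the cleanup pattern, assuming a hypothetical tag class and field (CartSummaryTag and cart are illustrative names, not our actual code): the per-request reference is cleared in doEndTag() and again in release() as the final safety net.

import javax.servlet.jsp.JspException;
import javax.servlet.jsp.tagext.TagSupport;

// Hypothetical custom tag; names are illustrative only.
public class CartSummaryTag extends TagSupport {

    // Reference populated while the tag renders; if it is never cleared,
    // the pooled tag handler keeps this object alive between requests.
    private Object cart;

    public void setCart(Object cart) {
        this.cart = cart;
    }

    @Override
    public int doEndTag() throws JspException {
        // Clear per-request state so it becomes eligible for garbage
        // collection even while the handler sits in the TagHandlerPool.
        cart = null;
        return super.doEndTag();
    }

    @Override
    public void release() {
        // Called when the container finally retires the pooled handler;
        // this is the last chance to drop anything doEndTag() missed.
        cart = null;
        super.release();
    }
}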

We deployed the code fix and noted the following behavior in our memory utilization chart:

What does this mean?
We still have a leak. We reduced the rate at which memory accumulates, but it is evident that objects are still being held in memory.
So we took a Java heap dump of a production server after letting it drain for 24 hours. The tool’s Leak Suspects report pointed us to a new suspect: a cache of ours that holds onto certain items and removes them on an LRU (least recently used) basis. The cache was accounting for over 65% of the Java heap!
The cache itself extends Java’s LinkedHashMap class (java.util.LinkedHashMap).

The map can be used as an LRU cache simply by overriding the following method:


private static final int MAX_ENTRIES = 100;

protected boolean removeEldestEntry(Map.Entry eldest) {
    return size() > MAX_ENTRIES;
}

(taken from http://java.sun.com/javase/6/docs/api/java/util/LinkedHashMap.html#removeEldestEntry(java.util.Map.Entry))

This convenience method makes it such that every time put() is called, the map checks, after inserting the new entry, whether the size limit has been exceeded. If so, the eldest entry is removed to make room for the new one. Simple, right?
Well, the one caveat is that the map only treats the least recently used entry as the eldest if a special constructor is used to enable access ordering:
public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder)

We were not using the special constructor, so ALL items were being persisted in the cache and remained there for the lifetime of the application. I’ll post an update of our memory charts with the new fix once we have the data.
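For illustration, here is a minimal sketch of an LRU cache along these lines; the class name LruCache and the capacity of 100 are hypothetical, not our production code. It passes accessOrder = true to the three-argument constructor so that the eldest entry really is the least recently used one, and overrides removeEldestEntry() to cap the size:

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical LRU cache sketch; names and capacity are illustrative only.
public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private static final int MAX_ENTRIES = 100;

    public LruCache() {
        // accessOrder = true: iteration order (and therefore the "eldest"
        // entry) follows least-recently-accessed order, which is what an
        // LRU cache needs. With the default constructor the order is
        // insertion order instead.
        super(16, 0.75f, true);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by put()/putAll() after a new entry is inserted; returning
        // true tells the map to evict the eldest (least recently used) entry.
        return size() > MAX_ENTRIES;
    }
}

With access ordering enabled, a get() refreshes an entry’s position in the eviction order, so frequently read items stay cached while stale ones are evicted once the map grows past MAX_ENTRIES.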

THANK YOU ECLIPSE MAT!!