Ok, this is a cute bug.
Under the hood, UtilCache makes use of java.util.LinkedHashMap when maxMemSize is set. It configures LinkedHashMap to act like an LRU map: every access reorders the internal linked list. "Access" here means a call to get(key).

The problem is that to handle element expiration, UtilCache must call get(key) on the LinkedHashMap. This counts as an access of the key, which pushes it to the front of the linked list and breaks the LRU contract.

The way to fix this is to switch UtilCache away from a polling mechanism for expiration. Instead, a DelayQueue can be used: every single CacheLine is registered into a single, static DelayQueue, and a thread is started that is always polling from it. When an item is returned from the poll, the thread removes it from its containing cache.

This is the only way I see to get UtilCache back to being a proper LRU system.
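[The DelayQueue scheme described above can be sketched roughly as follows. ExpiringLine and its fields are hypothetical stand-ins for UtilCache's CacheLine, and a real implementation would also need to cancel queue entries when a line is removed early; this only shows the register-then-reap flow.]

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayedExpiryDemo {
    // Hypothetical stand-in for UtilCache's CacheLine: remembers which cache
    // owns it and when it expires, so the reaper can remove it later.
    static final class ExpiringLine implements Delayed {
        final ConcurrentHashMap<String, String> owner;
        final String key;
        final long expireAtMillis;

        ExpiringLine(ConcurrentHashMap<String, String> owner, String key, long ttlMillis) {
            this.owner = owner;
            this.key = key;
            this.expireAtMillis = System.currentTimeMillis() + ttlMillis;
        }

        @Override public long getDelay(TimeUnit unit) {
            return unit.convert(expireAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    // One static queue shared by all caches, as proposed above.
    static final DelayQueue<ExpiringLine> expiryQueue = new DelayQueue<>();

    // Stores one entry with a short TTL, reaps it, and reports whether the
    // entry is still present afterward.
    public static boolean demo() throws InterruptedException {
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        cache.put("session", "data");
        expiryQueue.put(new ExpiringLine(cache, "session", 50));

        // In UtilCache this loop would live in a single long-lived daemon
        // thread; take() blocks until a line's delay elapses, so the map
        // itself never needs to be polled (and never sees a stray get()).
        ExpiringLine expired = expiryQueue.take();
        expired.owner.remove(expired.key);

        return cache.containsKey("session");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints "false": the entry was reaped
    }
}
```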
Adam Heath wrote:
> [...]
> The way to fix this is to switch UtilCache away from a polling
> mechanism for expiration. Instead, a DelayQueue can be used, where
> every single CacheLine is registered into a single, static DelayQueue,
> and a thread is started that is always polling from it. When an
> item is returned from the poll, it removes it from its containing cache.
>
> This is the only way I see to get UtilCache back to being a proper LRU
> system.

I think it would help if the original cache design decisions were available so we can understand why things were set up that way.

The first question that comes to my mind is: Why does the cache need to be polled? It seems to me the max memory size would need to be checked only when a new item is about to be placed in the cache.
Adrian Crum wrote:
> [...]
> The first question that comes to my mind is: Why does the cache need to
> be polled? It seems to me the max memory size would need to be checked
> only when a new item is about to be placed in the cache.

You misunderstand the problem. Having an LRU map that auto-removes things as the map gets too large is fine. That part works.

What breaks is that an LRU map's algorithm is only effective when a get is a get, and a put is a put. However, UtilCache does a get while doing a put. This causes the LRU map to consider this get an access, and the item moves to the beginning of the internal list.

If I do UtilCache.put(key, value), I don't expect the LRU to change the order of eviction. It should only change during a get(key) call.

I discovered this problem while trying to write test cases.
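[The distortion Adam describes can be reproduced directly with a plain access-ordered LinkedHashMap; the demo below is illustrative and does not use UtilCache's actual fields. A get() performed internally while storing a new entry moves the looked-up key to the most-recently-used end and changes which entry is evicted first.]

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruAccessDemo {
    // Fills a small access-ordered LinkedHashMap and returns the key that
    // would be evicted first. When touchDuringPut is true, we simulate
    // UtilCache's internal get() happening during a put().
    public static String eldestKeyAfter(boolean touchDuringPut) {
        // Third constructor argument accessOrder = true: iteration order is
        // least-recently-accessed first, and every get() reorders the list.
        Map<String, String> lru = new LinkedHashMap<>(16, 0.75f, true);
        lru.put("a", "1");
        lru.put("b", "2");
        if (touchDuringPut) {
            lru.get("a"); // side effect: "a" moves to the MRU end
        }
        lru.put("c", "3");
        // Iteration starts at the least-recently-used entry, i.e. the one an
        // LRU eviction policy would remove next.
        return lru.keySet().iterator().next();
    }

    public static void main(String[] args) {
        System.out.println(eldestKeyAfter(false)); // prints "a"
        System.out.println(eldestKeyAfter(true));  // prints "b": the stray get() rescued "a"
    }
}
```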