Analyzing the Tridion Object Cache

Introduction

Tridion Content Delivery comes with an Object Cache module that keeps frequently requested information in memory to speed up access to your web site. In many cases this is all you need to know about the Object Cache: if you enable the Object Cache your web site becomes faster. But there are also situations where you'd like to know how your Object Cache is being used and whether assigning more memory to caching will improve performance even further. To do this you can use the Content Delivery logging mechanism to write out a separate log file with just the output from the Object Cache subsystem.

Prerequisites

For a more complete explanation of how the Tridion Object Cache works, see the article "What is the SDL Tridion Object Cache?" listed under Related Links below.

1. Enable logging of cache information to a separate file

Go to one of the web sites where you have the Object Cache enabled and find the logback.xml file used by that site. All the changes we need to make are in this file.
By default the caching output is written to the cd_core log file, together with the logging done by many other Content Delivery modules. To get the caching information into a separate file, we add two elements to logback.xml.
First we define a new appender, which writes to a log file of its own:


    <appender name="rollingCacheLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${log.folder}/cd_cache_staging.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>${log.history}</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${log.pattern}</pattern>
        </encoder>
        <prudent>true</prudent>
    </appender>
This is just a copy of one of the other appender elements with a different name and fileNamePattern.
Next we tell logback to send all logging from the Object Cache to the newly defined appender:
    <logger name="com.tridion.cache">
        <appender-ref ref="rollingCacheLog"/>
    </logger>
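With logback's default additivity, these cache messages will normally still be duplicated into the cd_core log, because the events also propagate up to the parent loggers. If you prefer to have the cache output only in the new file, you can switch additivity off on this logger (standard logback behaviour):

    <logger name="com.tridion.cache" additivity="false">
        <appender-ref ref="rollingCacheLog"/>
    </logger>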
Finally, we need to make sure that logging is done at DEBUG level:
    <property name="log.level" value="DEBUG"/>


Note that it is generally a bad idea to enable debug logging on production servers, so you may want to do your analysis of cache efficiency in a non-production environment.
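If you do have to run the analysis on a busier environment, one way to limit the impact is to leave the global log.level property alone and raise the level for the cache logger only. This is standard logback behaviour, and it works with the rollingCacheLog appender defined above because that appender has no level filter of its own:

    <logger name="com.tridion.cache" level="DEBUG" additivity="false">
        <appender-ref ref="rollingCacheLog"/>
    </logger>

With this variant the rest of Content Delivery keeps logging at its normal level, while the cache still produces the DEBUG output we need for the analysis below.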

2. Analyzing the caching log file

After you restart the web site (or recycle the relevant Application Pool in IIS), a cd_cache_staging.<date>.log file (following the fileNamePattern we configured) will be created with this type of information:
2012-01-27 10:28:19,863 INFO  CacheFactory - No policy configured, creating a cache with an empty policy.
2012-01-27 10:28:19,867 INFO  LRUPolicy - Creating LRU Policy for region
2012-01-27 10:28:19,867 DEBUG LRUPolicy - Configuring LRU Policy for region
2012-01-27 10:28:19,868 DEBUG LRUPolicy - List size of global LRU Policy = 0
2012-01-27 10:28:19,868 DEBUG LRUPolicy - Memory size of global LRU Policy = 16K
2012-01-27 10:28:19,868 DEBUG DependencyTracker - Creating a dependency tracker for region
2012-01-27 10:28:19,878 INFO  LRUPolicy - Creating LRU Policy for region /com_tridion_linking_PageLinkInfo
2012-01-27 10:28:19,879 DEBUG LRUPolicy - Configuring LRU Policy for region /com_tridion_linking_PageLinkInfo


So you can see that a cache is initialized here with a global size of 16K. In production this is far too small, but for this test I deliberately set the cache to an unreasonably small size.
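That 16K comes from the MemSize parameter of the LRU policy in the cd_storage_conf.xml of this web site. As an illustration, the relevant section on my test system looks roughly like the snippet below; the exact element and attribute names may differ between Tridion releases, so check the cd_storage_conf.xml that ships with your version rather than copying this verbatim:

    <ObjectCache Enabled="true">
        <!-- Deliberately tiny value for this test, matching the 16K seen in the log above -->
        <Policy Type="LRU" Class="com.tridion.cache.LRUPolicy">
            <Param Name="MemSize" Value="16kb"/>
        </Policy>
    </ObjectCache>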
After some more information you will see that the cache is being populated:


2012-01-27 10:28:19,885 DEBUG DependencyTracker - Creating a dependency tracker for region /com.tridion.linking.ComponentLink
2012-01-27 10:28:20,163 DEBUG LRUPolicy - LRUModel: listSize = 1  memSize = 496
2012-01-27 10:28:20,180 DEBUG LRUPolicy - LRUModel: listSize = 2  memSize = 628
2012-01-27 10:28:20,187 DEBUG LRUPolicy - LRUModel: listSize = 3  memSize = 1263
...
2012-01-27 10:28:20,228 DEBUG LRUPolicy - LRUModel: listSize = 11  memSize = 6056
2012-01-27 10:28:20,231 DEBUG LRUPolicy - LRUModel: listSize = 12  memSize = 6655
2012-01-27 10:28:20,233 DEBUG LRUPolicy - LRUModel: listSize = 13  memSize = 7382


I removed a lot of the output here, since the Object Cache also writes information about the types of objects it is caching and the dependencies it tracks between those objects.
At some point the cache becomes full and objects will have to be removed to make room for new ones. At that point you will see entries like this in your log file:
...
2012-01-27 10:28:44,957 DEBUG LRUPolicy - LRUModel: listSize = 28  memSize = 15427
2012-01-27 10:28:44,959 DEBUG LRUPolicy - LRUModel: listSize = 29  memSize = 15986
2012-01-27 10:28:44,962 DEBUG LRUPolicy - LRUModel: listSize = 30  memSize = 16572
2012-01-27 10:28:44,962 DEBUG LRUPolicy - LRUPolicy.processPut: maximum memory size exceeded
2012-01-27 10:28:44,963 DEBUG LRUPolicy - Removing node with key true:false:1:264:124:0::
2012-01-27 10:28:44,963 DEBUG LRUPolicy - LRUPolicy.processPut: reduced memory size to 15937
2012-01-27 10:28:44,964 DEBUG LRUPolicy - LRUModel: listSize = 30  memSize = 16532
2012-01-27 10:28:44,964 DEBUG LRUPolicy - LRUPolicy.processPut: maximum memory size exceeded
2012-01-27 10:28:44,965 DEBUG LRUPolicy - Removing node with key 1:124
2012-01-27 10:28:44,965 DEBUG LRUPolicy - LRUPolicy.processPut: reduced memory size to 16036
2012-01-27 10:28:44,968 DEBUG LRUPolicy - LRUModel: listSize = 30  memSize = 16606
2012-01-27 10:28:44,968 DEBUG LRUPolicy - LRUPolicy.processPut: maximum memory size exceeded
2012-01-27 10:28:44,969 DEBUG LRUPolicy - Removing node with key true:false:1:264:124:0: class="FeaturedItemTitleLink" :
2012-01-27 10:28:44,969 DEBUG LRUPolicy - LRUPolicy.processPut: reduced memory size to 15847
2012-01-27 10:28:44,971 DEBUG LRUPolicy - LRUModel: listSize = 30  memSize = 16398
2012-01-27 10:28:44,972 DEBUG LRUPolicy - LRUPolicy.processPut: maximum memory size exceeded
2012-01-27 10:28:44,972 DEBUG LRUPolicy - Removing node with key true:false:1:264:93:0::
2012-01-27 10:28:44,972 DEBUG LRUPolicy - LRUPolicy.processPut: reduced memory size to 15777


As soon as my cache grows beyond 16K (the MemSize I configured), items are expunged until the size becomes acceptable again. Once this state is reached, adding any item to the cache will result in one or more other items being removed.
If this happens on your servers occasionally it is nothing to worry about, since it may just be an indication that the access patterns on your web site are changing and the Object Cache is adapting to the change. But if this remove-one-to-add-one pattern is a frequent occurrence on your server, it may well indicate that your cache is too small and that it would benefit from increasing the MemSize in the relevant cd_storage_conf.xml.
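Assuming the illustrative Policy element shown earlier, increasing the cache comes down to raising that single value (and restarting the web site), for example:

    <Param Name="MemSize" Value="64mb"/>

The 64mb here is only an example; there is no single correct value, so increase the size in steps and repeat the log analysis until the remove-one-to-add-one pattern largely disappears.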

Result

Enabling Tridion's Object Cache is a great way to increase the performance of your web site. Analyzing the cache utilization regularly is a very good way to ensure that the performance stays optimal. The method documented here gives you the information needed to perform such an analysis.

Related Links

  • What is the SDL Tridion Object Cache?
  • Cache Extensions