How it Works – “LogPool”

Hello all,

Today I would like to discuss one of the new enhancements in SQL Server 2012: the LogPool.

Before we get into the details, let me explain how a log flush occurs in SQL Server. When a record is inserted or modified in a database, the change is first copied into the log buffer in memory. These log buffers are written out as log blocks of varying size (512 bytes to 60 KB). When a buffer fills up or a commit is issued, the log blocks are flushed to the transaction log on disk.
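You can watch this flush activity happening on a live instance. As a quick illustration (not part of the LogPool itself), the standard performance counters expose per-database log flush numbers; note that `cntr_value` here is a cumulative count, not a per-second rate:

```sql
-- Observe log flush activity per database via the standard
-- performance counters (values are cumulative since startup).
SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name IN ('Log Flushes/sec', 'Log Bytes Flushed/sec')
ORDER BY instance_name, counter_name;
```

Sampling this query twice and taking the difference gives you the actual flush rate over the interval.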

Now let’s explore the newly introduced feature in SQL Server 2012, the LogPool. It is designed to improve the performance of reading the transaction log (TLOG). In SQL Server there are multiple consumers that read the transaction log, sometimes repeatedly, to perform certain activities: for example Replication, Mirroring, AlwaysOn, Recovery, Rollback, and fn_dblog each read the transaction log. Up to SQL Server 2008 R2, any consumer that wanted to read the log had to read it from disk, and there was no sharing of buffers between consumers. With multiple consumers reading the transaction log multiple times, this could cause TLOG performance problems.
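To see one of these log consumers in action, the undocumented fn_dblog function mentioned above reads the active portion of the transaction log directly; passing NULL for both LSN bounds returns everything:

```sql
-- fn_dblog is an undocumented log-reader function; NULL, NULL
-- means "no lower/upper LSN bound", i.e. the whole active log.
SELECT TOP (10)
       [Current LSN],
       Operation,
       [Transaction ID]
FROM fn_dblog(NULL, NULL);
```

Because it is undocumented, its output columns can change between versions, so treat it as a diagnostic tool rather than something to build on.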

To address this performance issue, the LogPool was introduced in SQL Server 2012. With the LogPool, when log blocks are about to be flushed to disk, SQL Server first checks whether any consumers are interested in them. If it finds any, the log blocks are copied into memory blocks before being flushed to the transaction log.


As you can see in the diagram above, the LogPool uses two different methods to copy these buffers into memory blocks, depending on the consumers.

Private Cache – Each consumer has a private cache that holds the buffers it consumes. If there is only one consumer on the server, or no other consumer is interested in reading the log, the log blocks are copied into the private cache of that particular consumer.

Cached Buffers (hash table) – If multiple consumers are enabled and interested in reading the transaction log, the log blocks are copied into cached buffers and hashed into a hash table, so there is only one copy of the buffers, shared across all the consumers.

A few DMVs were also introduced for these structures; so far there is no documentation available in Books Online for them.

sys.dm_logpool_stats – Provides details about the LogPool.

sys.dm_logconsumer_privatecachebuffers – Shows the private cache of each consumer. It takes two parameters (database id, consumer id).

sys.dm_logconsumers_cachebufferref – Shows details about the multiple consumers using the LogPool. It also takes two parameters (database id, consumer id).

sys.dm_logpoolhashentries – Shows all the hashed entries that are shared across multiple consumers.
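Since these objects are undocumented, their shapes may vary between builds, but based on the descriptions above they can be queried roughly like this (the consumer id of 1 below is a placeholder chosen for illustration, and the two parameterized objects are assumed to be table-valued functions, as their parameter lists suggest):

```sql
-- Overall LogPool state (undocumented; columns may vary by build).
SELECT * FROM sys.dm_logpool_stats;

-- Per-consumer views; both take (database id, consumer id).
-- The consumer id 1 is a placeholder for illustration.
SELECT * FROM sys.dm_logconsumer_privatecachebuffers (DB_ID(), 1);
SELECT * FROM sys.dm_logconsumers_cachebufferref (DB_ID(), 1);

-- Hash-table entries shared across consumers.
SELECT * FROM sys.dm_logpoolhashentries;
```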

A memory clerk was also introduced that accounts for all the memory taken by these consumers in the LogPool. It is tracked under the clerk type MEMORYCLERK_SQLLOGPOOL.
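The clerk shows up in the standard memory-clerks DMV, so you can check how much memory the LogPool is currently holding:

```sql
-- Memory charged to the LogPool, summed across memory nodes
-- (pages_kb is the SQL Server 2012+ column name).
SELECT type, SUM(pages_kb) AS pages_kb
FROM sys.dm_os_memory_clerks
WHERE type = 'MEMORYCLERK_SQLLOGPOOL'
GROUP BY type;
```

This is a handy sanity check if you suspect the LogPool is consuming more memory than expected, for example when investigating the leak described below.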

There was a bug in the RTM version of SQL Server 2012 where the LogPool could cause a memory leak; this was fixed in CU5 (RTM) and CU2 (SP1). Please refer to the KB article for more details.

Hope you liked it. Stay tuned for some more interesting blogs!!!
