Monitoring open transactions and low_water_mark_for_ghosts (13 minutes)


How can we tell that we’ve got this transaction sitting there open for a while?

We might try my good old friend DBCC OPENTRAN

This is one of the first commands that I learned when using SQL Server. What is my oldest active transaction? Well, DBCC OPENTRAN says there are no active open transactions. It hasn’t really caught up with modern times. How about some dynamic management views?
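For reference, here's a minimal sketch of that check; the database name is a placeholder:

```sql
-- DBCC OPENTRAN reports the oldest active transaction in a database,
-- but it does not surface snapshot transactions on a readable secondary.
DBCC OPENTRAN ('YourSecondaryDatabase');  -- placeholder database name
-- Against the readable secondary in this demo, it reports
-- that there are no active open transactions.
```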

Let’s look at sys.dm_tran_database_transactions

It does see that we’ve got a database transaction open, but it doesn’t give us a begin_time, which is kind of a big bummer. If I want to know how long this transaction has been open, and I really do want to know that, this dynamic management view doesn’t help me.
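A sketch of that query; on a readable secondary, the begin-time column comes back NULL for these snapshot reads, which is the limitation described above:

```sql
SELECT
    transaction_id,
    database_id,
    database_transaction_begin_time,  -- NULL here, so no transaction age
    database_transaction_type,        -- 2 = read-only
    database_transaction_state
FROM sys.dm_tran_database_transactions;
```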

There’s another dynamic management view that does help us

This one is sys.dm_tran_active_snapshot_database_transactions. It tells us how many active transactions are using snapshot isolation, and all of my queries against the readable secondary are escalated to snapshot isolation. It tells me that at the moment I ran this, session 66 had an elapsed time of 625 seconds. So that’s great. If I actually want details on how many sessions are running against my secondary, I can see them, and I can see how long they’ve been running, by querying this dynamic management view.
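Sketched out, assuming you are connected to the readable secondary:

```sql
-- Lists open transactions using snapshot isolation on this instance.
SELECT
    session_id,
    transaction_id,
    elapsed_time_seconds,  -- e.g. 625 for the demo's session 66
    is_snapshot
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;
```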

If I want to do monitoring and look for long-running transactions…

There’s an even simpler way that I can do this using performance counters. What I’m querying here is sys.dm_os_performance_counters. This is a dynamic management view that shows me perfmon counters.

Beyond querying from within SQL Server, you can see these same counters in the perfmon tool, or in a monitoring tool that connects in, collects performance counters, and gives you history over time. Under the Transactions object, and note that I have a named instance here, so my instance name appears in the object_name, there are some very useful counters for monitoring long-running transactions. One of them is Longest Transaction Running Time.

This counter only covers transactions using snapshot isolation or read committed snapshot, and it’s for the whole instance. For a readable secondary, it gives us the longest running transaction using snapshot isolation or RCSI, which is very useful if what I want to know is what might be causing a buildup of versions in tempdb. It’s reported in seconds, and it typically updates every 60 seconds or so. I can also see the number of snapshot transactions I have; currently that’s the one open snapshot transaction.
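A sketch of the counter query; on a named instance, the object_name includes the instance name, hence the LIKE:

```sql
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Transactions%'
  AND counter_name IN
      ('Longest Transaction Running Time',  -- seconds; snapshot/RCSI only
       'Snapshot Transactions');            -- count of open snapshot transactions
```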

This is a lightweight way, by grabbing these counters, to baseline what is normal on my instance and then set alerts to let me know if things start running long. And part of why we care about this is that version buildup on the readable secondary can impact performance against the secondary.

Long-running transactions against the secondary can also have some impacts against the primary database…

Because they impact something called ghost cleanup. If ghost cleanup is delayed, that can affect whether parts of your transaction log can be reused. So if this gets out of hand and we have really long-running transactions on our secondary, then depending on the rate and type of modifications, we could potentially impact our primary database. We can see this on the primary replica as well. So what we’re going to do here is clean up our previous demo.

I’m going to take this open transaction that is running here and go ahead and commit it.

So we now have finished up all our transactions against our readable secondary.

But of course, we’re going to start a new one fresh.

In this session I’m starting a new transaction…

And I’m counting rows where the DateKey is either January 1st, 2005, which is what the rows currently have, or January 1st, 2007. As expected, there are 11,242 rows in there. Now we’re going to run a bunch of queries against our primary, see what that transaction on the secondary may cause on our primary, and look at how we can view it.
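The secondary-side transaction might look like this; the table name is hypothetical, since the demo doesn’t name it:

```sql
-- Run against the readable secondary; reads here use snapshot isolation.
BEGIN TRAN;

SELECT COUNT(*)
FROM dbo.FactSales            -- hypothetical table name
WHERE DateKey IN ('2005-01-01', '2007-01-01');
-- Returns 11242 in the demo. Leave the transaction open for now.
```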

Let’s ghost some records

On our primary database, this time I am just going to insert a bunch of rows. I’m saying: for all the rows that have January 29th, 2009, and I picked that date simply because it had a useful number of rows, about 11,500, insert new rows with a different DateKey, February 1st, 2005.

This is just a date where we didn’t have any rows, so I’m inserting a bunch of rows for that date simply so that we can then delete them all and ghost some records. Now that I’ve inserted 11,638 rows, I’m going to go in and delete them, and sure enough I deleted 11,638 rows.
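A sketch of that insert-then-delete, with hypothetical table and column names:

```sql
-- On the primary: copy the rows from a date that has them
-- into a date that previously had none...
INSERT dbo.FactSales (DateKey, Quantity)      -- hypothetical names
SELECT '2005-02-01', Quantity
FROM dbo.FactSales
WHERE DateKey = '2009-01-29';

-- ...then delete them all, leaving ghost records behind.
DELETE dbo.FactSales
WHERE DateKey = '2005-02-01';
```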

When you delete in SQL Server, it doesn’t, as part of that transaction, go scrub all the data off of the affected pages in the table. Instead it simply marks each record: okay, yeah, this record is deleted.

If we want to reuse the space, somebody needs to come through and clean it up later.

That process, which comes through periodically and cleans up the ghost records, is called ghost cleanup. Having transactions open on our readable secondary database can make ghost cleanup a little bit worried about doing its work: since cleanup is shipped to the replicas through the transaction log, it doesn’t want to send that cleaned-up state over and potentially impact those open snapshot queries, depending on what they do next. I’m looking specifically at the sys.dm_hadr_database_replica_states dynamic management view. I can see a column named low_water_mark_for_ghosts, and I can see this information for each replica.
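A sketch of that query, joined to sys.availability_replicas so each water mark is labeled with its replica name:

```sql
SELECT
    ar.replica_server_name,
    drs.database_id,
    drs.is_primary_replica,
    drs.low_water_mark_for_ghosts  -- ghost cleanup can't pass this point
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON drs.replica_id = ar.replica_id;
```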

So right now on my two instances, my two replicas, BEEPBEEP is my primary replica and here is my current low_water_mark_for_ghosts.

My secondary replica, who is readable, has a lower low_water_mark_for_ghosts, and that is because I have that open transaction which says: oh, oh, we can’t clean up those ghosts you created. I’m now going to go over to my secondary replica with the open transaction and commit it.

This was the only transaction I had open against my secondary replica. And before I refresh this, I’m going to copy out what was my low_water_mark_for_ghosts before I committed that transaction. And now we’re going to go ahead and run the query again and see did our low_water_mark_for_ghosts go up.

Now we’re at 40,430,093. In this case, even though I have committed that read transaction on the secondary, at this moment the low water marks are not equal for my primary and secondary database. And that can be the case sometimes. Sometimes these can be equal, but right now against my secondary, I don’t have open transactions. And we can see that by querying our trusty sys.dm_tran_active_snapshot_database_transactions.

We can prove that against our secondary database, and we can see that at that point our low water marks weren’t exactly the same, even though we didn’t have open transactions. They are not guaranteed to be equal, of course.

If we’re running into a situation where we have a lot of replicas…

We can look for, okay, whose water mark is the lowest here? If I’m having a problem like, why is my transaction log growing even though I’m running backups and don’t have long-running transactions on my primary replica, this low water mark can give me an indicator about which secondary to look at. Now, if I have been clever and set up monitoring, maybe with these performance counters, maybe with a different method, so that I get an alert when there are unusually long-running transactions against any of my secondary replicas, that can save me some time as well: I might be able to go straight there after getting the alert.

A quick recap before we go

When you query a readable secondary database, escalation to snapshot isolation happens. We showed in our demo that I used a NOLOCK hint and I didn’t see any dirty reads, even though we could tell, by looking at the version store, that changes had come over to that database. This escalation to snapshot isolation for the readable secondary is really useful because it means that when inserts, updates, and deletes are happening, my snapshot transactions running against the secondary aren’t going to get blocked by them. Things do get a little trickier sometimes when schema modifications are involved.

This escalation to snapshot against the secondary also means, though, that to support it we’ve got to have versions in our version store for the secondary database. These are automatically generated in tempdb. And with long-running transactions, we’re potentially going to have a buildup in our version store if there are lots of modifications happening. It’s really, really convenient in this case that this form of snapshot isolation ignores locking hints, and it even ignores it if you go in and set your session’s transaction isolation level to serializable: your queries are still going to use snapshot isolation auto-magically, no matter what you try to the contrary.
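For example, even an explicit attempt to change the isolation level on the secondary is overridden (table name hypothetical):

```sql
-- On the readable secondary:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- silently overridden

SELECT COUNT(*)
FROM dbo.FactSales WITH (NOLOCK)  -- locking hint ignored as well
WHERE DateKey = '2005-01-01';
-- Both the hint and the session setting are ignored;
-- the query still runs under snapshot isolation.
```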

Other takeaways: if you aren’t sure where long-running transactions might be in an availability group that has a lot of replicas, the sys.dm_hadr_database_replica_states dynamic management view may help you look for that low water mark. But I think it’s easier to start by baselining your longest-running-transaction counter. If it’s longer than you would expect against your replicas, just start looking at what’s running during that time. Are there long-running queries I don’t know about that are part of a process running during this window?

You can also set alerts using the simple counter. Just remember it’s only updated every 60 seconds. So if it looks like it’s stuck, that’s by design. This one isn’t a super granular counter.

Thanks for attending this session on Isolation Levels Against Readable Secondaries. I’m Kendra Little from
