Should I change the ‘locks’ configuration in SQL Server?

I recently got a fantastic question from a reader regarding lock usage in SQL Server. Here’s the question:

One of my production databases has a total lock count around 25,000 (select count(*) from sys.dm_tran_locks). The configuration setting for locks is set to the default of zero. This lock count is due to multiple procedures which frequently run and use the same 2-3 tables, repeatedly taking out and releasing locks. Do I need to change the configuration for locks or look into the SP’s so they can finish more quickly, rather than creating locks?

Our friend is correct to leave the ‘locks’ setting in sp_configure alone

The default setting of zero lets SQL Server manage the memory for locks dynamically. When you use this dynamic setting, SQL Server will:

  • Allocate memory for 2,500 locks on startup
  • Acquire more memory for locks when it needs to do so (unless you’re in a memory pressure situation and it would cause paging)
  • Not allocate more than 60% of the memory allocated to the instance for locks
  • Trigger lock escalation when lock memory hits 40% of the memory allocated to the instance

If you change the setting for ‘locks’, you’re giving SQL Server a maximum number of locks that it can use.

Microsoft recommends leaving this at zero.
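
If you want to double-check what your own instance is set to, sys.configurations will show it. A minimal sketch:

-- Check the current 'locks' setting (0 = dynamic lock memory management)
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'locks';
GO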

You could change ‘locks’ to raise the number of locks allowed. But is it a good idea to use more than 60% of your memory for locks? Even if you’re using that as a temporary band-aid, leaving less than 40% of your memory for the buffer pool (data cache), execution plans, and everything else is going to be a world of hurt for most instances.

You could change ‘locks’ to lower the number of locks allowed. But once SQL Server hits that cap, queries asking for more locks are going to throw errors and fail. That’s not attractive, either.

So if you’re concerned about the number of locks you have, changing the ‘locks’ configuration setting isn’t likely to help you out.

The ‘locks’ configuration option is also deprecated: it’s slated to be removed in a future version, and Microsoft doesn’t want you to be dependent on it.
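
If you’re curious what those 25,000 locks actually are, a quick grouping of the same DMV the reader used will break them down by type and mode. A sketch:

-- Break down current locks by resource type and lock mode
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY resource_type, request_mode
ORDER BY lock_count DESC;
GO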

What’s the memory overhead of those locks?

Each lock uses 96 bytes of memory. On the instance in question, 25,000 locks × 96 bytes = 2,400,000 bytes.

That’s only about 2.3 MB of memory devoted to locks. Even though 25K sounds like a lot, the memory footprint for that is pretty darn small.
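
If you’d rather let SQL Server do the math for your own instance, the lock manager’s memory clerk reports it directly. A sketch (the clerk name below matches current versions; verify on yours):

-- Memory currently used by the lock manager, in KB
SELECT type, SUM(pages_kb) AS lock_memory_kb
FROM sys.dm_os_memory_clerks
WHERE type = 'OBJECTSTORE_LOCK_MANAGER'
GROUP BY type;
GO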

I checked back with our questioner, and their instance has 32GB of memory. That’s a pretty small amount in the grand scheme of things (as of SQL Server 2014, Standard Edition can use up to 128GB of memory for the buffer pool), but even so, 2.3 MB of lock memory isn’t anything to worry about, percentage-wise.

Do you have a high number of locks because you need better indexes?

Good indexes can dramatically reduce your lock overhead. Here’s a simple example using the SQLIndexWorkbook sample database.

For this sample query, run under the default read committed isolation level:

SELECT COUNT(*) FROM agg.FirstNameByYear WHERE FirstNameId=1;
GO

When this needs to do a clustered index scan, it requires 5,061 page locks.

After creating a nonclustered index on FirstNameId, the query requires only one page lock.
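
The index itself is nothing fancy: just a nonclustered index keyed on FirstNameId. The index name below is my own; the table and column come from the SQLIndexWorkbook sample database:

CREATE NONCLUSTERED INDEX ix_FirstNameByYear_FirstNameId
ON agg.FirstNameByYear (FirstNameId);
GO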

Indexes that help SQL Server find rows more quickly can dramatically reduce the number of locks that are taken out.

Are you waiting for locks because of blocking?

SQL Server is fast at acquiring locks, unless there’s a conflict and you have to wait for the lock because another session already holds an incompatible lock on the same resource.

In this case, the first step is to figure out when there is blocking, and who is blocking whom. I like to use alerts and the Blocked Process Report to help figure this out.
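
As a sketch, enabling the Blocked Process Report is just a couple of sp_configure changes. The five-second threshold below is only an example, and you still need a trace or Extended Events session to actually capture the report:

-- Emit a blocked process report for blocking lasting 5 seconds or more
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 5;
RECONFIGURE;
GO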

Do you have a high number of locks because of estimate problems?

One reason you might get a high number of locks is an inefficient execution plan based on poor estimates. If SQL Server thinks it’s only going to get a small number of rows, it may choose a plan based on “lookups”. If it turns out there are a lot more rows than it thought, it may have to execute that loop over and over and over, slowly looping and acquiring lots of little locks.

In this case, the stored procedures using this database are making heavy use of table variables. Table variables often lead to incorrect estimates in complex procedures, and could result in inefficient plans.
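
To illustrate the kind of swap involved (the names below are hypothetical, not taken from the reader’s procedures), the change itself is usually simple:

-- Table variable: no column statistics, so row estimates are typically very low
DECLARE @Results TABLE (FirstNameId INT NOT NULL, NameCount INT NOT NULL);

-- Temporary table: gets statistics, so the optimizer can cost joins and lookups realistically
CREATE TABLE #Results (FirstNameId INT NOT NULL, NameCount INT NOT NULL);
-- ... populate and query #Results exactly as the table variable was used ...
DROP TABLE #Results;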

In this case, I wasn’t too worried about the 25,000 locks, but I thought the procedures might perform better if they had better estimates. I recommended:

  1. Testing out the procedures in a dev environment with temporary tables instead of table variables
  2. Evaluating how the procedures use indexes after the change — they likely will need different indexes for the new execution plans

If you have heavy use of table variables and can’t test out temporary tables, you can test out Trace Flag 2453 on SQL Server 2012 SP2 and higher. This trace flag doesn’t give table variables the full statistics support that temporary tables have, but it does try to make SQL Server smarter about the number of rows in the table variable.
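
If you do test it, the trace flag can be turned on instance-wide with DBCC TRACEON (or as a startup parameter). This is just a sketch; try it in a non-production environment first, since it changes recompile behavior for all table variables:

-- Enable trace flag 2453 globally (SQL Server 2012 SP2 and higher)
DBCC TRACEON (2453, -1);
GO
-- List the trace flags currently enabled globally
DBCC TRACESTATUS (-1);
GO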

Disclaimer: changing from table variables to temporary tables doesn’t always make things faster. I wasn’t doing live troubleshooting here and I didn’t have actual execution plans, so it’s possible that the rowsets are small and the table variables were doing just fine. You never know until you test!

Sometimes you should just take out a tablock

I don’t think this is the case for the person asking this question, but there are some cases when you just want to go ahead and take out an exclusive lock on a table. Not only does it cut down the number of locks taken on the table, it can also make data loading more efficient.
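
Here’s a hedged sketch of what that looks like for a data load (the table and column names are made up). With a TABLOCK hint, the insert takes a table-level lock up front instead of thousands of row or page locks, and under the right conditions it can also qualify for minimal logging:

-- Hypothetical load taking a table-level lock instead of many fine-grained locks
INSERT INTO dbo.SalesStaging WITH (TABLOCK)
SELECT SaleId, SaleDate, Amount
FROM dbo.SalesSource;
GO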

Comments

Last week I had to recommend the use of a PAGLOCK hint (thankfully didn’t have to go tablock) because a particular query, perhaps because of bad estimates (not sure!), was taking out several gigabytes of key locks and running out of memory in the lock manager…the query would fail and the lock manager wouldn’t give back the memory until you hit it with DBCC FREESYSTEMCACHE('ALL'). At this point, unsure why lock escalation didn’t kick in, but I’m still pretty hazy on all that. I feel like I should have paid more attention in the memory module of my last SQL immersion class!


    It may have been trying to escalate, but wasn’t able to because of conflicts. Depending on what the query was doing, it would be trying to convert an IX lock on the table to an X lock, or an IS lock to an S lock. So other users of the table may make escalation not work. It will re-try periodically if it keeps going. (The Books Online page on escalation is a pretty good read, TBH: https://technet.microsoft.com/en-us/library/ms184286.aspx)


      That is a great article! I had forgotten about lock escalation skipping page locking and going straight to table lock (like skipping the triple dare and going straight for triple dog dare), which would be more easily blocked. Also, there were several self-joins in this query on the afflicted object…seems it has to breach the threshold for each individual reference, not just the object overall in the query. Thanks!

