Over the years, I’ve come across a pattern fairly frequently: an application in an OLTP database periodically creates new tables, loads some data into them and fixes it up, then does a switcheroo and replaces old tables with the new tables. This can cause major problems with blocking if anyone else is querying the table.

The problem looks like this:

1. Innocent query / queries are selecting from the current production table
2. Switcheroo process comes along and tries to do an sp_rename
3. Switcheroo process is blocked and is waiting on a schema modification lock
4. Switcheroo process starts blocking EVVERRRYBODYYYY else who wants to query the table

Sometimes this clears quickly, but it varies depending on how long the queries take and how complex the switcheroo process is. On a busy system, a big blocking backlog can even potentially cause THREADPOOL waits, which means the SQL Server generally feels like it’s not…Read More
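The switcheroo described above typically looks something like the sketch below (the table names here are hypothetical, not from the post). sp_rename needs a schema modification (SCH-M) lock, which is why it waits behind running SELECTs and then blocks everyone who arrives after it:

```sql
-- Hypothetical switcheroo: staging table is built and cleaned up
-- elsewhere, then swapped in under the production name.
-- Each sp_rename requires a SCH-M lock on the table, so this
-- transaction queues behind any in-flight SELECT -- and every
-- later query queues behind it.
BEGIN TRAN;
    EXEC sp_rename 'dbo.ProductList', 'ProductList_Old';
    EXEC sp_rename 'dbo.ProductList_Staging', 'ProductList';
COMMIT;

-- One way to limit the pile-up (an assumption, not from the post):
-- have the switcheroo session give up quickly if it can't get the
-- SCH-M lock, then retry, instead of blocking everyone indefinitely.
SET LOCK_TIMEOUT 5000;  -- fail with error 1222 after 5 seconds
```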
I recently received this question from a reader…

I just moved from an in-house software development company to a new environment where most of the software used is COTS (Commercial Off-The-Shelf). This is totally new to me. I’m a little bit lost, since I don’t know anything about the applications, the users (security), or the schema. What is your recommendation for managing such an environment and performing performance tuning?

Part of the confusion is terminology

I hadn’t heard the acronym “COTS” before — but that doesn’t surprise me. Companies have wildly different terms they use for this. If you manage these types of databases, you may need to use multiple search terms to find more resources:

- Third party vendor database / application
- Independent Software Vendor (ISV) database / application
- And also COTS!

Here’s a great article on administering COTS databases for DBAs

Tim Ford wrote a terrific article for this – it just doesn’t…Read More
Short answer: the SQL Server optimizer will know that the table was truncated, but statistics might not update when you expect.

For the long answer, let’s walk through an example using the WideWorldImporters sample database. I’ll be using Trace Flags 3604 and 2363 to get SQL Server to print information about how it optimized my query out to the messages tab. (Thanks to Paul White for blogging about this trace flag.)

First, a fresh restore of WideWorldImporters:

    USE master;
    GO
    IF DB_ID('WideWorldImporters') IS NOT NULL
        ALTER DATABASE WideWorldImporters SET OFFLINE WITH ROLLBACK IMMEDIATE;
    RESTORE DATABASE WideWorldImporters
        FROM DISK = 'S:\MSSQL\Backup\WideWorldImporters-Full.bak'
        WITH REPLACE;
    GO
    USE WideWorldImporters;
    GO

Before we do anything, what do the statistics look like on Sales.OrderLines? Here’s the query that I’m using to inspect the statistics:

    SELECT sp.last_updated,
        stat.name AS stats_name,
        STUFF((SELECT ', ' + cols.name
            FROM sys.stats_columns AS statcols
            JOIN sys.columns AS cols ON
                statcols.column_id = cols.column_id
                AND statcols.object_id = cols.object_id
            WHERE statcols.stats_id = stat.stats_id…Read More
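The trace flag setup mentioned above can be sketched like this (the test query is my own placeholder, not the one from the post). TF 3604 redirects trace output to the client, and TF 2363 makes the new cardinality estimator print its reasoning to the Messages tab:

```sql
-- Session-level only: enable optimizer diagnostic output.
DBCC TRACEON (3604);  -- send trace output to the client
DBCC TRACEON (2363);  -- print cardinality estimation details

-- Hypothetical query to optimize while the flags are on;
-- the CE details appear on the Messages tab, not in the results.
SELECT COUNT(*)
FROM Sales.OrderLines
WHERE Quantity > 10
OPTION (RECOMPILE);  -- force a fresh optimization so output is printed

DBCC TRACEOFF (3604, 2363);
```

OPTION (RECOMPILE) is there because a cached plan skips optimization entirely, and with it the diagnostic output.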
When Did SQL Server Last Update That Statistic? How Much Has Been Modified Since? And What Columns are in the Stat?
Whether I’m working as a DBA, a consultant, a teacher, or just answering questions in my inbox, I always end up needing a script to inspect statistics one way or another. Here are some freshly written scripts for a classic DBA question: what’s going on in my stats?

How to get statistics details on SQL Server 2008 R2 and higher

For most modern versions of SQL Server, I like to join to sys.dm_db_stats_properties() — you can get a LOT of detail in a single query! (This works with SQL Server 2008 R2 SP2+ / SQL Server 2012 SP1+ / all higher versions.) Here’s the query, looking at a sample table in the WideWorldImporters database:

    SELECT stat.auto_created,
        stat.name AS stats_name,
        STUFF((SELECT ', ' + cols.name
            FROM sys.stats_columns AS statcols
            JOIN sys.columns AS cols ON
                statcols.column_id = cols.column_id
                AND statcols.object_id = cols.object_id
            WHERE statcols.stats_id = stat.stats_id
                AND statcols.object_id = stat.object_id
            ORDER BY statcols.stats_column_id
            FOR XML PATH(''), TYPE
        ).value('.', 'NVARCHAR(MAX)'),…Read More
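A stripped-down version of that join — without the column-list XML trick, just the core of sys.dm_db_stats_properties() — looks something like this (Sales.OrderLines is used as an example table, matching the post):

```sql
-- Minimal sketch: last update time, row counts, and modifications
-- for every statistic on one table.
SELECT stat.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter   -- rows modified since last update
FROM sys.stats AS stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp
WHERE stat.object_id = OBJECT_ID('Sales.OrderLines')
ORDER BY sp.last_updated DESC;
```

CROSS APPLY is the idiomatic way to call the function once per statistic, since it takes object_id and stats_id as parameters.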
The bigger your indexes are, the harder your index maintenance falls. Is it better to rebuild big indexes? Or should you reorganize? If you’re short on time, scroll down: an article with all the content is below the 23 minute video.

Here’s this week’s question:

Dear SQL DBA,

Any advice for rebuilding indexes on very large tables? We’ve got a large table with over 2 billion rows. Index rebuilds consume over 300GB of transaction log space even though we’re backing up the log every 3 minutes. To solve this problem, should we reorganize instead of rebuild? Any issues with never rebuilding indexes – just reorganizing them? We are using SQL Server 2014 Enterprise Edition.

Sincerely,
Increasingly Fragmented

Rebuilding big indexes can cause big headaches

Index rebuilds have a lot going for them, but they also have some drawbacks.

ALTER INDEX REBUILD

Pros: Can use parallelism – which can reduce the…Read More
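The two options being weighed above look like this in T-SQL (the index and table names are placeholders for illustration):

```sql
-- REBUILD: builds a fresh copy of the index in one operation.
-- In the FULL recovery model this is fully logged, which is where
-- "Increasingly Fragmented"'s 300GB of transaction log comes from.
ALTER INDEX PK_BigTable ON dbo.BigTable
    REBUILD WITH (ONLINE = ON);   -- ONLINE requires Enterprise Edition

-- REORGANIZE: shuffles leaf pages in place using many small
-- transactions, so frequent log backups can truncate the log as it
-- runs -- but it is always single-threaded, always online, and only
-- compacts the existing index rather than rebuilding it.
ALTER INDEX PK_BigTable ON dbo.BigTable
    REORGANIZE;
```

Note that REORGANIZE can be stopped at any time without losing the work done so far, while a cancelled REBUILD rolls back.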
What tools in SQL Server will notify you about blocking and help track the queries behind your toughest blocking and deadlocking problems? Watch the 21 minute video, subscribe to the podcast, or read the episode notes and links below.

Dear SQL DBA,

What is the best way to set up blocking and deadlock alerts on the server? I want to be notified automatically, without any impact on the prod server. I have tried SQL Server Agent alerts using performance condition alerts, but they don’t show the queries or tables involved.

Thanks,
All Blocked Up

Woo hoo, I love this question! So first off, I’m going to answer this discussing the free, built-in tools with SQL Server. If you have a budget for custom monitoring tools, you can buy fancy tools that have customized notifications for blocking and which capture the queries and plans involved. If that’s the case, set…Read More
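One of the free, built-in options for catching the queries behind blocking is the blocked process report. It is off by default; enabling it is a sketch along these lines (the 5-second threshold is an example value, not a recommendation from the post):

```sql
-- Tell SQL Server to emit a blocked process report whenever a
-- session has been blocked longer than the threshold (in seconds).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 5;
RECONFIGURE;
```

The report itself is an XML event that includes both the blocked and blocking statements; you then capture it with an Extended Events session (or a server-side trace on older versions) so the query text is preserved for later review.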