I’ve submitted a session to GroupBy, a free online training event for data professionals. Sessions will be held in European hours on May 12 and in Americas hours on May 13. Vote for sessions between April 13 and 17.

Finding Plans and Stats for Queries like ‘%something%’

I often need to find a query plan in the cache for a process that ran long overnight. Typically I can figure out from our logging some of the tables involved in the query. Sometimes I’ll have most of the executing text, but won’t know exactly which dates or reference points were included. Even when I have enough information to get an estimated plan, it’s usually very helpful to pull the actual plan out of the cache along with runtime statistics. The query below is what I currently use to find these plans; I also sometimes use it just to look for long-running queries in general. One thing to remember: the last_execution_time field reflects the last time the plan was executed. So if you’re looking for a query that ran for an hour, this time would…
Read More
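
The post has the full script; as a minimal sketch of the approach, something like the following searches the plan cache for statements matching a text fragment, where '%something%' stands in for whatever table name or query text you remember:

    -- Sketch: find cached statements matching a text fragment, with runtime stats
    SELECT TOP 50
        qs.last_execution_time,
        qs.execution_count,
        qs.total_elapsed_time / 1000000. AS total_elapsed_sec,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text,
        qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE st.text LIKE '%something%'  -- placeholder: a table name or query fragment
    ORDER BY qs.total_elapsed_time DESC;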

The DBA Sees Your Blocking Spids… A Bird’s Eye Summary of Blocking

Update from Kendra (Nov 2018): I’m keeping this post for posterity, but I REALLY don’t recommend the script. You’d be much better off using a production monitoring tool that does this job, or @AdamMachanic’s sp_WhoIsActive. Maybe you’re a user in a reporting database running a long query in the read committed isolation level, merrily blocking a process that needs to load data. Maybe you’re an application misbehaving in a transactional front-end database, blocking other applications. Or maybe you’re a well-behaved middle-tier application trying to do a small update, but the table you’re updating has a legacy trigger designed when it was a much smaller table, and that trigger does data validation with long-running locks. With a lot of users, a lot of applications, and in general a lot of blocking, it can be useful to get a bird’s-eye view…
Read More
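
The script in the post is richer, and the 2018 update above recommends sp_WhoIsActive instead, but a minimal sketch of a bird’s-eye blocking summary might look like this:

    -- Sketch: one row per blocker, counting the sessions it is currently blocking
    SELECT
        r.blocking_session_id   AS blocker_spid,
        COUNT(*)                AS blocked_sessions,
        MAX(r.wait_time) / 1000 AS longest_wait_sec,
        MIN(r.wait_type)        AS sample_wait_type
    FROM sys.dm_exec_requests AS r
    WHERE r.blocking_session_id <> 0
    GROUP BY r.blocking_session_id
    ORDER BY blocked_sessions DESC;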

Index Usage Statistics with ColumnList and Index Size

As an add-on to my last post, here is what I currently use to track index usage. It shows usage, the columns in the index, and the index size on disk. The size can be quite useful to know when evaluating how much an index is worth: if the index is large, you’re typically paying a fair amount on inserts. If it’s not easy to tell the data types from your column names, adding them is a modification you’d definitely want to make. Remember that indexes with a uniqueidentifier at the head are much more likely to cause page splits and are more work to maintain, so those indexes are more “expensive”. (In my current system I have the luxury of a consistent naming convention where it’s fairly easy to tell the data types of indexed columns, so I haven’t added the data type to the column list.) The…
Read More
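
The full query is in the post; a simplified sketch of the same idea, combining sys.dm_db_index_usage_stats with a key column list and size from sys.dm_db_partition_stats, might look like this:

    -- Sketch: usage counts, key columns, and size on disk per nonclustered index
    SELECT
        OBJECT_NAME(i.object_id) AS table_name,
        i.name AS index_name,
        STUFF((SELECT ', ' + c.name
               FROM sys.index_columns AS ic
               JOIN sys.columns AS c
                 ON c.object_id = ic.object_id AND c.column_id = ic.column_id
               WHERE ic.object_id = i.object_id AND ic.index_id = i.index_id
                 AND ic.is_included_column = 0
               ORDER BY ic.key_ordinal
               FOR XML PATH('')), 1, 2, '') AS key_columns,
        us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
        SUM(ps.used_page_count) * 8 / 1024 AS index_size_mb
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS us
        ON us.object_id = i.object_id AND us.index_id = i.index_id
       AND us.database_id = DB_ID()  -- the DMV is server-wide; keep this database only
    JOIN sys.dm_db_partition_stats AS ps
        ON ps.object_id = i.object_id AND ps.index_id = i.index_id
    WHERE i.type = 2  -- nonclustered
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    GROUP BY i.object_id, i.name, i.index_id,
             us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
    ORDER BY index_size_mb DESC;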

Everything About Your Indexes (well, almost)

I am going to post my monstrously big index query.

Why? Because it’s AWESOME. No really, it actually is awesome. At least, if you like that sort of thing. I use some variant of this almost daily, and I tweak it fairly regularly to suit the needs of whatever I’m working on. So it’s a work in progress, but I find it constantly valuable.

Awesome? Oh Really? Why?
This query describes the size, basic definition, location, number of rows, partition status, and enabled/disabled status for all clustered and nonclustered indexes in a database. I typically sort them by descending size, since my primary usage is when a drive space alert fires, or when someone asks one of the million “how much space would it take if we wanted to [x]?” questions.

When you’re working with a database that has many indexes partitioned over multiple filegroups, spread out over multiple drives, this can be very useful when a reindex fails because a file filled up. It also helps when you want to estimate how much free space you need to maintain in a given filegroup in order to be able to reindex the indexes using it.

Read More
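
The monster query itself is in the post; a much-reduced sketch covering definition, location, rows, partition count, size, and enabled/disabled status might look like this. (For partitioned indexes the filegroup lookup below returns NULL, since data_space_id then points at a partition scheme rather than a filegroup.)

    -- Sketch: clustered and nonclustered indexes, largest first
    SELECT
        OBJECT_NAME(i.object_id) AS table_name,
        i.name AS index_name,
        i.type_desc,
        i.is_disabled,
        fg.name AS filegroup_name,            -- NULL for partitioned indexes
        COUNT(p.partition_number) AS partition_count,
        SUM(p.rows) AS total_rows,
        SUM(ps.used_page_count) * 8 / 1024 AS size_mb
    FROM sys.indexes AS i
    JOIN sys.partitions AS p
        ON p.object_id = i.object_id AND p.index_id = i.index_id
    JOIN sys.dm_db_partition_stats AS ps
        ON ps.partition_id = p.partition_id
    LEFT JOIN sys.filegroups AS fg
        ON fg.data_space_id = i.data_space_id
    WHERE i.type IN (1, 2)  -- clustered and nonclustered
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    GROUP BY i.object_id, i.name, i.type_desc, i.is_disabled, fg.name
    ORDER BY size_mb DESC;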

How Stale are my Statistics?

Update: improved/more recent versions of the queries for this are here. It can be pretty difficult to manage statistics in data warehouses, or even in OLTP databases with very large tables. Even with auto_update_statistics turned on, SQL Server is pretty conservative about when to update statistics, due to the cost of the operation. For large tables, statistics are updated when “500 + 20% of the number of rows in the table when the statistics were gathered” have changed (see BOL here). So for a table with 50 million rows, statistics will auto-update only after more than 10,000,500 rows have changed. I have a lot of tables with a lot of rows, and this can be a problem. Take a fact table, for instance, where the key is sorted by an integer representing a date. Every day a large number of new records is loaded, and there is…
Read More
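
The post’s queries predate sys.dm_db_stats_properties; on modern versions, a minimal sketch comparing rows modified against the classic “500 + 20%” threshold might look like this:

    -- Sketch: rows modified since each stat was last updated, vs. the threshold
    SELECT
        OBJECT_NAME(s.object_id) AS table_name,
        s.name AS stat_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter,
        500 + 0.20 * sp.rows AS classic_threshold  -- 500 + 20% of rows
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
    ORDER BY sp.modification_counter DESC;

For the 50-million-row example above, classic_threshold works out to 500 + 0.20 × 50,000,000 = 10,000,500.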

Checking Permissions on Linked Servers

One reason I started this blog was the idea of going through my catalog of scripts, reviewing them, and sharing what might be useful to people.

Here is a quick one I put together a while back. I was starting to work with a group of servers [at an unnamed company, always an unnamed company!]. Some of the instances had been configured long ago, and I found some linked servers where passwords had been hardcoded into the login mappings.

This can be a big security vulnerability, particularly if the option has been chosen to map all users to that login, and the login has significant powers on the other end of the linked server….

Read More
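
The post has the full script; a minimal sketch that lists linked server login mappings, including any hardcoded remote login names, might look like this:

    -- Sketch: linked servers and their login mappings
    SELECT
        s.name AS linked_server,
        s.data_source,
        p.name AS local_login,       -- NULL means the mapping applies to all logins
        ll.uses_self_credential,
        ll.remote_name               -- the stored (hardcoded) remote login, if any
    FROM sys.servers AS s
    JOIN sys.linked_logins AS ll
        ON ll.server_id = s.server_id
    LEFT JOIN sys.server_principals AS p
        ON p.principal_id = ll.local_principal_id
    WHERE s.is_linked = 1
    ORDER BY s.name;

A row with uses_self_credential = 0 and a non-null remote_name is the hardcoded case described above; if local_login is also NULL, every login maps to that remote account.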