
Management Studio 2008 – What I Like

There are some pretty cool things about SSMS 2008 – I’m really pleased that execution plans now open graphically by default, and it’s easy to view the XML on demand. I had gotten pretty fast at saving things as .sqlplan, but it was a bit of a drag. My very favorite thing, though, is really simple. I’m not sure it even made it into the ‘feature lists’ anywhere. It’s this: when you are running a query in a tab, the tab header shows ‘Executing’ until the query completes. Whoever got this feature in has my sincere and heartfelt thanks. It is incredibly useful to me. I frequently need to run manual commands to verify data while troubleshooting things that shouldn’t be automated, and it’s really nice to be able to move to another tab and periodically glance at the tab header to see when the query completes. No more…
Read More
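If you’d rather not watch the tab header at all, the same check can be done from another connection. This is a minimal sketch, assuming a placeholder session id of 52 (substitute the spid shown in your query window’s status bar):

```sql
-- Check whether a session is still executing a request.
-- The query returns no rows once the request has finished.
SELECT session_id,
       status,
       command,
       start_time,
       total_elapsed_time / 1000 AS elapsed_seconds
FROM sys.dm_exec_requests
WHERE session_id = 52;  -- placeholder: your session's spid
```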

Profiler: Deadlock Graph Event

Until recently I’d never used the deadlock graph event in Profiler. I’ve been pretty lucky and haven’t had to troubleshoot deadlocks much until now. The deadlock graph turns out to be quite nice! When you add this event, it uses the TextData column to describe the deadlock in XML. You can copy the contents and parse them, or just read through them to see details on the transactions involved in the deadlock. Profiler also displays a graphical picture of the deadlock. I find the picture most helpful when read alongside the XML itself, but the whole thing is pretty great. The XML looks to be about the same as what you get from trace flag 1222 in the SQL log, but you also get the picture and the context of the other events you are profiling in sequence. Note: the deadlock graph won’t work when you are applying…
Read More
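For comparison with the Profiler event, the trace flag route mentioned above is enabled like this (a quick sketch; -1 makes the flag server-wide rather than session-scoped):

```sql
-- Write deadlock detail as XML to the SQL Server error log
DBCC TRACEON (1222, -1);

-- Confirm the flag is active
DBCC TRACESTATUS (1222);
```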

Replication – Updates that Don’t Flow Through

This past week at work we found an instance where a replicated table (transactional push) was out of sync on the subscriber. While evaluating how to address the situation, we did some testing in pre-production and discovered the following (using Profiler): If you update a row on the published table by setting a date column to itself, the row will not be updated at the subscriber; no check is performed to verify the value at the subscriber. If you update a row on the published table by incrementing a date column by one millisecond, the row will not be updated at the subscriber. If you update a row on the published table that does not exist at the subscriber, an error will result and the row will not be inserted. The resolution we went with was to pause processing, re-push the article, reset the application, and use a verify process in…
Read More
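The first two test cases above look like this in T-SQL. This is a sketch only – dbo.PublishedTable, SomeDate, and Id are placeholder names, not the actual replicated objects:

```sql
-- Case 1: set a date column to itself.
-- Per the testing above, the change does not flow to the subscriber.
UPDATE dbo.PublishedTable
SET SomeDate = SomeDate
WHERE Id = 1;

-- Case 2: add one millisecond.
-- Also not replicated per the testing above (note that the datetime
-- type rounds to 1/300-second increments, so a 1 ms bump can round
-- back to the original value).
UPDATE dbo.PublishedTable
SET SomeDate = DATEADD(MILLISECOND, 1, SomeDate)
WHERE Id = 1;
```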

Finding Plans and Stats for Queries like ‘%something%’

I often need to find a query plan in the cache for a process that ran long overnight. Typically I’ll be able to figure out from our logging some of the tables involved in the query. Sometimes I’ll have most of the executing text but won’t know exactly which dates or reference points were included. Even when I have enough information to get an estimated plan, it’s usually very helpful to pull the actual plan out of the cache along with runtime statistics. The query below is what I currently use to find these plans – I also sometimes use it just to look for long-running queries in general. One note to remember: the last_execution_time field is the time of the plan’s activities at its last execution. So if you’re looking for a query that ran for an hour, this time would…
Read More
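A minimal sketch of this kind of cache search looks like the following (the full post has the author’s version; here ‘%something%’ is a placeholder for the table name or text fragment you know):

```sql
-- Search the plan cache for statements whose text matches a pattern,
-- returning runtime stats and the cached plan.
SELECT TOP (20)
       qs.last_execution_time,
       qs.execution_count,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       st.text       AS query_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%something%'   -- placeholder pattern
ORDER BY qs.total_elapsed_time DESC;
```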

The DBA Sees Your Blocking Spids… A Bird’s Eye Summary of Blocking

Update from Kendra (Nov 2018): I’m keeping this post for posterity, but I REALLY don’t recommend the script. You’d be much better off using a production monitoring tool that does this job, or @AdamMachanic’s sp_WhoIsActive. Maybe you’re a user in a reporting database running a long query in the read committed isolation level, merrily blocking a process which needs to load data. Maybe you’re an application misbehaving in a transactional front-end database, blocking other applications. Or maybe you’re a well-behaved middle-tier application trying to do a small update, but the table you’re updating has a legacy trigger designed when it was a much smaller table, and that trigger is doing data validation with long-running locks. With a lot of users, a lot of applications, and in general a lot of blocking, it can be useful to get a bird’s-eye view.…
Read More
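The post’s own update points you at sp_WhoIsActive instead, but the core of any bird’s-eye blocking check is a query along these lines (a sketch, not the script from the post):

```sql
-- List requests that are currently blocked, who is blocking them,
-- and the text they are running.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       s.program_name,
       st.text AS query_text
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
     ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.blocking_session_id <> 0;
```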

Index Usage Statistics with ColumnList and Index Size

As an add-on to my last post, here is what I currently use to track index usage. This shows usage, the columns in the index, and index size on disk. The size can be quite useful to know when evaluating how much an index is worth – typically, if an index is large, you’re paying a fair amount on inserts. If it’s not easy to tell the data types from your column names, that is a modification you’d definitely want to make. Remember that indexes with a uniqueidentifier at the head are much more likely to cause page splits and take more work to maintain, so those indexes are more “expensive”. (In my current system I have the luxury of a consistent naming convention where it’s fairly easy to tell the data types of indexed columns, so I haven’t added the data type to the column list.) The…
Read More
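The usage-plus-size part of the query can be sketched as below. This is a simplified assumption of the approach, not the author’s script – it omits the column list the post describes, which would additionally need sys.index_columns:

```sql
-- Index usage counters plus size on disk for the current database.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
       SUM(ps.used_page_count) * 8 / 1024 AS size_mb   -- 8 KB pages
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id = i.object_id
      AND us.index_id = i.index_id
      AND us.database_id = DB_ID()
JOIN sys.dm_db_partition_stats AS ps
       ON ps.object_id = i.object_id
      AND ps.index_id = i.index_id
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
GROUP BY i.object_id, i.name,
         us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
ORDER BY size_mb DESC;
```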