Upcoming Events

Level Up Your Deployments for SQL Source Control – Thu, Feb 20 at 4 PM GMT / 11 AM Eastern – Free webinar

SQLBits – Core Best Practices for High Performing DevOps Teams (Session) – March 31 to April 4 – London, UK – Register

Data Grillen 2020 – May 28, 29 – Lingen, Germany – sold out – schedule

Data Ceili – July 10 – Dublin, Ireland – Free! Register

Recent Recordings

Redgate Evangelist YouTube Channel: Tutorials on Database DevOps – New videos each week – Watch

Key findings from the 2020 State of Database DevOps Report – Watch

Managing and Automating Test Datasets for DevOps with Jeffrey Palermo – Watch

How to Make Your 2020 Monitoring Strategy a Success, with BMW’s Tony Maddonna – Watch

Essential Practices for High Performing Database DevOps Teams – Watch

Fast and Reliable Development with Redgate Solutions for SQL Server – Watch

Implementing Data Masking for NIST Compliance – 1 hour – Watch

How Developers and DBAs Collaborate in a DevOps World – 40 minutes – Watch

How DevOps Keeps DBAs Safe from Being Automated Out of a Job – 1 hour – Watch

DevOps: What, who, why and how? – 1 hour – Watch

Can This Team Succeed at DevOps? Panel discussion – 1 hour – Watch

Replication – Updates that Don’t Flow Through

This past week at work we found an instance where a replicated table (transactional push) was out of sync on the subscriber. In evaluating how to address the situation, we did some testing in pre-production and discovered the following (using Profiler): If you update a row on the published table by setting a date column to itself, the row will not be updated at the subscriber, and no check is performed to verify the value at the subscriber. If you update a row on the published table by incrementing a date column by one millisecond, the row will not be updated at the subscriber. If you update a row on the published table that does not exist at the subscriber, an error will result and the row will not be inserted. The resolution we went with was to pause processing, re-push the article, reset the application, and use a verify process in…
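For illustration only, here is roughly what those pre-production tests look like as T-SQL. The table and column names are hypothetical (a published table dbo.Orders with a datetime column ModifiedDate); the excerpt doesn't name the real ones.

-- 1) Set a date column to itself: nothing flows to the subscriber, and nothing
--    verifies the subscriber's current value.
UPDATE dbo.Orders
SET ModifiedDate = ModifiedDate
WHERE OrderId = 42;

-- 2) Add one millisecond: in the behavior observed above, this also did not reach
--    the subscriber (note that the datetime type rounds to ~3 ms increments, so a
--    1 ms bump can round back to the original value).
UPDATE dbo.Orders
SET ModifiedDate = DATEADD(MILLISECOND, 1, ModifiedDate)
WHERE OrderId = 42;

-- 3) Update a row that exists on the publisher but not at the subscriber: the
--    distribution agent raises an error and the row is not inserted there.
UPDATE dbo.Orders
SET ModifiedDate = GETDATE()
WHERE OrderId = 9999;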
Read More

Finding Plans and Stats for Queries like ‘%something%’

I often need to find a query plan in the cache for a process that has run long overnight. Typically I'll be able to figure out from our logging some of the tables involved in the query. Sometimes I will have most of the executing text but won't know exactly what dates or reference points were included. Even when I have enough information to get an estimated plan, it's usually really helpful if I can pull the actual plan out of the cache along with runtime statistics. The query below is what I use at this point to try to find these plans; I also sometimes use it just to look for long-running queries in general. One note to remember: the last_execution_time field is the time of the plan activities at the last execution. So if you're looking for a query that ran for an hour, this time would…
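The post's actual query is behind the Read More link; the following is only a minimal sketch of the same idea, pulling statement text, a few runtime totals, and the cached plan for anything that matches a text fragment you know. The '%something%' pattern and the TOP (50) limit are placeholders.

-- Search the plan cache for statements matching a known fragment of text.
SELECT TOP (50)
    st.text                        AS query_text,
    qs.execution_count,
    qs.total_elapsed_time / 1000.0 AS total_elapsed_ms,
    qs.total_worker_time  / 1000.0 AS total_cpu_ms,
    qs.total_logical_reads,
    qs.last_execution_time,        -- last time this statement started executing
    qp.query_plan                  -- the cached plan, viewable as XML
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%something%'   -- a table name or text fragment from your logging
ORDER BY qs.total_elapsed_time DESC;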
Read More

The DBA Sees Your Blocking Spids… A Bird’s Eye Summary of Blocking

Update from Kendra (Nov 2018): I’m keeping this post for posterity, but I REALLY don’t recommend the script. You’d be much better off using a production monitoring tool that does this job, or @AdamMachanic’s sp_WhoIsActive. Maybe you’re a user in a reporting database running a long query in the read committed isolation level, merrily blocking a process which needs to load data. Maybe you’re an application misbehaving in a transactional front-end database, blocking other applications. Or possibly you’re a middle-tier application that is behaving well and trying to do a small update, but the table you’re updating holds a legacy trigger designed when it was a much smaller table, and that trigger is doing data validation with long-running locks. With a lot of users, a lot of applications, and in general a lot of blocking, it can be useful to get a bird’s eye view…
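The script from the post isn't reproduced here (and, per the 2018 update above, isn't recommended anymore). As a rough stand-in, a bird's eye summary of current blocking can be sketched from sys.dm_exec_requests; this only shows sessions actively running a request, so an idle blocker appears only by its session id.

-- Summarize which sessions are currently blocking others, and how many they block.
SELECT
    r.blocking_session_id     AS blocker_spid,
    COUNT(*)                  AS sessions_blocked,
    MAX(r.wait_time) / 1000.0 AS longest_wait_seconds,  -- wait_time is in milliseconds
    MIN(r.wait_type)          AS example_wait_type
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0
GROUP BY r.blocking_session_id
ORDER BY sessions_blocked DESC;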
Read More

Index Usage Statistics with ColumnList and Index Size

As an add-on to my last post, here is what I currently use to track index usage. This shows usage, columns in the index, and index size on disk. The size can be quite useful to know when evaluating how much an index is worth: typically, if the index is large then you’re paying a fair amount on the inserts. If it’s not easy to tell the data types from your column names, that is a modification you’d definitely want to make. Remember that indexes with a uniqueidentifier at the head are much more likely to cause page splits and be more work to maintain, so those indexes are more “expensive”. (In my current system I do have the luxury of a consistent naming convention where it’s fairly easy to tell the datatypes in indexed columns, so I haven’t added the datatype to the column list.) The…
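The script itself is behind the Read More link; as a sketch of the same idea (not the post's exact query), the usage counters, key column list, and size on disk can be pulled together from the standard DMVs like this:

-- Index usage, key columns, and size on disk for the current database, largest first.
SELECT
    OBJECT_SCHEMA_NAME(i.object_id) + '.' + OBJECT_NAME(i.object_id) AS table_name,
    i.name AS index_name,
    STUFF((SELECT ', ' + c.name
           FROM sys.index_columns AS ic
           JOIN sys.columns AS c
             ON c.object_id = ic.object_id AND c.column_id = ic.column_id
           WHERE ic.object_id = i.object_id
             AND ic.index_id  = i.index_id
             AND ic.is_included_column = 0
           ORDER BY ic.key_ordinal
           FOR XML PATH('')), 1, 2, '') AS key_columns,
    us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
    CAST(SUM(ps.used_page_count) * 8 / 1024.0 AS DECIMAL(12, 1)) AS size_mb
FROM sys.indexes AS i
JOIN sys.dm_db_partition_stats AS ps
  ON ps.object_id = i.object_id AND ps.index_id = i.index_id
LEFT JOIN sys.dm_db_index_usage_stats AS us
  ON us.object_id = i.object_id AND us.index_id = i.index_id
 AND us.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.type > 0   -- skip heaps
GROUP BY i.object_id, i.index_id, i.name,
         us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
ORDER BY size_mb DESC;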
Read More

Everything About Your Indexes (well, almost)

I am going to post my monstrously big index query.

Why? Because it’s AWESOME. No really, it actually is awesome. At least, if you like that sort of thing. I use some variant of this almost daily, and I tweak it fairly regularly to suit the needs of whatever I’m working on. So it’s a work in progress, but I find it constantly valuable.

Awesome? Oh Really? Why?
This query describes the size, basic definition, location, number of rows, partition status, and enabled/disabled status for all clustered and nonclustered indexes in a database. I typically sort them by descending size, since my primary usage is when a drive space alert fires, or when someone asks one of the million “how much space would it take if we wanted to [x]?” questions.

When you are working with a database that has many indexes partitioned over multiple filegroups, which are spread out over multiple drives, this can be very useful when a reindex fails due to a file filling up. Or when you want to estimate how much free space you need to maintain in a given filegroup in order to be able to reindex the indexes using it.
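The “monstrously big” version lives behind the Read More link; a much smaller sketch of the same information (type, disabled status, filegroup or partition scheme, partition count, rows, and size, sorted by descending size) might look like this. It is my approximation, not the post’s query.

-- One row per clustered/nonclustered index: definition basics, location, rows, and size.
SELECT
    OBJECT_SCHEMA_NAME(i.object_id) + '.' + OBJECT_NAME(i.object_id) AS table_name,
    i.name        AS index_name,
    i.type_desc   AS index_type,        -- CLUSTERED / NONCLUSTERED
    i.is_disabled,
    ds.name       AS filegroup_or_scheme,
    ds.type_desc  AS data_space_type,   -- ROWS_FILEGROUP or PARTITION_SCHEME
    COUNT(ps.partition_number) AS partition_count,
    SUM(ps.row_count)          AS total_rows,
    CAST(SUM(ps.reserved_page_count) * 8 / 1024.0 AS DECIMAL(12, 1)) AS reserved_mb
FROM sys.indexes AS i
JOIN sys.data_spaces AS ds
  ON ds.data_space_id = i.data_space_id
LEFT JOIN sys.dm_db_partition_stats AS ps
  ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE i.type IN (1, 2)   -- clustered and nonclustered only
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
GROUP BY i.object_id, i.name, i.type_desc, i.is_disabled, ds.name, ds.type_desc
ORDER BY reserved_mb DESC;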

Read More

How Stale are my Statistics?

Update: an improved, more recent version of the queries for this is here. It can be pretty difficult to manage statistics in data warehouses, or even in OLTP databases that have very large tables. This is because, even with auto_update_statistics turned on, SQL is pretty conservative about when to update statistics due to the cost of the operation. For large tables, statistics are updated when “500 + 20% of the number of rows in the table when the statistics were gathered” have changed. (See BOL here.) So for a table with 50 million rows, statistics will auto update only when more than 10,000,500 rows have changed. I have a lot of tables with a lot of rows, and this can be a problem. Take a fact table, for instance, where the key is sorted by an integer representing a date. Every day, a large number of new records is loaded and there is…
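The improved queries are behind the link in the update above; as a minimal sketch of the idea (not the post’s own query), newer builds expose sys.dm_db_stats_properties (SQL Server 2008 R2 SP2 / 2012 SP1 and later), whose modification counter can be compared to that classic “500 + 20%” threshold per statistic:

-- Rows modified since each statistic was last updated, versus the classic
-- auto-update threshold of 500 + 20% of the table's rows at that update.
SELECT
    OBJECT_SCHEMA_NAME(s.object_id) + '.' + OBJECT_NAME(s.object_id) AS table_name,
    s.name                   AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter,
    500 + 0.20 * sp.rows     AS classic_threshold,
    CASE WHEN sp.modification_counter > 500 + 0.20 * sp.rows
         THEN 'stale by the classic rule' ELSE 'ok' END AS verdict
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;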
Read More