Upcoming Events

Weekly Database DevOps Live Chats – a new experiment on YouTube

How to Make Your 2020 Monitoring Strategy a Success – Wed, Nov 20th – 8 AM Pacific / 11 AM Eastern – Register

Essential Practices for High Performing Database DevOps Teams – Tue, Nov 26th – 8 AM Pacific / 11 AM Eastern – Register

Why the Database is at the Heart of DevOps Success – Fri, Nov 29th – 6:00 AM Pacific / 9:00 AM Eastern / 3:00 PM CET – ScaleUp 360 online conference – Register

Managing and Automating Test Datasets for DevOps – Wed, Dec 4th – 7:30 AM Pacific / 10:30 AM Eastern – Register

Recent Recordings

Redgate Evangelist YouTube Channel: Tutorials on Database DevOps – New videos each week – Watch

Fast and Reliable Development with Redgate Solutions for SQL Server – Watch

Implementing Data Masking for NIST Compliance – 1 hour – Watch

How Developers and DBAs Collaborate in a DevOps World – 40 minutes – Watch

How DevOps Keeps DBAs Safe from Being Automated Out of a Job – 1 hour – Watch

DevOps: What, who, why and how? – 1 hour – Watch

Can This Team Succeed at DevOps? – Panel discussion – 1 hour – Watch

Index Usage Statistics with ColumnList and Index Size

As an add-on to my last post, here is what I currently use to track index usage. It shows usage, the columns in the index, and the index size on disk. The size can be quite useful when evaluating how much an index is worth – typically, if an index is large, you're paying a fair amount on inserts. If it's not easy to tell the data types from your column names, that is a modification you'd definitely want to make. Remember that indexes with a uniqueidentifier at the head are much more likely to cause page splits and take more work to maintain, so those indexes are more "expensive". (In my current system I have the luxury of a consistent naming convention where it's fairly easy to tell the data types of indexed columns, so I haven't added the data type to the column list.) The…
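As a rough sketch of the idea – not the author's exact script, which is behind the Read More link – a query like this joins sys.dm_db_index_usage_stats to the index metadata and size information; the alias names here are illustrative:

```sql
-- Rough sketch (not the author's script): index usage with column list and size.
-- Assumes SQL Server 2005+ DMVs.
SELECT
    OBJECT_SCHEMA_NAME(i.object_id) AS SchemaName,
    OBJECT_NAME(i.object_id)        AS TableName,
    i.name                          AS IndexName,
    STUFF((SELECT ', ' + c.name
           FROM sys.index_columns AS ic
           JOIN sys.columns AS c
             ON c.object_id = ic.object_id
            AND c.column_id = ic.column_id
           WHERE ic.object_id = i.object_id
             AND ic.index_id  = i.index_id
             AND ic.is_included_column = 0
           ORDER BY ic.key_ordinal
           FOR XML PATH('')), 1, 2, '') AS ColumnList,
    ius.user_seeks, ius.user_scans, ius.user_lookups,
    ius.user_updates,                        -- the "cost" side: writes maintaining the index
    SUM(ps.used_page_count) * 8 / 1024.0 AS SizeMB
FROM sys.indexes AS i
JOIN sys.dm_db_partition_stats AS ps
  ON ps.object_id = i.object_id
 AND ps.index_id  = i.index_id
LEFT JOIN sys.dm_db_index_usage_stats AS ius
  ON ius.object_id   = i.object_id
 AND ius.index_id    = i.index_id
 AND ius.database_id = DB_ID()
WHERE i.index_id > 0                         -- skip heaps
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
GROUP BY i.object_id, i.index_id, i.name,
         ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
ORDER BY SizeMB DESC;
```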
Read More

Everything About Your Indexes (well, almost)

I am going to post my monstrously big index query.

Why? Because it’s AWESOME. No really, it actually is awesome. At least, if you like that sort of thing. I use some variant of this almost daily, and I tweak it fairly regularly to suit the needs of whatever I’m working on. So it’s a work in progress, but I find it constantly valuable.

Awesome? Oh Really? Why?
This query describes the size, basic definition, location, number of rows, partition status, and enabled/disabled status for all clustered and nonclustered indexes in a database. I typically sort them by descending size, since my primary usage is when a drive space alert fires, or when someone asks one of the million “how much space would it take if we wanted to [x]?” questions.

When you are working with a database that has many indexes partitioned over multiple filegroups, which are in turn spread across multiple drives, this query is very useful when a reindex fails because a file filled up, or when you want to estimate how much free space you need to maintain in a given filegroup in order to reindex the indexes that use it.
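The full query is behind the link below; as a minimal sketch of the shape it might take, built only on standard catalog views and DMVs (column aliases are illustrative):

```sql
-- Rough sketch: size, rows, filegroup/partition scheme, and disabled flag
-- for all clustered and nonclustered indexes in the current database.
SELECT
    OBJECT_SCHEMA_NAME(i.object_id)          AS SchemaName,
    OBJECT_NAME(i.object_id)                 AS TableName,
    i.name                                   AS IndexName,
    i.type_desc                              AS IndexType,
    i.is_disabled                            AS IsDisabled,
    ds.name                                  AS FilegroupOrScheme,
    ds.type_desc                             AS DataSpaceType,  -- ROWS_FILEGROUP vs PARTITION_SCHEME
    COUNT(ps.partition_number)               AS PartitionCount,
    SUM(ps.row_count)                        AS [RowCount],
    SUM(ps.reserved_page_count) * 8 / 1024.0 AS ReservedMB
FROM sys.indexes AS i
JOIN sys.data_spaces AS ds
  ON ds.data_space_id = i.data_space_id
LEFT JOIN sys.dm_db_partition_stats AS ps    -- LEFT JOIN keeps disabled indexes
  ON ps.object_id = i.object_id
 AND ps.index_id  = i.index_id
WHERE i.type IN (1, 2)                       -- clustered and nonclustered only
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
GROUP BY i.object_id, i.name, i.type_desc, i.is_disabled, ds.name, ds.type_desc
ORDER BY ReservedMB DESC;
```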

Read More

How Stale are my Statistics?

Update: an improved/more recent version of the queries for this is here. It can be pretty difficult to manage statistics in data warehouses, or even in OLTP databases that have very large tables. This is because, even with auto_update_statistics turned on, SQL is pretty conservative about when to update statistics, due to the cost of the operation. For large tables, statistics are updated when "500 + 20% of the number of rows in the table when the statistics were gathered" have changed (see BOL here). So for a table with 50 million rows, statistics will auto-update only when more than 10,000,500 rows have changed. I have a lot of tables with a lot of rows, and this can be a problem. Take a fact table, for instance, where the key is sorted on an integer representing a date. Every day, a large number of new records are loaded and there is…
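As a rough sketch of how to surface this – not the author's linked query – on builds that have sys.dm_db_stats_properties (2008 R2 SP2 / 2012 SP1 and later) you can compare the modification counter directly against the classic threshold:

```sql
-- Rough sketch: rows modified since each statistic was last updated,
-- alongside the "500 + 20% of rows" auto-update threshold.
SELECT
    OBJECT_SCHEMA_NAME(s.object_id) AS SchemaName,
    OBJECT_NAME(s.object_id)        AS TableName,
    s.name                          AS StatName,
    sp.last_updated,
    sp.rows,
    sp.modification_counter,
    500 + 0.20 * sp.rows            AS AutoUpdateThreshold
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.rows > 0
ORDER BY sp.modification_counter DESC;
```

On older versions, rowmodctr in sys.sysindexes is the usual, less precise stand-in for the modification counter.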
Read More

Checking Permissions on Linked Servers

One reason I started this blog was the idea of going through my catalog of scripts, reviewing them, and sharing what might be useful to people.

Here is a quick one I put together a while back. I was starting to work with a group of servers [at an unnamed company, always an unnamed company!]. Some of the instances had been configured long ago, and I found some linked servers where passwords had been hardcoded into the login mappings.

This can be a big security vulnerability, particularly if the option has been chosen to map all users to that login, and the login has significant powers on the other end of the linked server…
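A minimal sketch of such a check, assuming the standard catalog views (not necessarily the author's script): sys.linked_logins stores the mappings, and local_principal_id = 0 marks the catch-all "all logins" mapping.

```sql
-- Rough sketch: find linked-server login mappings that carry a stored
-- (hardcoded) remote login, and flag the risky "all local logins" catch-all.
SELECT
    srv.name                           AS LinkedServerName,
    srv.data_source                    AS DataSource,
    COALESCE(p.name, 'ALL LOGINS (*)') AS LocalLogin,   -- local_principal_id = 0 maps everyone
    ll.remote_name                     AS RemoteLogin,
    ll.uses_self_credential            AS UsesSelfCredential
FROM sys.servers AS srv
JOIN sys.linked_logins AS ll
  ON ll.server_id = srv.server_id
LEFT JOIN sys.server_principals AS p
  ON p.principal_id = ll.local_principal_id
WHERE srv.is_linked = 1
  AND ll.uses_self_credential = 0
  AND ll.remote_name IS NOT NULL       -- a mapped remote login implies a stored password
ORDER BY srv.name;
```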

Read More

A Table Summarizing All Agent Jobs with Steps…

Also on the topic of SQL Agent jobs – each time I work with a new system, it can take a while to familiarize myself with what all the SQL Agent jobs do. Often there are quite a few jobs, and sometimes they have legacy names that either no longer describe what the job does very well, or are just hard to understand.

Plus, I don’t like opening jobs in the SQL Agent itself very much, since it only opens in an ‘edit’ view. I very much prefer selecting job details out of the tables in msdb, it’s just safer.

Because of this, a while back I wrote a SQL script that takes a lot of descriptive information about a job in MSDB and pivots it out into a table. The table automatically has as many columns as are required – I have a server where a job has 41 steps, so it gets 41 step columns, each in order.
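The author's pivoting script is behind the link; as a rough illustration of the underlying data, here is a minimal query against msdb (the dynamic pivot into per-step columns is the part the full script adds):

```sql
-- Rough sketch: one row per job step, the raw material for the pivot.
SELECT
    j.name       AS JobName,
    j.enabled    AS IsEnabled,
    js.step_id   AS StepNumber,
    js.step_name AS StepName,
    js.subsystem AS Subsystem,
    js.command   AS Command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS js
  ON js.job_id = j.job_id
ORDER BY j.name, js.step_id;
```

A dynamic PIVOT over step_id is what would turn these rows into the one-row-per-job summary the post describes.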

[read on for sample code…]

Read More

SQL Agent Jobs: Checking for failed steps at the end of a job

I use SQL Agent a lot, and it is handy for many things, but not being able to pass state information between steps can be frustrating.

For example, I have a job where I want to execute data verification steps against multiple tables. It makes sense to have the check for each table in its own step with a clear label to simplify troubleshooting – when the job fails, you can see which step had an error and know from its name exactly what's wrong. But I want all the steps in the job to run regardless of whether one fails – I want to check for failures at the end.

The most basic way to do this is to have each job step log to a table. This isn't really bad, but I'd rather not maintain a table for every job of this type. It leaves room for failure, it's more to maintain, and it feels redundant: all of the job history is tracked in MSDB anyway – shouldn't I be able to use that?

Well, I think I can… [read on for sample code…]
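The author's approach is behind the link; as one rough sketch of the idea, a final T-SQL "check" step – run after verification steps configured with "On failure: go to the next step" – could query the history like this:

```sql
-- Rough sketch (not the author's script) of a final job step: look in msdb
-- for any earlier step of this job that failed during today's run, and raise
-- an error so the job as a whole reports failure.
-- $(ESCAPE_NONE(JOBID)) is the SQL Agent token for the current job's id; it
-- only expands when this runs as a T-SQL job step.
DECLARE @failed INT;

SELECT @failed = COUNT(*)
FROM msdb.dbo.sysjobhistory AS h
WHERE h.job_id     = CONVERT(UNIQUEIDENTIFIER, $(ESCAPE_NONE(JOBID)))
  AND h.step_id    > 0          -- step 0 is the overall job-outcome row
  AND h.run_status = 0          -- 0 = failed
  AND h.run_date   = CONVERT(INT, CONVERT(CHAR(8), GETDATE(), 112));  -- crude "this run" filter

IF @failed > 0
    RAISERROR('%d step(s) failed earlier in this job.', 16, 1, @failed);
```

The run_date filter is a simplification – a real version would pin down the current run more precisely.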

Read More