Upcoming Events

2019 Highlights and Learnings – Wed, Dec 18, 8 AM Pacific / 11 AM Eastern / 4 PM GMT – Register

Recent Recordings

Redgate Evangelist YouTube Channel: Tutorials on Database DevOps – New videos each week – Watch

Managing and Automating Test Datasets for DevOps with Jeffrey Palermo – Watch

How to Make Your 2020 Monitoring Strategy a Success, with BMW’s Tony Maddonna – Watch

Essential Practices for High Performing Database DevOps Teams – Watch

Fast and Reliable Development with Redgate Solutions for SQL Server – Watch

Implementing Data Masking for NIST Compliance – 1 hour – Watch

How Developers and DBAs Collaborate in a DevOps World – 40 minutes – Watch

How DevOps Keeps DBAs Safe from Being Automated Out of a Job – 1 hour – Watch

DevOps: What, who, why and how? – 1 hour – Watch

Can This Team Succeed at DevOps? Panel discussion – 1 hour – Watch

How Stale are my Statistics?

Update: an improved, more recent version of the queries for this is here. It can be pretty difficult to manage statistics in data warehouses, or even in OLTP databases that have very large tables. This is because, even with auto_update_statistics turned on, SQL Server is pretty conservative about when to update statistics, due to the cost of the operation. For large tables, statistics are updated when “500 + 20% of the number of rows in the table when the statistics were gathered” have changed (see BOL here). So for a table with 50 million rows, statistics will auto-update only when more than 10,000,500 rows have changed. I have a lot of tables with a lot of rows, and this can be a problem. Take a fact table, for instance, where the key is sorted by an integer representing a date. Every day, a large amount of new records are loaded and there is…
Read More
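
The linked post has the full queries, but as a rough sketch of the threshold math: on builds where sys.dm_db_stats_properties is available (SQL Server 2008 R2 SP2 / 2012 SP1 and later), something like this flags statistics whose modification counter has passed the old 500 + 20% mark. The column alias names are just illustrative.

-- Sketch: flag statistics whose row-modification counter exceeds the
-- pre-2016 auto-update threshold of 500 + 20% of rows.
SELECT
    OBJECT_NAME(s.object_id)   AS table_name,
    s.name                     AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.modification_counter,
    500 + 0.20 * sp.rows       AS auto_update_threshold
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.modification_counter > 500 + 0.20 * sp.rows
ORDER BY sp.modification_counter DESC;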

Checking Permissions on Linked Servers

One reason I started this blog was just the idea of going through my catalog of scripts and reviewing them and sharing out what might be useful to people.

Here is a quick one I put together a while back. I was starting to work with a group of servers [at an unnamed company, always an unnamed company!]. Some of the instances had been configured long ago, and I found some linked servers where passwords had been hardcoded into the login mappings.

This can be a big security vulnerability, particularly if the option has been chosen to map all users to that login, and the login has significant powers on the other end of the linked server…

Read More
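
The post has the full script, but a minimal sketch of the same idea, using only the standard catalog views, would list login mappings that embed a remote login (i.e., a stored password) and flag the ones that apply to all local logins:

-- Sketch: linked-server login mappings with hardcoded remote credentials.
-- local_principal_id = 0 means the mapping applies to ALL local logins.
SELECT
    srv.name            AS linked_server,
    srv.product,
    srv.data_source,
    CASE WHEN ll.local_principal_id = 0
         THEN N'<< all logins >>'
         ELSE p.name END AS local_login,
    ll.remote_name      AS remote_login
FROM sys.linked_logins AS ll
JOIN sys.servers AS srv
  ON srv.server_id = ll.server_id
LEFT JOIN sys.server_principals AS p
  ON p.principal_id = ll.local_principal_id
WHERE srv.is_linked = 1
  AND ll.uses_self_credential = 0
  AND ll.remote_name IS NOT NULL
ORDER BY srv.name;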

A Table Summarizing All Agent Jobs with Steps…

Also on the topic of SQL Agent jobs – each time I work with a new system, it can take a while to familiarize myself with what all the SQL Agent jobs do. Often there are quite a few jobs, and sometimes they have legacy names that either no longer describe what the job does very well, or are just hard to understand.

Plus, I don’t much like opening jobs in SQL Agent itself, since it only opens them in an ‘edit’ view. I much prefer selecting job details out of the tables in msdb; it’s just safer.

Because of this, a while back I wrote a SQL script that takes a lot of descriptive information about a job from msdb and pivots it out into a table. The table automatically has as many columns as are required – I have a server where one job has 41 steps, so it gets 41 step columns, each in order.

[read on for sample code…]

Read More
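
The sample code in the post builds the column list dynamically to fit the widest job; as a simplified, fixed-width sketch of the same pivot against the msdb job tables (four step columns instead of a dynamic count):

-- Sketch: one row per job, with the first few step names pivoted
-- into columns. The post's full version generates columns dynamically.
SELECT
    j.name AS job_name,
    MAX(CASE WHEN s.step_id = 1 THEN s.step_name END) AS step_1,
    MAX(CASE WHEN s.step_id = 2 THEN s.step_name END) AS step_2,
    MAX(CASE WHEN s.step_id = 3 THEN s.step_name END) AS step_3,
    MAX(CASE WHEN s.step_id = 4 THEN s.step_name END) AS step_4
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysjobsteps AS s
  ON s.job_id = j.job_id
GROUP BY j.name
ORDER BY j.name;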

SQL Agent Jobs: Checking for failed steps at the end of a job

I use SQL Agent a lot, and it is handy for many things, but it can be frustrating not to be able to pass state information between steps.

For example, I have a job where I want to execute data verification steps against multiple tables. It makes sense to have the check for each table in its own step with a clear label to simplify troubleshooting – so when the job fails, you can see which step had an error and know from the name exactly what’s wrong. But I want all steps in the job to run regardless of whether a step fails – I want to check for failure at the end.

The most basic way to do this is to have each job step log to a table. That isn’t really bad, but I’d rather not maintain a table for every job of this type. It leaves room for failure, it’s more to maintain, and it just feels redundant for the most part: all of the job history is tracked in msdb anyway, so shouldn’t I be able to use that?

Well, I think I can… [read on for sample code…]

Read More
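
The post walks through the full approach, but a minimal sketch of the idea is a final T-SQL job step that queries msdb’s job history and fails if any earlier step failed. The job name below is a hypothetical placeholder, and the run_date filter is a crude “this run” check that assumes the job runs at most once a day:

-- Sketch: last step of a job; raise an error if any earlier step
-- in today's run failed, so the job as a whole reports failure.
DECLARE @job_id UNIQUEIDENTIFIER;

SELECT @job_id = job_id
FROM msdb.dbo.sysjobs
WHERE name = N'Data Verification';  -- hypothetical job name

IF EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobhistory AS h
    WHERE h.job_id     = @job_id
      AND h.step_id    > 0  -- skip the job-outcome row (step_id = 0)
      AND h.run_status = 0  -- 0 = failed
      -- run_date is an int in yyyymmdd form; compare against today
      AND h.run_date   = CONVERT(INT, CONVERT(CHAR(8), GETDATE(), 112))
)
    RAISERROR('An earlier step in this job failed.', 16, 1);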