Why I Love DML Triggers in SQL Server

on September 11, 2018

It’s TSQL Tuesday today! If you’re not familiar, TSQL Tuesday is a monthly blogging event where a Microsoft Data Platform community member chooses a writing prompt. This is a great thing if you’re interested in starting a technical blog and wonder, “What would I write about?” It’s also great if, like me, you occasionally want a little outside inspiration to shake things up.

This month’s topic is from Steve Jones: we’re invited to write about an experience that we’ve had with triggers.

My first encounter with triggers was overwhelmingly positive

When I first edged my way into a Junior DBA-ish role, I worked with a complex application with many large databases. Customers loaded and configured data into a (mostly) OLTP database, and then data was replicated to multiple other systems: partly to publish data to an ad-serving platform, partly to transform the data for reporting.

Triggers were used extensively in these downstream systems to detect changes. It went like this:

  • Transactional replication publications were set up on the (mostly) OLTP database.
  • Transactional replication subscriptions were set up on each downstream server. A dedicated database was used for replication articles on each instance.
  • After replication was initialized, DML triggers were created on each article in the subscriber database. For each modification, the trigger would insert a narrow row into a “delta” table related to that article (a simplified sketch of this pattern follows the list).
  • The “delta” tables were in their own schema, and contained row identifiers, timestamp columns, and custom indexes for processing. This enabled batches to be efficiently pulled from these tables for processing into the related system.
  • Cleanup processes periodically pulled processed rows out of the delta tables in the background (and indexes were designed to prevent the cleanup process from fighting with data processing jobs and inserts).
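To make that concrete, here’s a minimal sketch of what one of those “delta” tables and its article trigger might have looked like. The schema, table, and column names (deltas.CustomerDelta, dbo.Customer, and so on) are hypothetical stand-ins rather than the original code, and the real tables carried more metadata and more carefully tuned indexes than this:

```sql
-- Hypothetical names throughout: a "deltas" schema, a replicated dbo.Customer
-- article, and a narrow change-log table. This is a sketch of the pattern,
-- not the original implementation.
CREATE SCHEMA deltas AUTHORIZATION dbo;
GO

CREATE TABLE deltas.CustomerDelta
(
    CustomerDeltaId bigint IDENTITY(1,1) NOT NULL,
    CustomerId      int          NOT NULL,   -- row identifier from the article
    ChangeType      char(1)      NOT NULL,   -- 'I', 'U', or 'D'
    ChangedAtUtc    datetime2(3) NOT NULL
        CONSTRAINT DF_CustomerDelta_ChangedAtUtc DEFAULT (SYSUTCDATETIME()),
    ProcessedAtUtc  datetime2(3) NULL,       -- stamped by the downstream processing job
    CONSTRAINT PK_CustomerDelta PRIMARY KEY CLUSTERED (CustomerDeltaId)
);
GO

-- Index aimed at pulling unprocessed batches without colliding with cleanup.
CREATE INDEX IX_CustomerDelta_Unprocessed
    ON deltas.CustomerDelta (ProcessedAtUtc, CustomerDeltaId)
    INCLUDE (CustomerId, ChangeType);
GO

-- DML trigger on the subscriber-side article: one narrow row per modified row.
CREATE TRIGGER dbo.TR_Customer_CaptureDelta
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Inserts and updates (rows present in the inserted pseudo-table).
    INSERT deltas.CustomerDelta (CustomerId, ChangeType)
    SELECT i.CustomerId,
           CASE WHEN EXISTS (SELECT 1 FROM deleted AS d WHERE d.CustomerId = i.CustomerId)
                THEN 'U' ELSE 'I' END
    FROM inserted AS i;

    -- Deletes (rows present only in the deleted pseudo-table).
    INSERT deltas.CustomerDelta (CustomerId, ChangeType)
    SELECT d.CustomerId, 'D'
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted AS i WHERE i.CustomerId = d.CustomerId);
END;
GO
```

Downstream jobs would then grab a batch of rows where ProcessedAtUtc was still NULL, process them into the related system, stamp them as processed, and leave the background cleanup job to delete old stamped rows.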

This design worked very well, and it was a very positive introduction to triggers for me: they worked exactly as they’d been designed, and they were easy to understand and support. Making schema changes to the tables involved was rarely a problem, and we had well-established processes for modifying articles and re-initializing replication.

Why didn’t you use Change Tracking?

The primary reason triggers were chosen over Change Tracking was that Change Tracking didn’t exist yet: these tools were written and tuned well before that feature was released.


Looking back, even if we had had Change Tracking as an option, we benefited from the granular control we had over this custom system and from the elegant simplicity of the implementation. The Change Tracking feature in SQL Server has remained something of an ‘edge feature’, and people using it have had to deal with a variety of issues over time: a need to customize the indexes on the change tables, statistics estimation problems with the change functions, and problems with cleanup. (I wrote about these a while back over here.)

I don’t mean to say that Change Tracking is bad, just that it’s not necessarily as ‘easy’ a solution as you might assume from reading the documentation. You can’t simply set it up quickly, skip load testing, and assume that cleanup is going to work like a charm for every scenario: you need to really think about scaling it… kinda like you’d have to think about scaling it if you were writing your own custom application to handle this type of processing with… TRIGGERS! There are pros and cons to each approach.
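To be fair, part of why it’s tempting to assume Change Tracking is ‘easy’ is that the basic setup really does look this simple. A minimal sketch, with hypothetical database and table names:

```sql
-- Enable Change Tracking at the database level. Retention and auto-cleanup
-- are exactly the knobs that need real thought under load.
ALTER DATABASE MyAppDB
SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
GO

-- Enable it per table (the table needs a primary key).
ALTER TABLE dbo.Customer
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = OFF);
GO

-- Consumers poll for changes since the version they last processed.
DECLARE @last_synced_version bigint = 0;   -- persisted by the consumer between runs

SELECT ct.CustomerId,
       ct.SYS_CHANGE_OPERATION,            -- 'I', 'U', or 'D'
       ct.SYS_CHANGE_VERSION
FROM CHANGETABLE(CHANGES dbo.Customer, @last_synced_version) AS ct;

-- Save this off as the starting point for the next poll.
SELECT CHANGE_TRACKING_CURRENT_VERSION();
```

The flip side of that simplicity is that the internal change tables and cleanup behavior are largely managed for you, which is where the scaling and tuning questions above come in.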

And even today I wonder… what if we’d been able to get batch mode processing on those delta tables with a little careful custom coding? Hmm…
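If I were experimenting with that idea on a current version of SQL Server, one plausible route would be a columnstore index on the delta table (again using the hypothetical deltas.CustomerDelta from the earlier sketch), since a columnstore index is the classic way to make those queries eligible for batch mode:

```sql
-- A nonclustered columnstore index over the hypothetical delta table would let
-- the batch-pulling and aggregation queries run in batch mode. Whether the
-- trigger's constant trickle of small inserts plays nicely with the columnstore
-- deltastore is exactly the part that would need load testing.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_CustomerDelta
    ON deltas.CustomerDelta (CustomerId, ChangeType, ChangedAtUtc, ProcessedAtUtc);
```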

I still kinda love triggers

A first impression goes a long way, you know?

Sure, people can do terrible things with triggers. But when they’re used well, they can be simple, reliable, and incredibly useful.