Triggers – good or bad?

Long time readers might remember my posts from years ago about implementing Table-Valued Parameters in SQL 2005, or about TVPs in SSIS, and think that I have some interesting views about triggers. I’m not sure whether Steve Jones (@way0utwest) knew this when he decided on this month’s T-SQL Tuesday topic, but either way, it’s been a long time and triggers are worth talking about.

They’re worth talking about because people misunderstand them and misuse them. And therefore they are hated.

Triggers are code that is executed when something in the database happens, such as an insert, delete, or update for DML triggers, or created, dropped, and so on for DDL triggers. The event ‘triggers’ the action, hence the name (but don’t get me started about the fact that the ‘trigger’ is not the thing that does the triggering; it’s the action).
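As a minimal sketch (the table and trigger names here are invented for illustration, not from any particular system), a DML trigger looks something like this:

CREATE TABLE dbo.Price
(
    ProductID int NOT NULL PRIMARY KEY,
    ListPrice decimal(10,2) NOT NULL
);

CREATE TABLE dbo.PriceAudit
(
    ProductID int NOT NULL,
    OldPrice decimal(10,2) NOT NULL,
    NewPrice decimal(10,2) NOT NULL,
    ChangedAt datetime NOT NULL DEFAULT (GETDATE())
);
GO
-- Fires after any UPDATE on dbo.Price, in the same transaction as the update itself
CREATE TRIGGER dbo.tr_Price_Update
ON dbo.Price
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.PriceAudit (ProductID, OldPrice, NewPrice)
    SELECT d.ProductID, d.ListPrice, i.ListPrice
    FROM deleted AS d
    JOIN inserted AS i ON i.ProductID = d.ProductID;
END;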

And therein lies the problem. Someone does something which seems to be innocent, but then finds that some code has started executing, and the innocence of their tiny operation is lost. And who knows what awful things are done!? And all within the same transaction as your innocent little update…

But the problem isn’t what they do, it’s that you didn’t know it was there. Just like asbestos.

People complain about nested triggers, about triggers that are overly complex, about triggers which do too many things… but stored procedures have all these faults too. It’s just that you knew that the stored procedure was there, and you didn’t know that the trigger was there.

I wish that triggers were more visible inside the tools (Management Studio / Operations Studio / Visual Studio), right alongside stored procedures and procedural functions (those ones that use BEGIN & END and are bad, rather than inline functions). They’re code and should be treated as such.

Once you start to see triggers as code, and particularly if you use INSTEAD OF triggers rather than AFTER triggers, you can start to see them as being not dissimilar to stored procedures that have access to handy internal tables. In those two posts I linked to at the start, I show ways you can use triggers to effectively have table-valued parameters without user-defined table types. I think this is really handy and somewhat elegant, except that the code isn’t obvious. Of course you can document them, and find ways to let your DBAs know they’re there… but still they will be considered semi-visible, and be nefarious by default.
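To give the flavour of that idea (a hedged sketch with invented names, not the code from those older posts): you insert into a shell view, and the INSTEAD OF trigger receives the whole batch of rows in the inserted pseudo-table, which behaves a lot like a table-valued parameter.

CREATE TABLE dbo.OrderLines
(
    OrderID int NOT NULL,
    ProductID int NOT NULL,
    Quantity int NOT NULL
);
GO
CREATE VIEW dbo.NewOrderLines
AS
SELECT CONVERT(int, NULL) AS OrderID,
       CONVERT(int, NULL) AS ProductID,
       CONVERT(int, NULL) AS Quantity
WHERE 1 = 0;   -- a shell view that never returns rows; it just defines the 'parameter' shape
GO
CREATE TRIGGER dbo.tr_NewOrderLines_Insert
ON dbo.NewOrderLines
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- 'inserted' plays the role of the table-valued parameter
    INSERT dbo.OrderLines (OrderID, ProductID, Quantity)
    SELECT OrderID, ProductID, Quantity
    FROM inserted;
    -- ...plus whatever other set-based logic you would have put in a stored procedure
END;

An application (or an SSIS data flow) then just inserts into dbo.NewOrderLines as if it were a table, and the trigger does the rest.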

Like many features of technology, they will be used by well-meaning developers, and often the impact will be negative rather than positive. You may not be able to change the fact that they’re used without refactoring major parts of the architecture (again, like asbestos), but there are some things you can do:

1. Be pragmatic. Accept that you will come across systems that use triggers. Don’t overreact or ask that developers be fired – remember that you’ve seen worse.

2. Document them. Make sure that you don’t forget they’re there, and that those that come after you can discover them easily.

3. Learn to appreciate the power of them. I’m not saying you should start implementing them, unless you’re pushing data around using SSIS and are wishing you had TVPs, but you could look at them as being like stored procedures and appreciate that they could be an elegant solution.

4. Tune them. You tune the stored procedures in your system, and you can tune the triggers too. Every bad thing you come across is an opportunity to demonstrate your strength as a DBA.

Thanks again to Steve for hosting this month’s T-SQL Tuesday. For more on triggers, go and check out the other posts this month. I’ll leave it up to you to decide whether triggers are good or bad – I think it’ll depend on how they’ve been done.

@rob_farley

Developing a new niche for presenting

This year I wanted to set a goal for coming up with a new range of topics to present on.

As much as I think a large portion of the SQL community need to learn more about Joins (and to start looking at them differently in terms of the functionality they’re providing) and Sargability (in general!), two talks that I did at last year’s Data Platform Summit in Bengaluru, I was feeling like they were topics I’d done quite a lot over the years. And they’d been recorded a few times, which made me feel like I maybe wasn’t providing as much value to attendees as I would’ve liked.

Coming up with ideas was looking like it was going to be hard. I had definitely hit a brick wall (thanks for hosting, Wayne!).

You see, I wanted to come up with something different.

These days I have introduced Cognitive Services and AI into the topics I teach about, but teaching courses feels different (to me, at least) than presenting at conferences. Nothing is jumping out at me yet in that area. I spent a bit of time looking, but nothing felt right.

And then I found myself remembering SQLBits.

At SQLBits, I was in a weird mood. My (UK-based) grandmother had died about a day before, while I was en route to see her. I was emotionally exhausted and also had grief to deal with, but figured I would go to the event anyway. I saw friends from the SQL community, played around with a few magic tricks, and decided to do a very different type of presentation – one without a computer. Not just slide-free, but technology-free (apart from the microphone I was wearing). It wasn’t brave of me – the word is probably ‘stupid’. You can watch it here if you want, but essentially I used large pieces of paper and a thick pen. I got people up from the audience, put them in groups, moved them around, and distracted the people in the room from the bounds of a typical presentation. I think I mostly pulled it off.

At SQL Saturday in Melbourne I gave a talk about index internals, with the goal of explaining how indexes work and why your index choices can affect the ways that your queries run, the space that they take, and more. And again, I did it without a computer. I wrote on flip charts, and got people up from the audience to hold them. My original plan was to stick them on the wall, but the audience participation worked nicely, and I’m sure I pulled it off.

I presented in these styles because I found myself wanting to do something different with the topics that I was already doing. I wasn’t developing a new niche of topic, just a new style. I had given up trying to think of a new topic, but still wanted to do something different. I’d done slideless talks, and whiteboard sessions using OneNote, but this was new.

And this is going to be my niche. It’s not a new topic niche, just a new presentation niche. I’ll spend a bit of time exploring it, I think – coming up with different ways to explain the stuff you potentially already know.

So over the next year (or more), I might not come up with topics that are entirely new, but I want to present them in ways you might not have seen before. I want to come up with entirely new topic areas every couple of years still, but in the meantime, I can take audiences in different directions. It might not work for everyone, but if it works out for most, then that’s good enough for me.

I didn’t break through that brick wall that I’d hit. It’s still there, and it’s still made of brick, but I’m looking at it from a different angle now. Sometimes the best way through a wall is simply to take a step back, and walk around it a different way.

@rob_farley

Code I couldn’t live without

This month’s writing assignment is to describe code I wouldn’t want to live without, and Bert (thanks for hosting, by the way!) actually says “that you’ve written”. Well, that last point made me think, because the examples I’m most likely to wish I had with me are all ones that I can find easily enough online, and are very unlikely to have been written by me.

The most obvious candidate – something I wrote, used multiple times, and was really frustrated about when I needed it and couldn’t find it – is actually one I’ve used remarkably rarely in comparison: a warehouse date table with a bunch of computed columns (in fact, every column except one called ActualDate was computed, and persisted of course). I’d taken the time to work out all the nuances of nondeterminism to make sure that it worked correctly, handling financial years, English month and day names, variations around the start of the week, and all kinds of things.
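The shape was roughly like this – a sketch from memory with invented column names (and an Australian July-to-June financial year), not the original script:

CREATE TABLE dbo.DimDate
(
    ActualDate date NOT NULL PRIMARY KEY,
    CalendarYear AS (YEAR(ActualDate)) PERSISTED,
    CalendarMonth AS (MONTH(ActualDate)) PERSISTED,
    -- DATENAME(month, ...) depends on the session language, so it's nondeterministic and can't be persisted;
    -- spelling the English names out keeps the column deterministic
    MonthName AS (CASE MONTH(ActualDate)
                      WHEN 1 THEN 'January' WHEN 2 THEN 'February' WHEN 3 THEN 'March'
                      WHEN 4 THEN 'April' WHEN 5 THEN 'May' WHEN 6 THEN 'June'
                      WHEN 7 THEN 'July' WHEN 8 THEN 'August' WHEN 9 THEN 'September'
                      WHEN 10 THEN 'October' WHEN 11 THEN 'November' ELSE 'December'
                  END) PERSISTED,
    FinancialYear AS (YEAR(ActualDate) + CASE WHEN MONTH(ActualDate) >= 7 THEN 1 ELSE 0 END) PERSISTED,
    DayOfWeekMondayBased AS ((DATEDIFF(day, 0, ActualDate) % 7) + 1) PERSISTED
);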

…but it felt very multidimensional-centric, and I haven’t used it in quite a while.

It was hard to find another example, but I’m going to go with the example that I use to demonstrate the pain of NOLOCK. Because I’ve typed this fresh lots of times, and I find it’s still very compelling. So I’ll write it for you again.

A case against NOLOCK

I’m going to create a table and insert exactly 1 million rows. This particular table will be stored as a clustered index, and will contain 1 million GUIDs.
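Something like this (the table name is my stand-in; the original demo script isn’t reproduced here):

CREATE TABLE dbo.NoLockDemo
(
    SomeGuid uniqueidentifier NOT NULL,
    CONSTRAINT pkNoLockDemo PRIMARY KEY CLUSTERED (SomeGuid)
);

-- Exactly 1 million GUIDs
INSERT dbo.NoLockDemo (SomeGuid)
SELECT TOP (1000000) NEWID()
FROM sys.all_columns AS c1
CROSS JOIN sys.all_columns AS c2;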

Next I prove that there are a million rows.
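A simple count does the proving:

SELECT COUNT(*) FROM dbo.NoLockDemo;
-- returns 1000000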

Now without inserting or deleting any rows, I’m going to shuffle them.
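The shuffle is just an update that gives every row a new GUID, so each row moves to a different spot in the clustered index without the row count ever changing:

-- Session 1
UPDATE dbo.NoLockDemo
SET SomeGuid = NEWID();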

And if while this is happening, I count the rows in a different session, I have to wait for that query to finish.

But what if I use NOLOCK?
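That is, in a second session, while the update is still running, something like:

-- Session 2
SELECT COUNT(*) FROM dbo.NoLockDemo WITH (NOLOCK);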

I run this over and over, to demonstrate the results that come out.

I don’t have to wait for the answer, but the number it gives me is…

…WRONG!

That’s right, I won’t get the correct answer until the first query is finished. At least, I’m very unlikely to.

Of course it’s exaggerating what happens in real life, but I find it makes the point.

@rob_farley

Azure SQL Database

Björn Peters challenges us this month to write about Azure SQL Database, describing all our experiences and opinions about it. I’m not sure either you, the reader, or I, the writer, have enough time for that, so I’m going to give you a quick version.

…and that is – yes, you should be using it.

Yes, because I think you should embrace Azure SQL Database. For all kinds of reasons.

Not least, because this week the SQL to probably my favourite movie is being released. I haven’t seen it yet, but from the trailers, I’m going to assume that Mr Incredible feels like he’s been left on the shelf while Elastigirl (not even branded as “Mrs Incredible”) goes out to save the world. That’s all I know (okay, except that Jack-Jack is discovering his powers), and I won’t be updating this post with any further information once I’ve seen it.

You see, in the first movie, the supers have been forced to retire and join the real world. As their secret identities become their only identities, they lose touch with who they are. Edna tells Helen to show Bob that she remembers who he is, because he feels like the world has forgotten. The kid on the tricycle doesn’t know what he’s waiting for from Bob – something amazing, he guesses. Bob is too…

Of course, Bob gets hired by Mirage, and gets to relive the glory days, discovering purpose again briefly. But it’s short-lived as it turns out Syndrome is developing technology to give everyone superpowers. And when everyone’s super, no one will be.

This is a fear that I think many database professionals have been dealing with in recent times, as the cloud has become increasingly prevalent. Infrastructure as a Service takes away the need to buy hardware. Platform as a Service takes away the need to install SQL Server. As for NoSQL, well, that’s about as blunt as it gets. Those superpowers that database professionals have developed over the years are being taken away as the world comes to realise that it’s better to put those duties in the hands of cloud providers like Microsoft.

I’m not saying that Microsoft is like Syndrome – keen to be a sidekick but only until they can grow into a nemesis, destined to try to steal our children until we throw a car at them (okay, I think that analogy falls down fairly quickly) – I’m saying that in this time of rapid technological advancement, which isn’t about to slow down, trying to hold onto the past is pointless. In the movie, if Syndrome didn’t develop that technology, someone else would have done. Hopefully they wouldn’t have used it to make the supers redundant, but rather to develop the supers (which I suspect is a theme in Incredibles II). I think it’s interesting that at the climax of the first movie, the family leverage technology to beat the giant evil robot. They can’t do it by themselves, but need to leverage the technology to be stronger.

We need to leverage technology, and not get depressed about the fact that the world is changing. Learn to use Azure SQL Database, rather than shying away from it. Understand the great things that you can do with it, and how it will enhance your environment. This technology is not to try to make you redundant, it’s to help you beat problems you didn’t have to fight before.

So remember who you are. Confront your problem. Fight. Win. And call me when you get back, darling, I enjoy our visits.

@rob_farley

Getting started as a presenter

For me, the trick to getting started in presenting in the technical community was to find something which made sense to me, but that other people found hard. I think from memory, the first thing that I presented was a .Net thing – maybe AJAX? – and I explored the things that people typically found hard, showing why it was easier than they expected.

…and it was no different to telling a story to a group of people at a barbecue, or around the water cooler.

Let me use Sargability as an example, and show you how the water cooler conversation goes:

 

Person 1: Rob, why did you change my query this morning?

Me: Because it was performing badly, and all you needed to do to fix that was to apply a dateadd on getdate() instead of doing a datediff between your column and getdate().

Person 1: But why should that make a difference?

Me: Well, you have an index on your column, not on the difference between your column and now.

Person 2 (see – there’s a group!): Yeah nah, I saw the plan, and that index was being used.

Me: Sure, but it was a Scan, not a Seek. It was having to work out the datediff on every row and see if it was small enough.
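(In case you want the code version of that conversation, the change was along these lines – table and column names invented for the example.)

-- Not sargable: the datediff has to be evaluated for every row, so the index on OrderDate gets scanned
SELECT OrderID
FROM dbo.Orders
WHERE DATEDIFF(day, OrderDate, GETDATE()) < 30;

-- Sargable: the column is left alone, so the index on OrderDate can be seeked
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate > DATEADD(day, -30, GETDATE());
-- (the two predicates aren't perfectly identical around day boundaries, but that's the water-cooler version)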

 

…and you see you have the basis of a presentation. If you can explain it to a few people, and find ways to help them understand it, you can explain it to a larger group.

You might be fine about demonstrating a concept to a co-worker on your computer – so there’s the demo section of your presentation. You don’t need to use slides (I don’t!) to present to a crowd, you just need to see how to explain things in a way that they can connect to – addressing their problems and giving them confidence in your solution.

I think the trick to being confident on stage is to know that you’re the expert who has the answer they want. When you have an abstract that explains what scenario you’re trying to solve, people are coming along to listen just like they’d gather around your desk to listen. They want to know the answer, and they’re on your side. They’re not trying to beat you down – they genuinely want to learn. You might have the odd person in the crowd who wants to argue that they’re the real expert, but if you know your stuff, you can disarm them with things like “I love that you’ve pointed out that the index does get used, because you’re right, it does! Just having that index helps a lot – but I want to show you how to use that index in a better way.”

You can work off the people in the room the same way you can work off the people around your desk. When someone at your desk says “Hang on, what did you do there?”, you know that you should be more careful when explaining that to a bigger room, and invite them to come along the journey with you.

Presenting is a great way to give back to the community, and that’s why I’ve written this piece for T-SQL Tuesday this month. The community wants to learn from you – so let them do just that!

@rob_farley

The tools in my day

Another month, another writing assignment for T-SQL Tuesday – although I did have a post go live over at sqlperformance.com, where I wrote about ways that I put queries together when I have separate date and time columns.

This month, with the #tsql2sday theme being tools, hosted by Jens Vestergaard (@vestergaardj), I get to write about other methods I use to do my job.

In case you hadn’t realised, I’m a consultant, and tend to be on-site at my customers’ offices, as I help them with their data. Sometimes I’m helping them with some advanced analytics things, such as making predictions with Machine Learning. Sometimes I’m in standard BI/DW projects, as they’re trying to make sense of their data. Sometimes I’m troubleshooting performance issues, corruption issues, or whatever they need. Or I’m teaching them, about Power BI, querying, tuning, whatever. And more often than not, I’m using a machine that they’ve allocated to me – and using it alongside the customers.

So I don’t always (or typically) get to install new software on the machines I generally use. So the tools I get to use the most are things like SSMS, ML Studio, Visual Studio, DAX Studio, Power BI Desktop Studio, PowerShell Studio, and so on – the things that they already have on their machine – along with various scripts that I will use. If I’m doing a salvage operation from a corrupt database, I might be able to start thinking about other tools such as the HxD hex editor, or Plan Explorer (actually, the whole Sentry One suite is a great set of tools, that I love finding at customer sites when I do).

I want to wax lyrical a little about one of the tools that I feel like I’ve got to know better over the past couple of years, since SQL 2014 came around.

I wrote about Live Query Statistics within SSMS a while back – and even presented at conferences about how useful it is for understanding how queries run…

…but what I love is that at customers where I have long-running queries to deal with, I can keep an eye on the queries as they execute. I can see how the Actuals are forming, and quickly notice whether the iterations in a Nested Loop are going to be unreasonable, or whether I’m happy enough with things. I don’t always want to spend time tuning a once-off query, but if I run it with LQS turned on, I can easily notice if it’s going to be ages before I see any rows back, see any blocking operators that are going to frustrate me, and so on. Using TOP might introduce row goals that twist my plan more than I’d like, but using LQS lets me feel how long a query will take overall, as well as giving me good insight into it without too much cost. I figure the cost of having the server store the execution profile (I wouldn’t do this on a production server) is easier to wear than spending a while poring over an estimated plan and assessing whether I should do a bit of a rewrite or introduce some indexes.
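As an aside, the live actuals that LQS displays can also be queried straight from the DMV it relies on (SQL 2014 and newer) – a quick sketch, with a made-up session id:

-- Watch a running query's actual row counts build up, operator by operator
-- (the session being watched needs the profiling infrastructure on, e.g. by running with Live Query Statistics)
SELECT node_id,
       physical_operator_name,
       row_count,            -- actual rows so far
       estimate_row_count    -- what the optimizer expected
FROM sys.dm_exec_query_profiles
WHERE session_id = 71        -- hypothetical session id
ORDER BY node_id;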

LQS is a great teaching tool, but it also serves as a progress bar for me on queries. I recognise blocking operators, such as Hash Aggregate or Top N Sort, and I can tell when an estimate on an operator is way off – to the point that I can quickly get an idea about whether I should let the query keep going or not. To be able to sit at someone’s desk and show them that their query will start returning rows fairly soon (or not) is excellent, and having a better guess about whether it’s going to be another five minutes or another hour is also tremendously useful.

…because that way I can figure out whether to get some more coffee.

@rob_farley

 


T-SQL Tuesday #100

When T-SQL Tuesday started back in late 2009, I have to admit that I wasn’t sure why Adam used three digits. I didn’t know whether it would reach 10, let alone 100. 2018 seemed so far away – if you’d asked me if I thought T-SQL Tuesday would still be going in 2018, I wouldn’t’ve had a clue. And I definitely wouldn’t’ve thought I’d still be contributing, having never missed one. I’m pleased I haven’t missed any though, because it keeps me writing. I enjoy having a writing assignment each month – it forces my hand, even though it’s self-inflicted.

Anyhow…

In his invitation post, our host and T-SQL Tuesday owner Adam Machanic (@adammachanic) makes some interesting points about the types of topics that have made up the list so far, that there has been a significantly high proportion of topics that are not technical in nature, showing that there is a large interest in personal development as well as technical. I suspect this implies that people want to know what people are thinking, not just how to solve problems. People can ask for technical help on a variety of sites, but perhaps T-SQL Tuesday is unique in its ability to ask people for their opinions. I do think that the next hundred topics will be less technical and more personal.

Adam asks what things will be like in another hundred months’ time, and it’s an excellent question.

I think people will continue to become more and more interested in their personal development rather than their technical development. And I think this will increasingly be the way of the world. Many years ago, we saw robots replace factory lines, and today we see self-driving cars. As time continues, I’m confident that this trend will continue, and that “regular folk” like us will need to solve fewer and fewer technical problems. Performance tuning seems to be on the way out already, with some of the advancements that we’re seeing in Azure SQL Database. AI advancements allow us to create systems that are smarter than ever before, and database consultants are becoming business consultants.

And this is because I see ‘business consultancy’ as being one of those ‘hard’ tasks. In school we learned about P/NP problems – P being the set of problems that are solvable in useful time (the real word is ‘polynomial’, but I figure the real test is whether it can be solved quickly enough to be useful), and NP being the set of problems that aren’t, despite it being quick to demonstrate that a particular solution is a valid one (this is just my paraphrase of it all – don’t beat me up for not being exactly correct). Obviously you can just test every possible answer to see if it’s a valid solution, but that’s not quick enough to be useful. The world of computer science is still finding ways to turn problems from NP into P, and we shouldn’t have to spend time thinking about P problems.

If I assume that consulting will always be needed, why isn’t it possible to turn it into a P problem? I suppose that’s like the question of whether a computer can ever produce beauty on a reliable basis. And I think it can. It should be possible to analyse a given situation, and produce a strong-enough plan of attack to produce sensible advice. A self-driving car has a very restricted set of parameters in which it can make decisions, but ten years ago we would’ve all considered this was too NP hard to become useful. Medical consulting has been replaceable (to a certain extent at least) by ‘expert’ systems for many years. So I suspect that business consulting and certainly data consulting ought to be able to go the same way.

…maybe not in the next 100 months though. And hopefully not until I’m ready to retire. I might still be writing a blog post every month, mind you, unless that can be automated as well.

@rob_farley


Revisiting GROUP BY v DISTINCT

It’s been nearly eleven years since I compared GROUP BY and DISTINCT in a blog post. So when Aaron Bertrand (@aaronbertrand) challenged people to review his list of bad SQL habits, and I saw a post from last year at the top, I thought I’d have to have a look.

And I loved his post! It used almost exactly the same example that I had, and came up with the same results, ten years later (I don’t mean that his query took ten years to return results, just that his post is from 2017 and mine is 2007). It’s great to see the ‘next generation’ of blog posts coming up with the same conclusions. (I get to pick on Aaron like this because he was an MVP almost ten years before me (despite being born in the same year as me!), and I therefore consider him a different generation of SQL expert. An ‘elder statesman’ if you like.)

What I want to explore in this post is the particular example that we both used… to bring an important point that could be missed because of the similarity of our examples.

You see, we both happened to use a FOR XML concatenation query, looking back at the same table. We did this to simulate a practical GROUP BY – somewhere that you might feel like GROUP BY is useful, but where the aggregate you really want (string concatenation) isn’t available as a built-in function like SUM or MAX. Ok, Aaron could’ve used the really new STRING_AGG, but for an old-timer like me, back in the SQL Server 2005 days, that wasn’t available.

The upshot was that we both used a sub-query, one example using GROUP BY…
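(As a sketch, with AdventureWorks-style names rather than the exact code from either post:)

SELECT sod.ProductID,
       STUFF((SELECT ', ' + CONVERT(varchar(20), sod2.SalesOrderID)
              FROM Sales.SalesOrderDetail AS sod2
              WHERE sod2.ProductID = sod.ProductID
              FOR XML PATH('')), 1, 2, '') AS OrderList
FROM Sales.SalesOrderDetail AS sod
GROUP BY sod.ProductID;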

…and one using DISTINCT instead.
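(The DISTINCT counterpart of the same sketch:)

SELECT DISTINCT sod.ProductID,
       STUFF((SELECT ', ' + CONVERT(varchar(20), sod2.SalesOrderID)
              FROM Sales.SalesOrderDetail AS sod2
              WHERE sod2.ProductID = sod.ProductID
              FOR XML PATH('')), 1, 2, '') AS OrderList
FROM Sales.SalesOrderDetail AS sod;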

And, we both demonstrated that GROUP BY was quicker, because the QO didn’t need to do all the string concatenation for every row, only for every unique WhateverWeWereGroupingBy.

Go read those posts to see the results. I’m not going to repeat them here.

Back again? Ok, cool.

Now, this isn’t particularly different to any other sub-query, like this one in the AdventureWorks database (in both its GROUP BY and DISTINCT forms):
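-- (a reconstructed sketch, since the original snippets aren't included here)
-- GROUP BY flavour
SELECT od.ProductID,
       (SELECT p.Name
        FROM Production.Product AS p
        WHERE p.ProductID = od.ProductID) AS ProductName
FROM Sales.SalesOrderDetail AS od
GROUP BY od.ProductID;

-- DISTINCT flavour
SELECT DISTINCT od.ProductID,
       (SELECT p.Name
        FROM Production.Product AS p
        WHERE p.ProductID = od.ProductID) AS ProductName
FROM Sales.SalesOrderDetail AS od;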

The corresponding plans are shown here.

[query plan images for the two versions]

Both posts demonstrate that using DISTINCT, the Query Optimizer comes up with a worse plan, because it doesn’t seem to know that the sub-query will return the same values on subsequent calls, figuring that it should see what comes back each time before checking for distinct values. It’s like it worries that the DISTINCT option could produce different results. Using GROUP BY it only calls the sub-query for the distinct values in the GROUP BY clause, which could mean massive savings.

Now, I’m sure you’re thinking “Hang on – I could’ve written that query another way…”, and you’re right. You probably would’ve written it like this:
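-- (a reconstructed sketch of that rewrite, not the original snippet)
SELECT DISTINCT od.ProductID, p.Name
FROM Sales.SalesOrderDetail AS od
JOIN Production.Product AS p
    ON p.ProductID = od.ProductID;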

…and the query plan for this one looks like this, which is the same shape as the GROUP BY query – despite using DISTINCT.

[query plan image]

But if I use a LEFT JOIN instead of an INNER JOIN, then the behaviour changes significantly, and it goes back to joining the results of the aggregate, rather than aggregating the results of the join.
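(Sketching that variation:)

SELECT DISTINCT od.ProductID, p.Name
FROM Sales.SalesOrderDetail AS od
LEFT JOIN Production.Product AS p
    ON p.ProductID = od.ProductID;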

[query plan image]

I’m a big fan of LEFT JOIN – it makes it very clear that I’m not filtering my base table when I do the join. So I don’t want to hear that the performance is worse.

Luckily, this sounds like a thing about GROUP BY. After all, I only want to group by od.ProductID – there can only ever be one Product Name per ProductID. And yet,
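-- (again a reconstructed sketch; p.Name has to appear in the GROUP BY even though it's functionally dependent on ProductID)
SELECT od.ProductID, p.Name
FROM Sales.SalesOrderDetail AS od
LEFT JOIN Production.Product AS p
    ON p.ProductID = od.ProductID
GROUP BY od.ProductID, p.Name;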

[query plan image]

it doesn’t help at all.

Hmm.

There is definitely an advantage to using GROUP BY over DISTINCT when using sub-queries. But the story isn’t actually as clear as all that, and is affected by a bunch of other scenarios.

The trick is to look at where the aggregation is going on, and to remember that an aggregate operator reduces the number of rows in play. Anything which reduces the number of rows being pulled through the plan should be pushed as far to the right as possible. If your aggregate will do that the most, do what you can (query rewrites, indexes, constraints, preferably not hints) to get that operator as far right as possible. But you could do the same with a join operator, or a Filter operator, whatever is reducing the rows.

Understand that GROUP BY is generally better than DISTINCT, because it’s a technique for pushing the operator to the right hand side of a join. But remember it’s only part of the story.

@rob_farley

PS… I feel like I want to expand on this a bunch more, but I need to get this published tonight for T-SQL Tuesday. It’s the 99th one, and I don’t want to miss one now, just because I could type about this for hours.

SQL Server CPU Usage by Session

Troubleshooting high CPU usage on a SQL Server Database is an art, but there is a defined methodology to follow to find the root cause of high CPU. This can involve breaking down the overall server CPU usage to a more granular level, first discovering that it’s SQL Server that’s the problem (because way too often it’s something else!), down to exploring specific plan operators in a particular problematic query. Finding that problematic query, identifying the high CPU consumer, means identifying the CPU usage by session.

sys.dm_exec_requests shows the CPU time, but it’s cumulative – it doesn’t give the CPU consumption by each session at the current time. You can see how much CPU usage a session has had since it started, but it doesn’t show you what’s going on right now. To explore that, we need to query sys.dm_exec_requests repeatedly, and look for the differences. We need to collect the CPU usage for a time interval to identify the high CPU consumers.

Here is a query which does that across a one second interval. You can go as small or as large as you like by changing the ‘WAITFOR’ value – but the closer you want to what’s going on “right now”, the smaller you want that interval to be. I find one second to be a good starting point. The query populates snapshots of sys.dm_exec_requests into a table variable called @sessionsCPU, which I then query to do some analysis. I could do this with LAG, but my code will work on pre-2012 instances too, and I can’t see a reason to change it.
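Since the script itself isn’t reproduced above, here’s a sketch reconstructed from that description – the @sessionsCPU table variable and the one-second WAITFOR come from the text, while the column choices are mine:

DECLARE @sessionsCPU TABLE
(
    snapshot_id tinyint NOT NULL,
    session_id smallint NOT NULL,
    cpu_time int NOT NULL,
    sql_handle varbinary(64) NULL
);

-- First snapshot
INSERT @sessionsCPU (snapshot_id, session_id, cpu_time, sql_handle)
SELECT 1, session_id, cpu_time, sql_handle
FROM sys.dm_exec_requests
WHERE session_id <> @@SPID AND session_id > 50;

-- The sampling interval: make it smaller or larger to taste
WAITFOR DELAY '00:00:01';

-- Second snapshot
INSERT @sessionsCPU (snapshot_id, session_id, cpu_time, sql_handle)
SELECT 2, session_id, cpu_time, sql_handle
FROM sys.dm_exec_requests
WHERE session_id <> @@SPID AND session_id > 50;

-- CPU consumed by each session during the interval (a self-join rather than LAG, so it works pre-2012)
SELECT s2.session_id,
       s2.cpu_time - s1.cpu_time AS cpu_ms_in_interval,
       t.text AS query_text
FROM @sessionsCPU AS s1
JOIN @sessionsCPU AS s2
    ON s2.session_id = s1.session_id
OUTER APPLY sys.dm_exec_sql_text(s2.sql_handle) AS t
WHERE s1.snapshot_id = 1
  AND s2.snapshot_id = 2
ORDER BY cpu_ms_in_interval DESC;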

Here’s an example of what I get. Obviously this isn’t from a client machine – it’s just my local environment. But you’ll see that I can tell that right now, the query that’s causing me most difficulty is the one running on session 71.

I can see the query, and this lets me delve further into the problem. I don’t have an answer yet, but I’m an awful lot closer to finding the culprit and solving the high CPU issue than I was before I ran this query.

Hope this query helps you in troubleshooting high CPU issues and gets you a step closer to resolving them.

Happy Learning,
Manu

Twenty years on, how I made reporting run quicker

Twenty years ago, in January 1998, was when I touched SQL Server for the first time as a consultant – that was SQL Server 6.0, and I’d be involved in an upgrade to 6.5 at that same customer only a couple of months later. I would go on to spend quite a bit of time with this customer, and helped them implement a lot of things over time. I was a programmer back then, and saw the database as just part of the application. It was only later that I started to see applications as peripheral to the data rather than the other way around.

One major problem that was happening at this customer was the reporting. Every month they would need to provide reports to a part of the government, and these reports were simply painful to produce. Let me explain…

The basic system was designed to monitor the health of machines that were dotted all around the city – machines that would send a signal every few moments to indicate their status. Normally these were “nothing wrong” messages, but occasionally there would be a problem with a particular part of the machine, and our application would manage getting the message out to someone who could go and fix it. Later, a “fixed” message would come through – I’m sure you get the picture. There were also manual reports of downtime that didn’t have automated signals, which needed to be included in the mix too, for those times when people would phone up and say that there was a problem.

The reporting was about the overall uptime of each machine, including whether there was a machine available at each location at a given time, whether the combination of components that were unavailable meant that the entire machine was considered down, or whether so many machines at a location were down that the location itself needed to be marked as unavailable, and so on. The data for a single machine was like:

Machine Time Message
100 19980109 12:33:05 OK
100 19980109 12:34:23 HEAT WARNING
100 19980109 12:34:26 COMP1 ERROR
100 19980109 12:34:29 TOUCHPAD ERROR
100 19980109 12:35:12 COMP1 NOERROR
100 19980109 12:35:19 HEAT NORMAL
100 19980109 12:30:00 Report – TOUCHPAD ERROR
100 19980109 12:35:00 Report – TOUCHPAD FIXED

 

…and so on. Timestamps were available for when phone calls came in, but apparently the reporting needed to show the time that the person said they saw it was unavailable, not the time the alert came through. Go figure.

The reporting would need to have looked at this data, and figured out that the touchpad was unavailable between 12:30 and 12:35, and that Component 1 was unavailable for 46 seconds. There would’ve been tables available to show that the touchpad being down meant the machine would’ve been considered only 50% available, and that Component 1 being down meant 60% available, but a combination of both touchpad and component 1 meant completely down. So this data would mean: 4 minutes 26 seconds of 50% available, 34 seconds of 0% available, and 12 seconds of 60% available. The reports were in place before I went there – they just didn’t run well.

They ran by querying all the data, and then looping through it (in VB3) to figure out the gaps, and to try to keep track of what was available and what wasn’t. If the report were at the machine level, it would pull back everything for that machine during the time period of interest and then try to figure it all out, but if it were at the location level, or even city-wide, it was dealing with a very different amount of data. It would take a few minutes to run a report for a single machine for a week, and many reports that were available had simply never been run. Fixing this was one of the biggest challenges I had in my first few months of consulting.

I didn’t know until years later that what I did was to create a data warehouse of sorts.

I could see that the moment when someone hit the button to say “Show me the report for machine 100 last week” was the wrong time to do the reporting, because no one wanted to watch a computer for a few minutes – or a few HOURS if they wanted a larger amount of time or more than a handful of machines. That meant the work needed to be done ahead of time.

So I relaxed many of the rules of database design I’d learned, and came up with some denormalised tables that were purely designed for analysing the data. This would store the amount of downtime per machine or per location, but be populated ahead of time – updating it with maybe an hour or so latency, long enough to allow for the phoned-in alerts. I’d store the number of seconds since the last machine event, and what the availability proportion was during that time period, and include extra dummy events for the start of each hour, so that the report could handle exact time periods. I spread the heavy-lifting across time, pulling the new messages in as regularly as I could, to avoid doing that heavy lifting when time was more urgent. This was ETL, although I just considered it was reporting-preparation, and didn’t learn what ETL meant until much later.

Once that data was in place, I could just sum the product of number of seconds * availability, and the reports could be produced quickly and easily, no matter how large they were. There was no longer a need for a cursor at report runtime – just regular aggregations.
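In today’s T-SQL, the report query this made possible looks roughly like this (a sketch with invented table and column names – AvailabilityProportion being a decimal between 0 and 1):

DECLARE @ReportStart datetime = '19980101',
        @ReportEnd   datetime = '19980201';

SELECT MachineID,
       SUM(SecondsSinceLastEvent * AvailabilityProportion) * 1.0
         / SUM(SecondsSinceLastEvent) AS OverallAvailability
FROM dbo.MachineAvailability
WHERE EventTime >= @ReportStart
  AND EventTime <  @ReportEnd
GROUP BY MachineID;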

For me, this trick of doing the work at a time when it’s less urgent is key to a lot of things in the database world (and also in the rest of life). I use indexes to keep a sorted copy of data so that aggregations and joins can run faster. I use filtered indexes to separate out lists of new data from old data. I use data warehouses and cubes to handle history and data quality and complex calculations, so that business users can explore data to their hearts’ content. If I can put up with doing the work later, then maybe it’s just fine to not have those indexes in place – perhaps I’m totally okay about scanning a table of 1000 rows every so often. But if the cost of maintaining a separate copy (index, warehouse, whatever) isn’t going to be significant when spread out over time, but that copy is going to make life much easier when it comes to the crunch, then I’m going to do it.

I often think back to some of the customers / projects that I’ve had over the years and roll my eyes at what I had to overcome. Generally though, the worse they were, the more I learned and the stronger I became.

It’s all just practice for whatever is going to come in the future, right?

@rob_farley

This post was written in response to this month’s T-SQL Tuesday, hosted by Arun Sirpal.
