Twenty years on, how I made reporting run quicker

Twenty years ago, in January 1998, I touched SQL Server for the first time as a consultant – that was SQL Server 6.0, and I’d be involved in an upgrade to 6.5 at that same customer only a couple of months later. I went on to spend quite a bit of time with this customer, and helped them implement a lot of things over time. I was a programmer back then, and saw the database as just part of the application. It was only later that I started to see applications as peripheral to the data rather than the other way around.

One major problem at this customer was the reporting. Every month they would need to provide reports to a part of the government, and these reports were simply painful to produce. Let me explain…

The basic system was designed to monitor the health of machines that were dotted all around the city – machines that would send a signal every few moments to indicate their status. Normally these were “nothing wrong” messages, but occasionally there would be a problem with a particular part of the machine, and our application would manage getting the message out to someone who could go and fix it. Later, a “fixed” message would come through – I’m sure you get the picture. There were also manual reports of downtime that had no automated signals – for those times when people would phone up and say there was a problem – and these needed to be included in the mix as well.

The reporting was about the overall uptime of each machine: whether there was a machine available at each location at a given time, whether the combination of components that were unavailable meant that the entire machine was considered down, whether so many machines at a location were down that the location itself needed to be marked as unavailable, and so on. The data for a single machine looked like this:

Machine   Time                 Message
100       19980109 12:33:05    OK
100       19980109 12:34:23    HEAT WARNING
100       19980109 12:34:26    COMP1 ERROR
100       19980109 12:34:29    TOUCHPAD ERROR
100       19980109 12:35:12    COMP1 NOERROR
100       19980109 12:35:19    HEAT NORMAL
100       19980109 12:30:00    Report – TOUCHPAD ERROR
100       19980109 12:35:00    Report – TOUCHPAD FIXED

…and so on. Timestamps were available for when phone calls came in, but apparently the reporting needed to show the time that the person said they saw it was unavailable, not the time the alert came through. Go figure.

The reporting needed to look at this data and figure out that the touchpad was unavailable between 12:30 and 12:35, and that Component 1 was unavailable for 46 seconds. There were lookup tables to show that the touchpad being down meant the machine was considered only 50% available, that Component 1 being down meant 60% available, and that the combination of both touchpad and Component 1 meant completely down. So this data meant: 4 minutes 26 seconds at 50% availability, 34 seconds at 0%, and 12 seconds at 60%. The reports were in place before I went there – they just didn’t run well.

They ran by querying all the data, and then looping through it (in VB3) to figure out the gaps, and to try to keep track of what was available and what wasn’t. If the report were at the machine level, it would pull back everything for that machine during the time period of interest and then try to figure it all out, but if it were at the location level, or even city-wide, it was dealing with a very different amount of data. It would take a few minutes to run a report for a single machine for a week, and many reports that were available had simply never been run. Fixing this was one of the biggest challenges I had in my first few months of consulting.

I didn’t know until years later that what I did was to create a data warehouse of sorts.

I could see that the moment when someone hit the button to say “Show me the report for machine 100 last week” was the wrong time to do the reporting, because no one wanted to watch a computer for a few minutes – or a few HOURS if they wanted a larger amount of time or more than a handful of machines. That meant the work needed to be done ahead of time.

So I relaxed many of the rules of database design I’d learned, and came up with some denormalised tables that were purely designed for analysing the data. These would store the amount of downtime per machine or per location, but be populated ahead of time – updated with maybe an hour or so of latency, long enough to allow for the phoned-in alerts. I’d store the number of seconds since the last machine event, and what the availability proportion was during that time period, and include extra dummy events for the start of each hour, so that the report could handle exact time periods. I spread the heavy lifting across time, pulling the new messages in as regularly as I could, to avoid doing that work when time was more urgent. This was ETL, although I just considered it reporting preparation, and didn’t learn what ETL meant until much later.
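
If I were sketching that pre-calculated table in today’s T-SQL, it might look something like this – the names are invented for the illustration, and of course the original was built on SQL Server 6.x, where the details were rather different:

    -- Hypothetical shape of the pre-calculated reporting table: one row per machine
    -- per interval between events (including the dummy events injected at the start
    -- of each hour), storing how long the interval lasted and how available the
    -- machine was during it.
    CREATE TABLE dbo.MachineAvailability
    (
        MachineID              int          NOT NULL,
        PeriodStart            datetime     NOT NULL,
        PeriodSeconds          int          NOT NULL, -- seconds until the next event
        AvailabilityProportion decimal(3,2) NOT NULL, -- 1.00 = fully up, 0.00 = down
        CONSTRAINT PK_MachineAvailability PRIMARY KEY (MachineID, PeriodStart)
    );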

Once that data was in place, I could simply sum the product of seconds and availability, and the reports could be produced quickly and easily, no matter how large they were. There was no longer any need for a cursor at report runtime – just regular aggregations.
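
With a table shaped like the one sketched above, the report becomes a plain aggregation. Using the earlier example intervals (266 seconds at 50%, 34 at 0%, 12 at 60%), the weighted sum works out to 140.2 available seconds out of 312 – something like this:

    -- Weighted uptime per machine for a reporting window: SUM(seconds * availability).
    -- No cursor, no row-by-row work at report time.
    SELECT MachineID,
           SUM(PeriodSeconds)                          AS TotalSeconds,
           SUM(PeriodSeconds * AvailabilityProportion) AS AvailableSeconds,
           SUM(PeriodSeconds * AvailabilityProportion)
               / SUM(PeriodSeconds)                    AS AvailabilityRatio
    FROM dbo.MachineAvailability
    WHERE PeriodStart >= '19980109'
      AND PeriodStart <  '19980116'
    GROUP BY MachineID;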

For me, this trick of doing the work at a time when it’s less urgent is key to a lot of things in the database world (and also in the rest of life). I use indexes to keep a sorted copy of data so that aggregations and joins can run faster. I use filtered indexes to separate out lists of new data from old data. I use data warehouses and cubes to handle history, data quality, and complex calculations, so that business users can explore data to their hearts’ content. If I can put up with doing the work later, then maybe it’s just fine not to have those indexes in place – perhaps I’m totally okay with scanning a table of 1000 rows every so often. But if the cost of maintaining a separate copy (index, warehouse, whatever) isn’t going to be significant when spread out over time, and that copy is going to make life much easier when it comes to the crunch, then I’m going to do it.
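
As a small example of that trade-off, a filtered index like this one (the table and flag column are made up for the illustration) maintains a tiny, cheap copy of just the unprocessed rows, so the frequent “what’s new?” query never has to scan the history:

    -- A filtered index: a small pre-sorted copy of only the rows still marked as new,
    -- paid for incrementally as data changes rather than at query time.
    CREATE INDEX IX_Messages_Unprocessed
    ON dbo.Messages (MachineID, MessageTime)
    INCLUDE (MessageText)
    WHERE IsProcessed = 0;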

I often think back to some of the customers / projects that I’ve had over the years and roll my eyes at what I had to overcome. Generally though, the worse they were, the more I learned and the stronger I became.

It’s all just practice for whatever is going to come in the future, right?

@rob_farley

This post was written in response to this month’s T-SQL Tuesday, hosted by Arun Sirpal.


My Learning Goals for 2018

Some months it’s a good thing to be prompted to post by T-SQL Tuesday, because it gets me to talk about something I should’ve talked about already, but haven’t. This is one of those months, thanks to Malathi Mahadevan (@sqlmal)’s topic around Learning Goals.

I don’t generally write about my Learning Goals, because I tend to base what I’m learning around my clients’ needs. If I feel like I need to brush up on Azure Data Lake because I have a project around it, then I’ll take some time for that. I like to keep on top of new things as they come through, but then hone my skills when I have a proper opportunity. I don’t tend to study much for certifications – I didn’t prepare much for the MCM exams that I took (and passed – just!) five years ago, nor the couple of exams I took recently to keep hold of my ‘Partner Seller’ status.

That said, it’s always good to see things that are coming up and dive a bit deeper into them. And for me, at the moment, that’s the edX courses in the Data Science space. I’m not sure whether I’ll get through to the end of them all, because I’ll happily let myself get distracted by something else. The Advanced Analytics area is one I’m particularly fond of – it’s a nice contrast to the Pure Maths subjects I did at university, which shape my approach to relational data. So far I’ve gone through the courses quickly, picking up the odd bit that I didn’t remember. The “Essential Statistics for Data Analysis using Excel” course was one that rang bells, but also reminded me how long it’s been since I studied this kind of material.

Where do I think I’ll be in 12 months? Well, like the answer to all good SQL questions, it depends. I’m sure I’ll still be analysing data and helping businesses with their data-focused ambitions, and that I will have learned quite a bit during that time.

@rob_farley

My inspirational team

This month the T-SQL Tuesday theme is “Who has made a meaningful contribution to your life in the world of data?”, and is hosted by Ewald Cress (@sqlonice).

It was good to reflect on this over the past week, and there are (of course!) a lot of people that I could list. I could list some of the people that were prominent in the community when I first discovered it all those years ago. I could list those people that I turn to when I have questions about things.

But there’s one group of people that for me stands out above all the rest. So much so that I’ve hired them.

I’ve had my company for nine years now, and during that time I’ve had up to seven people in the team. Well over a dozen different people have represented the LobsterPot Solutions brand, and they’ve all been amazing. Here are a few stats about our past and present LobsterPot team:

  • There have been MVPs, of course. Six MVPs, in fact, have been employed here. One became an MVP after leaving us, and two no longer have MVP status, but still… six!
  • There have been user group leaders. Seven (or maybe eight) of us lead or have led groups.
  • A few former employees now lead their own businesses.
  • One former employee is completing a PhD.
  • One former employee is a Senior Program Manager at Microsoft.
  • One former employee attends the PASS Summit on her vacation time.

You see, they are all leaders. They almost all give presentations at events. They all make meaningful contributions to the data community.

And they have all made a meaningful contribution to my own life. I may have paid them a salary, but they’ve all left an impact on me personally.

@rob_farley

The effort of relocating blog sites

Hopefully you’ve realised that I’m not posting at sqlblog.com any more. There’s still some excellent content there, but it has come up time and time again that I should be posting at a company blog site – so the move has now been made. I’ve also brought across the material that I wrote at msmvps.com, which had also been Community Server until a few years ago when it became WordPress.

Adam Machanic (@AdamMachanic) had put together some C# code for moving posts off Community Server (which is what sqlblog uses) onto WordPress, and combined with a regular WordPress Export + Import from msmvps.com, I had most of my content moved over. I don’t code in C# very often these days, but it felt nice. I spent some time in PowerShell and XML tweaking dates in the WordPress export file to make sure they matched the time zone that I’d originally used, which introduced some frustrating character-mapping issues that needed fixing in MySQL. All in all, I felt like I was moving around a variety of toolsets that I don’t often swim in.

A big thanks again to Barb and Susan who host msmvps.com still – they (particularly Barb) have helped a lot with sorting out some of my content from the old site. Some things are still broken from years back, but they did find the picture of me with Desmond Tutu, so I’m happy. At some point I’ll be going through old posts and seeing what doesn’t work.

I no longer use Categories – I lost the msmvps.com categories when they moved to WordPress, and the sqlblog.com ones didn’t seem to want to come across either. I don’t know that I ever did categories particularly well, so perhaps it’s a good opportunity to stop pretending that I do. Not everything should sit within a ‘sql’ category.

I discovered that I have quite a bit of content that needed ‘powershell’ formatting. There is still a bunch of formatting on old posts that I won’t get to for some time though (there are almost 500 posts, so I’ll take a bit of a run at the rest another day).

I had to install some plugins to get a few things to work. SyntaxHighlighter was one, but also RWD’s Responsive Image Maps to get an image map from an old T-SQL Tuesday round-up working. I tried a stats plugin, only to find that I needed a later version of PHP to support it. Luckily I don’t think I was showing an error for too long, but I’m really not keen on the error messages that WordPress gives.

CSS was a bit of fun too, getting the “Popular Posts” widget to look similar to the “Recent Posts” one. I ended up just finding a way to have the Popular Posts widget use the same CSS class as Recent Posts.

And it turns out I do like the Segoe typeface. I know it’s Microsoft’s one, and perhaps that’s what makes it feel right for me – I spend so long looking at Microsoft web pages that it feels quite natural. Since we deal almost entirely in the Microsoft space, it’s quite appropriate too. We’ll probably use the same when we do a rebranding of our company site.

@rob_farley

New blog site!

It’s about time I moved content to a more central site – one that I own, rather than one controlled by others. One that is part of the company, which can help demonstrate the capabilities of the company, and where the other skilled people within the team can also post content.

So I’ve moved the content that I had written at sqlblog.com across (big thanks to Peter and Adam for having me blog there for so long), and the content from msmvps.com (where I’d blogged from the time I first became a SQL MVP, even before the company was set up). I’ll still write for sqlperformance.com when I have something they’ll be interested in, and I’ll post something here to let you know that I’ve done that.

Feel free to let me know what you think of it all – whether I should be using WordPress differently, for example. You can ping me via email, or DM me on Twitter at @rob_farley.

I’ve redirected the Feedburner feed, but also feel free to follow this site in general.

The BigData Legacy

Trends come along, and trends pass. Some hang around for quite a while, and then move on, and some seem to disappear quickly. Often we’re glad that they’ve gone, but we still bear scars. We live and work differently because they were there. In the world of IT, I feel like this is all too common.

When ORMs became trendy, people were saying that writing T-SQL would be a thing of the past. LINQ was another way that people were reassuring the developer community that writing database queries would never again be needed. The trend of avoiding T-SQL through ORMs has hung around a bit, and many developers have recognised that ORMs don’t necessarily create the best database experiences.

And yet when we consider what’s happening with Azure SQL Data Warehouse (SQL DW), you find yourself querying the data through an interface. Sure, that interface looks like another database, but it’s not where the data is (the data is in the 60 databases that live in the back end), and it has to translate our query into a series of other queries that actually run. And we’re fine with this. I don’t hear anyone complaining about the queries that appear in SQL DW’s explain plans.

When CLR came in, people said it was a T-SQL killer. I remember a colleague of mine telling me that he didn’t need to learn T-SQL, because CLR meant that he would be able to do it all in .Net. Over time, we’ve learned that CLR is excellent for all kinds of things, but it’s by no means a T-SQL killer. It’s excellent for a number of reasons – CLR stored procedures or functions have been great for things like string splitting and regular expressions – and we’ve learned its place now.

I don’t hear people talking about NoSQL like they once did, and it’s been folded somehow into BigData, but even that seems to have lost a little of its lustre from a year or two ago when it felt like it was ‘all the rage’. And yet we still have data which is “Big”. I don’t mean large, necessarily, just data that satisfies one of the three Vs – volume, velocity, variety.

Of these Vs, Volume seems to be something of a misnomer. Everyone thinks what they have is big, but if you compared it to others, it probably wouldn’t actually be that big. Generally, if people are thinking “BigData” because they think their data is big, then they just need a reality check, and should then deal with it like all their regular data.

Velocity is interesting. If your system can’t respond to things quickly enough, then perhaps pushing your data through something like Stream Analytics could be reasonable, to pick up the alert conditions. But if your data is flowing through to a relational database, then is it really “BigData”?

And then we have Variety. This is about whether your data is structured or not. I’m going to suggest that your data probably is structured – and BigData solutions wouldn’t disagree with this. It’s just that you might not want to define the structure when the data first arrives. To get data into a structured environment (such as a data table), types need to be tested, the data needs to be converted appropriately, and if you don’t have enough control over the data that’s coming in, the potential for something to break is high. Deferring that sorting-out until you need to query the data back again means that you have a larger window in which to deal with it.

So this is where I think BigData is leaving its legacy – in the ability to accept data even if it doesn’t exactly fit the structure you have. I know plenty of systems that will break if the data arriving is in the wrong structure, which makes change and adaptability hard to achieve. A BigData solution can help mitigate that risk. Of course, there’s a price to pay, but for those times when the structure changes more often than you’d like, BigData’s ideology can definitely help.

We see this through the adoption of JSON within SQL Server, which is even less structured than XML. We see PolyBase’s external tables define structure separately from the data collection itself. Concepts that were learned away from the relational world have now become part of our relational databases.
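
As a small illustration of deferring structure until query time, OPENJSON (SQL Server 2016 onwards) lets you store a payload loosely and impose columns only when you read it back – the shape of the JSON here is invented for the example:

    -- Structure is applied when the data is queried, not when it arrives.
    DECLARE @payload nvarchar(max) = N'[
        {"machineId": 100, "status": "OK",          "at": "1998-01-09T12:33:05"},
        {"machineId": 100, "status": "COMP1 ERROR", "at": "1998-01-09T12:34:26"}
    ]';

    SELECT MachineID, StatusMessage, EventTime
    FROM OPENJSON(@payload)
         WITH (
             MachineID     int          '$.machineId',
             StatusMessage nvarchar(50) '$.status',
             EventTime     datetime2    '$.at'
         );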

Don’t dismiss fads that come through. Look into them, and try to spot those things which have more longevity. By adopting those principles, you might find yourself coming through as a stronger professional.

@rob_farley

This post was put together for T-SQL Tuesday 95, hosted by Derik Hammer (@sqlhammer). Thanks Derik!

Interviews and niches

T-SQL Tuesday turns this month to the topic of job interviews. Kendra Little (@kendra_little) is our host, and I really hope her round-up post is in the style of an interview. I’m reminded of a T-SQL Tuesday about three years ago on a similar topic, but I’m sure there will be plenty of new information this time around – the world has moved on.

I’m not sure when my last successful job interview was. I know I went through phases when I guess I was fairly good in job interviews (because I was getting job offers), and phases when I was clearly not very good in job interviews (because I would get interviews but not be able to convert them into job offers), and at some point I reached a point where I stopped doing interviews completely. That’s the phase I’m still in.

I hit that point when I discovered my niche (which sounds like “neesh” in my language, not “nitch”). For me, it was because I realised that I had a knack for databases and started exploring that area more – writing, presenting, helping others – until people noticed and started approaching me. That’s when interviewing stops being a thing. It doesn’t necessarily mean starting your own business, or even changing jobs – it just means that people know who you are and come to you. You no longer have to sit in front of a panel and prove your worth, because they’ve already decided they want you.

So now people approach me for work through LobsterPot Solutions, and although there is sometimes a bidding phase when we need to compete against other companies, there is no ‘interview’ process in the way that there was when I was an employee.

What’s your niche? And are you spending time developing that?

There’s career advice that talks about the overlap between something you enjoy doing, something you’re good at, and something that people are prepared to pay for. The thing is that people won’t pay you for it unless they know that you’re the person they need, rather than someone else. So get yourself out there. Prove yourself. Three years ago I asked “When is your interview?” and said that you need to realise that even before your interview they’ve researched you and considered your reputation. Today I want to ask you how your niche is going. Have you identified that thing you enjoy, and that people will pay for? And are you developing your skills in that area?

Your career is up to you. You can respond to job ads and have interviews. Or you can carve your own space.

Good luck.

@rob_farley

Learning the hard way – referenced objects or actual objects

This month’s T-SQL Tuesday is about lessons we’ve learned the hard way. Which, of course, is the way you learn best. It’s not the best way to learn, but if you’ve suffered in your learning somewhat, then you’re probably going to remember it better. Big thanks to Raul Gonzalez (@sqldoubleg) for dragging up these memories.

Oh, I could list all kinds of times I’ve learned things the hard way, in almost every part of my life. But let’s stick to SQL.

This was a long while back… 15-20 years ago.

There was a guy who needed to get his timesheets in. It wasn’t me – I just thought I could help… by making a copy of his timesheets in a separate table, so that he could prepare them there instead of having to use the clunky Access form. I’d gone into the shared Access file that people were using, made a copy of it, and then proceeded to clear out all the data that wasn’t about him, so that he could get his data ready. I figured once he was done, I’d just drop his data in amongst everyone else’s – and that would be okay.

Except that right after I’d cleared out everyone else’s data, everyone else started to complain that their data wasn’t there.

Heart-rate increased. I checked that I was using the copy, not the original… I closed it, opened the original, and saw that sure enough, only his data was there. Everyone else’s (including my own) data was gone.

And then it dawned on me – these tables were linked back to SQL in the back end. I’d copied the reference, but it was still pointing at the same place. All that data I’d deleted was gone from the actual table. I walked over to the boss and apologised. Luckily there was a recent backup, but I was still feeling pretty ordinary.

These kinds of problems can hurt in all kinds of situations, even if you’re not using Access as a front-end. Other applications, views within SQL, Linked Servers, linked reports – plenty of things contain references rather than the actual thing. When you delete something, or change something, or whatever, you had better be sure that you’re working in the right environment.
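
The same trap exists entirely inside SQL Server too – a view looks like its own set of data, but it’s only a reference to the base table, so “clearing the copy” clears the real thing. A contrived sketch (all the names here are made up):

    -- The view looks like a separate copy of the data, but it's only a reference.
    CREATE TABLE dbo.Timesheets
    (
        EmployeeID  int          NOT NULL,
        WorkDate    date         NOT NULL,
        HoursWorked decimal(4,1) NOT NULL
    );
    GO
    CREATE VIEW dbo.MyTimesheets
    AS
    SELECT EmployeeID, WorkDate, HoursWorked
    FROM dbo.Timesheets
    WHERE EmployeeID = 42;
    GO
    -- This doesn't empty a "copy" - it deletes employee 42's rows from dbo.Timesheets itself.
    DELETE FROM dbo.MyTimesheets;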

I don’t even know the best way to have confidence that you’re safe on this. You can help by colouring Prod tabs differently in SSMS with SSMS Tools Pack, but it’s not going to guarantee that you’re okay. You need to be a little paranoid about it. Learn to check and double-check. Because ultimately, data is too valuable to make that kind of mistake.

@rob_farley

DevOps and your database

I’m a consultant. That means I have to deal with whatever I come across at customer sites. I can recommend change, but when I’m called in to fix something, I generally don’t get to insist on it. I just have to get something fixed. That means dealing with developers (if they exist) and with DBAs, and making sure that anything that I try to fix somehow works for both sides. That means I often have to deal with the realm of DevOps, whether or not the customer knows it.

DevOps is the idea of having a development story which improves operations.

Traditionally, developers would develop code without thinking much about operations. They’d get some new code ready, deploy it somehow, and hope it didn’t break much. And the Operations team would brace themselves for a ton of pain, and start pushing back on change, and be seen as a “BOFH”, and everyone would be happy. I still see these kinds of places, although for the most part, people try to get along.

With DevOps, the idea is that developers work in a way that means that things don’t break.

I know, right.

If you’re doing the DevOps things at your organisation, you’re saying “Yup, that’s normal.” If you’re not, you’re probably saying “Ha – like that’s ever going to happen.”

But let me assure you – it can. For years now, developers have been doing Continuous Integration, Test-Driven Development, Automated Builds, and more. I remember seeing these things demonstrated at TechEd conferences in the middle of the last decade.

But somehow, these things are still considered ‘new’ in the database world. Database developers look at TDD and say “It’s okay for a stateless environment, but my database changes state with every insert, update, or delete. By its very definition, it’s stateful.”

The idea that a stored procedure with particular parameters should have a specific impact on a table with particular characteristics (values and statistics – I would assume structure and indexes would be a given) isn’t unreasonable. And it’s this that can lead to the understanding that, whilst a database is far from stateless, state can be a controllable thing. Various states can become part of various tests: does the result still apply when there are edge-case rows in the table? Is the execution plan suitable when there are particular statistics in play? Is the amount of blocking reasonable when the number of transactions is at an extreme level?

Test-driven development is a lot harder in the database-development world than in the web-development world. But it’s certainly not unreasonable, and to have confidence that changes won’t be breaking changes, it’s certainly worthwhile.
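
To make that concrete, here’s a minimal sketch of a state-controlled test, using tSQLt purely as an illustration (the post doesn’t prescribe a framework, and the table, values, and expected result are all invented): the test fakes the table, owns its contents, and asserts the calculation.

    -- A state-controlled database test: the test itself owns the table's contents.
    EXEC tSQLt.NewTestClass 'Availability';
    GO
    CREATE PROCEDURE Availability.[test weighted uptime handles overlapping outages]
    AS
    BEGIN
        -- Arrange: FakeTable swaps in an empty copy, so this test controls the state
        -- completely (dbo.MachineAvailability is a made-up table for the sketch).
        EXEC tSQLt.FakeTable 'dbo.MachineAvailability';

        INSERT dbo.MachineAvailability (MachineID, PeriodSeconds, AvailabilityProportion)
        VALUES (100, 266, 0.5),  -- one component down
               (100,  34, 0.0),  -- two components down
               (100,  12, 0.6);  -- the other component down

        -- Act: the calculation under test
        DECLARE @actual decimal(10,1) =
            (SELECT SUM(PeriodSeconds * AvailabilityProportion)
             FROM dbo.MachineAvailability
             WHERE MachineID = 100);

        -- Assert
        EXEC tSQLt.AssertEquals @Expected = 140.2, @Actual = @actual;
    END;
    GO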

The investment to implement a full test suite for a database can be significant, depending on how thorough it needs to be. But it can be an incremental thing. Elements such as source control ought to be put in place first, but there is little reason why database development shouldn’t adhere to DevOps principles.

@rob_farley

(Thanks to Grant Fritchey (@gfritchey) for hosting this month’s T-SQL Tuesday event.)

“Stored procedures don’t need source control…”

Hearing this is one of those things that really bugs me.

And it’s not actually about stored procedures, it’s about the mindset that sits there.

I hear this sentiment in environments where there are multiple developers. Where they’re using source control for all their application code. Because, you know, they want to make sure they have a history of changes, and they want to make sure two developers don’t change the same piece of code, maybe they even want to automate builds, all those good things.

But checking out code and needing it to pass all those tests is a pain. So if there’s some logic that can be put in a stored procedure, then that logic can be maintained outside the annoying rigmarole of source control. I guess this is appealing because developers are supposed to be creative types, and should fight against the repression, fight against ‘the man’, fight against [source] control.

When I come across this mindset, I worry a lot.

I worry that code within stored procedures could be lost if multiple people decide to work on something at the same time.

I worry that code within stored procedures won’t be part of a test regime, and could potentially be failing to consider edge cases.

I worry that the history of changes won’t exist and people won’t be able to roll back to a good version.

I worry that people are considering that this is a way around source control, as if source control is a bad thing that should be circumvented.

I just worry.

And this is just talking about code in stored procedures. Let alone database design, constraints, indexes, rows of static data (such as lookup codes), and so on. All of which contribute to a properly working application, but which many developers don’t consider worthy of source control.

Luckily, there are good options available to change this behaviour. Red Gate’s SQL Source Control is tremendously useful, of course, and the inclusion of many of Red Gate’s DevOps tools within VS2017 would suggest that Microsoft wants developers to take this more seriously than ever.

For more on this kind of stuff, go read the other posts about this month’s T-SQL Tuesday!


@rob_farley