Non-SQL technologies

This month, Mala Mahadevan (@sqlmal) challenges us to write about technology to learn that isn’t SQL. Interesting topic – there are quite a number of technologies I feel I know outside the SQL space, but I had to give some thought to which of them I should encourage people to learn.

PowerShell was an obvious one – arguably, any command-line scripting. The number of times that people need to do things with files, or have some automated process, or need to do some old-fashioned grepping, but don’t feel confident enough to spin up a PowerShell window… it beggars belief. It’s incredibly useful and anyone technical should be across it. General data folk probably don’t need to jump right into creating their own cmdlets, but they should definitely know about ripping into files, looping, all that.

To help get you started in that, I wrote something a bit over a year ago at https://sqlperformance.com/2017/09/sql-performance/need-to-know-powershell 

The other technology I can’t go past is Excel.

Excel because it’s the data environment that regular business people know, and you should know it at least as well as they do.

They’ll want you to give them copies of data in Excel (tip: ask them what they will do with that data once you’ve given it to them!), they will create charts with Excel, they will give you data in Excel (another tip: learn about MDS and let them use Excel to maintain data stored within MDS), and they will implement their business logic there. Essentially, it’s where they do Data Analytics. Make sure you can read their Excel expressions / formulae, and the VBA code they write, and understand pivot tables, and so on. It will become a useful tool for you as well, because you can generate Excel spreadsheets through SSRS now (so long as you understand what SSRS will do around merged cells if you haven’t taken care).

And Excel can lead you onto teaching them about Power Pivot, Power BI, and Analysis Services. Your Excel experts can become expert data modellers, and you can be raising the next generation of data scientists.

So those are my two suggestions for non-SQL technologies – PowerShell and Excel. They’re not new, but they are set to stay significant for some time. If you don’t have them, you’re missing some serious foundations.

@rob_farley

Where projects often go wrong

I don’t write about our customers in a negative way. I just don’t. I don’t ever want a customer (or potential customer) to see what I’ve written about someone else and to worry about what I might be saying about them. So I don’t tweet about the times when something’s driving me crazy, or about code that I’m seeing – I don’t even tweet about the times that I’ve tuned a process from 15 minutes down to a second or two in case someone considers that I’ve just broadcast that they had a process that was taking 15 minutes.

But I do write for T-SQL Tuesday every month. Without fail. And this month, the topic is failed projects. Thanks to Jeff Mlakar (@jmlakar) for hosting.

And so I want to write about something I see which commonly makes projects fail, and that is:

A misaligned expectations-and-feedback loop

There’s a blog post I often show to customers that dates back to 2006. It’s Kathy Sierra’s post called “Don’t Make The Demo Look Too Done”, and it brings out a lot of good points.

The gist of it is that customer expectations largely depend on how finished the demo looks, because they don’t see what’s happening behind the scenes. They’re measuring the completeness of the project based on the tip of the iceberg, and not really listening to any commentary about how much work there is to do beneath the surface. I was thinking of putting a picture here of an iceberg, but I think you can imagine. You know what’s going on out of sight of the customer, and how the user interface is just a small part. It’s just that the customer might not, and will fill the gap with their imagination – either getting excited about how much progress they think has been made, or concerned about how little progress seems to have happened.

Kathy also brings out the fact that the depth of feedback will tend to be based around how complete the thing appears to be. If I asked for feedback on this post, you might say “You know, you really should’ve put a good picture of an iceberg in that last paragraph”. Or you’ll point out the fact that I have a mispelled word in this sentence. You probably won’t tell me whether you think Kathy’s post is a good way of making my argument, or whether I should continue to write blog posts every month. You’ll probably assume that the window for that is gone. (Of course, having said this, there are four types of comments that will almost certainly appear below – no prizes for the bleeding obvious.)

As a consultant who tends to get involved in customers’ businesses, not just their data (because honestly – there’s a reason why it’s called “Business Intelligence”), I do sometimes find myself saying “Now, please tell me if the opportunity for this level of feedback has passed…”, and then asking questions that probably belonged much earlier in the project. I’ll then see people taking deep breaths and wondering whether they should’ve either not invited me to that meeting or perhaps got me into meetings back when the opportunity for that level of feedback hadn’t passed.

Project-wise, I see that developers, whether database developers or user interface developers or whatever, tend to work based on their understanding of how things work, and customers won’t ask certain questions until they see how things actually look. Because before then, they don’t really get a sense that it’s being worked on. And then suddenly things look like they’re done, and they feel that the window for deeper feedback has passed them by. As much as I love the rich visuals available with Power BI, the speed with which data can be presented (and the same occurs even in SSRS) can often cause people to give the wrong level of feedback early on, and only later look at fundamental questions like whether the metrics that are being shown are being measured in the right way for the business.

All of this can lead to projects which are poorly estimated.

Customers will feel like things are going well if they’re seeing things that look pretty, and developers will become unstuck when they realise that they didn’t have key pieces of information early on because they hadn’t asked for feedback at the right times.

Sadly, there’s no easy solution, except to manage both sides better. Customer expectations need to be managed to make sure that they have a good picture of where things are at, and developers need to be managed to make sure that stories are being presented in ways that invite the right level of feedback. Methodologies can help, including rapid prototyping (although this should also be taken with a grain of salt, because prototypes can often form the top of an iceberg and mislead – everyone should understand that prototypes are designed to be learned from and thrown away), but ultimately it’s about perspective and empathy. By understanding the difficulties faced by each side, a technical manager ought to be able to manage the expectations of both customers and developers, and advocate for each in getting the right level of feedback at each point along the way.

It doesn’t surprise me that our T-SQL Tuesday host Jeff sees continuing behaviour around poor project estimation. I see it too. I’m guilty of misjudging the quality of feedback that I’ll need.

And I frequently show Kathy’s blog post to the customers and developers that I interact with, in the hope that we can all improve and get better estimations going forward.

@rob_farley

Triggers – good or bad?

Long time readers might remember my posts from years ago about implementing Table-Valued Parameters in SQL 2005, or about TVPs in SSIS, and think that I have some interesting views about triggers. I’m not sure whether Steve Jones (@way0utwest) knew this when he decided on this month’s T-SQL Tuesday topic, but either way, it’s been a long time and triggers are worth talking about.

They’re worth talking about because people misunderstand them and misuse them. And therefore they are hated.

Triggers are code that is executed when something happens in the database, such as an insert, delete, or update for DML triggers, or a create, drop, and so on for DDL triggers. The event ‘triggers’ the action, hence the name (but don’t get me started about the fact that the ‘trigger’ is not the thing that does the triggering; it’s the action).
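As a minimal illustration of the DML kind (the table and trigger names here are hypothetical, not from any particular system), a trigger might look like this:

```sql
-- Hypothetical example: audit every UPDATE against dbo.Accounts.
-- "inserted" and "deleted" are the internal tables holding the new and old versions of the rows.
CREATE TRIGGER dbo.trg_Accounts_Audit
ON dbo.Accounts
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.AccountsAudit (AccountID, OldBalance, NewBalance, ChangedAt)
    SELECT d.AccountID, d.Balance, i.Balance, SYSDATETIME()
    FROM inserted AS i
    JOIN deleted AS d ON d.AccountID = i.AccountID;
END;
```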

And therein lies the problem. Someone does something which seems to be innocent, but then finds that some code has started executing, and the innocence of their tiny operation is lost. And who knows what awful things are done!? And all within the same transaction as your innocent little update…

But the problem isn’t what they do, it’s that you didn’t know it was there. Just like asbestos.

People complain about nested triggers, about triggers that are overly complex, about triggers which do too many things… but stored procedures have all these faults too. It’s just that you knew that the stored procedure was there, and you didn’t know that the trigger was there.

I wish that triggers were more visible inside the tools (Management Studio / Operations Studio / Visual Studio), right alongside stored procedures and procedural functions (those ones that use BEGIN & END and are bad, rather than inline functions). They’re code and should be treated as such.

Once you start to see triggers as code, and particularly if you use INSTEAD OF triggers rather than AFTER triggers, you can start to see them as being not dissimilar to stored procedures that have access to handy internal tables. In those two posts I linked to at the start, I show ways you can use triggers to effectively have table-valued parameters without user-defined table types. I think this is really handy and somewhat elegant, except that the code isn’t obvious. Of course you can document them, and find ways to let your DBAs know they’re there… but still they will be considered semi-visible, and treated as nefarious by default.
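To give a flavour of that TVP-style pattern (this is a simplified sketch of the idea, not the code from those older posts, and the view and table names are hypothetical):

```sql
-- A view that never returns rows; it exists purely as an INSERT target.
CREATE VIEW dbo.NewOrderLines
AS
SELECT OrderID, ProductID, Qty
FROM dbo.OrderLines
WHERE 1 = 0;
GO
-- The INSTEAD OF trigger receives the whole batch of inserted rows at once,
-- much like a table-valued parameter, via the internal "inserted" table.
CREATE TRIGGER dbo.trg_NewOrderLines
ON dbo.NewOrderLines
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Set-based logic goes here: validation, upserts, whatever the 'procedure' needs to do.
    INSERT dbo.OrderLines (OrderID, ProductID, Qty)
    SELECT OrderID, ProductID, Qty
    FROM inserted;
END;
```

The client just inserts a set of rows into the view, and the trigger processes them all in one go.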

Like many features of technology, they will be used by well-meaning developers, and often the impact will be negative rather than positive. You may not be able to change the fact that they’re used without refactoring major parts of the architecture (again, like asbestos), but there are some things you can do:

1. Be pragmatic. Accept that you will come across systems that use triggers. Don’t overreact or ask that developers be fired – remember that you’ve seen worse.

2. Document them. Make sure that you don’t forget they’re there, and that those that come after you can discover them easily.

3. Learn to appreciate the power of them. I’m not saying you should start implementing them, unless you’re pushing data around using SSIS and are wishing you had TVPs, but you could look at them as being like stored procedures and appreciate that they could be an elegant solution.

4. Tune them. You tune the stored procedures in your system, and you can tune the triggers too. Every bad thing you come across is an opportunity to demonstrate your strength as a DBA.

Thanks again to Steve for hosting this month’s T-SQL Tuesday. For more on triggers, go and check out the other posts this month. I’ll leave it up to you to decide whether triggers are good or bad – I think it’ll depend on how they’ve been done.

@rob_farley

Developing a new niche for presenting

This year I wanted to set a goal for coming up with a new range of topics to present on.

As much as I think a large portion of the SQL community need to learn more about Joins (and to start looking at them differently in terms of the functionality they’re providing) and Sargability (in general!), two talks that I did at last year’s Data Platform Summit in Bengaluru, I was feeling like they were topics I’d done quite a lot over the years. And they’d been recorded a few times, which made me feel like I maybe wasn’t providing as much value to attendees as I would’ve liked.

Coming up with ideas was looking like it was going to be hard. I had definitely hit a brick wall (thanks for hosting, Wayne!).

You see, I wanted to come up with something different.

These days I have introduced Cognitive Services and AI into the topics I teach about, but teaching courses feels different (to me, at least) than presenting at conferences. Nothing is jumping out at me yet in that area. I spent a bit of time looking, but nothing felt right.

And then I found myself remembering SQLBits.

At SQLBits, I was in a weird mood. My (UK-based) grandmother had died about a day before, while I was en route to see her. I was emotionally exhausted and also had grief to deal with, but figured I would go to the event anyway. I saw friends from the SQL community, played around with a few magic tricks, and decided to do a very different type of presentation – one without a computer. Not just slide-free, but technology-free (apart from the microphone I was wearing). It wasn’t brave of me – the word is probably ‘stupid’. You can watch it here if you want, but essentially I used large pieces of paper and a thick pen. I got people up from the audience, put them in groups, moved them around, and distracted the people in the room from the bounds of a typical presentation. I think I mostly pulled it off.

At SQL Saturday in Melbourne I gave a talk about index internals, with the goal of explaining how indexes work and why your index choices can affect the ways that your queries run, the space that they take, and more. And again, I did it without a computer. I wrote on flip charts, and got people up from the audience to hold them. My original plan was to stick them on the wall, but the audience participation worked nicely, and I’m sure I pulled it off.

I presented in these styles because I found myself wanting to do something different with the topics that I was already doing. I wasn’t developing a new niche of topic, just a new style. I had given up trying to think of a new topic, but still wanted to do something different. I’d done slideless talks, and whiteboard sessions using OneNote, but this was new.

And this is going to be my niche. It’s not a new topic niche, just a new presentation niche. I’ll spend a bit of time exploring it, I think – coming up with different ways to explain the stuff you potentially already know.

So over the next year (or more), I might not come up with topics that are entirely new, but I want to present them in ways you might not have seen before. I want to come up with entirely new topic areas every couple of years still, but in the meantime, I can take audiences in different directions. It might not work for everyone, but if it works out for most, then that’s good enough for me.

I didn’t break through that brick wall I’d hit. It’s still there, and it’s still made of brick, but I’m looking at it from a different angle now. Sometimes the best way through a wall is simply to take a step back, and walk around it a different way.

@rob_farley

Code I couldn’t live without

This month’s writing assignment is to describe code I wouldn’t want to live without, and Bert (thanks for hosting, by the way!) actually says “that you’ve written”. Well, that last point made me think, because the examples I’m most likely to wish I had with me are all ones that I can find easily enough online, and are very unlikely to have been written by me.

The most obvious one – something I wrote, used multiple times, and was really frustrated about not being able to find when I needed it – is actually one I’ve used remarkably rarely in comparison: a warehouse date table with a bunch of computed columns (in fact, every column except one called ActualDate was computed, and persisted of course). I’d taken the time to work out all the nuances of nondeterminism to make sure that it worked correctly, handling financial years and English month and day names, and variations around the start of the week, and all kinds of things.

…but it felt very multidimensional-centric, and I haven’t used it in quite a while.
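For flavour, a minimal sketch of that kind of table might look like this (an illustration of the idea rather than the original script, and the 1 July financial year start is just an assumption):

```sql
CREATE TABLE dbo.DimDate
(
    ActualDate     date NOT NULL PRIMARY KEY,
    -- every other column is computed from ActualDate and persisted,
    -- which only works if the expressions are deterministic
    CalendarYear   AS YEAR(ActualDate)  PERSISTED,
    CalendarMonth  AS MONTH(ActualDate) PERSISTED,
    CalendarDay    AS DAY(ActualDate)   PERSISTED,
    FinancialYear  AS YEAR(DATEADD(month, 6, ActualDate)) PERSISTED  -- financial year starting 1 July
    -- month/day names and week-start columns need extra care, because functions like
    -- DATENAME and DATEPART(weekday, ...) depend on session settings and can't be persisted as-is
);
```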

It was hard to find another example, but I’m going to go with the example that I use to demonstrate the pain of NOLOCK. Because I’ve typed this fresh lots of times, and I find it’s still very compelling. So I’ll write it for you again.

A case against NOLOCK

I’m going to create a table and insert exactly 1 million rows. This particular table will be a clustered index, and will contain 1 million GUIDs.
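Something along these lines (the table and column names here are mine, but the shape is what matters):

```sql
-- The table is nothing but a clustered index on a GUID column.
CREATE TABLE dbo.GuidList
(
    SomeGuid uniqueidentifier NOT NULL
        CONSTRAINT PK_GuidList PRIMARY KEY CLUSTERED
);

-- Insert exactly 1,000,000 GUIDs.
INSERT dbo.GuidList (SomeGuid)
SELECT TOP (1000000) NEWID()
FROM sys.all_columns AS c1
CROSS JOIN sys.all_columns AS c2;
```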

Next I prove that there are a million rows.
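A straight count does it, using the table from the sketch above:

```sql
SELECT COUNT(*) AS NumRows
FROM dbo.GuidList;
-- returns 1000000
```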

Now without inserting or deleting any rows, I’m going to shuffle them.
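Giving every row a new GUID moves it to a different spot in the clustered index, so the rows shuffle around while the row count never changes:

```sql
-- Every row gets a new key value, so it moves to a different page in the clustered index.
UPDATE dbo.GuidList
SET SomeGuid = NEWID();
```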

And if, while this is happening, I count the rows in a different session, I have to wait for the shuffle to finish before I get my answer.

But what if I use NOLOCK?
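From that second session, something like:

```sql
-- Run repeatedly from a second session while the shuffling UPDATE is still going.
SELECT COUNT(*) AS NumRows
FROM dbo.GuidList WITH (NOLOCK);
```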

I run this over and over, to demonstrate the results that come out.

I don’t have to wait for the answer, but the number it gives me is…

…WRONG!

That’s right, I won’t get the correct answer until the first query is finished. At least, I’m very unlikely to.

Of course it’s exaggerating what happens in real life, but I find it makes the point.

@rob_farley

Azure SQL Database

Björn Peters challenges us this month to write about Azure SQL Database, describing all our experiences and opinions about it. I’m not sure either you, the reader, or I, the writer, have enough time for that, so I’m going to give you a quick version.

…and that is – yes, you should be using it.

Yes, because I think you should embrace Azure SQL Database. For all kinds of reasons.

Not least, because this week the sequel to probably my favourite movie is being released. I haven’t seen it yet, but from the trailers, I’m going to assume that Mr Incredible feels like he’s been left on the shelf while Elastigirl (not even branded as “Mrs Incredible”) goes out to save the world. That’s all I know (okay, except that Jack-Jack is discovering his powers), and I won’t be updating this post with any further information once I’ve seen it.

You see, in the first movie, the supers have been forced to retire and join the real world. As their secret identities become their only identities, they lose touch with who they are. Edna tells Helen to show Bob that she remembers who he is, because he feels like the world has forgotten. The kid on the tricycle doesn’t know what he’s waiting for from Bob – something amazing, he guesses. Bob is too…

Of course, Bob gets hired by Mirage, and gets to relive the glory days, discovering purpose again briefly. But it’s short-lived as it turns out Syndrome is developing technology to give everyone superpowers. And when everyone’s super, no one will be.

This is a fear that I think many database professionals have been dealing with in recent times, as the cloud has become increasingly prevalent. Infrastructure as a Service takes away the need to buy hardware. Platform as a Service takes away the need to install SQL Server. As for NoSQL, well, that’s about as blunt as it gets. Those superpowers that database professionals have developed over the years are being taken away as the world comes to realise that it’s better to put those duties in the hands of cloud providers like Microsoft.

I’m not saying that Microsoft is like Syndrome – keen to be a sidekick but only until they can grow into a nemesis, destined to try to steal our children until we throw a car at them (okay, I think that analogy falls down fairly quickly) – I’m saying that in this time of rapid technological advancement, which isn’t about to slow down, trying to hold onto the past is pointless. In the movie, if Syndrome didn’t develop that technology, someone else would have done. Hopefully they wouldn’t have used it to make the supers redundant, but rather to develop the supers (which I suspect is a theme in Incredibles II). I think it’s interesting that at the climax of the first movie, the family leverage technology to beat the giant evil robot. They can’t do it by themselves, but need to leverage the technology to be stronger.

We need to leverage technology, and not get depressed about the fact that the world is changing. Learn to use Azure SQL Database, rather than shying away from it. Understand the great things that you can do with it, and how it will enhance your environment. This technology is not to try to make you redundant, it’s to help you beat problems you didn’t have to fight before.

So remember who you are. Confront your problem. Fight. Win. And call me when you get back, darling, I enjoy our visits.

@rob_farley

Getting started as a presenter

For me, the trick to getting started in presenting in the technical community was to find something which made sense to me, but that other people found hard. I think from memory, the first thing that I presented was a .Net thing – maybe AJAX? – and I explored the things that people typically found hard, showing why it was easier than they expected.

…and it was no different to telling a story to a group of people at a barbecue, or around the water cooler.

Let me use Sargability as an example, and show you how the water cooler conversation goes:

 

Person 1: Rob, why did you change my query this morning?

Me: Because it was performing badly, and all you needed to do to fix that was to apply a dateadd on getdate() instead of doing a datediff between your column and getdate().

Person 1: But why should that make a difference?

Me: Well, you have an index on your column, not on the difference between your column and now.

Person 2 (see – there’s a group!): Yeah nah, I saw the plan, and that index was being used.

Me: Sure, but it was a Scan, not a Seek. It was having to work out the datediff on every row and see if it was small enough.

 

…and you see you have the basis of a presentation. If you can explain it to a few people, and find ways to help them understand it, you can explain it to a larger group.
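A minimal sketch of the before-and-after from that water cooler conversation (assuming a hypothetical Orders table with an index on OrderDate, and glossing over the boundary behaviour of DATEDIFF):

```sql
-- Non-sargable: the column is wrapped in DATEDIFF, so the index on OrderDate can only be Scanned.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE DATEDIFF(day, OrderDate, GETDATE()) <= 30;

-- Sargable: the arithmetic is applied to GETDATE() instead, so the index on OrderDate can be Seeked.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= DATEADD(day, -30, GETDATE());
```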

You might be fine about demonstrating a concept to a co-worker on your computer – so there’s the demo section of your presentation. You don’t need to use slides (I don’t!) to present to a crowd, you just need to see how to explain things in a way that they can connect to – addressing their problems and giving them confidence in your solution.

I think the trick to being confident on stage is to know that you’re the expert who has the answer they want. When you have an abstract that explains what scenario you’re trying to solve, people are coming along to listen just like they’d gather around your desk to listen. They want to know the answer, and they’re on your side. They’re not trying to beat you down – they genuinely want to learn. You might have the odd person in the crowd who wants to argue that they’re the real expert, but if you know your stuff, you can disarm them with things like “I love that you’ve pointed out that the index does get used, because you’re right, it does! Just having that index helps a lot – but I want to show you how to use that index in a better way.”

You can work off the people in the room the same way you can work off the people around your desk. When someone at your desk says “Hang on, what did you do there?”, you know that you should be more careful when explaining that to a bigger room, and invite them to come along the journey with you.

Presenting is a great way to give back to the community, and that’s why I’ve written this piece for T-SQL Tuesday this month. The community wants to learn from you – so let them do just that!

@rob_farley

The tools in my day

Another month, another writing assignment for T-SQL Tuesday – although I did have a post go live over at sqlperformance.com, where I wrote about ways that I put queries together when I have separate date and time columns.

This month, with the #tsql2sday theme being tools, hosted by Jens Vestergaard (@vestergaardj), I get to write about other methods I use to do my job.

In case you hadn’t realised, I’m a consultant, and tend to be on-site at my customers’ offices, as I help them with their data. Sometimes I’m helping them with some advanced analytics things, such as making predictions with Machine Learning. Sometimes I’m in standard BI/DW projects, as they’re trying to make sense of their data. Sometimes I’m troubleshooting performance issues, corruption issues, or whatever they need. Or I’m teaching them, about Power BI, querying, tuning, whatever. And more often than not, I’m using a machine that they’ve allocated to me – and using it alongside the customers.

So I don’t always (or typically) get to install new software on the machines I generally use. That means the tools I get to use the most are things like SSMS, ML Studio, Visual Studio, DAX Studio, Power BI Desktop, PowerShell Studio, and so on – the things that they already have on their machine – along with various scripts that I will use. If I’m doing a salvage operation from a corrupt database, I might be able to start thinking about other tools such as the HxD hex editor, or Plan Explorer (actually, the whole Sentry One suite is a great set of tools, which I love finding at customer sites when I do).

I want to wax lyrical a little about one of the tools that I feel like I’ve got to know better over the past couple of years, since SQL 2014 came around.

I wrote about Live Query Statistics within SSMS a while back – and even presented at conferences about how useful it is for understanding how queries run…

…but what I love is that at customers where I have long-running queries to deal with, I can keep an eye on the queries as they execute. I can see how the Actuals are forming, and quickly notice whether the iterations in a Nested Loop are going to be unreasonable, or whether I’m happy enough with things. I don’t always want to spend time tuning a once-off query, but if I run it with LQS turned on, I can easily notice if it’s going to be ages before I see any rows back, see any blocking operators that are going to frustrate me, and so on. Using TOP might introduce row goals that twist my plan more than I’d like, but using LQS lets me feel how long a query will take overall, as well as giving me good insight into it without too much cost. I figure the cost of having the server store the execution profile (I wouldn’t do this on a production server) is a small price to pay compared to staring at an estimated plan for a while and assessing whether I should do a bit of a rewrite or introduce some indexes.
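The same profile data that LQS draws on is also queryable directly. As a sketch (it assumes the long-running query’s session has profiling enabled, for example via SET STATISTICS PROFILE ON, and that its session_id happens to be 57):

```sql
-- In the session running the long query, enable profiling first:
-- SET STATISTICS PROFILE ON;

-- From another session, watch actual vs estimated rows per operator while the query runs.
SELECT node_id,
       physical_operator_name,
       row_count          AS actual_rows_so_far,
       estimate_row_count AS estimated_rows
FROM sys.dm_exec_query_profiles
WHERE session_id = 57   -- hypothetical session id of the long-running query
ORDER BY node_id;
```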

LQS is a great teaching tool, but it also serves as a progress bar for me on queries. I recognise blocking operators, such as Hash Aggregate or Top N Sort, and I can tell when an estimate on an operator is way off – to the point that I can quickly get an idea about whether I should let the query keep going or not. To be able to sit at someone’s desk and show them that their query will start returning rows fairly soon (or not) is excellent, and having a better guess about whether it’s going to be another five minutes or another hour is also tremendously useful.

…because that way I can figure out whether to get some more coffee.

@rob_farley

 


T-SQL Tuesday #100

When T-SQL Tuesday started back in late 2009, I have to admit that I wasn’t sure why Adam used three digits. I didn’t know whether it would reach 10, let alone 100. 2018 seemed so far away – if you’d asked me if I thought T-SQL Tuesday would still be going in 2018, I wouldn’t’ve had a clue. And I definitely wouldn’t’ve thought I’d still be contributing, having never missed one. I’m pleased I haven’t missed any though, because it keeps me writing. I enjoy having a writing assignment each month – it forces my hand, even though it’s self-inflicted.

Anyhow…

In his invitation post, our host and T-SQL Tuesday owner Adam Machanic (@adammachanic) makes some interesting points about the types of topics that have made up the list so far – that there has been a significantly high proportion of topics that are not technical in nature, showing that there is a large interest in personal development as well as technical development. I suspect this implies that people want to know what people are thinking, not just how to solve problems. People can ask for technical help on a variety of sites, but perhaps T-SQL Tuesday is unique in its ability to ask people for their opinions. I do think that the next hundred topics will be less technical and more personal.

Adam asks what things will be like after another hundred months’ time, and it’s an excellent question.

I think people will continue to become more and more interested in their personal development rather than their technical development. And I think this will increasingly become the shift of the world. Many years ago, we saw robots replace factory lines, and today we see self-driving cars. As time continues, I’m confident that this trend will continue, and that “regular folk” like us will need to solve fewer and fewer technical problems. Performance tuning seems to be on the way out already, with some of the advancements that we’re seeing in Azure SQL Database. AI advancements allow us to create systems that are smarter than ever before, and database consultants are becoming business consultants.

And this is because I see ‘business consultancy’ as being one of those ‘hard’ tasks. In school we learned about P/NP problems – P being the set of problems that were solvable in useful time (the real word is ‘polynomial’, but I figure the real point is whether it can be solved quickly enough to be useful), and NP being the set of problems that weren’t, despite it being quick to demonstrate that a particular solution was a valid solution (this is just my paraphrase of it all – don’t beat me up for not being exactly correct). Obviously you can just test every possible answer to see if it’s a valid solution, but that’s not quick enough to be useful. The world of computer science is still finding ways to turn problems from being NP into P, and we shouldn’t have to spend time thinking about P problems.

If I assume that consulting will always be needed, why isn’t it possible to turn it into a P problem? I suppose that’s like the question of whether a computer can ever produce beauty on a reliable basis. And I think it can. It should be possible to analyse a given situation, and produce a strong-enough plan of attack to produce sensible advice. A self-driving car has a very restricted set of parameters in which it can make decisions, but ten years ago we would’ve all considered this was too NP hard to become useful. Medical consulting has been replaceable (to a certain extent at least) by ‘expert’ systems for many years. So I suspect that business consulting and certainly data consulting ought to be able to go the same way.

…maybe not in the next 100 months though. And hopefully not until I’m ready to retire. I might still be writing a blog post every month, mind you, unless that can be automated as well.

@rob_farley


Revisiting GROUP BY v DISTINCT

It’s been nearly eleven years since I compared GROUP BY and DISTINCT in a blog post. So when Aaron Bertrand (@aaronbertrand) challenged people to review his list of bad SQL habits, and I saw a post from last year at the top, I thought I’d have to have a look.

And I loved his post! It used almost exactly the same example that I had, and came up with the same results, ten years later (I don’t mean that his query took ten years to return results, just that his post is from 2017 and mine is 2007). It’s great to see the ‘next generation’ of blog posts coming up with the same conclusions. (I get to pick on Aaron like this because he was an MVP almost ten years before me (despite being born in the same year as me!), and I therefore consider him a different generation of SQL expert. An ‘elder statesman’ if you like.)

What I want to explore in this post is the particular example that we both used… to bring an important point that could be missed because of the similarity of our examples.

You see, we both happened to use a FOR XML concatenation query, looking back at the same table. We did this to simulate a practical GROUP BY – somewhere that you feel like GROUP BY is useful, but the aggregate you need isn’t a built-in one like SUM or MAX. Ok, Aaron could’ve used the really new STRING_AGG, but for an old-timer like me, stuck on SQL Server 2005 back then, that wasn’t available.

The upshot was that we both used a sub-query, one example using GROUP BY…
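Something of this shape (my reconstruction against an AdventureWorks-style table, rather than the exact query from either post):

```sql
-- GROUP BY version: the concatenation sub-query only has to run once per group.
SELECT p.ProductLine,
       (SELECT ',' + p2.Name
        FROM Production.Product AS p2
        WHERE p2.ProductLine = p.ProductLine
        FOR XML PATH('')) AS ProductNames
FROM Production.Product AS p
GROUP BY p.ProductLine;
```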

…and one using DISTINCT instead.
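The DISTINCT flavour of the same reconstruction:

```sql
-- DISTINCT version: the concatenation sub-query is evaluated for every row,
-- and the duplicates are only removed afterwards.
SELECT DISTINCT p.ProductLine,
       (SELECT ',' + p2.Name
        FROM Production.Product AS p2
        WHERE p2.ProductLine = p.ProductLine
        FOR XML PATH('')) AS ProductNames
FROM Production.Product AS p;
```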

And, we both demonstrated that GROUP BY was quicker, because the QO didn’t need to do all the string concatenation for every row, only for every unique WhateverWeWereGroupingBy.

Go read those posts to see the results. I’m not going to repeat them here.

Back again? Ok, cool.

Now, this isn’t particularly different to any other sub-query, like this one in the AdventureWorks database:
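(The queries below are my reconstruction of the shape being discussed, in both GROUP BY and DISTINCT forms, rather than the exact code from the post.)

```sql
-- GROUP BY version: look up the product name once per distinct ProductID.
SELECT od.ProductID,
       (SELECT p.Name
        FROM Production.Product AS p
        WHERE p.ProductID = od.ProductID) AS ProductName
FROM Sales.SalesOrderDetail AS od
GROUP BY od.ProductID;

-- DISTINCT version: the same sub-query, but evaluated for every row before de-duplication.
SELECT DISTINCT od.ProductID,
       (SELECT p.Name
        FROM Production.Product AS p
        WHERE p.ProductID = od.ProductID) AS ProductName
FROM Sales.SalesOrderDetail AS od;
```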

The corresponding plans are shown here.

[query plan image]

Both posts demonstrate that using DISTINCT, the Query Optimizer comes up with a worse plan, because it doesn’t seem to know that the sub-query will return the same values on subsequent calls, figuring that it should see what comes back each time before checking for distinct values. It’s like it worries that the DISTINCT option could produce different results. Using GROUP BY it only calls the sub-query for the distinct values in the GROUP BY clause, which could be massive savings.

Now, I’m sure you’re thinking “Hang on – I could’ve written that query another way…”, and you’re right. You probably would’ve written it like this:
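(Again, a reconstruction of the shape rather than the exact query.)

```sql
-- The same question asked with a join instead of a scalar sub-query.
SELECT DISTINCT od.ProductID, p.Name
FROM Sales.SalesOrderDetail AS od
JOIN Production.Product AS p
    ON p.ProductID = od.ProductID;
```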

…and the query plan for this one looks like this, which is the same shape as the GROUP BY query – despite using DISTINCT.

[query plan image]

But if I use a LEFT JOIN instead of an INNER JOIN, then the behaviour changes significantly, and it goes back to aggregating the results of the join, rather than joining the results of the aggregate.
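That is, a query of roughly this shape (my reconstruction):

```sql
-- Only the join type changes, but the aggregate now happens above the join in the plan.
SELECT DISTINCT od.ProductID, p.Name
FROM Sales.SalesOrderDetail AS od
LEFT JOIN Production.Product AS p
    ON p.ProductID = od.ProductID;
```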

[query plan image]

I’m a big fan of LEFT JOIN – it makes it very clear that I’m not filtering my base table when I do the join. So I don’t want to hear that the performance is worse.

Luckily, this sounds like a job for GROUP BY. After all, I only want to group by od.ProductID – there can only ever be one Product Name per ProductID. And yet…
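Trying it with GROUP BY (again a reconstruction; note that p.Name has to appear in the GROUP BY clause too, even though it’s functionally dependent on ProductID):

```sql
SELECT od.ProductID, p.Name
FROM Sales.SalesOrderDetail AS od
LEFT JOIN Production.Product AS p
    ON p.ProductID = od.ProductID
GROUP BY od.ProductID, p.Name;
```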

[query plan image]

it doesn’t help at all.

Hmm.

There is definitely an advantage to using GROUP BY over DISTINCT when using sub-queries. But the story isn’t actually as clear as all that, and is affected by a bunch of other scenarios.

The trick is to look at where the aggregation is going on, and to remember that an aggregate operator reduces the number of rows in play. Anything which reduces the number of rows being pulled through the plan should be pushed as far to the right as possible. If your aggregate will do that the most, do what you can (query rewrites, indexes, constraints, preferably not hints) to get that operator as far right as possible. But you could do the same with a join operator, or a Filter operator, whatever is reducing the rows.

Understand that GROUP BY is generally better than DISTINCT, because it’s a technique for pushing the operator to the right hand side of a join. But remember it’s only part of the story.

@rob_farley

PS… I feel like I want to expand on this a bunch more, but I need to get this published tonight for T-SQL Tuesday. It’s the 99th one, and I don’t want to miss one now, just because I could type about this for hours.