The SQL Query Optimizer – when Logical Order can get it wrong

It’s very easy to get in the habit of imagining the way that a query should work based on the Logical Order of query processing – the idea that the FROM clause gets evaluated first, followed by the WHERE clause, GROUP BY, and so on – finally ending with whatever is in the SELECT clause. We even get in the habit of creating indexes that focus on the WHERE clause, and this is mostly right.

But it’s only mostly right, and it will often depend on statistics.

There are other situations where statistics have to play a major part in choosing the right plan, of course. In fact, almost every query you ever run will use statistics to work out the best plan. What I’m going to show you in this post is an example of how the statistics end up being incredibly vital in choosing the right plan. It also helps demonstrate an important feature of the way that Scans work, and how to read execution plans.

I’m going to use AdventureWorks2012 for this example. I’m going to ask for the cheapest product according to the first letter of the product name. This kind of query:
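Something along these lines – just a sketch, using the Name and ListPrice columns of Production.Product:

-- Cheapest product whose name starts with a particular letter
SELECT MIN(p.ListPrice)
FROM Production.Product AS p
WHERE p.Name LIKE 'H%';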

Don’t run it yet. I want to ask you how you’d solve it on paper.

Would you prefer I give you a list of the products sorted by name, or would you prefer I give you that list sorted by price?

If you want the ‘sorted by name’ option, then you’ll have to look through all the products that start with H, and work out which is the cheapest (notice that my predicate is not an equality predicate – if I knew what the name had to be exactly, then I could have an index ordered by name and then price, and very quickly find the cheapest with that name). This approach could be good if you don’t have many products starting with that particular letter. But if you have lots, then finding them all and then looking for the cheapest of them could feel like too much work. Funnily enough, this is the way that most people would imagine this query being run – applying the WHERE clause first, and applying the aggregate function after that.

On the other hand, if you have lots of products with that particular letter, you might be better off with your list sorted by price, looking through for the first product that starts with the right letter.

Let me explain this algorithm a little more.

If you’re at a restaurant and are strapped for cash, you might want to see what the cheapest thing is. You’d pick the “sorted by price” menu, and go to the first item. But then if you saw it had peanut in, and you have an allergy, then you’d skip it and go to the next one. You wouldn’t expect to have to look far to find one that doesn’t have peanut, and because you’ve got the “sorted by price” menu, you have the cheapest one that satisfies your condition after looking through just a few records.

It’s clearly not the same algorithm as finding all the things that satisfy the condition first, but it’s just as valid. If you’re only going to have to look through a handful of products before you find one that starts with the right letter, then great! But what if there are none? You’d end up having to look through the whole list before you realised.

The Query Optimizer faces the same dilemma, but luckily it might have statistics, so it should be able to know which will suit better.

Let’s create the two indexes – one sorted by Name, one sorted by Price. Both will include the other column, so that the query will only need one of them.
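Something like this – the index names are my own:

-- One index ordered by Name (carrying ListPrice), one ordered by ListPrice (carrying Name)
CREATE INDEX ixProduct_Name ON Production.Product (Name) INCLUDE (ListPrice);
CREATE INDEX ixProduct_Price ON Production.Product (ListPrice) INCLUDE (Name);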

Now let’s consider two queries. Both queries give the same result – $0.00. But that’s not important, I’m only interested in how they run.
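For example – the same sketch as before, once filtering on ‘I’ and once on ‘H’:

SELECT MIN(p.ListPrice) FROM Production.Product AS p WHERE p.Name LIKE 'I%';
SELECT MIN(p.ListPrice) FROM Production.Product AS p WHERE p.Name LIKE 'H%';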

The two queries are almost identical, but they run quite differently.


Ok, they’re fairly similar – they both use a Stream Aggregate operator, for example. And they have similar cost. But significantly, one is performing a Seek, while the other is doing a Scan. Different indexes, but nevertheless a Scan and a Seek.

People will tell you that Scans are bad and Seeks are good, but it’s not necessarily the case. Here, we see that the Scan plan is no more expensive than the Seek plan – it’s just different. We should consider why.

Those two indexes are the two different stories that I described earlier. There are very few products that start with the letter ‘I’, and quite a number that start with ‘H’, and so the Query Optimizer has chosen differently.

There are exactly 10 products that start with I. From a total of 504. That’s less than 2% of the products.

There are 91 products that start with H. That’s 18%. You might not have expected it to be that high, but that’s okay – if SQL has been maintaining statistics for you on this, it hopefully won’t be as surprised as you.

18% – nearly 1 in 5. So by the time you’ve looked at, oh, a dozen records, you will have almost certainly found one that starts with an H. (Actually, the chance of NOT finding one in the first 12 would be power(.82, 12), which is 0.09. That’s just 9%.) If I do a bit of digging into the internals, I can discover that the pages in my index typically have over a hundred records on them each. The chance of not finding a product that starts with an H on that first page – you’d need lottery-scale luck (1 in 444 million).

On the other hand, the cost of finding the cheapest value from 91 records is a lot more expensive than finding the cheapest from just 10. And getting all 10 records should be a small number of reads too.

But a Scan! Really? It has to look through the whole table, right?

No. That’s not how it works.

You see, execution plans go from left to right. If you start reading these plans from the right, you’ll start thinking that the whole index has been scanned, when it’s simply not the case. That Top operator asks for a single row from the index, and that’s all it provides. Once that row has been found, the Scan operation stops.

For this information, I don’t even need to pull up the Properties window for the Scan (but I would recommend you get in the habit of doing that). No – this is all available in the Tool Tip. Look at the ‘Actual Number of Rows’ value – it’s just one.


A predicate is applied – it looks through the index for rows that start with H – but it’s doing this in Order (see Ordered = True), and it’s stopping after the first row is found. Remember I mentioned that there are actually 91 rows that satisfy the predicate? The Scan doesn’t care – it only needs one and it stops right then.

You might figure this is because we are using MIN. What if we needed the MAX though? Well, that’s just the same, except that the Direction of the Scan is BACKWARD (you’ll need F4 for that one).
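Using the earlier sketch, that would simply be:

-- Same shape of query, but asking for the most expensive 'H' product
SELECT MAX(p.ListPrice)
FROM Production.Product AS p
WHERE p.Name LIKE 'H%';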


MIN goes forward because it’s most interested in the ‘smallest’ ones; MAX goes backward because it wants the ‘largest’. (And as you’d probably expect, if you’d created your index to be descending, then it would be reversed.)

But again – being able to tell which is the better algorithm depends entirely on having accurate statistics.

I see so many systems with bad statistics for one reason or another – typically because the data most frequently queried is the newest data, which makes up only a tiny percentage of the table. The statistics will suggest that there is almost no data for ‘today’, because they probably haven’t been updated for at least a few hours.

When you look at how a query is running, always have a think about how you’d solve it on paper, and remember that you might actually have a better (or worse) picture of the statistics than the Query Optimizer has.

And remember that a Scan is not necessarily bad. I might do another post on that soon as well.


Portfolio changeover time for the PASS board

The last PASS board meeting for the year has happened, and the portfolio handovers are well under way.

Sadly, having new board members elected means having existing board members step down, and this last board meeting was the last one for both Allen Kinsel (@sqlinsaneo) and Kendal van Dyke (@sqldba). In 2012, these guys had the portfolios of local chapters and SQL Saturdays, respectively.

Newly elected board member Wendy Pastrick (@wendy_dance) is taking over from Allen on local chapters, while I’m taking over SQL Saturdays from Kendal. In 2012, my portfolio was 24 Hours of PASS, which is being rolled into the Virtual Chapters portfolio, headed still by Denise McInerney (@denisemc06).

I have to admit that I’m really excited that the 24HOP portfolio is being merged with Virtual Chapters, as the two are so linked. I had been on the 24HOP committee before I joined the PASS board, and had recommended that the two portfolios be merged around the time I was elected to the board. During my term I even recruited Virtual Chapter leaders to be on the committee for 24HOP, as I believe their experience in the online space makes them best suited to influence PASS’ premier online event – the semi-annual 24HOP.

2012 was a good year for 24HOP, although it was the riskiest for some time as well.

Two of the more obvious changes that we made were to look at a new platform, and to return to the 24-hours straight format (rather than two 12-hour blocks). This more continuous format meant that numbers dropped (the largest audience is in the US, so any sessions that are overnight for the US are obviously going to have smaller attendance). However, this format meant we reached over 100 different countries, which I think was really significant. Comparing the first 2012 event with the first 2011 event (which used the 2×12 format), we jumped from reaching 54 countries in 2011 to 104 in 2012.

While I was still on the committee, we had discussed the need for a new platform, as the LiveMeeting platform wasn’t coping well with the numbers we were seeing. A number of options had been considered, some too expensive, some not capable of scaling sufficiently, and a decision had been made to use a platform called IBTalk. It was obviously more expensive than LiveMeeting (which had been available for free), but looked like it was going to scale much more nicely. We used it for both 2012 events and it will also be used for the next event (on Jan 30). The decision to use IBTalk was very risky, but as an experiment it seemed to work okay. There were both good and bad elements of the platform, which I’m not going to go into in a forum like this, although the second event that we used IBTalk for ended up being much smoother than the first, and I anticipate that the Jan 30 event will be even smoother still.

I felt like the first event of 2012 was dominated by the new platform. It was held two weeks after the SQL Server 2012 launch, which had also been a large virtual event using a new platform. I guess experimenting with new platforms was a large topic of discussion that month. One thing that didn’t really work for us was the closed captioning. It turns out that when you have someone providing closed captioning live, any typos that come through, or anything that is misheard by the person providing the service, etc… well, it doesn’t always work for being able to feed a translation service. We tried, and it was good to try – but it didn’t work so well. Despite all that, PASS members can view the session recordings at

The main 24HOP event in the second half of the year was the annual Summit Preview event. We didn’t try to pursue the closed captioning again, but we did continue with IBTalk. Going back to LiveMeeting was never going to be an option for us, and we wanted to take a second look at the platform, in light of the various things we’d learned from the experience in Q1. It was a better experience from a number of perspectives, and we certainly got to test the scalability.

Over the course of the day, we had good numbers – only a handful shy of 10,000 attendees across the course of the day (okay, a handful if you count fingers, toes, and were inbred – we had 9979). The lowest attendances were around the 100 mark, but the largest reached 1421 attendees. The highest from any previous events was around the 800 mark, so this was a significant improvement – and the platform handled it just fine. If we’d had that many people trying to access the LiveMeeting platform it simply wouldn’t’ve coped, and the IBTalk platform deserves praise for that.

The platform decision isn’t over yet. A new search has taken place in light of what we’ve learned in the past year, and including a lot of what people have expressed to us on platforms such as Twitter. There are platforms that are way out of our price range (it can be very expensive to present 10,000 man-hours of content using some platforms), and there are ones that won’t cope with some of the things we would like to do. With some of the Virtual Chapters growing fast, a new platform needs to be able to cope with them too, with a wide variety of attendances needing to be handled. I wish Denise all the best for that, and have been able to happily assure her that the PASS HQ team that does most of the work behind the scenes for 24HOP (particularly Jane and Vicki) is excellent and will make her look brilliant this year.

Another change in 2012 was the sponsorship story. For a long time, Dell had been a major sponsor of 24HOP, and I want to thank them for that. However, 24HOP wasn’t a priority for them in 2012, and new sponsors needed to be found. The first event saw sponsorship come in from Microsoft, SQL Sentry and Idera, with Idera being replaced by RSSBus for the second event. But what really excited me was to see a second tier of sponsors join the fray, with Melissa Data and Confio joining Idera as ‘Alliance Sponsors’. It was really good to have six fantastic companies sponsoring the event, and providing extra options for them.

I haven’t even mentioned the non-English events that have taken place! PASS has seen 24HOP events in Russian, Portuguese and Spanish this year, although my personal involvement with those events has been somewhat less. Again, the PASS HQ staff have been great in enabling these events, and helping them run smoothly.

So I leave 24HOP in capable hands.

Instead, I pick up the SQL Saturday portfolio – another fast-growing facet of PASS. Already the 200th SQL Saturday event has been scheduled, and I’m finding myself getting onto a moving train. Luckily, I won’t be battling anyone on the roof Bond-style, but there are a lot of things that will need attention to make sure that the SQL Saturday model can continue to be successful.

The PASS HQ staff most responsible for the SQL Saturdays that happen all around the world are Karla and Niko. If you’ve ever met either of these two, you’ll know that they run quickly and are nothing if not achievers. I suspect that I could just tell them to keep doing their thing and the portfolio would be considered successful. This is incredibly useful to me, because I should be able to focus on identifying and solving some of the things that might need to change as these events become larger in both size and number. I’m keen to look into some of the edge cases, such as international events (including non-English), and both the larger and smaller events that are around – but all the time trying to serve Niko, Karla and all the community leaders in what they do.


SQL Replication demystified

I spent some time explaining SQL Server Replication to someone recently. They said they hadn’t ever really understood the concepts, and that I’d managed to help. It’s inspired me to write a post that I wouldn’t normally do – a “101” post. I’m not trying to do a fully comprehensive piece on replication, just enough to be able to help you get the concepts.

The way I like to think about replication is by comparing it to magazines. The analogy only goes so far, but let’s see how we go.

The things being replicated are articles. A publication (the responsibility of a publisher) is a collection of these articles. At the other end of the process are people with subscriptions. It’s just like when my son got a magazine subscription last Christmas. Every month, the latest set of articles got delivered to our house. (The image here isn’t my own – but feel free to click on it and subscribe to FourFourTwo – excellent magazine, particularly when they’re doing an article about the Arsenal.) Most of the work is done by agents, such as the newsagent that gets it to my house.

In SQL Server, these same concepts hold. The objects which are being replicated are articles (typically tables, but also stored procedures, functions, view definitions, and even indexed views). You might not replicate your whole database – just the tables and other objects of interest. These articles make up a publication. Replication is just about getting that stuff to the Subscribers.

Of course, the magazine analogy breaks down quite quickly. Each time my son got a new edition, the articles were brand new – material he’d never seen before. In SQL Replication, the Subscribers probably have data from earlier. But this brings us to look at a key concept in SQL Replication – how the stupid thing starts.

Regardless of what kind of replication you’re talking about, the concept is all about keeping Subscribers in sync with the Publisher. You could have the whole table move across every time, but more than likely, you’re going to just have the changes go through. At some point, though, the thing has to get to a starting point.

This starting point is (typically) done using a snapshot. It’s not a “Database Snapshot” like what you see in the Object Explorer of SQL Server Management Studio – this is just a starting point for replication. It’s a dump of all the data and metadata that make up the articles, and it’s stored on the file system. Not in a database, on the file system. A Subscriber will need this data to be initialised, ready for a stream of changes to be applied.

It’s worth noting that there is a flavour of replication which just uses snapshots, known as Snapshot Replication. Every time the subscriber gets a refresh of data, it’s the whole publication that has to flow down. This might be fine for small amounts of data, but not for larger ones.

(There are other ways to get started too, such as by restoring a backup, but you should still be familiar with the concept of snapshots for replication.)

To get in sync, a subscriber would need the data in the snapshot for initialisation, and then every change that has happened since. To reduce the effort that would be required if something went drastically wrong and a new subscription became needed, snapshots can be recreated at regular intervals. This is done by the Snapshot Agent, and like all agents, can be found as a SQL Server Agent job.

The middle-man between the Publisher and the Subscribers is the Distributor. The Distributor is essentially a bunch of configuration items (and as we’ll see later, changes to articles), stored in the distribution database – a system database that is often overlooked. If you query sys.databases on a SQL instance that has been configured as a Distributor you’ll see a row for the distribution database. It won’t have a database_id less than 5, but it will have a value of 1 in the is_distributor column. The instance used as the Distributor is the one whose SQL Server Agent runs most of the replication agents, including the Snapshot Agent.
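For example, a quick check along these lines will show it:

-- List any database on this instance that is acting as the distribution database
SELECT name, database_id, is_distributor
FROM sys.databases
WHERE is_distributor = 1;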

If you’re not doing Snapshot Replication, you’re going to want to get those changes through. Transactional Replication, as the name suggests, involves getting transactions that affect the published articles out to the subscribers. If the replication has been set up to push the data through, this should be quite low latency.

So that SQL Server isn’t having to check every transaction right in the middle of it, there’s a separate agent that looks through the log for transactions that are needed for the replication, copying them across to the distribution database, where they hang around as long as they’re needed. This agent is the Log Reader Agent, and also runs on the Distributor. You can imagine that there is a potential performance hit if this is running on a different machine to the Publisher, and this is one of the influencing factors that means that you’ll typically have the Distributor running on the Publisher (although there are various reasons why you might not).

Now we have a process which is making sure that initialisation is possible by getting snapshots ready, and another process which is looking for changes to the articles. The agent that gets this data out to Subscribers is the Distribution Agent. Despite its name, it can run at the Subscriber, if the Subscriber is set to pull data across (good for occasionally connected systems). This is like with my magazine – I might prefer to go to the newsagent and pick it up, if I’m not likely to be home when the postman comes around. In effect, my role as Subscriber includes doing some distribution if I want to pull the data through myself.

These three agents, Snapshot Agent, Log Reader Agent and Distribution Agent, make up the main agents used for Transactional Replication, which is probably the most common type of replication around. Snapshot Replication doesn’t use the Log Reader Agent, but still needs the other two.

Now let’s consider the other types of replication.

Merge Replication involves having subscribers that can also change the data. It’s similar to Transactional Replication with Updateable Subscribers, which has been deprecated. These changes are sent back to the Merge Agent, which works out what changes have to be applied. This is actually more complicated than you might expect, because it’s very possible to have changes made in multiple places and for a conflict to arise. You can set defaults about who wins, and can override manually through the Replication Monitor (which is generally a useful tool for seeing if Subscribers are sufficiently in sync, testing the latency, and so on). Updateable Subscribers end up using the Queue Reader Agent instead of the Merge Agent. They’re slightly different in the way they run, but I consider them to be quite similar in function, as they both involve getting the data back into the publisher when changes have been made elsewhere.

Peer-to-Peer Replication is the final kind. This is really a special type of Transactional Replication, in which you have multiple publishers, all pushing data out at each other. It’s the option that is considered closest to a High Availability system, and is good across geographically wide environments, particularly if connections are typically routed to the closest server. Consider the example of servers in the UK, the US and Australia. Australian users can be connected to the local server, knowing the changes are going to be pushed out to the UK and US boxes. They’re set up in a topology, with each server considered a node. Each server keeps track of which updates it’s had, which means they should be able to keep in sync, regardless of when they have downtime. If Australian changes are sent to the UK but not the US, then the US can be updated by the UK server if that’s easier.

Replication can feel complex. There are a lot of concepts that are quite alien to most database administrators. However, the benefits of replication can be significant, and are worth taking advantage of in many situations. It’s an excellent way of keeping data in sync across a number of servers, without many of the server availability hassles associated with log-shipping or mirroring. It can definitely help you achieve scale-out environments, particularly if you consider Peer-to-Peer, which can help you offload your connections to other servers, knowing the key data can be kept up-to-date easily.

I haven’t tried to be completely comprehensive in this quick overview of replication, but if you’re new to the concepts, or you’re studying for one of the MCITP exams and need to be able to get enough of an understanding to get you by, then I hope this has helped demystify it somewhat.

There’s more in SQL Books Online, of course – a whole section on Replication. If what I’ve written makes sense, go exploring, and try to get it running on your own servers too. Maybe my next post will cover some of that.


The MCM lab exam – two days later

Readers of my blog, or followers on Twitter will know I took the MCM Lab exam a couple of days ago. I let people know I was doing the exam, rather than doing the ‘normal’ thing of doing it in secret and hoping no-one found out until a successful result had been published.

Oh, and this post has been approved by the MCM program’s boB Taylor (@sqlboBT) as not-breaking NDA. Nothing herein should be seen to imply that a particular topic is or isn’t in the exam.

So how did I go? Well… I made a bunch of mistakes, I wasted a lot of time, I even left some questions incomplete, and so I assume I’ve failed. It’s a horrible feeling, I can tell you. I went in not knowing what to expect. I knew that I’d worked with SQL Server for a lot of years, and felt like I had a good grounding in all the various aspects of SQL Server that I might have to know – but on the day, in the stress of having a laggy remote desktop session, a Lync connection which had dropped out just beforehand, being told that ‘everyone’ fails first time but that I was probably going to be one of the exceptions…, well, I didn’t feel like it was my day.

If I’d had that day at a client site, it would’ve been fine. With extra time, I have no doubt that I would’ve been able to get through things. I could’ve raised questions when I didn’t feel exactly sure about what was expected. I could’ve pointed out how much I’d done to help address a situation and found out if they wanted me to continue hunting for other potential issues. I could’ve asked for more information. I could’ve written a document describing what I had in mind to solve the particular problem to confirm that the client was happy with it. You don’t get any of that in exam situations.

I found myself doing things the ways that work in the real world, but not having somewhere to turn when it turned out that the information was clearly not quite there, turning a simple situation into a trickier one. In real life, I’d’ve turned to the face peering over my shoulder and said “I thought you said it was that thing.” Instead, I wasted time looking for what it actually was.

I found myself making changes to databases that I wouldn’t make in the real world without first running those changes past someone, or at least making sufficient disclaimers along the lines of “I would recommend making this change, but there’s always a chance that something else will see the impact of this…”

Now, I think I could go into the exam, and even faced with a different set of questions, have a better picture about the style of thing that would be asked, and be better able to identify the danger-items – those things which could lure me away from staying on track. It’s not the same with consulting, because you can always ask as soon as you consider there’s a potential problem. Just like the mechanic who says “So, there’s a rattle… do you want me to spend some time looking into this? If so, is half an hour okay?”, I can ask my clients if there’s something which doesn’t smell right. In the exam, you don’t get that luxury.

If you’re going to take the exam, I would recommend the following approach:

  • Start by going through the questions, making notes about what they’re asking you to do and how much time it’ll take. Move the list of questions to the side, because switching from one window to another will simply take too long.
  • Once you’ve identified ones that you know will be ‘quick’, do them. But if one stops being quick, leave it and come back. You’re not in danger of leaving it unfinished completely – there’s 5.5 hours that you’ll be remoted in, and you’ll be coming back in about 30 minutes. But you don’t want to find that you spend an hour on something which you anticipated would be a ten minute job.
  • Whenever you leave a question, put it back in the list at an appropriate spot. If you think it’s still quicker than the question on XYZ, then fine, put it above that. But be ruthless about how long you spend on each task. If something doesn’t work the way you expect, do some troubleshooting, but don’t treat it as if your client depends on it. Your exam result will suffer more because you wasted time, than if you left it incomplete. You could end up finding that the XYZ question actually turned out to be simpler than you thought.

I hope I passed, but I really don’t think I have done.

I’m confident I blitzed quite a lot of them. There were plenty on the list that I moved through quickly, and others that took me longer than I expected, but I still finished. It’s just that there were a certain number that I assume I didn’t finish satisfactorily – simply because if it were my database that I had got a consultant in to fix, I wouldn’t’ve considered it complete. I mean, hopefully I hit the key things on some of those, but I can’t tell.

I know I definitely got some just plain wrong. The things that weren’t working, and I didn’t have time to get back to. I was getting hungry by the end of it, and was feeling stressed about the amount of time I’d wasted on other questions.

Assuming I need to retake it, I have to wait until 90 days after this attempt. That’s March 20th.

So what will I do between now and then? Well, I might check through the various guides about the things that stop things from working the way I expect them to. I’m not saying what didn’t work, but imagine there’s some technology that you’re familiar enough with. To use Tom’s example, peer-to-peer replication. I know how to set that up – I’ve done it before. It’s actually very straightforward. But if you do what you always do and you get some error… well, that might be harder to troubleshoot. In the real world there are plenty of ways you can troubleshoot online, but in an exam, it’s just Books Online and what’s in your head.

In that regard, Tom’s guide on what he didn’t do is useful. His “Don’t try to get every answer” is very important. But I would also point out that you should start to try them, in case they turn out to be easier than you expected. Just because it’s on P2P (continuing Tom’s example) doesn’t mean you won’t get it. Tom described shouting at the screen saying “Seriously? Is that all you’ve got?” – this is why you should try the ones that you figured would be harder. Don’t start with them, but a few hours in, don’t just focus on the time-wasters.

Tom described ‘not studying alone’. I discussed various things with people leading up to the exam, more so than studying per se. There’s another person who I know is taking the exam soon, and we’ve talked a bit about the various technologies. Like “How are you at TDE? What’s your clustering like?” – that kind of thing. He has environments set up where he’s practising a bunch of tasks. I didn’t. I did skim through some of the videos, but not many, and it was really only skimming. I found myself being slightly interested in the video where PaulR demonstrates how to set up replication with a mirrored subscriber. I watched it a couple of times because something wasn’t sitting right – I figured it out though… in the exam you don’t want to be using scripts for everything (the GUI would be much better – not that I practised at all), but also, the error that he gets at the end… it’s because his script tries to remove the subscriber from the wrong server. In the GUI, he wouldn’t’ve had that problem (reminds me – I should mention this particular mistake to him, although I’m sure he knows about it and just doesn’t have an easy way to change the video). I didn’t end up watching most of the videos – even the ones on Resource Governor. I looked at the titles, and figured it would be fine to skip them. I listened to one on Clustering Troubleshooting in the car as I drove home from town, but just kept thinking “Right – I get all that… but is that enough for the exam?” I can’t say whether it was or not, but I can tell you that I didn’t do any further study on it. I also watched the Waits demo, and the slides on Log File Internals (but mainly out of curiosity about a different problem I’d had a while back).

I didn’t read the books. I’ve read a couple of chapters of Kalen’s SQL Internals before, but didn’t finish it. I haven’t even read the book I wrote recently (well, obviously the chapters I wrote – I read them lots of times).

Of Tom’s advice on what he did to pass (doing the reading, watching the videos, writing things down and teaching others) I really didn’t do any of them. But I do spend time teaching others in general. I’m an MCT – I teach from time to time (not as often as I used to), and that sharing aspect is important to me. I explain to people about how to tune their queries. I explain why indexing strategies will work for general performance gain. I explain why security is important, and the keys to avoiding SQL Injection. Even today I spent time explaining the roles of Replication Agents to someone (who later told me that they’d heard that stuff before, but it just made more sense when I explained it). Teaching comes naturally to me, and I’ve always done it. But I didn’t do any of that with this MCM exam in mind.

My advice on how to pass this exam – use SQL Server for fifteen years. It’s what I did. Studying might work for you too, and if that’s what you’ve done, then you may well get to leave the lab exam feeling a whole lot more confident about your result than me.

I actually did leave feeling very confident about my result. I’m confident I failed. But I also know that I could pass it tomorrow with no extra study – by avoiding the time sinks that are in there – and I wish I didn’t have to wait until March to retake it.

I’ll post again when my result becomes public, to let you know how I actually did.

Of course, I hope I’m wrong.


MCM Lab exam this week

In two days I’ll’ve finished the MCM Lab exam, 88-971. If you do an internet search for 88-971, it’ll tell you the answer is –883. Obviously.



It’ll also give you a link to the actual exam page, which is useful too, once you’ve finished being distracted by the calculator instead of going to the thing you’re actually looking for. (Do people actually search the internet for the results of mathematical questions? Really?)

The list of Skills Measured for this exam is quite short, but can essentially be broken down into one word “Anything”.

The Preparation Materials section is even better. Classroom Training – none available. Microsoft E-Learning – none available. Microsoft Press Books – none available. Practice Tests – none available. But there are links to Readiness Videos and a page which has no resources listed, but shows you a list of people who have already qualified. Three in Australia who have MCM SQL Server 2008 so far. The list doesn’t include some of the latest batch, such as Jason Strate or Tom LaRock.

I’ve used SQL Server for almost 15 years. During that time I’ve been awarded SQL Server MVP seven times, but the MVP award doesn’t actually mean all that much when considering this particular certification. I know lots of MVPs who have tried this particular exam and failed – including Jason and Tom. Right now, I have no idea whether I’ll pass or not. People tell me I’ll pass no problem, but I honestly have no idea. There’s something about that “Anything” aspect that worries me.

I keep looking at the list of things in the Readiness Videos, and think to myself “I’m comfortable with Resource Governor (or whatever) – that should be fine.” Except that then I feel like I maybe don’t know all the different things that can go wrong with Resource Governor (or whatever), and I wonder what kind of situations I’ll be faced with. And then I find myself looking through the stuff that’s explained in the videos, and wondering what kinds of things I should know that I don’t, and then I get amazingly bored and frustrated (after all, I tell people that these exams aren’t supposed to be studied for – you’ve been studying for the last 15 years, right?), and I figure “What’s the worst that can happen? A fail?”

I’m told that the exam provides a list of scenarios (maybe 14 of them?) and you have 5.5 hours to complete them. When I say “complete”, I mean complete – you don’t get to leave them unfinished, that’ll get you ‘nil points’ for that scenario. Apparently no-one gets to complete all of them.

Now, I’m a consultant. I get called on to fix the problems that people have on their SQL boxes. Sometimes this involves fixing corruption. Sometimes it’s figuring out some performance problem. Sometimes it’s as straightforward as getting past a full transaction log; sometimes it’s as tricky as recovering a database that has lost its metadata, without backups. Most situations aren’t a problem, but I also have the confidence of being able to do internet searches to verify my maths (in case I forget it’s –883). In the exam, I’ll have maybe twenty minutes per scenario (but if I need longer, I’ll have to take longer – no point in stopping half way if it takes more than twenty minutes, unless I don’t see an end coming up), so I’ll have time constraints too. And of course, I won’t have any of my usual tools. I can’t take scripts in, I can’t take staff members. Hopefully I can use the coffee machine that will be in the room.

I figure it’s going to feel like one of those days when I’ve gone into a client site, and found that the problems are way worse than I expected, and that the site is down, with people standing over me needing me to get things right first time…

…so it should be fine, I’ve done that before. 🙂

If I do fail, it won’t make me any less of a consultant. It won’t make me any less able to help all of my clients (including you if you get in touch – hehe), it’ll just mean that the particular problem might’ve taken me more than the twenty minutes that the exam gave me.


PS: Apparently the done thing is to NOT advertise that you’re sitting the exam at a particular time, only that you’re expecting to take it at some point in the future. I think it’s akin to the idea of not telling people you’re pregnant for the first few months – it’s just in case the worst happens. Personally, I’m happy to tell you all that I’m going to take this exam the day after tomorrow (which is the 19th in the US, the 20th here). If I end up failing, you can all commiserate and tell me that I’m not actually as unqualified as I feel.

Joins in single-table queries

Tables are only metadata. They don’t store data.

I’ve written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month. Hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic.

In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries.

I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that’s not the point of this post.

So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this.

All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes.

So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you.

I’m sure you don’t need me to explain the Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision.

So – what I’m going to tell you is that a Lookup is a join.

When I run SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 285; against the AdventureWorks2012 database, I get the following plan:


I’m sure you can see the join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup.

It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as
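this kind of self-join on the clustered key (a sketch of my own, not the exact query):

-- One reference supplies the SalesPersonID predicate, the other supplies CustomerID,
-- joined on the clustered index key SalesOrderID
SELECT cix.CustomerID
FROM Sales.SalesOrderHeader AS ncix
JOIN Sales.SalesOrderHeader AS cix
    ON cix.SalesOrderID = ncix.SalesOrderID
WHERE ncix.SalesPersonID = 285;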


Amazingly similar, of course. This one is an explicit join; the first example was just as much a join, even though you didn’t actually ask for one.

You need to consider this when you’re thinking about your queries.

But it gets more interesting.

Consider this query:
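Something with this shape will do – a sketch only, with illustrative values (the plan you get will depend on the statistics for the values you pick):

-- Equality predicates on two columns, each covered by its own single-column index
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE SalesPersonID = 285
AND CustomerID = 29590;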

It doesn’t look like there’s a join here either, but look at the plan.


That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just as when you seek in the phone book to Farley and find the Farleys ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable.

Another example is the simple query
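Perhaps something like this – again just a sketch of mine, with a predicate that matches enough rows to make per-row lookups unattractive:

-- With many matching rows, the optimizer can prefer to join two nonclustered
-- indexes on SalesOrderID rather than perform a Key Lookup for every row
SELECT CustomerID
FROM Sales.SalesOrderHeader
WHERE SalesPersonID = 277;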


This one prefers a Hash Match to a standard lookup even! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes.

The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes – because of course, the best index would be a composite on both columns, such as (SalesPersonID, CustomerID), and it would still have the SalesOrderID column as part of it, as the CIX key.
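That composite would look something like this (the index name is my own):

-- Covers both predicates; SalesOrderID still comes along for the ride as the clustered key
CREATE INDEX ixSalesOrderHeader_SalesPerson_Customer
ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);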

You need to think like this too.

Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture about how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries.

…and yes, this applies when you’re dealing with regular joins across multiple tables, not just the joins within single-table queries.