Sometimes it’s the little things…

The easiest way to convince someone to use SSMS 2008 over SSMS 2005 was to right-click on a results grid. They’d see “Copy with Headers” in the context menu and be sold immediately. I’d happily show them that they could configure this option in SSMS 2005, but to have it immediately available in SSMS 2008 was enough to convince many people to install the newer tools right away.

The new version (Denali CTP3) of Management Studio has a few little things that I don’t mind, even though none of them are quite as compelling as the “Copy with Headers” option.

I’m going to pick three:

Cycle Clipboard Ring

This is handy stuff. I copy and paste far too often when writing queries. I'll be refactoring as I go, looking at plans, copying out of the plan properties into comments, and so on. Invariably, I'll think I have something on the clipboard, only to find I've replaced it with something else and have to go hunting for it again. So now I should be able to use Ctrl+Shift+V to get back to something that used to be on the clipboard. Brilliant. 🙂

Bear in mind that it's in the Edit menu, but not in the context menu (so you don't see it if you right-click in the query pane).

Zoom

This is handy, but doesn't go quite far enough for me. At the bottom of the query window, above the results and to the left of the horizontal scrollbar, it says "100%".

It’s a zoom function, so that I can make the text of my query bigger or smaller. This is great. I normally write queries in a surprisingly large font, and this means that when it’s just me looking at the screen, I can easily zoom out more and see what’s at the end of the longer lines.

I only wish that it would zoom the results too. Maybe that's so we can tell our clients "Trust me – those results are correct".

Plus, it feels ironically small. Fortunately, you can use ZoomIt to make the Zoom button bigger.

Highlighting

Compare the following images:

  

In the interest of readability, I like to colour selected text in yellow, not blue or grey or whatever. It was a tip I picked up from Andrew Coates many years ago.

See if you can spot the difference between the SSMS 2008 Options dialog and the Denali one.

 

The Denali one is on the right, and you might notice that “Item foreground” is disabled. In SSMS 2008, it wasn’t. And we ended up with all the selected text being black on yellow.

In Denali, they’ve changed that. The yellow is made fainter (not sure why, but it is – this is as yellow as I can make it), and the text stays the colour that it was before we selected it. Useful stuff.

But I’m disappointed with this feature.

Notice – this line is commented out, but I can run it by selecting the commented text and hitting one of the many Execute buttons. The selected text is coloured green, because that’s what colour it was before it was selected.

But I can run it. Wouldn't it be useful to have it coloured based on what's selected? That way, you'd have a better idea of which parts of a selection were still commented out, and what you were actually about to run. This is particularly handy when you use the Alt-select option. It had always frustrated me that selected text was all one colour, and I'm really glad they have fixed this – now I just want them to take it one step further.

Like this:

 

Am I asking too much?

@rob_farley

Denali’s Management Studio for CTP3 playing nice with others (almost)

Until recently, I’d only done stuff with Denali on a dedicated box. But this weekend, I took the plunge and installed SSMS on a machine that has older versions of SQL Server installed on it. You can download Denali from this link.

I’ve only installed the tools – I haven’t installed the Database Engine, Analysis Services, and so on. It’s more that I want to get using Denali’s tools on a more regular basis, and figure that for writing queries, demonstrating things, and so on, it should be useful to use the latest tool set. I’ve been an advocate of this approach for a long time, but had been nervous about installing a pre-“release candidate” version on a box that I wouldn’t be entirely thrilled about blowing away.

image

Anyway – so far I haven’t noticed any problems. I can happily use the new SSMS to connect to my other boxes, and it seems to play just fine.

For the most part, Denali's SSMS feels very similar to older versions, even though the splash screen shows that it's "Powered by Visual Studio". Let's face it, there were things in older SSMS versions that made it clear it was using a Visual Studio shell of sorts, such as the way that panels and menus were handled.

But in Denali, they're admitting it even more openly, particularly in regard to shortcuts.

The biggest frustration I have with Denali SSMS is that it prompts you about whether you'd rather use the traditional SSMS shortcuts or the default Visual Studio ones. Straight away, I picture going into client sites and feeling clumsy as I hit shortcuts that don't do the right thing.

This scares me – the idea that I might hit a shortcut and have it do something wrong.

I haven’t gone into the shortcuts completely yet – but I’m already dreading it. People’s data is very important to them, and therefore to me as well. I will not risk it. I’ve learned that lesson before, and I hope to never have to learn it again. I do not want to ever use a particular shortcut that I’m supposedly familiar with, and find it does something different to what I expect.

Luckily, settings are stored in a file, so you can take that file with you. Also, that file is local to your own Windows profile, so you should be able to tweak stuff without affecting other people too much.

On my machine, the settings file is in: C:\Users\Rob\Documents\SQL Server Management Studio\Settings\SQL Server Management Studio

image

This means you should be able to make sure that the shortcuts you use are the ones you're used to, not the ones that someone else prefers. I'd recommend everyone keep their .vssettings file somewhere accessible (USB key, SkyDrive, whatever).

I'm already tweaking a few things, because despite the fact that I chose the option to use settings from my previous version, it didn't bring them all across. Things like my default font size, which I generally have quite large because of the amount of presenting I do, and the amount of time I spend working alongside clients (where it's easier to have a large font so they can see what I'm doing). So now I'm going to be taking my .vssettings file along with me, for those times when I find things don't quite work the way I'd like.

@rob_farley

T-SQL thoughts about the 95th percentile

95% of statisticians may be offended by this post.

Unfortunately, I’m going to be showing my ignorance about statistics here, I’m sure. I may even use Wikipedia to back up my arguments, which will offend many more of you.

Someone (ok, it was Wes Brown (@sqlserverio)) was talking about percentiles the other day, and someone else (I’m not going to volunteer who), was talking about this being a great use of the NTILE() function. Well, I disagree.

Don’t get me wrong, I love the ranking functions, and have been a long-time advocate of the OVER clause – the first presentation I ever did as an MVP (way back in 2006) was about this stuff. I just think that NTILE() isn’t really the right tool for this particular problem.

The 95th percentile is regarded as that point in your data where 95% of values are below it, and 5% are above it. Or perhaps a range, for which 2.5% are below the range and 2.5% are above the range. I don’t really want to get caught up about what’s correct here – that’s for you to decide, as you know your data better than me, and the style of percentile you want to go with. Proper statisticians can choose to argue or not.

The easy way to do it using statistical methods is to consider the range around the average, plus (or minus) twice the standard deviation. This is generally accepted as producing a “95 percent confidence interval”.

Also, it's quite easy to do using T-SQL, as we have both AVG and STDEV functions. If I jump into the AdventureWorks database and look at the sales amounts for each salesperson, I can run a query like this:

image
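Something along these lines does the trick (a sketch only – I'm assuming AdventureWorks' Sales.SalesOrderHeader table here, with its TotalDue column, grouped by SalesPersonID):

-- Sketch: the range produced by the average plus or minus two standard deviations
SELECT SalesPersonID,
       AVG(TotalDue) - 2 * STDEV(TotalDue) AS RangeBottom,
       AVG(TotalDue) + 2 * STDEV(TotalDue) AS RangeTop
FROM Sales.SalesOrderHeader
WHERE SalesPersonID IS NOT NULL
GROUP BY SalesPersonID;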

Interestingly, all my ranges start in the negatives, which doesn't make good sense for a sales amount. Also, if I have a look to see how many outliers there are (i.e., rows that fall outside the range), I see that it's consistently more than the 2.5% that statisticians would have me believe should be higher than the top of the confidence interval (in fact, the lowest I get is 3.16%).

image
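Counting the outliers can be done with a query along these lines (again a sketch against the same assumed table):

-- Sketch: how many rows sit above the top of each salesperson's range,
-- and what proportion of that salesperson's rows they represent
WITH Ranges AS (
    SELECT SalesPersonID,
           AVG(TotalDue) + 2 * STDEV(TotalDue) AS RangeTop
    FROM Sales.SalesOrderHeader
    WHERE SalesPersonID IS NOT NULL
    GROUP BY SalesPersonID
)
SELECT o.SalesPersonID,
       COUNT(*) AS NumRows,
       SUM(CASE WHEN o.TotalDue > r.RangeTop THEN 1 ELSE 0 END) AS NumAboveTop,
       100.0 * SUM(CASE WHEN o.TotalDue > r.RangeTop THEN 1 ELSE 0 END)
             / COUNT(*) AS PctAboveTop
FROM Sales.SalesOrderHeader AS o
JOIN Ranges AS r ON r.SalesPersonID = o.SalesPersonID
GROUP BY o.SalesPersonID
ORDER BY PctAboveTop;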

Now, this method might be just fine for your needs. There’s nothing statistically that says that you need to have 2.5% of your data higher than the particular mark. These are just numbers used to generalise over a set of data.

But if you need a mark above which 2.5% of your actual data sits, then you could take a different approach.

In steps NTILE()…

Consider that we divide our data into exactly 40 tiles, ordered by the value of interest. You’d expect that the highest tile would be outside the top of the range, and the lowest tile would be outside the bottom of the range. Brilliant! We’ve solved it.

image
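The NTILE() version of that idea would be something like this (a sketch, using the same assumed table):

-- Sketch of the NTILE idea: split each salesperson's orders into 40 tiles,
-- and take the smallest value in tile 40 as the start of the top 2.5%
WITH Tiled AS (
    SELECT SalesPersonID, TotalDue,
           NTILE(40) OVER (PARTITION BY SalesPersonID ORDER BY TotalDue) AS TileNum
    FROM Sales.SalesOrderHeader
    WHERE SalesPersonID IS NOT NULL
)
SELECT SalesPersonID, MIN(TotalDue) AS TopTileStartsAt
FROM Tiled
WHERE TileNum = 40
GROUP BY SalesPersonID;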

…except that we'd be tripped up by the details, as is so often the case. It sounds really good, but doesn't work in practice.

Suppose we only have 39 values… Oh no! We have no tile 40. I guess we’ll put NULL in for our percentile. Or maybe we should grab the value from the maximum TileNum instead – but is that actually correct? Really?

What if we have 41 values? Then TileNum #1 will have two values in it, and all the rest will only have 1. That’s hardly fair. If we have 79, then the last tile only has one while all the rest have two. There’s definitely something not right about this method. It sounds fine on the surface, but I’m just not happy with it once we look a bit deeper.

The tile concept isn’t necessarily awful though – I’m just going to tweak it a little, and abandon the idea of using NTILE().

Imagine our tiles, where Tile 40 is our top 2.5% of values and Tile 1 is our bottom 2.5% of values. If we position our values evenly across them, we might get something like this (a bit of artistic license in play for the number of tiles and values, of course).

image

Now let’s apply a bit of maths to the situation.

Suppose we have NumVals values (the red lines in the image above). This would mean that we have (NumVals – 1) gaps between those values, and I'm spacing them evenly. (Otherwise, I'd just be looking at 97.5% of the difference between the min and the max.)

Tile 40 starts 97.5% of the way along this range of values (because 39/40 = .975). In terms of those gaps between the red lines, it's (NumVals – 1) * .975. And because my value positions are numbered starting at 1 (not zero), I'm going to add one to this: (NumVals – 1) * .975 + 1.

SalesPersonID 290 has 109 values. Using this for NumVals, we find our point of interest as (109 – 1) * .975 + 1 = 106.3. My 97.5% mark is therefore going to sit .3 of the way between value 106 and 107, like in the diagram below.

image

In T-SQL, the FLOOR and CEILING functions help me find the two values of interest.

image
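Something like this (a sketch of the approach, not necessarily the exact query shown in the image):

-- Sketch: number the rows for SalesPersonID 290, work out the percentile
-- position, and keep the two rows either side of it
WITH Ordered AS (
    SELECT TotalDue,
           ROW_NUMBER() OVER (ORDER BY TotalDue) AS RowNum,
           (COUNT(*) OVER () - 1) * 0.975 + 1 AS PercentileMark
    FROM Sales.SalesOrderHeader
    WHERE SalesPersonID = 290
)
SELECT RowNum, TotalDue, PercentileMark
FROM Ordered
WHERE RowNum IN (FLOOR(PercentileMark), CEILING(PercentileMark));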

Now that I have these two rows, I can easily use MIN and MAX to handle the TotalDue values and find my 97.5% mark.

image
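Again as a sketch, the interpolation looks something like this:

-- Sketch: interpolate between the two straddling rows; the fractional part
-- of PercentileMark (.3 in this example) is the weighting
WITH Ordered AS (
    SELECT TotalDue,
           ROW_NUMBER() OVER (ORDER BY TotalDue) AS RowNum,
           (COUNT(*) OVER () - 1) * 0.975 + 1 AS PercentileMark
    FROM Sales.SalesOrderHeader
    WHERE SalesPersonID = 290
)
SELECT MIN(TotalDue)
       + (MAX(PercentileMark) - FLOOR(MAX(PercentileMark)))
         * (MAX(TotalDue) - MIN(TotalDue)) AS Percentile97_5
FROM Ordered
WHERE RowNum IN (FLOOR(PercentileMark), CEILING(PercentileMark));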

It's okay to use MAX(PercentileMark) in that SELECT clause – PercentileMark is the same value for every row coming out of the CTE, so wrapping it in an aggregate doesn't change anything; it just keeps the query valid once MIN and MAX are involved.

You'll notice that the value of 75062 is indeed 30% of the way between 74430 and 76535. I can also be completely confident that exactly three values are above this mark, which makes sense when we consider the idea behind 97.5%.

To make this work across a selection of values, I just need to use GROUP BY and PARTITION BY, so that the aggregates and row_numbers are applied to each salesperson separately.

image
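As a sketch, the per-salesperson version looks like this:

-- Sketch: the same calculation, partitioned by SalesPersonID so that each
-- salesperson gets their own row numbers, count, and 97.5% mark
WITH Ordered AS (
    SELECT SalesPersonID, TotalDue,
           ROW_NUMBER() OVER (PARTITION BY SalesPersonID
                              ORDER BY TotalDue) AS RowNum,
           (COUNT(*) OVER (PARTITION BY SalesPersonID) - 1) * 0.975 + 1
               AS PercentileMark
    FROM Sales.SalesOrderHeader
    WHERE SalesPersonID IS NOT NULL
)
SELECT SalesPersonID,
       MIN(TotalDue)
       + (MAX(PercentileMark) - FLOOR(MAX(PercentileMark)))
         * (MAX(TotalDue) - MIN(TotalDue)) AS Percentile97_5
FROM Ordered
WHERE RowNum IN (FLOOR(PercentileMark), CEILING(PercentileMark))
GROUP BY SalesPersonID;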

You can see that the value for salesperson 290 is produced correctly, and that we’ve found the top of the range for a 95% confidence interval. By using .025 as the value in the calculation for PercentileMark, we can easily work out the bottom of the range. Alternatively, if you need to find the .95 mark, then that will apply just as easily.

But like I said – many statisticians will be upset at my working here, so make sure that the ideas I’ve shown here fit with the way that your clients understand the idea of the 95th percentile.

@rob_farley

World Series of PASS

I’m sure Americans understand the joke that is the “World Series” of baseball. Everyone else rolls their eyes and knows that this ‘world’ means North America.

Some people feel similarly about PASS, unfortunately. The North American Conference and the Global Summit are one and the same, and the 2013 Summit is to be held in Charlotte, North Carolina, to bring it "closer to database pros in the Eastern US who may not have been able to make the cross-country trip in the past." The 24 Hours of PASS events have been split into two sets of 12, because numbers show that fewer people attend the sessions when America sleeps. "Two Days of Free SQL Server Training" is actually two nights where I live, and for most other people outside Europe and the Americas. (I'm actually on the organising committee for 24HOP, and am hoping to see this change back again soon.)

And so it’s very exciting to see that there are people within PASS who are trying to Globalise the organisation more. And that’s GlobaliSe with an eSs, not a Zed. Microsoft’s Mark Souza and the current PASS President Rushabh Mehta are among those keen to make this happen, and I was recently invited to Stockholm for some meetings with representatives from around the world to discuss how to make PASS more global (which unfortunately, also means less US-centric).

It's an interesting challenge – trying to think how best to cater for people who are essentially a minority in an organisation. It's clear that the majority of PASS members are in the US. This is no surprise – it's also where the most SQL Server licenses are sold. But there is also a large portion of the SQL Server world which is not being targeted by PASS. The US market is comparatively low-hanging fruit, but there's something to be said for reaching further afield. The US community will still benefit from most initiatives that are designed to cater better for non-US folk, but the non-US folk are less likely to benefit from US-centric ones.

Part of the challenge will be around metrics. If PASS measures success by the number of members, then it makes sense to focus on the more-populated areas of America. On the other hand, if success is measured in other ways, perhaps even ones that are more qualitative than quantitative, then globalisation can make better sense.

The meetings in Stockholm were very interesting, and it was good to hear opinions from other parts of the world, not just Americans (or Australians). But one of the most interesting things happened towards the end of it.

Three of us were invited to be non-voting advisors on the PASS Board of Directors – Raoul Illyés (Denmark), James Rowland-Jones (UK) and me (Aus). These are appointed positions, and only for a year (based on what the board is allowed to do). See page 4 of http://www.sqlpass.org/Portals/0/PASS%20Bylaws%20–%20June%202009.pdf: "The Board may also choose to appoint any number of non-voting advisors to the Board by a majority vote for a period of up to one year each."

Yes, these are currently non-voting positions that we're taking on. But we do still have the option of going through the election process to earn a vote on board issues. I'm not sure we really need it though – this appointment will give the three of us (and in turn, our respective regions) a decent voice, even if the final decision is left to the existing board members. In time, who knows how we can influence PASS and how PASS can influence the SQL world.

Finally – this isn’t about PASS taking over the world. It’s about PASS being better placed to support the worldwide community. This will probably involve supporting non-PASS events (such as SQLBits) too, and possibly even reducing the focus on requiring communities to sign up as official PASS chapters. A lot of things might change, but the hope and focus is on making PASS a better vehicle for supporting the worldwide SQL Server community.

@rob_farley

How failure can be useful

First, an apology. I haven’t blogged for the last couple of months, except for T-SQL Tuesday events. I make myself blog for these, for two reasons. One is the simple fact that I like the concept – I like the idea of getting a blogging assignment, and the fact that it’s a monthly event. I’m not saying that I want to sign up for every blogging meme that goes around, but I’m convinced it’s a good thing to write (publicly) about something each month, particularly as a prompt to write about something that isn’t my own idea.

TSQLWednesday

The other reason is that I haven’t missed one yet, and I feel that missing a month would be akin to failure.

But that's not the kind of failure that this month's theme is about. Nor is it the kind of failure that you get when you try something (like one of the Microsoft exams) and don't quite get the mark you wanted (like a "pass"). I don't count this as failure, assuming you are allowed to sit the thing another time. I'm a firm believer that you should treat these kinds of exams as a learning experience in themselves, and see how you go without study. Assuming you fail, you get a piece of paper explaining the areas where you should study, and you probably know what those areas are yourself, as you've had questions presented to you that gave you that feeling we all know. The kind of feeling that tells you that you don't actually know what you're doing, and that you're going to fail. We do all know that feeling. We've all had it plenty of times before, and will have it again. We feel it when we have a real failure, when we have that realisation that we've made a mistake, and we will experience it many times more.

In an exam though, particularly one like a Microsoft exam where the impact of a 'fail' is remarkably small, the feeling is safe. Worst case, you can try the exam again in the future, at which point you'll probably pass easily enough. And if I'm your boss, I seriously don't mind at all.

Today, I’m a consultant who runs a successful company called LobsterPot Solutions. I hold myself to high standards, and failure is still something which causes stress. I tell myself that failure is something to learn from, but nevertheless, I find myself rehearsing prior events, practicing for next time I have to deal with that moment, as if I’ll one day have a second shot. I remember times I’ve accidentally hurt the people I love, and I beat myself up about it. I remember the times I’ve let down clients, and beat myself up about it. I remember the times I’ve let down God, and I really beat myself up about it.

Except that all these people understand that I’m human, and understand forgiveness. They also know that it’s never my intention to fail, and that from year to year, I fail less often (I hope – man, I really do hope).

Failure has to be part of growth. And we do get second shots. The better you are, the more second shots you get. If you fail to use the second shots, then that’s a much bigger problem. Whenever we fail, we need to put it behind us and move on. It’s not a sin to fail, but if you make a habit of failure, then that’s something you need to stop.

As for the T-SQL Tuesday theme – mentioning a time that I did something which I’d thought was going to be clever, and it really wasn’t…

I remember the time I accidentally deleted all the timesheets for the company I was working for. That was lousy. I felt all those feelings that I described earlier. I’d made a copy of a table and then deleted the data in the copy. The plan was to get the copy populated with a handful of items, and then use this as a basis for bulk entry – it seemed like a smart idea. Unfortunately it was a linked table, and the copy pointed at the same location as the original. Everything disappeared. The most recent backup available still meant that some timesheet data was lost, and everyone had to re-enter a lot of data. There were useful lessons to be learned that day, and my colleagues helped make sure I learned plenty.

I haven’t done that again.