Avoiding the 'WHERE clause wasn't selected' problem

…you know the one. In SSMS it looks like this:
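Something like this, where the highlighted selection stopped just short of the WHERE clause (the table and column names here are invented for the sake of the example):

```sql
UPDATE dbo.Customers
SET StatusCode = 'Inactive'        -- the selection ended here...
WHERE LastOrderDate < '20190101';  -- ...so this line never ran
```

Execute just the first two lines of that, and every customer is now inactive.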

You meant to only update a few rows, but you weren't paying attention to what you'd selected before hitting the Execute button. The WHERE clause got missed from what you executed, and you've updated every row. Oops.

Now, I totally hear you that you could've put this in a transaction, checked your results and then only done a COMMIT when you were sure. You could even have put that COMMIT inside an IF block that checked @@ROWCOUNT… yes, yes, yes. You could have used an SSMS add-in to let you execute the whole query that you're currently on. You could've used a different tool entirely, such as Azure Data Studio. There are plenty of ways to avoid this. But I'm going to show you something that I find works for me, especially when I'm giving the script to someone else to run.

The trick is to use a sub-query, and to run my UPDATE across that. I have to use an alias, sure, but I generally do that anyway.

It looks like this:
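Something along these lines, again with invented table and column names:

```sql
UPDATE u
SET u.StatusCode = u.NewStatusCode
-- SELECT *, --
FROM
(
    SELECT *, 'Inactive' AS NewStatusCode
    FROM dbo.Customers
    WHERE LastOrderDate < '20190101'
) AS u;
```

Running just the inner SELECT shows each row alongside the value it's about to get; running from the UPDATE needs the whole derived table, parentheses and all, or it won't parse.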

And the trick is that this code will not run unless the whole sub-query is selected. It's updating the alias of the sub-query, so unless you explicitly remove the WHERE clause (which I know you might do for some unknown reason) and forget to put it back, you can't select just part of this through careless mouse-and-keyboard-shortcut work.

The "SELECT *, --" section makes sure that whoever is going to run this can tell what's about to happen, including the new values that they're looking for, confirm the number of rows, and then highlight from the UPDATE on, knowing that it's impossible for them to miss highlighting some of it. Because it's just way too easy to highlight 'not quite everything'.

I get that this idea doesn't work quite so well when CTEs are involved, and that I might need to be careful about column names if the FROM clause has multiple tables (such as by only returning the columns from the table actually being updated and anything else I need for the sake of the update). But still, it helps.

It's a small thing. Sometimes life is made up of a bunch of small things that together make a difference.

@rob_farley

Helping the research into COVID-19

Interesting times at the moment. You may have read my piece a few weeks ago about how I think it's irresponsible to do data analysis on the coronavirus situation unless you have sufficient domain knowledge and enough empathy to understand the impact of what you find. But this doesn't mean that we make like we're at the Winchester and wait for it all to blow over. There are some things we can do.

One such thing is something that Glenn Berry (@GlennAlanBerry) put me onto, and that's Folding@Home.

One major area of research into diseases is to see how proteins 'fold' into various shapes, so that they can function as enzymes, antibodies, and so on. This is all chemical biology that is way outside my area of expertise, but I understand that when proteins somehow fold incorrectly, some diseases can occur. Horrible diseases like Alzheimer's or cystic fibrosis. Understanding protein folding can also lead to knowing more about how viruses use proteins against our immune systems… you can read more about all this at https://foldingathome.org/covid19/

The thing is that to understand protein folding better, tonnes of simulations need to be run. And this takes processing power that research labs simply don't have enough of.

And this is where you and I come in.

I remember when Jim Gray went missing. I was in the US, working on a project for Microsoft, when news came in that his boat had gone missing. Within a few days, satellite images became available, and people (including me) would look through them for anything that might have been worth investigating. I trawled through hundreds of images hoping that someone somewhere might find some sign of him before it was too late. We never did, and he was declared dead five years later. But seeing people all over the world come together to assist in a project like this was remarkable. And practical too. The SETI project, in which people listen to radio signals hoping to find extra-terrestrial intelligence, doesn't seem as practical to me as trying to find a person who has gone missing.

Or to help find a cure for diseases which are ravaging our population.

And so I downloaded the Folding@Home software from foldingathome.org, ran it, and joined a team. I even spun up an old machine I wasn't using, so that it could help too. Now, processing power that I'm not using is running these simulations, and it might bring researchers closer to finding cures for some of these things. And because my phone is yet more processing power I have available, Kelly Broekstra has also put me onto DreamLab (on iOS, with an Android version available too), which is a similar phone-based system. Now when I'm not using my iPhone but am near a power supply (like when I'm sleeping), I run this app and let it crunch numbers. The screen dims about ten seconds after I tell it to start, which reassures me that I'm not burning the screen out.

This is a small thing. There are plenty of bigger things that we could all do, but it's also very simple to do this small thing, and it could become a very big thing if a lot of us do it.

Thank you Glenn and Kelly for putting me onto this. I hope everyone who gets involved stays involved long after COVID-19 has disappeared from society, because it seems this can really help researchers into some of the most horrible diseases out there.

@rob_farley

Patching SQL in Docker

If you read my blog regularly, you may have noticed that I installed SQL in Docker on my new laptop a couple of months ago (incidentally, I think it must still be new, because I still make mistakes with various shortcuts around the special keys like the arrow keys, End, PgUp, etc.). Anyway, that was a while back now, and I recently wanted to apply a Cumulative Update.

Now, the thing with running SQL in containers is that the concept of downloading a patch file doesn't work in the same way. If it were regular Linux, the commands would be very simple, such as 'sudo yum update mssql-server' in RHEL. But Docker doesn't quite work the same way, as reflected by the Microsoft documentation which mentions Docker for installing but not in the Update section.

The trick to this is to remember that the Docker container is just a container running an image. So blowing it away and starting again is easy, as is spinning up different containers based on separate images. The database files are (hopefully) in separate volumes – accessed by the container, but not part of the image that's running.

So we just grab the latest image, stop and remove the old container, and run it up again using the new image. I run a short block of PowerShell to do that. It's worth noting that the pull won't actually pull anything if the 2019-latest image hasn't changed. I could be more specific and pull the image based on its tag (which is unique), but the code I'm using will just grab whatever is marked as 2019-latest. Specific tags are what I want when testing against a variety of versions, so that I can be really sure which one is which, but if I'm just patching my local machine, 2019-latest is good.
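A sketch of that PowerShell looks like this. The container name sql2019 and the /var/opt/mssql paths come from this setup; the volume names, port, and sa password are placeholders you'd swap for your own:

```powershell
# Grab whatever is currently tagged 2019-latest (a no-op if nothing has changed)
docker pull mcr.microsoft.com/mssql/server:2019-latest

# Stop and remove the old container; the database files live in mapped volumes,
# not in the image, so they survive this
docker stop sql2019
docker rm sql2019

# Spin up a new container from the freshly-pulled image, re-attaching the same volumes
docker run -d --name sql2019 `
    -e "ACCEPT_EULA=Y" `
    -e "SA_PASSWORD=yourStrong(!)Password" `
    -p 1433:1433 `
    -v sqldata:/var/opt/mssql/data `
    -v sqllog:/var/opt/mssql/log `
    mcr.microsoft.com/mssql/server:2019-latest
```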

A container on this image looks for the master database files in /var/opt/mssql/data and /var/opt/mssql/log, so by having those folders reference folders outside the container, nothing changes when I blow away the sql2019 container. When the new one starts, it sees the master database, which tells it where to find the user databases. As is the case with any upgrade, it'll apply any required changes when the SQL process spins up and loads the databases.

So this ends up being a very simple process. I can just have that block of code run whenever I want to make sure I've got the latest version. And if I forget, I'll come back here and grab the code again!

@rob_farley