Almost two years ago, I wrote about a method to use Table-Valued Parameters in SQL 2005 – or basically any environment that doesn’t support them natively.

The idea was to use a View with an ‘instead of’ trigger. Essentially, the trigger acts as a stored procedure, receiving all the inserted rows at once and letting you handle them however you want. That could mean distributing the rows into tables as if it were an actual insert, but it also works to run whatever code you like, because a trigger is essentially just a stored procedure.
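As a reminder of the shape of the pattern, here’s a minimal sketch. The table, view, and trigger names are hypothetical (the real code is in the earlier post) – the point is simply that the trigger sees the whole batch of rows in the inserted pseudo-table, just like a TVP.

```sql
-- Hypothetical names throughout; a minimal sketch of the pattern.
CREATE TABLE dbo.OrderLines (OrderID int, ProductID int, Qty int);
GO
-- The view defines the "parameter" shape that callers insert into.
CREATE VIEW dbo.vOrderLinesTVP
AS
SELECT OrderID, ProductID, Qty
FROM dbo.OrderLines;
GO
-- The trigger is the "stored procedure" body. It fires once per batch,
-- with every inserted row available in the inserted pseudo-table.
CREATE TRIGGER dbo.trg_vOrderLinesTVP
ON dbo.vOrderLinesTVP
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Handle the rows however you want - here, just push them
    -- into the real table, but any logic could go here.
    INSERT dbo.OrderLines (OrderID, ProductID, Qty)
    SELECT OrderID, ProductID, Qty
    FROM inserted;
END;
```

Anything that can insert into the view – including an SSIS destination – gets batch-style handling for free.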

So anyway – today I got challenged to make it more explicit that this also works within SQL Server Integration Services. SSIS handles batch inputs into views, but if you were hoping to push data into a stored procedure in a data flow, that’s going to be on a row-by-row basis – no TVPs. I’d described it to a class I was teaching, showed them the code from my 2011 blog post, and they pointed out that I hadn’t made it particularly obvious (in my blog post) that you could use that method in SSIS.

The idea is very simple – you just create a view, as I did in my earlier post, and then use it as a destination within SSIS. This works for all kinds of scenarios where you’d want a TVP, including pushing data up into SQL Azure databases.


Ultimate query tuning

Infinitely better.

100% of the reads removed. Roughly 4000 (okay, 3890), down to zero.


Let me explain…

Obviously if there’s data being returned, reads are needed. And obviously SQL needs some CPU just to be given a query and do anything at all. Luckily for me, the performance of a query is typically evaluated using the number of reads performed. That’s what I’m looking at.

Recently I came across a query that typically returns no rows. It was looking for rows in a table that met a particular condition, joining those rows to a bunch of other tables, and returning the result. But yeah, typically no rows returned. The plan estimated a few, but none actually came out.

Consider the following query, on AdventureWorks.


SELECT *
FROM Sales.SalesOrderDetail AS od
JOIN Production.Product AS p ON p.ProductID = od.ProductID
JOIN Production.ProductModel AS m ON m.ProductModelID = p.ProductModelID
LEFT JOIN Production.ProductSubcategory AS s ON s.ProductSubcategoryID = p.ProductSubcategoryID
WHERE od.OrderQty > 1200;

Running this query gives me no rows back, but well over a thousand reads.
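If you want to see the reads for yourself, STATISTICS IO will report them in the Messages tab (the exact numbers will vary with your copy of AdventureWorks):

```sql
-- Report logical reads per table for the queries that follow
SET STATISTICS IO ON;

SELECT *
FROM Sales.SalesOrderDetail AS od
JOIN Production.Product AS p ON p.ProductID = od.ProductID
JOIN Production.ProductModel AS m ON m.ProductModelID = p.ProductModelID
LEFT JOIN Production.ProductSubcategory AS s ON s.ProductSubcategoryID = p.ProductSubcategoryID
WHERE od.OrderQty > 1200;

-- Messages tab shows a line per table, something like:
-- Table 'SalesOrderDetail'. Scan count 1, logical reads <over a thousand>, ...
```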


As well as this, it suggested a missing index.


CREATE NONCLUSTERED INDEX [IX_SalesOrderDetail_OrderQty] -- name is mine; the suggestion leaves it to you
ON [Sales].[SalesOrderDetail] ([OrderQty])
INCLUDE ([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [LineTotal], [rowguid], [ModifiedDate]);

Creating this index does indeed reduce the reads. A lot too! Instead of scanning the Sales.SalesOrderDetail table, it can use the index and quickly find any rows that have more than 1200 items sold.


This is good. But is it worth stopping here?

The index that I created gives a slightly different plan. It doesn’t perform a Seek on an index on the Sales.SalesOrderDetail table, it does a Scan! But zero reads.


The difference is a filter.


CREATE NONCLUSTERED INDEX [IX_SalesOrderDetail_OrderQty_Filtered] -- name is mine
ON [Sales].[SalesOrderDetail] ([OrderQty])
INCLUDE ([SalesOrderID], [SalesOrderDetailID], [CarrierTrackingNumber], [ProductID], [SpecialOfferID], [UnitPrice], [UnitPriceDiscount], [LineTotal], [rowguid], [ModifiedDate])
WHERE OrderQty > 1200;

Now, we have a completely empty index. Scanning this index takes zero reads. It’s just like having a table with no rows in it. This index only gets populated when a qualifying row appears in the underlying table. When that happens, only a few reads are required to get the necessary data out of the other tables. But for 99.9% of the time this query is run, there are now NO READS. In the real scenario, that row gets picked up and removed quite quickly, returning the query to the ‘zero reads’ state.

Do you have frequently-run queries that typically return no rows, because there’s a predicate that is rarely satisfied? How about turning that predicate into an index filter, and seeing if you can reduce a big chunk of the footprint?


Looking back, or looking forward?

Sometimes I only blog for T-SQL Tuesday. I don’t want to break my run of blogging for each one just yet, but I do wonder sometimes.

The reason I’m reflecting is that what we’ve done in the past will so often affect what we do in the future. The musician Sting tells us that “History will teach us nothing”, but he’s trying to address the idea of being tied down by the negative experiences of the past, rather than making a conscious effort to see improvement. We need to acknowledge what has happened in the past, but understand that we can shape our futures and see a better world ahead. History can both help keep us diligent (as in the case of my blogging), and it can help us see where change is urgently needed.

In the SQL world, we also need to find a balance between treating the past with respect, and not letting it hold us back.

Mickey Stuewe is hosting this month’s T-SQL Tuesday, and the topic is Auditing (as you may have guessed).

By keeping a record of what has happened, we can compare what things look like now, with what they looked like before, and that is incredibly powerful. (Erin Stellato is the name in SQL that most people associate with baselines – go and read some of her stuff on what you should be doing, because that’s not what I’m covering here.) It’s not just about being able to troubleshoot, but it’s about being able to implement change. Every change that we ever try involves risk, and there is a healthy amount of trepidation that goes along with that. But by being able to compare the before and after of change, you can make decisions more easily.

Let me give you a couple of examples – one in the SQL world, and one not.

The internet is full of tips and tricks about life (amongst countless other things, such as laughing penguins), with plenty being about little ways to be more effective. A while back I needed to replace an AA battery in the kitchen clock, but could only find AAA batteries. So I used a piece of Al-foil and got the clock working again. When I did this, I was curious about the impact on the battery, and the clock, and even on the foil. I wondered whether the battery would last as long, whether the clock would be affected by having less current going through it, and even whether it was safe (did the foil become hot, for example?). The problem was, I had no metrics to base my ideas on. I honestly had no idea how long it had been since the clock last got a new battery. Nor had I been tracking the health of the clock over, um, time, to see if it was staying accurate or not. I wasn’t about to start monitoring these things either – I just had to go with my gut and figure it was probably not awful for either the battery or the clock, but that I should put a normal AA battery in soon. The risk was small, but if I’d had data about these things, perhaps I’d have been able to make a more informed decision.

In SQL, I often tune queries. I can happily tell a client that I’ve reduced the time taken by a query from 20 seconds to 0.2 seconds, or from 400,000 reads to 13 reads. I can easily take metrics on these things. But even these metrics aren’t always the right ones. What the customer is more interested in is whether or not these changes impact their business. Does the speed increase of these queries now mean that they can handle a larger customer base in busy times? Does this mean that they might be able to avoid spending the money on a new server? Does it both save them money and increase revenue?

Business Intelligence draws information out of all kinds of business data, and hopefully provides a platform for being able to make decisions. No matter whether the data is on the performance metrics of a server or on the sales metrics for a product, there is an opportunity to be able to implement change and notice an upturn. If you don’t have that data, if you haven’t been auditing your systems, then you’re approaching change with a very different hat on, one that probably doesn’t sit quite so comfortably. Looking back at what the past was like provides a glimpse of what the future might be, and insight into how change can become rewarding. History can teach us plenty.

But yes, although we all recognise that it’s good to have the metrics in place to measure the impact of change, we shouldn’t allow a lack of data to turn into both an excuse and a license for inactivity. Step out and see what change you can make, with both eyes open.