Merry Christmas

I just wanted to take a moment to wish a Merry Christmas to you: people in the SQL Community; people I see at clients and the like; and especially those people I am incredibly privileged to have working at possibly the best SQL Consultancy in the world.

To those I have representing my brand: I love you guys! You’re all passionate about providing the best experience for our customers, developing the SQL community, and doing amazing things to help people improve their data story. I couldn’t be prouder of you all. Sure, there are times when I lose sleep (and hair) over stuff, but I know that we have each other’s backs, and that’s a brilliant thing. I’ve often likened us to that story about the tiger in a cage. The best way to defend such a tiger is to let it out of its cage. If I can help enable you, and remove any obstacles that come between you and your ability to be phenomenal, then that’s what I’ll try to do. We all have our different styles, but together I think we can be an incredible force. It’s been a crazy year in many ways, including starting the LobsterPot story in the US (and Ted – you’ve been incredible!), but we have even more exciting times ahead, I’m sure. The Microsoft data stack is developing quicker than ever, and people are using it in bigger and better ways all the time.

Merry Christmas guys. Let’s continue to spread the SQL cheer… 🙂


Retrieving N rows per group

Sometimes a forum response should just be a blog post… so here’s something I wrote over at

The question was somewhat staged I think, being from Paul White (@sql_kiwi), who definitely knows this stuff already.

His question:

I often need to select a number of rows from each group in a result set.

For example, I might want to list the 'n' highest or lowest recent order values per customer.

In more complex cases, the number of rows to list might vary per group (defined by an attribute of the grouping/parent record). This part is definitely optional/for extra credit and not intended to dissuade people from answering.

What are the main options for solving these types of problems in SQL Server 2005 and later? What are the main advantages and disadvantages of each method?

AdventureWorks examples (for clarity, optional)

  1. List the five most recent transaction dates and IDs from the TransactionHistory table, for each product that starts with a letter from M to R inclusive.
  2. Same again, but with n history lines per product, where n is five times the product's DaysToManufacture attribute.
  3. Same, for the special case where exactly one history line per product is required (the single most recent entry by TransactionDate, tie-break on TransactionID).

And my answer:

Let's start with the basic scenario.

If I want to get some number of rows out of a table, I have two main options: ranking functions; or TOP.

First, let's consider the whole set from Production.TransactionHistory for a particular ProductID:

<code>SELECT h.TransactionID, h.ProductID, h.TransactionDate
FROM Production.TransactionHistory h
WHERE h.ProductID = 800;</code>

This returns 418 rows, and the plan shows that it checks every row in the table looking for this – an unrestricted Clustered Index Scan, with a Predicate to provide the filter. 797 reads here, which is ugly.

Expensive Scan with 'Residual' Predicate

So let's be fair to it, and create an index that would be more useful. Our conditions call for an equality match on ProductID, followed by a search for the most recent by TransactionDate. We need the TransactionID returned too, so let's go with: <code>CREATE INDEX ix_FindingMostRecent ON Production.TransactionHistory (ProductID, TransactionDate) INCLUDE (TransactionID);</code>

Having done this, our plan changes significantly, and drops the reads down to just 3. So we're already improving things by over 250x or so…

Improved plan

Now that we've levelled the playing field, let's look at the top options – ranking functions and TOP.

<code>WITH Numbered AS (
    SELECT h.TransactionID, h.ProductID, h.TransactionDate,
        ROW_NUMBER() OVER (ORDER BY TransactionDate DESC) AS RowNum
    FROM Production.TransactionHistory h
    WHERE h.ProductID = 800
)
SELECT TransactionID, ProductID, TransactionDate
FROM Numbered
WHERE RowNum &lt;= 5;

SELECT TOP (5) h.TransactionID, h.ProductID, h.TransactionDate
FROM Production.TransactionHistory h
WHERE h.ProductID = 800
ORDER BY TransactionDate DESC;</code>

Two plans – basic TOP/RowNum

You will notice that the second (TOP) query is much simpler than the first, both in query and in plan. But very significantly, they both use TOP to limit the number of rows actually being pulled out of the index. The costs are only estimates and worth ignoring, but you can see a lot of similarity in the two plans, with the ROW_NUMBER() version doing a tiny amount of extra work to assign numbers and filter accordingly, and both queries end up doing just 2 reads to do their work. The Query Optimizer certainly recognises the idea of filtering on a ROW_NUMBER() field, realising that it can use a Top operator to ignore rows that aren't going to be needed. Both these queries are good enough – TOP isn't so much better that it's worth changing code, but it is simpler and probably clearer for beginners.

So this works for a single product. But we need to consider what happens if we need to do this across multiple products.

The iterative programmer is going to consider the idea of looping through the products of interest, and calling this query multiple times, and we can actually get away with writing a query in this form – not using cursors, but using APPLY. I'm using OUTER APPLY, figuring that we might want to return the Product with NULL, if there are no Transactions for it.

<code>SELECT p.Name, p.ProductID, t.TransactionID, t.TransactionDate
FROM Production.Product p
OUTER APPLY (
    SELECT TOP (5) h.TransactionID, h.ProductID, h.TransactionDate
    FROM Production.TransactionHistory h
    WHERE h.ProductID = p.ProductID
    ORDER BY TransactionDate DESC
) t
WHERE p.Name &gt;= 'M' AND p.Name &lt; 'S';</code>

The plan for this is the iterative programmers' method – Nested Loop, doing a Top operation and Seek (those 2 reads we had before) for each Product. This gives 4 reads against Product, and 360 against TransactionHistory.

APPLY plan

Using ROW_NUMBER(), the method is to use PARTITION BY in the OVER clause, so that we restart the numbering for each Product. This can then be filtered like before. The plan ends up being quite different. The logical reads are about 15% lower on TransactionHistory, with a full Index Scan going on to get the rows out.
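In query form, that looks something like this (the shape described – partitioning by h.ProductID, with a LEFT JOIN from Product so that products with no transactions can still appear; the formatting here is mine):

```sql
WITH Numbered AS (
    SELECT p.Name, p.ProductID, h.TransactionID, h.TransactionDate,
        ROW_NUMBER() OVER (PARTITION BY h.ProductID ORDER BY h.TransactionDate DESC) AS RowNum
    FROM Production.Product p
    LEFT JOIN Production.TransactionHistory h ON h.ProductID = p.ProductID
    WHERE p.Name >= 'M' AND p.Name < 'S'
)
SELECT Name, ProductID, TransactionID, TransactionDate
FROM Numbered
WHERE RowNum <= 5;
```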


Significantly, though, this plan has an expensive Sort operator. The Merge Join doesn't seem to maintain the order of rows in TransactionHistory, so the data must be re-sorted to find the row numbers. It's fewer reads, but this blocking Sort could feel painful. Using APPLY, the Nested Loop will return the first rows very quickly, after just a few reads, but with a Sort, ROW_NUMBER() will only return rows after most of the work has been finished.

Interestingly, if the ROW_NUMBER() query uses INNER JOIN instead of LEFT JOIN, then a different plan comes up.


This plan uses a Nested Loop, just like with APPLY. But there's no Top operator, so it pulls all the transactions for each product, and uses a lot more reads than before – 492 reads against TransactionHistory. There isn't a good reason for it not to choose the Merge Join option here, so I guess the plan was considered 'Good Enough'. Still – it doesn't block, which is nice – just not as nice as APPLY.

The PARTITION BY column that I used for ROW_NUMBER() was h.ProductID in both cases, because I had wanted to give the QO the option of producing the RowNum value before joining to the Product table. If I use p.ProductID, we see the same shape plan as with the INNER JOIN variation.

<code>WITH Numbered AS (
    SELECT p.Name, p.ProductID, h.TransactionID, h.TransactionDate,
        ROW_NUMBER() OVER (PARTITION BY p.ProductID ORDER BY h.TransactionDate DESC) AS RowNum
    FROM Production.Product p
    LEFT JOIN Production.TransactionHistory h ON h.ProductID = p.ProductID
    WHERE p.Name &gt;= 'M' AND p.Name &lt; 'S'
)
SELECT Name, ProductID, TransactionID, TransactionDate
FROM Numbered n
WHERE RowNum &lt;= 5;</code>

But the Join operator says 'Left Outer Join' instead of 'Inner Join'. The number of reads is still just under 500 against the TransactionHistory table.

PARTITION BY on p.ProductID instead of h.ProductID

Anyway – back to the question at hand…

We've answered question 1, with two options that you could pick and choose from. Personally, I like the APPLY option.

To extend this to use a variable number (question 2), the 5 just needs to be changed accordingly. Oh, and I added another index, so that there was an index on Production.Product.Name that included the DaysToManufacture column.

<code>WITH Numbered AS (
    SELECT p.Name, p.ProductID, p.DaysToManufacture, h.TransactionID, h.TransactionDate,
        ROW_NUMBER() OVER (PARTITION BY h.ProductID ORDER BY h.TransactionDate DESC) AS RowNum
    FROM Production.Product p
    LEFT JOIN Production.TransactionHistory h ON h.ProductID = p.ProductID
    WHERE p.Name &gt;= 'M' AND p.Name &lt; 'S'
)
SELECT Name, ProductID, TransactionID, TransactionDate
FROM Numbered n
WHERE RowNum &lt;= 5 * DaysToManufacture;

SELECT p.Name, p.ProductID, t.TransactionID, t.TransactionDate
FROM Production.Product p
OUTER APPLY (
    SELECT TOP (5 * p.DaysToManufacture) h.TransactionID, h.ProductID, h.TransactionDate
    FROM Production.TransactionHistory h
    WHERE h.ProductID = p.ProductID
    ORDER BY TransactionDate DESC
) t
WHERE p.Name &gt;= 'M' AND p.Name &lt; 'S';</code>

And both plans are almost identical to what they were before!

Variable rows

Again, ignore the estimated costs – but I still like the TOP scenario, as it is so much simpler, and the plan has no blocking operator. The reads are lower on TransactionHistory because of the high number of zeroes in DaysToManufacture, but in real life, I doubt we'd be picking that column. 😉

One way to avoid the block is to come up with a plan that handles the ROW_NUMBER() bit to the right (in the plan) of the join. We can persuade this to happen by doing the join outside the CTE. (Edited because of a silly typo that meant that I turned my Outer Join into an Inner Join.)

<code>WITH Numbered AS (
    SELECT h.TransactionID, h.ProductID, h.TransactionDate,
        ROW_NUMBER() OVER (PARTITION BY ProductID ORDER BY TransactionDate DESC) AS RowNum
    FROM Production.TransactionHistory h
)
SELECT p.Name, p.ProductID, t.TransactionID, t.TransactionDate
FROM Production.Product p
LEFT JOIN Numbered t ON t.ProductID = p.ProductID
    AND t.RowNum &lt;= 5 * p.DaysToManufacture
WHERE p.Name &gt;= 'M' AND p.Name &lt; 'S';</code>


The plan here looks simpler – it's not blocking, but there's a hidden danger.

Notice the Compute Scalar that's pulling data from the Product table. This is working out the 5 * p.DaysToManufacture value. This value isn't being passed into the branch that's pulling data from the TransactionHistory table, it's being used in the Merge Join. As a Residual.


So the Merge Join is consuming ALL the rows, not just the first however-many-are-needed, but all of them and then doing a residual check. This is dangerous as the number of transactions increases. I'm not a fan of this scenario – residual predicates in Merge Joins can quickly escalate. Another reason why I prefer the APPLY/TOP scenario.

In the special case where it's exactly one row, for question 3, we can obviously use the same queries, but with 1 instead of 5. But then we have an extra option, which is to use regular aggregates.

<code>SELECT ProductID, MAX(TransactionDate) FROM Production.TransactionHistory GROUP BY ProductID;</code>

A query like this would be a useful start, and we could easily modify it to pull out the TransactionID as well for tie-break purposes (using a concatenation which would then be broken down), but we either look at the whole index, or we dive in product by product, and we don't really get a big improvement on what we had before in this scenario.
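To sketch the concatenation idea (an illustrative pattern, not a query from the original answer): pack the tie-breaking TransactionID onto the end of a fixed-width string version of the date, take the MAX, and then split it apart again.

```sql
-- Illustrative sketch only: style 121 gives a fixed-width, lexicographically
-- sortable date string; the ID is zero-padded so it tie-breaks correctly.
WITH Packed AS (
    SELECT h.ProductID,
        MAX(CONVERT(char(23), h.TransactionDate, 121)
            + RIGHT(REPLICATE('0', 10) + CONVERT(varchar(10), h.TransactionID), 10)) AS MaxPacked
    FROM Production.TransactionHistory h
    GROUP BY h.ProductID
)
SELECT ProductID,
    CONVERT(datetime, LEFT(MaxPacked, 23), 121) AS TransactionDate,
    CONVERT(int, RIGHT(MaxPacked, 10)) AS TransactionID
FROM Packed;
```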

But I should point out that we're looking at a particular scenario here. With real data, and with an indexing strategy that may not be ideal, mileage may vary considerably. Despite the fact that we've seen that APPLY is strong here, it can be slower in some situations. It rarely blocks though, as it has a tendency to use Nested Loops, which many people (myself included) find very appealing.

I haven't tried to explore parallelism here, or dived very deep into question 3, which I see as a special case that people rarely want, given the complication of concatenating and splitting. The main thing to consider here is that these two options are both very strong.

I prefer APPLY. It's clear, it uses the Top operator well, and it rarely causes blocking.


Will 2015 be a big year for the SQL community?

In Australia, almost certainly yes.

Australia recently saw two Azure data centres open, meaning that customers can now consider hosting data in Azure without worrying about it going overseas. Whether you’re considering SQL Database or having an Azure VM with SQL on it, the story has vastly improved here in Australia, and conversations will go further.

The impact of this will definitely reach the community…

…a community which is moving from strength to strength in itself.

I say that because in 2014 we have seen new PASS Chapters pop up in Melbourne and Sydney (user groups that have existed for some time but have now been aligned with PASS); many of the prominent Australian partner organisations have MVPs on staff now, which was mentioned a few times at the Australian Partner Conference in September; and SQL Saturdays have come a long way since the first ones were run around the country in 2012. February will see SQL Saturday 365 in Melbourne host around 30 sessions, and build on its 2013 effort of becoming one of the ten largest SQL Saturday events in the world. Microsoft Australia seems more receptive than ever to the SQL Server community, and I’m seeing individuals pushing into the community as well.

From a personal perspective, I think 2015 will be an interesting year. As well as being a chapter leader and regional mentor, I know that I need to develop some new talks, after getting rejected to speak at the PASS Summit, but I also want to take the time to develop other speakers, as I have done in recent years.

I also want to write more – both blogs and white papers. I’ve blogged every month for at least five years, but many months that’s just the T-SQL Tuesday post. (Oh yeah – this post is for one of those, hosted by Wayne Sheffield (@DBAWayne) on the topic of ‘Giving Back’.) So I want to be able to write a lot more than 12 posts in the year, and take the opportunity to get deeper in the content. I know I have a lot to talk about, whether it be in the BI space, or about query plans, or PDW, or security – there really are a lot of topics I could cover – I just need to reserve the time to get my content out there.

So challenge me. If you want help with an abstract, or a talk outline (which I know is very different to an abstract), or you want me to blog on a particular topic, then let me know and I’ll see what I can do. I want to give even more to the community, and if you’re in the community, that should include you!


Minimising Data Movement in PDW Using Query Optimisation Techniques

This is a white paper that I put together recently about APS / PDW Query Optimisation. You may have seen it elsewhere as well, but in case you haven’t, read on!

I think the significance of this paper is big, because most people who deal with data warehouses (and PDW even more so) haven’t spent much time thinking about Query Optimisation techniques, and certainly not about how they can leverage features of SQL Server’s Query Optimizer to minimise data movement (which is probably the largest culprit for poor performance in a PDW environment).

Oh, and I have another one that I’m writing too…

The Analytics Platform System, with its MPP SQL Server engine (SQL Server Parallel Data Warehouse) can deliver performance and scalability for analytics workloads that you may not have expected from SQL Server. But there are key differences in working with SQL Server PDW and SQL Server Enterprise Edition that one should be aware of in order to take full advantage of the SQL Server PDW capabilities. One of the most important considerations when tuning queries in Microsoft SQL Server Parallel Data Warehouse is the minimisation of data movement. This post shows a useful technique regarding the identification of redundant joins through additional predicates that simulate check constraints.

Microsoft’s PDW, part of the Analytics Platform System (APS), offers scale-out technology for data warehouses. This involves spreading data across a number of SQL Server nodes and distributions, such that systems can host up to many petabytes of data. To achieve this, queries which use data from multiple distributions to satisfy joins must leverage the Data Movement Service (DMS) to relocate data during the execution of the query. This data movement is both a blessing and a curse; a blessing because it is the fundamental technology which allows the scale-out features to work, and a curse because it can be one of the most expensive parts of query execution. Furthermore, tuning to avoid data movement is something with which many SQL Server query tuning experts have little experience, as it is unique to the Parallel Data Warehouse edition of SQL Server.

Regardless of whether data in PDW is stored in a column-store or row-store manner, or whether it is partitioned or not, there is a decision to be made as to whether a table is to be replicated or distributed. Replicated tables store a full copy of their data on each compute node of the system, while distributed tables distribute their data across distributions, of which there are eight on each compute node. In a system with six compute nodes, there would be forty-eight distributions, with an average of less than 2.1% (100% / 48) of the data in each distribution.

When deciding whether to distribute or replicate data, there are a number of considerations to bear in mind. Replicated data uses more storage and also has a larger management overhead, but can be more easily joined to data, as every SQL node has local access to replicated data. By distributing larger tables according to the hash of one of the table columns (known as the distribution key), the overhead of both reading and writing data is reduced – effectively reducing the size of databases by an order of magnitude.
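In DDL terms, the choice is made per table with the DISTRIBUTION option. A sketch (the column lists here are hypothetical, but the DISTRIBUTION syntax is the PDW mechanism being described):

```sql
-- A small dimension table, copied in full to every compute node
CREATE TABLE dbo.DimCustomer
(   CustomerKey int NOT NULL
,   CustomerName varchar(100)
)
WITH (DISTRIBUTION = REPLICATE);

-- A large fact table, hash-distributed on its distribution key
CREATE TABLE dbo.FactSales
(   SalesKey bigint NOT NULL
,   CustomerKey int NOT NULL
,   SalesAmount money
)
WITH (DISTRIBUTION = HASH(CustomerKey));
```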

Having decided to distribute data, choosing which column to use as the distribution key is driven by factors including the minimisation of data movement and the reduction of skew. Skew is important because if a distribution has much more than the average amount of data, this can affect query time. However, the minimisation of data movement is probably the most significant factor in distribution-key choice.

Joining two tables together involves identifying whether rows from each table match according to a number of predicates, but to do this, the two rows must be available on the same compute node. If one of the tables is replicated, this requirement is already satisfied (although it might need to be ‘trimmed’ to enable a left join), but if both tables are distributed, then the data is only known to be on the same node if one of the join predicates is an equality predicate between the distribution keys of the tables, and the data types of those keys are exactly identical (including nullability and length). More can be read about this in the excellent whitepaper about Query Execution in Parallel Data Warehouse at

To avoid data movement in commonly-performed joins, creativity is often needed by the data warehouse designers. This could involve adding extra columns to tables – such as adding the CustomerKey to many fact tables (and using it as the distribution key), since orders, items, payments, and other information required for a given report are all ultimately about a customer – and then adding additional predicates to each join to alert the PDW Engine that only rows within the same distribution could possibly match. This thinking is alien to most data warehouse designers, who would typically feel that adding CustomerKey to a table not directly related to a Customer dimension is against best-practice advice.

Another technique commonly used by PDW data warehouse designers, but rarely seen in other SQL Server data warehouses, is splitting tables into two, either vertically or horizontally; both are relatively common in PDW to avoid some of the problems that can otherwise occur.

Splitting a table vertically is frequently done to reduce the impact of skew when the ideal distribution key for joins is not evenly distributed. Imagine the scenario of identifiable and unidentifiable customers – increasingly the situation as stores run loyalty programs that allow them to identify a large portion (but not all) of their customers. For the analysis of shopping trends, it could be very useful to have data distributed by customer, but if half the customers are unknown, there will be a large amount of skew.

To solve this, sales could be split into two tables, such as Sales_KnownCustomer (distributed by CustomerKey) and Sales_UnknownCustomer (distributed by some other column). When analysing by customer, the table Sales_KnownCustomer could be used, including the CustomerKey as an additional (even if redundant) join predicate. A view performing a UNION ALL over the two tables could be used to allow reports that need to consider all Sales.
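Such a view might be sketched as simply as this (illustrative only – a production view would list its columns explicitly rather than using *):

```sql
CREATE VIEW dbo.Sales AS
SELECT * FROM dbo.Sales_KnownCustomer
UNION ALL
SELECT * FROM dbo.Sales_UnknownCustomer;
```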

The query overhead of having the two tables is potentially high, especially if we consider tables for Sales, SaleItems, Deliveries, and more, which might all need to be split into two to avoid skew while minimising data movement, using CustomerKey as the distribution key when known to allow customer-based analysis, and SalesKey when the customer is unknown.

By distributing on a common key the impact is to effectively create mini-databases which are split out according to groups of customers, with all of the data about a particular customer residing in a single database. This is similar to the way that people scale out when doing so manually, rather than using a system such as PDW. Of course, there is a lot of additional overhead when trying to scale out manually, such as working out how to execute queries that do involve some amount of data movement.

By splitting up the tables into ones for known and unknown customers, queries that were looking something like the following:

SELECT …
FROM Sales AS s 
JOIN SaleItems AS si ON si.SalesKey = s.SalesKey
JOIN Delivery_SaleItems AS dsi ON dsi.LineItemKey = si.LineItemKey 
JOIN Deliveries AS d ON d.DeliveryKey = dsi.DeliveryKey

…would become something like:

SELECT …
FROM Sales_KnownCustomer AS s 
JOIN SaleItems_KnownCustomer AS si ON si.SalesKey = s.SalesKey AND si.CustomerKey = s.CustomerKey 
JOIN Delivery_SaleItems_KnownCustomer AS dsi ON dsi.LineItemKey = si.LineItemKey AND dsi.CustomerKey = s.CustomerKey 
JOIN Deliveries_KnownCustomer AS d ON d.DeliveryKey = dsi.DeliveryKey AND d.CustomerKey = s.CustomerKey 
UNION ALL 
SELECT … 
FROM Sales_UnknownCustomer AS s 
JOIN SaleItems_UnknownCustomer AS si ON si.SalesKey = s.SalesKey 
JOIN Delivery_SaleItems_UnknownCustomer AS dsi ON dsi.LineItemKey = si.LineItemKey AND dsi.SalesKey = s.SalesKey 
JOIN Deliveries_UnknownCustomer AS d ON d.DeliveryKey = dsi.DeliveryKey AND d.SalesKey = s.SalesKey

I’m sure you can appreciate that this becomes a much larger effort for query writers, and the existence of views to simplify querying back to the earlier shape could be useful. If both CustomerKey and SalesKey were being used as distribution keys, then joins between the views would require both, but this can be incorporated into logical layers such as Data Source Views much more easily than using UNION ALL across the results of many joins. A DSV or Data Model could easily define relationships between tables using multiple columns so that self-serving reporting environments leverage the additional predicates.

The use of views should be considered very carefully, as it is easily possible to end up with views that nest views that nest views that nest views, and an environment that is very hard to troubleshoot and performs poorly. With sufficient care and expertise, however, there are some advantages to be had.

The resultant query would look something like:

SELECT … 
FROM Sales AS s 
JOIN SaleItems AS si ON si.SalesKey = s.SalesKey AND si.CustomerKey = s.CustomerKey 
JOIN Delivery_SaleItems AS dsi ON dsi.LineItemKey = si.LineItemKey AND dsi.CustomerKey = s.CustomerKey AND dsi.SalesKey = s.SalesKey 
JOIN Deliveries AS d ON d.DeliveryKey = dsi.DeliveryKey AND d.CustomerKey = s.CustomerKey AND d.SalesKey = s.SalesKey

Joining multiple sets of tables which have been combined using UNION ALL is not the same as performing a UNION ALL of sets of tables which have been joined. Much like any high school mathematics teacher will happily explain that (a*b)+(c*d) is not the same as (a+c)*(b+d), additional combinations need to be considered when the logical order of joins and UNION ALLs is rearranged.


Notice that when we have (TableA1 UNION ALL TableA2) JOIN (TableB1 UNION ALL TableB2), we must perform joins not only between TableA1 and TableB1, and TableA2 and TableB2, but also TableA1 and TableB2, and TableB1 and TableA2. These last two combinations do not involve tables with common distribution keys, and therefore we would see data movement. This is despite the fact that we know that there can be no matching rows in those combinations, because some are for KnownCustomers and the others are for UnknownCustomers. Effectively, the relationships between the tables would be more like the following diagram:


There is an important stage of Query Optimization which must be considered here, and which can be leveraged to remove the need for data movement when this pattern is applied – that of Contradiction.

The contradiction algorithm is an incredibly useful but underappreciated stage of Query Optimization. Typically it is explained using an obvious contradiction such as WHERE 1=2. Notice the effect on the query plans of using this predicate.
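A minimal pair, for illustration (the original showed this with query-plan screenshots; these stand-in queries use a table from the earlier examples):

```sql
-- This query scans or seeks the index to return rows
SELECT p.ProductID, p.Name
FROM Production.Product p;

-- The contradiction means the optimizer never needs to touch the table at all
SELECT p.ProductID, p.Name
FROM Production.Product p
WHERE 1 = 2;
```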


Because the Query Optimizer recognises that no rows can possibly satisfy the predicate WHERE 1=2, it does not access the data structures seen in the first query plan.

This is useful, but many readers may not consider that queries using such an obvious contradiction are going to appear in their code.

But suppose the views that perform a UNION ALL are expressed in this form:

SELECT * FROM dbo.Sales_KnownCustomer WHERE CustomerID > 0 
UNION ALL 
SELECT * FROM dbo.Sales_UnknownCustomer WHERE CustomerID = 0;

Now, we see a different kind of behaviour.

Before the predicates are used, the query on the views is rewritten as follows (with SELECT clauses replaced by ellipses).

SELECT … FROM (SELECT … FROM (SELECT ... FROM [sample_vsplit].[dbo].[Sales_KnownCustomer] AS T4_1 UNION ALL SELECT … FROM [tempdb].[dbo].[TEMP_ID_4208] AS T4_1) AS T2_1 INNER JOIN (SELECT … FROM (SELECT … FROM [sample_vsplit].[dbo].[SaleItems_KnownCustomer] AS T5_1 UNION ALL SELECT … FROM [tempdb].[dbo].[TEMP_ID_4209] AS T5_1) AS T3_1 INNER JOIN (SELECT … FROM (SELECT … FROM [sample_vsplit].[dbo].[Delivery_SaleItems_KnownCustomer] AS T6_1 UNION ALL SELECT … FROM [tempdb].[dbo].[TEMP_ID_4210] AS T6_1) AS T4_1 INNER JOIN (SELECT … FROM [sample_vsplit].[dbo].[Deliveries_KnownCustomer] AS T6_1 UNION ALL SELECT … FROM [tempdb].[dbo].[TEMP_ID_4211] AS T6_1) AS T4_2 ON (([T4_2].[CustomerKey] = [T4_1].[CustomerKey]) AND ([T4_2].[SalesKey] = [T4_1].[SalesKey]) AND ([T4_2].[DeliveryKey] = [T4_1].[DeliveryKey]))) AS T3_2 ON (([T3_1].[CustomerKey] = [T3_2].[CustomerKey]) AND ([T3_1].[SalesKey] = [T3_2].[SalesKey]) AND ([T3_2].[SaleItemKey] = [T3_1].[SaleItemKey]))) AS T2_2 ON (([T2_2].[CustomerKey] = [T2_1].[CustomerKey]) AND ([T2_2].[SalesKey] = [T2_1].[SalesKey]))) AS T1_1

Whereas with the inclusion of the additional predicates, the query simplifies to:

SELECT … FROM (SELECT … FROM (SELECT … FROM [sample_vsplit].[dbo].[Sales_KnownCustomer] AS T4_1 WHERE ([T4_1].[CustomerKey] > 0)) AS T3_1 INNER JOIN (SELECT … FROM (SELECT … FROM [sample_vsplit].[dbo].[SaleItems_KnownCustomer] AS T5_1 WHERE ([T5_1].[CustomerKey] > 0)) AS T4_1 INNER JOIN (SELECT … FROM (SELECT … FROM [sample_vsplit].[dbo].[Delivery_SaleItems_KnownCustomer] AS T6_1 WHERE ([T6_1].[CustomerKey] > 0)) AS T5_1 INNER JOIN (SELECT … FROM [sample_vsplit].[dbo].[Deliveries_KnownCustomer] AS T6_1 WHERE ([T6_1].[CustomerKey] > 0)) AS T5_2 ON (([T5_2].[CustomerKey] = [T5_1].[CustomerKey]) AND ([T5_2].[SalesKey] = [T5_1].[SalesKey]) AND ([T5_2].[DeliveryKey] = [T5_1].[DeliveryKey]))) AS T4_2 ON (([T4_1].[CustomerKey] = [T4_2].[CustomerKey]) AND ([T4_1].[SalesKey] = [T4_2].[SalesKey]) AND ([T4_2].[SaleItemKey] = [T4_1].[SaleItemKey]))) AS T3_2 ON (([T3_2].[CustomerKey] = [T3_1].[CustomerKey]) AND ([T3_2].[SalesKey] = [T3_1].[SalesKey])) UNION ALL SELECT … FROM (SELECT … FROM [sample_vsplit].[dbo].[Sales_UnknownCustomer] AS T4_1 WHERE ([T4_1].[CustomerKey] = 0)) AS T3_1 INNER JOIN (SELECT … FROM (SELECT … FROM [sample_vsplit].[dbo].[SaleItems_UnknownCustomer] AS T5_1 WHERE ([T5_1].[CustomerKey] = 0)) AS T4_1 INNER JOIN (SELECT … FROM (SELECT … FROM [sample_vsplit].[dbo].[Delivery_SaleItems_UnknownCustomer] AS T6_1 WHERE ([T6_1].[CustomerKey] = 0)) AS T5_1 INNER JOIN (SELECT … FROM [sample_vsplit].[dbo].[Deliveries_UnknownCustomer] AS T6_1 WHERE ([T6_1].[CustomerKey] = 0)) AS T5_2 ON (([T5_2].[CustomerKey] = [T5_1].[CustomerKey]) AND ([T5_2].[SalesKey] = [T5_1].[SalesKey]) AND ([T5_2].[DeliveryKey] = [T5_1].[DeliveryKey]))) AS T4_2 ON (([T4_1].[CustomerKey] = [T4_2].[CustomerKey]) AND ([T4_1].[SalesKey] = [T4_2].[SalesKey]) AND ([T4_2].[SaleItemKey] = [T4_1].[SaleItemKey]))) AS T3_2 ON (([T3_2].[CustomerKey] = [T3_1].[CustomerKey]) AND ([T3_2].[SalesKey] = [T3_1].[SalesKey]))) AS T1_1

This may seem more complex – it’s certainly longer – but this is the original, preferred version of the join. This is a powerful rewrite of the query.


Furthermore, the astute PDW-familiar reader will quickly realise that the UNION ALL of two local queries (queries that don’t require data movement) is also local, and that therefore, this query is completely local. The TEMP_ID_NNNNN tables in the first rewrite are more evidence that data movement has been required.

When the two plans are shown using PDW’s EXPLAIN keyword, the significance is shown even more clearly.

The first plan appears as follows, and it is obvious that there is a large amount of data movement involved.


The queries passed in are identical, but the altered definitions of the views have removed the need for any data movement at all. This should allow your query to run a little faster. Ok, a lot faster.


When splitting distributed tables vertically to avoid skew, views over those tables should include predicates which reiterate the conditions that cause the data to be populated into each table. This provides additional information to the PDW Engine that can remove unnecessary data movement, resulting in much-improved performance, both for standard reports using designed queries, and ad hoc reports that use a data model.

Check us out at or talk to me via Twitter at @rob_farley