I hate writing 'optimized' with a 'z', but as a feature, I'm okay with writing about Memory-Optimized Tables as a way of optimising your SQL environment.
Let me start by saying that if you really want to get the most out of this feature, you'll dive deep into questions like durability and natively-compiled stored procedures, which can really make your database fly if the conditions are right. Arguably, any process (such as ETL) where the data doesn't have to survive a system restart should be considered for Memory-Optimized Tables with durability set to SCHEMA_ONLY. I say 'considered' because the answer isn't always obvious – at the moment, inserts into memory-optimised tables won't run in parallel, and that could be a show-stopper for you.
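As a rough sketch of what that looks like (the table and column names here are illustrative, not from any particular system), a SCHEMA_ONLY staging table might be declared like this – the data lives purely in memory, and the table definition (but none of its rows) survives a restart:

```sql
-- Illustrative ETL staging table: rows are lost on restart, schema is kept.
-- On-prem, this needs a MEMORY_OPTIMIZED_DATA filegroup in place first.
CREATE TABLE dbo.Staging_Products
(
    ProductID int NOT NULL PRIMARY KEY NONCLUSTERED,  -- memory-optimized tables need at least one index
    Name varchar(200) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```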
But today I'm going to mention one of the quick wins available: Table Variables that use User-defined Table Types.
User-defined table types are table definitions that you've created for the purpose of table variables that you're going to pass between stored procedures. You can use them for any table variable, but typically I find user-defined table types are only used when the table variable is being passed as a parameter somewhere. Still, that happens remarkably often these days, for lists of things – typically small lists, where operations are quick and the need for parallelism is less.
For example, a list of Products:
CREATE TYPE dbo.ItemsTable AS TABLE (ID int NOT NULL, Name varchar(200) NULL);
, which is then used like:
DECLARE @NewProducts dbo.ItemsTable; INSERT @NewProducts (ID, Name) SELECT ProductID, Name FROM dbo.Products WHERE Created > DATEADD(day, -1, SYSDATETIME()); EXEC dbo.DoSomethingWithSomeProducts @NewProducts;
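For completeness, the receiving procedure declares the parameter with the READONLY keyword, which table-valued parameters require. The body here is just a placeholder – the real procedure would do whatever your workload needs:

```sql
-- Hypothetical receiving procedure; TVPs must be declared READONLY.
CREATE PROCEDURE dbo.DoSomethingWithSomeProducts
    @Products dbo.ItemsTable READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- Placeholder: just read back the rows that were passed in.
    SELECT ID, Name FROM @Products;
END;
```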
The key thing here is that these tables already don't survive a restart. They're only scoped to the session, because they're variables. In fact, a common misconception about table variables is that they're automatically in memory only, but that's simply not true.
…so why not change your type definition and make it that way? It's really easy…
First, if you're on-prem you need a filegroup and file that supports Memory-Optimized tables.
ALTER DATABASE YourDB ADD FILEGROUP MemOpFG CONTAINS MEMORY_OPTIMIZED_DATA; ALTER DATABASE YourDB ADD FILE (NAME = N'MemOpFile', FILENAME = N'D:\SQLData\MemOpFile') TO FILEGROUP MemOpFG;
This creates a folder (despite what the command looks like, it's a folder, not a file) for the feature to use. It's just a thing that's needed. You don't need to do this on Azure SQL DB.
Then you simply have to change your definition. You'll need to add an index, and use the WITH option.
CREATE TYPE dbo.ItemsTable AS TABLE (ID int NOT NULL INDEX ixID, Name varchar(200) NULL) WITH (MEMORY_OPTIMIZED = ON);
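If the table variable is mostly used for point lookups on ID, a hash index is another option for a memory-optimized type. The type name and the BUCKET_COUNT below are purely illustrative – you'd size the bucket count to your expected number of distinct values:

```sql
-- Alternative sketch: a hash index suits equality lookups on ID.
-- BUCKET_COUNT = 1024 is an illustrative figure, not a recommendation.
CREATE TYPE dbo.ItemsTableHash AS TABLE
(
    ID int NOT NULL INDEX ixID HASH WITH (BUCKET_COUNT = 1024),
    Name varchar(200) NULL
)
WITH (MEMORY_OPTIMIZED = ON);
```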
Frustratingly, you'll need to script out any objects that use your type, because you can't drop the type until you've dropped everything that refers to it – but this change should be going through source control and proper DevOps practices and testing anyway, in case it turns out that performance doesn't improve.
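The dance looks something like this, using the names from the earlier example (script out the real procedures first, and test the whole sequence before running it anywhere that matters):

```sql
-- Sketch of the drop/recreate sequence for changing the type.
DROP PROCEDURE dbo.DoSomethingWithSomeProducts;  -- everything referencing the type goes first
DROP TYPE dbo.ItemsTable;

CREATE TYPE dbo.ItemsTable AS TABLE
(
    ID int NOT NULL INDEX ixID,
    Name varchar(200) NULL
)
WITH (MEMORY_OPTIMIZED = ON);

-- Then recreate the procedures that use it, from the scripts you saved.
```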
Make sure you're patched to CU3 of SQL 2016 SP1, which allows the table variable to be scanned in parallel, reducing the risk that things slow down at all (chances are you're already seeing serial inserts on your table variables, but you don't need reads to be serial as well), and to CU7 to avoid some other annoying errors.
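To see which build you're on, something like this will do – compare the result against Microsoft's published build list rather than trusting your memory of version numbers:

```sql
-- Quick check of the current build and servicing level.
SELECT SERVERPROPERTY('ProductVersion') AS Build,
       SERVERPROPERTY('ProductLevel')   AS ServicePack;
```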
But it's really as simple as that. For systems where data is frequently being pushed between procedures using TVPs, this is a low-risk change that can often have a noticeable impact.
(Thanks to Steve Jones for hosting this month!)