Migrating to Azure SQL DB using Transactional Replication

Azure SQL DB is a PaaS offering from Microsoft for hosting a SQL database on Azure. It may not be ideal for every environment, but if you have a database that can work in isolation, handling requests from one or more applications, Azure SQL DB may be the right fit for you. One common use case is ISVs who host multiple tenant databases.

One of the challenges faced is how to migrate to Azure with minimal downtime, especially when your database is already in production use. There are a couple of methods to migrate the database from your production environment to Azure SQL DB.

One method is to export your database as a BACPAC file and import it into Azure SQL DB. There are a few points to note when using this method.

  • The production database must not change from the point you start the export until the database is imported into Azure SQL DB. This increases the production downtime.
  • The process involves exporting a BACPAC file and then importing it into Azure SQL DB, using a tool such as SqlPackage.exe (see the example after this list).
  • The rollback process is simple, allowing you to revert to the existing database (if your business can afford to lose any changes made on Azure SQL DB). The amount of data loss depends on the stage at which you decide to roll back.
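
As a rough illustration, the export and import might look like this, run from a machine with SqlPackage.exe installed. The server names, path, and credentials are placeholders to adjust for your environment.

    SqlPackage.exe /Action:Export /SourceServerName:OnPremServer /SourceDatabaseName:ProdDB /TargetFile:C:\Temp\ProdDB.bacpac

    SqlPackage.exe /Action:Import /SourceFile:C:\Temp\ProdDB.bacpac /TargetServerName:myserver.database.windows.net /TargetDatabaseName:ProdDB /TargetUser:myadmin /TargetPassword:<password>

Any change made to the production database after the export starts will not be in the BACPAC, which is why the downtime window has to cover the whole export and import.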

To achieve minimal downtime, I will detail a second method of migration: using Transactional Replication.

Transactional replication to Azure SQL DB was announced as being in Public Preview in 2015, and on 23 October 2017 the feature was announced as generally available. To set up transactional replication from an on-prem SQL Server to Azure SQL DB, you will need to install the latest version of SSMS (v17.3 at the time of writing). The process of migration involves the following steps.

Testing/POC

  1. Run the Microsoft Data Migration Assistant (DMA) to check if your database is ready to migrate to Azure. This tool will help you assess if your database has any compatibility issues to migrate to Azure SQL DB. More information about DMA is available here.
  2. Fix any issues identified in step 1 to prepare your database for migration, and repeat the assessment until no issues remain.
  3. Create an Azure SQL DB shell (empty database) using the Azure portal. You can follow the step-by-step guidelines described here.
  4. Set up transactional replication with the following configuration (a minimal T-SQL sketch follows this list):
    1. Configure an on-prem SQL Server as the distributor. This can be the SQL Server hosting the production database, or any other SQL Server.
    2. Configure a new publication for the production database you wish to migrate to Azure SQL DB, choosing all the objects you want to migrate. In general, publish everything unless there are specific tables you don't want to migrate, e.g., archive/history tables.
    3. Add a push subscriber by connecting to the Azure SQL DB you created in step 3, using the server name and credentials you specified when creating the database in Azure.

  5. After successfully adding the subscriber, run the snapshot agent and complete the initialization.
  6. Once the database is in sync (and being kept in sync), choose a time to test your application's connectivity to Azure SQL DB.
  7. Remove replication.
  8. Change the test application's configuration to connect to Azure SQL DB.
  9. Complete end-to-end testing of the application. This should be exhaustive and detailed to ensure business continuity.
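
For reference, here is a minimal T-SQL sketch of step 4 (and the snapshot initialization in step 5). It assumes the distributor is already configured; the publication name, table name, server name, and credentials are all placeholders. Run it on the publisher, in the database being migrated.

    -- Enable the database for publishing
    EXEC sp_replicationdboption @dbname = N'ProdDB', @optname = N'publish', @value = N'true';

    -- Create the publication and its snapshot agent
    EXEC sp_addpublication
        @publication = N'AzureMigration',
        @status = N'active',
        @allow_push = N'true',
        @repl_freq = N'continuous',
        @independent_agent = N'true';
    EXEC sp_addpublication_snapshot @publication = N'AzureMigration';

    -- Add an article for each table you want to migrate
    EXEC sp_addarticle
        @publication = N'AzureMigration',
        @article = N'Customers',
        @source_owner = N'dbo',
        @source_object = N'Customers';

    -- Add the Azure SQL DB as a push subscriber (SQL authentication is required)
    EXEC sp_addsubscription
        @publication = N'AzureMigration',
        @subscriber = N'myserver.database.windows.net',
        @destination_db = N'ProdDB',
        @subscription_type = N'push';
    EXEC sp_addpushsubscription_agent
        @publication = N'AzureMigration',
        @subscriber = N'myserver.database.windows.net',
        @subscriber_db = N'ProdDB',
        @subscriber_security_mode = 0,    -- 0 = SQL authentication
        @subscriber_login = N'myadmin',
        @subscriber_password = N'<password>';

    -- Run the snapshot agent to initialize the subscriber (step 5)
    EXEC sp_startpublication_snapshot @publication = N'AzureMigration';

The SSMS wizards generate much the same script, and scripting it out makes the production run repeatable.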

Production Migration

  1. Complete steps 1 to 5 as performed for testing, including initializing the subscription with the snapshot agent.
  2. Once the Azure SQL DB is in sync, choose a time to cut over the production database to Azure SQL DB.
  3. Shut down the applications and make sure the replication is in sync.
  4. Remove replication (a teardown sketch follows this list) and shut down the on-prem production instance.
  5. Change the application configurations so that they connect to Azure SQL DB.
  6. Start the applications and complete functional testing.
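
The replication-removal part of the cutover (step 4) can be scripted too. A minimal sketch, using the same placeholder names as the setup sketch above, run on the publisher:

    -- Drop the push subscription to the Azure SQL DB
    EXEC sp_dropsubscription
        @publication = N'AzureMigration',
        @article = N'all',
        @subscriber = N'myserver.database.windows.net',
        @destination_db = N'ProdDB';

    -- Drop the publication and disable publishing on the database
    EXEC sp_droppublication @publication = N'AzureMigration';
    EXEC sp_replicationdboption @dbname = N'ProdDB', @optname = N'publish', @value = N'false';

Having this ready to run helps keep the cutover window short.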

Rollback Plan

There are two cases in which to consider rollback.

The first case is when functional testing fails and Azure SQL DB has not yet been updated with any valid production data.

  1. In this case, the rollback is simple. Reconfigure the applications to point to the on-prem database.
  2. Start the on-prem SQL Server and start the applications.
  3. Complete functional testing to ensure applications are behaving as expected using on-prem database.

The second case is when functional testing passes, the application runs using Azure SQL DB for a few hours or days, and then you realise that there are functional issues. Hopefully this is a rare scenario, and one that mostly arises from poor testing.

  1. In this case, it becomes critical to recover the production data that has been inserted into Azure SQL DB.
  2. The method to sync data back from Azure SQL DB to the on-prem SQL Server database is provided by Azure Data Sync. This will be covered in detail in a future article.

Downtime

The only downtime needed for migration using transactional replication to Azure SQL DB is during the cutover. Its length depends on how quickly you can stop your applications, shut down the on-prem SQL Server, and change the application configurations to point to the migrated Azure SQL DB. All these tasks can be automated with PowerShell or other tools, so the downtime can be minimal.

Conclusion

Azure SQL DB is a tailored service that will suit most applications that connect to a single database. With transactional replication now generally available, and the tooling available in the latest SSMS, the door has been opened for more users to migrate more easily. Transactional replication is the easiest and most flexible method of migrating to Azure SQL DB, with minimal downtime and the lowest impact on business continuity.

Happy Learning,

Manu

The effort of relocating blog sites

Hopefully you’ve realised that I’m not posting at sqlblog.com any more. There’s still some excellent content there, but it has come up time and time again that I should be posting at a company blog site – so the move has now been made. I’ve also brought across the material that I wrote at msmvps.com, which had also been Community Server until a few years ago when it became WordPress.

Adam Machanic (@AdamMachanic) had put together some C# code for moving posts off Community Server (which is what sqlblog uses) onto WordPress, and combined with a regular WordPress Export + Import from msmvps.com, I had most of my content moved over. I don’t code in C# very often these days, but it felt nice. I spent some time in PowerShell and XML tweaking dates in the WordPress export file to make sure they matched the time zone that I’d originally used, which introduced some frustrating character-mapping issues that needed fixing in MySQL, so all in all I felt like I was moving around a variety of toolsets that I don’t often swim in.

A big thanks again to Barb and Susan who host msmvps.com still – they (particularly Barb) have helped a lot with sorting out some of my content from the old site. Some things are still broken from years back, but they did find the picture of me with Desmond Tutu, so I’m happy. At some point I’ll be going through old posts and seeing what doesn’t work.

I no longer use Categories – I lost the msmvps.com categories when they moved to WordPress, and the sqlblog.com ones didn’t seem to want to come across either. I don’t know that I ever did categories particularly well, so perhaps it’s a good opportunity to stop pretending that I do. Not everything should sit within a ‘sql’ category.

I discovered that I have quite a bit of content that needed ‘powershell’ formatting. There is still a bunch of formatting on old posts that I won’t get to for some time though (there’s almost 500 posts, so I’ll take a bit of a run at the rest another day).

I had to install some plugins to get a few things to work. SyntaxHighlighter was one, but also RWD’s Responsive Image Maps to get an image map from an old T-SQL Tuesday round-up working. I tried a stats plugin, only to find that I needed a later version of PHP to support it. Luckily I don’t think I was showing an error for too long, but I’m really not keen on the error messages that WordPress gives.

Getting the “Popular Posts” to look similar to the “Recent Posts” was a bit of CSS fun. I ended up just finding a way to have the Popular Posts widget use the same CSS class as the Recent Posts.

And it turns out I do like the Segoe font-face. I know it’s Microsoft’s one, and perhaps that’s what makes it feel right for me – I spend so long looking at Microsoft web pages it feels quite natural to me. Since we deal almost entirely in the Microsoft space, it’s quite appropriate too. We’ll probably use the same when we do a rebranding of our company site.

@rob_farley

New blog site!

It’s about time I moved content to a more central site – one that I own, rather than controlled by others. One that is part of the company, which can help demonstrate the capabilities of the company, and where the other skilled people within the team can also post content.

So I’ve moved the content that I had written at sqlblog.com across (big thanks to Peter and Adam for having me blog there for so long), and the content from msmvps.com (where I’d blogged from the time I first became a SQL MVP, even before the company was set up). I’ll still write for sqlperformance.com when I have something they’ll be interested in, and I’ll post something here to let you know that I’ve done that.

Feel free to let me know what you think of it all – whether I should be using WordPress differently, for example – you can ping me via email, or DM me on Twitter at @rob_farley.

I’ve redirected feedburner, but also feel free to follow this site in general.

The BigData Legacy

Trends come along, and trends pass. Some hang around for quite a while, and then move on, and some seem to disappear quickly. Often we’re glad that they’ve gone, but we still bear scars. We live and work differently because they were there. In the world of IT, I feel like this is all too common.

When ORMs became trendy, people were saying that writing T-SQL would be a thing of the past. LINQ was another way that people reassured the developer community that writing database queries would never again be needed. The trend of avoiding T-SQL through ORMs has hung around a bit, and many developers have recognised that ORMs don’t necessarily create the best database experiences.

And yet when we consider what’s happening with Azure SQL Data Warehouse (SQL DW), you find yourself querying the data through an interface. Sure, that interface looks like another database, but it’s not where the data is (because the data is in the 60 databases that live in the back), and it has to translate our query into a series of other queries that actually run. And we’re fine with this. I don’t hear anyone complaining about the queries that appear in SQL DW’s explain plans.

When CLR came in, people said it was a T-SQL killer. I remember a colleague of mine telling me that he didn’t need to learn T-SQL, because CLR meant that he would be able to do it all in .Net. Over time, we’ve learned that CLR is excellent for all kinds of things, but it’s by no means a T-SQL killer. It’s excellent for a number of reasons – CLR stored procedures or functions have been great for things like string splitting and regular expressions – and we’ve learned its place now.

I don’t hear people talking about NoSQL like they once did, and it’s been folded somehow into BigData, but even that seems to have lost a little of its lustre from a year or two ago when it felt like it was ‘all the rage’. And yet we still have data which is “Big”. I don’t mean large, necessarily, just data that satisfies one of the three Vs – volume, velocity, variety.

Of these Vs, Volume seems to have felt like a misnomer. Everyone thinks what they have is big, but compared to others, it probably wouldn’t actually be that big. Generally, if people are thinking “BigData” because they think their data is big, then they just need a reality check, and can then deal with it like all their regular data.

Velocity is interesting. If your system can’t respond to things quickly enough, then perhaps pushing your data through something like Stream Analytics could be reasonable, to pick up the alert conditions. But if your data is flowing through to a relational database, then is it really “BigData”?

And then we have Variety. This is about whether your data is structured or not. I’m going to suggest that your data probably is structured – and BigData solutions wouldn’t disagree with this. It’s just that you might not want to define the structure when the data is first arriving. To get data into a structured environment (such as a data table), types need to be tested, the data needs to be converted appropriately, and if you don’t have enough control over the data that’s coming in, the potential for something to break is high. Deferring that sorting-out until you need to query the data back again means that you have a larger window in which to deal with it.
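
To make that concrete, here’s a quick T-SQL sketch of the schema-on-read idea using the JSON support in SQL Server 2016. The table and property names are invented for illustration; the point is that rows land without the structure being enforced, and the structure is applied when the data is queried back.

    -- Land the raw payload without enforcing structure up front
    CREATE TABLE dbo.SensorRaw
    (
        Id       INT IDENTITY PRIMARY KEY,
        Received DATETIME2 DEFAULT SYSUTCDATETIME(),
        Payload  NVARCHAR(MAX) CHECK (ISJSON(Payload) = 1)
    );

    INSERT dbo.SensorRaw (Payload)
    VALUES (N'{"device":"D42","temp":21.5,"battery":0.87}'),
           (N'{"device":"D43","temp":19.2}');   -- a missing property doesn't break the insert

    -- Apply the structure when querying it back
    SELECT r.Received, j.device, j.temp, j.battery
    FROM dbo.SensorRaw AS r
    CROSS APPLY OPENJSON(r.Payload)
         WITH (device VARCHAR(10), temp DECIMAL(5,2), battery DECIMAL(3,2)) AS j;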

So this is where I think BigData is leaving its legacy – in the ability to accept data even if it doesn’t exactly fit the structure you have. I know plenty of systems that will break if the data arriving is in the wrong structure, which makes change and adaptability hard to achieve. A BigData solution can help mitigate that risk. Of course, there’s a price to pay, but for those times when the structure tends to change overly regularly, BigData’s ideology can definitely help.

We see this through the adoption of JSON within SQL Server, which is even less structured than XML. We see PolyBase’s external tables define structure separately from the collection of the data. Concepts that were learned in a void of relational data have now become part of our relational databases.
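
As a sketch of that separation in PolyBase (with placeholder storage locations, and with the database-scoped credential that non-public storage would need omitted for brevity):

    -- The data source and file format describe where and how the files live
    CREATE EXTERNAL DATA SOURCE AzureStorage
    WITH (TYPE = HADOOP, LOCATION = 'wasbs://data@myaccount.blob.core.windows.net');

    CREATE EXTERNAL FILE FORMAT CsvFormat
    WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

    -- The structure is declared here, over files that already exist
    CREATE EXTERNAL TABLE dbo.WebLogs
    (
        LogDate  DATETIME2,
        Url      VARCHAR(400),
        Duration INT
    )
    WITH (LOCATION = '/weblogs/', DATA_SOURCE = AzureStorage, FILE_FORMAT = CsvFormat);

    -- Queried like any other table; the rows take shape as they're read
    SELECT TOP (10) * FROM dbo.WebLogs;

If the files change shape, it’s the external table definition that changes, not a load process that breaks mid-ingest.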

Don’t dismiss fads that come through. Look into them, and try to spot those things which have more longevity. By adopting those principles, you might find yourself coming through as a stronger professional.

@rob_farley

This post was put together for T-SQL Tuesday 95, hosted by Derik Hammer (@sqlhammer). Thanks Derik!