Tag Archives: Azure SQL Database

Import Databases to Azure SQL Database

I recently spoke at a conference and was asked about the easiest way to import databases to Azure SQL Database, so I wanted to share how I do this with dbatools (dbatools.io). You can use the same code to export a local copy of an Azure SQL Database as well.

Import-Module dbatools -Force
<# Variables #>
$BackupPath = "C:\Demo\AzureSQL\Bacpac" #folder location for backups
$SourceInstance = "sql2019\sql2016"
$DBName = "AdventureWorksLT2012"
$AzureDestInstance = "procuresqlsc.database.windows.net"
$DBNameDest = $DBName

<# bacpac options for import and export #>
$option = New-DbaDacOption -Type Bacpac -Action Export
$option.CommandTimeout = 0

$option2 = New-DbaDacOption -Type Bacpac -Action Publish
$option2.CommandTimeout = 0

<#
The following assumes the Azure SQL Database exists and is empty;
Azure will create the database by default if it doesn't exist.
#>
$bacpac = Export-DbaDacPackage -Type Bacpac -DacOption $option -Path `
$BackupPath -SqlInstance $SourceInstance -Database $DBName 
Publish-DbaDacPackage -Type Bacpac -SqlInstance `
$AzureDestInstance -Database $DBNameDest -Path $bacpac.path `
 -DacOption $option2 -SqlCredential username # prompts for the password

What Is My Performance Tier?

Great question. As of 3/3/2020, if the database does not already exist in Azure SQL Database, it will be created for you. When it is created this way, the database uses the default performance tier, which is General Purpose (Gen5) with 2 vCores.

The default performance tier for a new Azure SQL Database costs $371.87 per month.

How to Create Cheaper Databases

Great question. You can import databases to Azure SQL Database more cheaply using PowerShell; it is as simple as using the Azure PowerShell module. In the example below, I use my existing Azure SQL Database server and create a new database with the “S0” tier.

<# 
Installing the Azure PowerShell module requires local admin rights:
Install-Module -Name Az -AllowClobber -Scope AllUsers
#>

Import-Module Az -Force
$RGName = "<Your Existing Resource Group>"
$ServerName = "<Your Azure SQL Database Server>"
$Tenant = "<Add Tenant ID>"
$Subscription = "<Add your Subscription ID>"

Connect-AzAccount -Tenant $Tenant -SubscriptionId $Subscription
$server = Get-AzSqlServer -ResourceGroupName $RGName -ServerName $ServerName

<# $DBName carries over from the dbatools example above #>
$db = New-AzSqlDatabase -ResourceGroupName $RGName `
-ServerName $server.ServerName -DatabaseName $DBName `
-RequestedServiceObjectiveName "S0"

Your Homework Assignment

You now have all the nuts and bolts needed to generate a script that migrates every database on an instance, so you can import databases to Azure SQL Database in one loop, as sketched below.
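Here is a minimal sketch of that loop, reusing the variables, options, and credential from the examples above. Get-DbaDatabase's -ExcludeSystem switch keeps system databases out; filter further for your environment.

Import-Module dbatools -Force
<# Export every user database and publish each bacpac to Azure SQL Database.
   Reuses $BackupPath, $SourceInstance, $AzureDestInstance, $option, and $option2. #>
$databases = Get-DbaDatabase -SqlInstance $SourceInstance -ExcludeSystem
foreach ($db in $databases) {
    $bacpac = Export-DbaDacPackage -Type Bacpac -DacOption $option `
        -Path $BackupPath -SqlInstance $SourceInstance -Database $db.Name
    Publish-DbaDacPackage -Type Bacpac -SqlInstance $AzureDestInstance `
        -Database $db.Name -Path $bacpac.Path -DacOption $option2 `
        -SqlCredential username
}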

Need Help Moving to Azure?

Let us help you! Schedule a free 30-minute chat to see if we can point you in the right direction. Take advantage of our free resources too.

If you liked this blog post, sign up for our free newsletter!

Benchmark Azure SQL Database Wait Stats

Today, I want to focus on how we can monitor wait statistics in an Azure SQL Database. In the past, I blogged about how you should benchmark wait stats with the box product. That process will give you misleading data in Azure SQL Database. Because you are using shared resources in Azure SQL Database, you will want to focus on the wait stats that are specific to your database.

Finding Database Wait Statistics

Query we are talking to you!

With an instance of SQL Server, whether IaaS or on-premises, you would want to focus on all the waits occurring on the instance because the resources are dedicated to you.

In database as a service (DaaS), Microsoft gives you a special DMV that makes troubleshooting performance in Azure easier than any competitor does: the sys.dm_db_wait_stats DMV. This DMV gives us the details behind why our queries are waiting within our database specifically, not across the shared environment. It is worth repeating: these are wait statistics for our database alone, even though it runs in a shared environment.
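If you just want a quick ad-hoc look before building anything permanent, you can query the DMV directly. A minimal sketch using dbatools (the server and database names are the ones from the import example above; swap in your own):

# Peek at the top accumulated waits for this database since the last reset.
$cred = Get-Credential
Invoke-DbaQuery -SqlInstance "procuresqlsc.database.windows.net" `
    -Database "AdventureWorksLT2012" -SqlCredential $cred -Query "
SELECT TOP (10) wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;"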

The following script creates a stored procedure I use to collect wait statistics for my Azure SQL Databases. I hope it is a helpful benchmarking tool when you need to troubleshoot performance in Azure SQL Database.

The Good Stuff

/***************************************************************************
    Author : John Sterrett, Procure SQL LLC

    File:     AzureSQLDB_WaitStats.sql

    Summary:  The following code creates a stored procedure that can be used
                 to collect wait statistics for an Azure SQL Database.
                        
    Date:     August 2016

    Version:  Azure SQL Database V12 
  
  ---------------------------------------------------------------------------
  
  For more scripts and sample code, check out 
    https://johnsterrett.com

  THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF 
  ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED 
  TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
  PARTICULAR PURPOSE.
************************************************************************/

If NOT EXISTS (Select 1 from sys.schemas where name = N'Waits')
        execute sp_executesql @stmt = N'CREATE SCHEMA [Waits] AUTHORIZATION [dbo];'
GO


CREATE TABLE Waits.WaitStats (
    CaptureDataID bigint,
    WaitType varchar(200),
    wait_S decimal(20,5),
    Resource_S decimal(20,5),
    Signal_S decimal(20,5),
    WaitCount bigint,
    Avg_Wait_S numeric(10,6),
    Avg_Resource_S numeric(10,6),
    Avg_Signal_S numeric(10,6),
    CaptureDate datetime
)
CREATE TABLE Waits.BenignWaits (WaitType varchar(200))
CREATE TABLE Waits.CaptureData (
ID bigint identity PRIMARY KEY,
StartTime datetime,
EndTime datetime,
ServerName varchar(500),
PullPeriod int
)

/* Wait types to exclude from the capture because they are benign background waits */
INSERT INTO Waits.BenignWaits (WaitType)
VALUES ('CLR_SEMAPHORE'), ('LAZYWRITER_SLEEP'), ('RESOURCE_QUEUE'), ('SLEEP_TASK'),
    ('SLEEP_SYSTEMTASK'), ('SQLTRACE_BUFFER_FLUSH'), ('WAITFOR'), ('LOGMGR_QUEUE'),
    ('CHECKPOINT_QUEUE'), ('REQUEST_FOR_DEADLOCK_SEARCH'), ('XE_TIMER_EVENT'),
    ('BROKER_TO_FLUSH'), ('BROKER_TASK_STOP'), ('CLR_MANUAL_EVENT'), ('CLR_AUTO_EVENT'),
    ('DISPATCHER_QUEUE_SEMAPHORE'), ('FT_IFTS_SCHEDULER_IDLE_WAIT'), ('XE_DISPATCHER_WAIT'),
    ('XE_DISPATCHER_JOIN'), ('BROKER_EVENTHANDLER'), ('TRACEWRITE'), ('FT_IFTSHC_MUTEX'),
    ('SQLTRACE_INCREMENTAL_FLUSH_SLEEP'), ('BROKER_RECEIVE_WAITFOR'), ('ONDEMAND_TASK_QUEUE'),
    ('DBMIRROR_EVENTS_QUEUE'), ('DBMIRRORING_CMD'), ('BROKER_TRANSMITTER'),
    ('SQLTRACE_WAIT_ENTRIES'), ('SLEEP_BPOOL_FLUSH'), ('SQLTRACE_LOCK'), ('DIRTY_PAGE_POLL'),
    ('SP_SERVER_DIAGNOSTICS_SLEEP'), ('HADR_FILESTREAM_IOMGR_IOCOMPLETION'), ('HADR_WORK_QUEUE'),
    ('QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP'), ('QDS_PERSIST_TASK_MAIN_LOOP_SLEEP');
GO

--DROP PROCEDURE Waits.GetWaitStats
CREATE PROCEDURE Waits.GetWaitStats 
    @WaitTimeSec INT = 60,
    @StopTime DATETIME = NULL
AS
BEGIN
    DECLARE @CaptureDataID int
    /* Create temp tables to capture wait stats to compare */
    IF OBJECT_ID('tempdb..#WaitStatsBench') IS NOT NULL
        DROP TABLE #WaitStatsBench
    IF OBJECT_ID('tempdb..#WaitStatsFinal') IS NOT NULL
        DROP TABLE #WaitStatsFinal

    CREATE TABLE #WaitStatsBench (WaitType varchar(200), wait_S decimal(20,5), Resource_S decimal (20,5), Signal_S decimal (20,5), WaitCount bigint)
    CREATE TABLE #WaitStatsFinal (WaitType varchar(200), wait_S decimal(20,5), Resource_S decimal (20,5), Signal_S decimal (20,5), WaitCount bigint)

    DECLARE @ServerName varchar(300)
    SELECT @ServerName = convert(nvarchar(128), serverproperty('servername'))
    
    /* Insert master record for capture data */
    INSERT INTO Waits.CaptureData (StartTime, EndTime, ServerName,PullPeriod)
    VALUES (GETDATE(), NULL, @ServerName, @WaitTimeSec)
    
    SELECT @CaptureDataID = SCOPE_IDENTITY()
     
/* Loop through until time expires  */
    IF @StopTime IS NULL
        SET @StopTime = DATEADD(hh, 1, getdate())
    WHILE GETDATE() < @StopTime
    BEGIN

        /* Get baseline */
        
        INSERT INTO #WaitStatsBench (WaitType, wait_S, Resource_S, Signal_S, WaitCount)
        SELECT
                wait_type,
                wait_time_ms / 1000.0 AS WaitS,
                (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
                signal_wait_time_ms / 1000.0 AS SignalS,
                waiting_tasks_count AS WaitCount
            FROM sys.dm_db_wait_stats
            WHERE wait_time_ms > 0.01 
            AND wait_type NOT IN ( SELECT WaitType FROM Waits.BenignWaits)
        

        /* Wait for the polling period, then take the final snapshot.
           WAITFOR DELAY needs a time value rather than an int,
           so convert the seconds into an hh:mm:ss string. */
        DECLARE @Delay char(8)
        SET @Delay = CONVERT(char(8), DATEADD(second, @WaitTimeSec, 0), 108)
        WAITFOR DELAY @Delay;

        INSERT INTO #WaitStatsFinal (WaitType, wait_S, Resource_S, Signal_S, WaitCount)
        SELECT
                wait_type,
                wait_time_ms / 1000.0 AS WaitS,
                (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
                signal_wait_time_ms / 1000.0 AS SignalS,
                waiting_tasks_count AS WaitCount
            FROM sys.dm_db_wait_stats
            WHERE wait_time_ms > 0.01
            AND wait_type NOT IN ( SELECT WaitType FROM Waits.BenignWaits)
        
        DECLARE @CaptureTime datetime 
        SET @CaptureTime = getdate()

        INSERT INTO Waits.WaitStats (CaptureDataID, WaitType, Wait_S, Resource_S, Signal_S, WaitCount, Avg_Wait_S, Avg_Resource_S, Avg_Signal_S, CaptureDate)
        SELECT  @CaptureDataID,
            f.WaitType,
            f.wait_S - ISNULL(b.wait_S, 0) as Wait_S,
            f.Resource_S - ISNULL(b.Resource_S, 0) as Resource_S,
            f.Signal_S - ISNULL(b.Signal_S, 0) as Signal_S,
            f.WaitCount - ISNULL(b.WaitCount, 0) as WaitCounts,
            CAST(CASE WHEN f.WaitCount - ISNULL(b.WaitCount, 0) = 0 THEN 0 ELSE (f.wait_S - ISNULL(b.wait_S, 0)) / (f.WaitCount - ISNULL(b.WaitCount, 0)) END AS numeric(10, 6)) AS Avg_Wait_S,
            CAST(CASE WHEN f.WaitCount - ISNULL(b.WaitCount, 0) = 0 THEN 0 ELSE (f.Resource_S - ISNULL(b.Resource_S, 0)) / (f.WaitCount - ISNULL(b.WaitCount, 0)) END AS numeric(10, 6)) AS Avg_Resource_S,
            CAST(CASE WHEN f.WaitCount - ISNULL(b.WaitCount, 0) = 0 THEN 0 ELSE (f.Signal_S - ISNULL(b.Signal_S, 0)) / (f.WaitCount - ISNULL(b.WaitCount, 0)) END AS numeric(10, 6)) AS Avg_Signal_S,
            @CaptureTime
        FROM #WaitStatsFinal f
        LEFT JOIN #WaitStatsBench b ON (f.WaitType = b.WaitType)
        /* ISNULL keeps wait types that first appear during the interval;
           skip wait types with zero accumulated wait in this interval. */
        WHERE (f.wait_S - ISNULL(b.wait_S, 0)) > 0.0
        
        TRUNCATE TABLE #WaitStatsBench
        TRUNCATE TABLE #WaitStatsFinal
 END -- END of WHILE
 
 /* Update Capture Data meta-data to include end time */
 UPDATE Waits.CaptureData
 SET EndTime = GETDATE()
 WHERE ID = @CaptureDataID
END
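Once the objects above exist in your Azure SQL Database, kicking off a capture and reviewing the results can look like the following dbatools sketch. The connection details are placeholders, and the one-hour default stop time applies because @StopTime is left NULL.

# Connection settings for the Azure SQL Database being benchmarked.
$azureDb = @{
    SqlInstance   = "procuresqlsc.database.windows.net"
    Database      = "AdventureWorksLT2012"
    SqlCredential = (Get-Credential)
}

# Poll sys.dm_db_wait_stats every 60 seconds for the default one-hour window.
Invoke-DbaQuery @azureDb -QueryTimeout 0 -Query "EXEC Waits.GetWaitStats @WaitTimeSec = 60;"

# Review the heaviest waits captured.
Invoke-DbaQuery @azureDb -Query "
SELECT TOP (15) WaitType, SUM(Wait_S) AS TotalWait_S, SUM(WaitCount) AS Waits
FROM Waits.WaitStats
GROUP BY WaitType
ORDER BY TotalWait_S DESC;"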

Special Wait Statistics Types

The following are wait statistics you will want to focus on specifically in Azure SQL Database.  If you made it this far, I strongly encourage you to read how DTU is measured.  That blog post will help you understand exactly why these waits can be signs of DTU pressure.

IO_QUEUE_LIMIT: Occurs when the asynchronous IO queue for the Azure SQL Database has too many IOs pending. Tasks trying to issue another IO are blocked on this wait type until the number of pending IOs drops below the threshold. The threshold is proportional to the DTUs assigned to the database.

LOG_RATE_GOVERNOR: Occurs when the database is waiting for quota to write to the log. Yes, Azure SQL Database caps your transaction log writes to adhere to your DTU limit.

SOS_SCHEDULER_YIELD: Occurs when a task voluntarily yields the scheduler so other tasks can execute. During this wait, the task is waiting in the runnable queue to get a scheduler to run. If CPU usage is what drives your DTU consumption, you will typically see these waits.
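To check whether your capture caught any of these three, filter the benchmark table for them. A small sketch, reusing the $azureDb connection settings from the capture example above:

# Check the capture for the DTU-pressure wait types called out above.
Invoke-DbaQuery @azureDb -Query "
SELECT CaptureDate, WaitType, Wait_S, WaitCount
FROM Waits.WaitStats
WHERE WaitType IN ('IO_QUEUE_LIMIT', 'LOG_RATE_GOVERNOR', 'SOS_SCHEDULER_YIELD')
ORDER BY CaptureDate, Wait_S DESC;"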

Want More Azure Articles?

If you enjoyed this blog post I think you will also enjoy the following related blog posts.

John Sterrett is a Microsoft Data Platform MVP and a Group Principal for Procure SQL. If you need any help with your on-premises or cloud SQL Server databases, John would love to chat with you. You can contact him directly at john AT ProcureSQL dot com.


Calculating DTU in Azure SQL Database

A few months ago, I posted a question over on ask.sqlservercentral.com. In a nutshell, it was: how do you measure DTU? How could you pull the data shown in the Azure Portal graphs with T-SQL? No one answered, so this motivated me to get off my butt and answer my own question. In doing so, I wanted to share the results with all my readers as well.

I started thinking like a detective. If I were DTU, how would I want to be found with T-SQL?

It was the Azure Portal in the Query Store

Knowing that both “sys.dm_db_resource_stats” and “sys.resource_stats” hold the data used to calculate DTU, I decided to leverage Query Store on an Azure SQL Database to see if I could quickly find how DTU is calculated. Behold, I was right.

 

Query Store: how DTU is calculated in the Azure Portal

The Secret Sauce

The whole query is below; right now, let's just focus on the secret sauce: how the DTU percentage gets calculated. In a nutshell, the maximum of CPU, data IO, and log write percent determines your DTU percentage. What does this mean to you? Your biggest consumer limits you. For example, you can be using only 1% of your IO and still be slowed down because CPU is your maximum consumer resource.

ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) as dtu_consumption_percent

DTU Calculating Query

/* These parameters drive the time bucketing; declare them for ad-hoc use. */
DECLARE @timeGrain bigint = 300 /* bucket size in seconds */
DECLARE @startTime datetime = DATEADD(hour, -1, GETUTCDATE())
DECLARE @endTime datetime = GETUTCDATE()

SELECT ((CONVERT(BIGINT, DATEDIFF(day, 0, [end_time])) * 24 * 3600 + DATEDIFF(second, DATEADD(day, DATEDIFF(day, 0, [end_time]), 0), [end_time])) / @timeGrain) * @timeGrain as start_time_interval
                , MAX(cpu_percent) as cpu_percent
                , MAX(physical_data_read_percent) as physical_data_read_percent
                , MAX(log_write_percent) as log_write_percent
                , MAX(memory_usage_percent) as memory_usage_percent
                , MAX(xtp_storage_percent) as xtp_storage_percent
                , MAX(dtu_consumption_percent) as dtu_consumption_percent
                , MAX(workers_percent) as workers_percent
                , MAX(sessions_percent) as sessions_percent
                , MAX(dtu_limit) as dtu_limit
                , MAX(dtu_used) as dtu_used
             FROM
                 (SELECT
                     end_time
                    , ISNULL(avg_cpu_percent, 0) as cpu_percent
                    , ISNULL(avg_data_io_percent, 0) as physical_data_read_percent
                    , ISNULL(avg_log_write_percent, 0) as log_write_percent
                    , ISNULL(avg_memory_usage_percent, 0) as [memory_usage_percent]
                    , ISNULL(xtp_storage_percent, 0) as xtp_storage_percent
                    , ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) as dtu_consumption_percent
                    , ISNULL(max_worker_percent, 0) as workers_percent
                    , ISNULL(max_session_percent, 0) as sessions_percent
                    , ISNULL(dtu_limit, 0) as dtu_limit
                    , ISNULL(dtu_limit, 0) * ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) / 100.0 as dtu_used
                 FROM sys.dm_db_resource_stats 
                 WHERE [end_time] >= @startTime AND [end_time] <= @endTime
                 ) t
             GROUP BY ((CONVERT(BIGINT, DATEDIFF(day, 0, [end_time])) * 24 * 3600 + DATEDIFF(second, DATEADD(day, DATEDIFF(day, 0, [end_time]), 0), [end_time])) / @timeGrain) * @timeGrain

A Future Blog Post

Now that we can calculate DTU, we could trend this data and automatically make changes, like moving up and down from the current performance tier. I would love to wrap this into an Azure SQL Database alert, but a process in PowerShell might be a great starting point; a rough sketch follows.
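As a hedged starting point rather than production code, the sketch below averages the last 15 minutes of DTU consumption using the same max-of-three calculation from the query above, then bumps the service objective when the database stays hot. The 80 percent threshold, the 15-minute window, the tier names, and the connection details are all assumptions to tune.

Import-Module dbatools -Force
Import-Module Az -Force # assumes Connect-AzAccount has already been run

$RGName = "<Your Resource Group>"
$ServerName = "<Your Azure SQL Database Server>"
$DBName = "<Your Database>"
$azureDb = @{
    SqlInstance   = "$ServerName.database.windows.net"
    Database      = $DBName
    SqlCredential = (Get-Credential)
}

# Average DTU percent over the last 15 minutes: max of CPU, data IO, log write.
$dtu = (Invoke-DbaQuery @azureDb -Query "
SELECT AVG(dtu.v) AS avg_dtu_percent
FROM sys.dm_db_resource_stats
CROSS APPLY (SELECT MAX(x)
             FROM (VALUES (avg_cpu_percent), (avg_data_io_percent),
                          (avg_log_write_percent)) AS m(x)) AS dtu(v)
WHERE end_time >= DATEADD(minute, -15, GETUTCDATE());").avg_dtu_percent

# Scale up one tier, for example S0 to S1, when sustained DTU is above 80 percent.
if ($dtu -gt 80) {
    Set-AzSqlDatabase -ResourceGroupName $RGName -ServerName $ServerName `
        -DatabaseName $DBName -RequestedServiceObjectiveName "S1"
}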

John Sterrett is a Microsoft Data Platform MVP and a Group Principal for Procure SQL. If you need any help with your on-premises or cloud SQL Server databases, he would love to chat with you. You can contact him directly at john AT ProcureSQL dot com.

Azure SQL Database Live Migrations

Unfortunately, with Azure SQL Database you are not able to take an existing SQL Server backup and restore it on an Azure SQL Database server. We are talking about Microsoft's database as a service (DaaS) offering, not Azure VMs, also known as infrastructure as a service (IaaS). The only current way to migrate an existing database is to move the schema and the data, period. We have some good tools that can make this seamless, especially with smaller databases and outage windows. You can easily use SqlPackage.exe (my recommended tool, sketched below), the SSMS wizard, or import/export of bacpacs.
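For reference, a minimal SqlPackage.exe export and import pair might look like the sketch below. The server names, credentials, and paths are placeholders, and sqlpackage must be on your PATH or called with its full path.

# Export the source database to a bacpac, then import it into Azure SQL Database.
& sqlpackage /Action:Export `
    /SourceServerName:"YourSourceServer" /SourceDatabaseName:"AdventureWorks" `
    /TargetFile:"C:\Demo\AdventureWorks.bacpac"

& sqlpackage /Action:Import `
    /TargetServerName:"yourserver.database.windows.net" `
    /TargetDatabaseName:"AdventureWorks" `
    /TargetUser:"username" /TargetPassword:"YourStrongPassword" `
    /SourceFile:"C:\Demo\AdventureWorks.bacpac"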

Migrating Existing Databases to Azure SQL Database

This is great, except for the case I want to talk about today. What if you need to do a live migration of a bigger database, say an existing 50 GB to 500 GB database, with as little downtime as possible (the business wants no downtime)? Your only option today is a good old friend of mine called transactional replication. You see, you can configure transactional replication, let the snapshot occur, and have all the data in your current production system syncing live with your Azure SQL Database until it's time to cut over, which makes your cutover downtime as short as possible.

Below I will give you step-by-step instructions on how to configure your subscriber, which will be your Azure SQL Database. The publisher would be your existing production database, which could be either on-premises or on an Azure VM.

Prereqs

To make this blog post as consumable as possible, we will assume a few prereqs have been completed.

  • The current production database has a primary key on every table that needs to be migrated. This is a requirement for transactional replication.
  • A new, empty Azure SQL Database exists.
  • The Azure SQL Database firewall includes your distributor SQL Server (more on this below).
  • You have a SQL authenticated account configured for your Azure SQL Database.
  • A publication currently exists.

Step One:

Below we will start creating our subscription by utilizing an existing publication. For this article, we will use AdventureWorks. I assume you have an existing, configured publication on your primary server. We will use the same server for the distribution agent and the distribution database.

Create Subscription Azure DB

Next, we will walk through the wizard.

Step Two:

Select the publication you want the new subscriber to use with the empty Azure SQL Database. Remember, in this case we're going to use this subscriber to migrate the schema and data to a new Azure SQL Database.


Step Three:

Now, you will need to configure the distribution properties. With Azure SQL Database you have no access to the OS, only the database. Therefore, you will need to create a push subscription. A push subscription is a nice way of saying that all the agents that initialize the data sync, capture, and send data are configured on the distributor server.


NOTE: If you plan on having a lot of publications, it is recommended to use a dedicated SQL Server to isolate your distribution databases.

Step Four:

Now we will select where our subscriber will reside. The subscriber will be the Azure SQL Database that we are migrating to.


Now you will be able to select your database. In this case, we are going to use the SQL2016ctp3 database. This is the empty database that was created in the prereqs.


NOTE: Your publisher and distribution SQL Servers must be on a supported version of SQL Server to have Azure SQL Database as a subscriber, or you will not be able to select your Azure SQL Database server as a new subscriber. Trust me, I learned this the hard way.


Step Five:

Now that our subscriber database is selected, we need to define how we are going to connect to it.

We will use SQL Server authentication with an account on your Azure SQL Database that allows you to create the schema and move the data. You will also need to update your Azure SQL firewall to include your existing distribution SQL Server, as mentioned in the prereq section.

For the simplicity of this guide, we are going to run the distribution agent under the SQL Agent service account. Ideally, you would use a separate account that has only the permissions needed for transactional replication.


Next, we will define how often we will synchronize the data. We will select to run continuously. You could also have this run on a schedule.


Now we select when to initialize the subscription. Initializing the subscription executes the snapshot agent, the process that builds the schema and bulk inserts data from the publisher to the distributor; that data is then sent to the subscriber.

NOTE: The snapshot process will require a schema lock on the publisher database so make sure you initialize at a time when the schema lock is appropriate. In this article, we will initialize immediately.

Step Six:

Now the hard work is done. We will go through the final steps of adding the new subscription with the wizard provided in SSMS.


Finally, once the New Subscription Wizard completes successfully, you will see your subscription listed under the publication. You can then monitor the synchronization just like you would with any on-prem transactional replication configuration.

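If you would rather script it than click through the wizard, the following is a hedged sketch of roughly what the wizard does, run at the publisher using the replication stored procedures. Every name, login, and password below is a placeholder, and you should double-check the sp_addsubscription and sp_addpushsubscription_agent documentation for your version.

# Create the push subscription and its distribution agent at the publisher.
Invoke-DbaQuery -SqlInstance "YourPublisher" -Database "AdventureWorks" -Query "
EXEC sp_addsubscription
    @publication = N'YourPublication',
    @subscriber = N'procuresqlsc.database.windows.net',
    @destination_db = N'SQL2016ctp3',
    @subscription_type = N'Push',
    @sync_type = N'automatic',
    @article = N'all';

EXEC sp_addpushsubscription_agent
    @publication = N'YourPublication',
    @subscriber = N'procuresqlsc.database.windows.net',
    @subscriber_db = N'SQL2016ctp3',
    @subscriber_security_mode = 0, -- SQL authentication, required for Azure SQL Database
    @subscriber_login = N'YourSqlLogin',
    @subscriber_password = N'YourStrongPassword',
    @frequency_type = 64; -- run the distribution agent continuously
"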

Conclusion

Configuring an Azure SQL Database as a subscriber is a lot easier than many IT professionals would initially think. It is also the best current option for reducing the downtime required to synchronize data during a migration of an existing database. The data sync is done in advance, and data changes are kept in sync.

Once the data is synchronized, you can stop access to the current production server, remove the subscriber, and let the users access the new Azure SQL Database.

Keep in mind, as of the time of publishing this blog post, an Azure SQL Database can only be a subscriber, not a publisher, in transactional replication. What does this mean to you? Do performance and acceptance testing up front, because if you ever decide to move away from Azure SQL Database, you would have to export a bacpac and import it elsewhere.


John Sterrett is a Microsoft Data Platform MVP and a Group Principal for Procure SQL. If you need any help with your on-premises or cloud SQL Server databases, he would love to chat with you. You can contact him directly at john AT ProcureSQL dot com.