Cutover 30+ GB databases in 60 seconds with SQL Server 2005/2008

Your kid brother just messed up a database migration.  You now have sixty seconds to migrate a 30 GB database or your kid brother is…

Okay I gave it a shot 🙂

If you are familiar with moving databases across servers, most likely you are using one of the following methods.  Today I am going to write about what goes on during each of these methods, with a focus on speed, and provide another option that might work for you.

  • Detach/Attach
  • Copy Database Wizard
  • Backup/Restore
  • 3rd Party Tools


Detach/Attach

The detach/attach method allows you to detach and reattach databases. To move a database from one server to another, you detach the database, physically move the data and log files to the new server, and then attach the database there.  An unforeseen bottleneck with this strategy might be network latency.  While this method is very straightforward and simple, how long can the database be offline while you are moving the database files (mdf, ndf, ldf) during the detach/attach process?
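In rough T-SQL, the detach/attach steps look something like this. This is only a sketch; the database name and file paths are placeholders you would swap for your own:

```sql
-- On the source server: drop remaining connections, then detach.
USE master;
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC sp_detach_db @dbname = N'SalesDB';

-- ...physically copy SalesDB.mdf and SalesDB_log.ldf to the new server...

-- On the destination server: attach the copied files.
CREATE DATABASE SalesDB
ON (FILENAME = N'D:\Data\SalesDB.mdf'),
   (FILENAME = N'E:\Logs\SalesDB_log.ldf')
FOR ATTACH;
```

The database is offline for the entire file copy, which is exactly where a 30 GB database over the network hurts.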

Copy Database Wizard

The Copy Database Wizard is a tool that leverages SQL Server Integration Services (SSIS) to copy a database from one server to another.  The account that the package runs under at runtime must be a member of the sysadmin role on both the source and destination instances.  You have two options during the copy process.  The first is detach/attach; see the section above for feedback on using detach/attach.  The second uses SMO to script out the database objects.  This method keeps the source database online during the copy but is much slower than detach/attach, and since I have not used it myself I have no feedback on it.  Have you used it?  If so, please add your comments.


Backup/Restore

Assuming your database uses the full recovery model, the backup/restore method for moving a database involves taking a full backup and then a transaction log backup of the tail of the log, which leaves the source database in a restoring state.  Backing up the tail of the log takes the source database offline and keeps the data in sync.  With restore time being a factor, you could restore the full backup ahead of time and apply transaction log backups up to the point of cutting over, assuming no full backups occurred on your database after you started applying the transaction log backups.
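A rough T-SQL sketch of that cutover sequence, with placeholder names and paths (this assumes the file paths are the same on both servers; add MOVE clauses to the RESTORE DATABASE statement if they differ):

```sql
-- On the source server: full backup, then back up the tail of the log.
-- NORECOVERY on the log backup leaves the source in a restoring state,
-- so no new transactions can sneak in after the tail is captured.
BACKUP DATABASE SalesDB TO DISK = N'\\fileshare\SalesDB_full.bak';
BACKUP LOG SalesDB TO DISK = N'\\fileshare\SalesDB_tail.trn'
    WITH NORECOVERY;

-- On the destination server: restore the full backup, then the tail.
RESTORE DATABASE SalesDB FROM DISK = N'\\fileshare\SalesDB_full.bak'
    WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'\\fileshare\SalesDB_tail.trn'
    WITH RECOVERY;  -- brings the destination copy online
```

To shrink the outage window, you would run the full restore (and any intermediate log restores) well before cutover, leaving only the tail-log backup and restore for the sixty seconds that matter.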

And the winner is…

Drum roll please……..  And the winner is Database Mirroring in high-safety (high availability) mode, as failing over to the mirror lets you migrate huge databases in less than sixty seconds.  Okay, I might be cheating; it will take much more than sixty seconds to configure.  The important fact is that it will seem like it only took seconds to the end users.  If your application uses the .NET Framework 2.0 or later and you configure client-side redirect, the end users might not experience an outage at all.
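Client-side redirect is driven by the Failover Partner keyword in the ADO.NET connection string. A sketch, with placeholder server names, might look like:

```
Data Source=SQL01;Failover Partner=SQL02;Initial Catalog=SalesDB;Integrated Security=True;
```

When the mirror on SQL02 becomes the principal, the provider retries there automatically, which is why the application may never notice the cutover.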

To set up database mirroring, it is highly recommended to make sure the principal and mirror database engine services are running under domain service accounts; those accounts also need access to the ports used by the SQL Server endpoints, and more….  Check out the MSDN site for a great guide to Database Mirroring.
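For a feel of what the configuration involves, here is a heavily condensed T-SQL sketch, with placeholder database, server, and port values; the real setup (certificates or domain accounts, endpoint permissions, witness, etc.) is what the MSDN guide covers:

```sql
-- 1. On each instance, create a database mirroring endpoint.
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = ALL);

-- 2. Seed the mirror by restoring a recent full backup (and a log
--    backup) WITH NORECOVERY, then point each partner at the other.
-- On the mirror:
ALTER DATABASE SalesDB SET PARTNER = N'TCP://principal.corp.local:5022';
-- On the principal:
ALTER DATABASE SalesDB SET PARTNER = N'TCP://mirror.corp.local:5022';
ALTER DATABASE SalesDB SET SAFETY FULL;  -- high-safety (synchronous) mode

-- 3. At cutover time, issue the failover from the principal.
--    This is the step that takes seconds instead of hours.
ALTER DATABASE SalesDB SET PARTNER FAILOVER;
```

All the slow work (seeding the 30 GB of data, synchronizing) happens while the source stays online; only the final FAILOVER is inside the sixty-second window.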

  • Assuming the database data and log files are on the SAN, one trick I have used is to detach the database, move the disk to the new server, and then attach the database on the new server. Network latency is no longer a problem. I do agree that mirroring is best, if it is set up.

  • Michael, you bring up a great point. I should modify my post, as I was assuming two physical servers were in play. I will also throw out another tip that goes with clusters.

    Make sure you fully understand the security on the data and log files. If you are on a cluster and you don’t know this, it might bite you in the ……..

    I will admit it happened to me. I think my next post will be about this lesson learned. In the meantime, check out this MSDN article for more info.