Tag Archives: Restore

SQL Server 2017: Making Backups Great Again!

In some DBA circles, backups are about as popular as politicians! However, recoverability is the most important responsibility of a database administrator. SQL Server 2017 added many great features like graph tables, SQL Server on Linux, and more, but today I want to focus on two small underdog features that might be game changers for how you do backups.

SQL Server Backups are often as popular as politicians.


Smart Differential Backups

Databases are getting bigger, not smaller, and more storage capacity is needed for their backups. Even with backup compression, storage capacity can be a challenge. Today, I am seeing more backup policies that include full and differential backups along with transaction log backups. Differential backups are used to offset daily full backups, and people typically use time increments as the basis for when backups should occur. It’s very common to see automated jobs that take weekly full and daily differential backups to reduce the storage capacity needed for backups.

How often does your data change? Is the rate of change consistent, or does it vary from week to week? Suppose it’s only Tuesday and over 80% of your data pages have already changed: you are not benefiting from taking daily differentials for the rest of the week. The opposite goes for data that doesn’t change often; you might save a lot of space by taking full backups less frequently.

Leveraging smart differential backups could greatly reduce your storage footprint and potentially reduce the time it takes to recover.

In SQL Server 2017 you can see exactly how many pages have changed since your last full backup. This could be leveraged to determine whether you should take a full or a differential backup, and backup solutions and vendors will be able to build smarter tooling on top of it.

-- Percentage of extents modified since the last full backup (run in the target database)
SELECT CAST(ROUND((modified_extent_page_count * 100.0) / allocated_extent_page_count, 2) AS decimal(6,2)) AS [DiffChangePct],
       modified_extent_page_count,
       allocated_extent_page_count
FROM sys.dm_db_file_space_usage;
GO
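Putting this to work, a nightly backup job could check the change percentage and decide which backup type to take. The sketch below is illustrative only: the 70% threshold, the database name, and the backup paths are assumptions you would tune for your environment.

```sql
-- Sketch: pick FULL vs DIFFERENTIAL based on how much has changed since the last full.
-- Threshold, database name, and paths are assumptions; run in the target database.
DECLARE @DiffChangePct decimal(6,2);

SELECT @DiffChangePct = CAST((SUM(modified_extent_page_count) * 100.0)
                             / SUM(allocated_extent_page_count) AS decimal(6,2))
FROM sys.dm_db_file_space_usage;

IF @DiffChangePct >= 70
    BACKUP DATABASE [YourDatabase]
        TO DISK = N'X:\Backups\YourDatabase_FULL.bak' WITH COMPRESSION;
ELSE
    BACKUP DATABASE [YourDatabase]
        TO DISK = N'X:\Backups\YourDatabase_DIFF.bak' WITH DIFFERENTIAL, COMPRESSION;
```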

Smart Transactional Log Backups

The time your users are offline while you recover to the point of failure is critical; it could be the difference between keeping and losing customers. Point-in-time recovery is mandatory for a critical database, and transaction log backups have to be restored in order.

Recovery Point Objectives (RPOs) drive how often you take transaction log backups. If your policy says you can lose only ten minutes of data, you need transaction log backups every ten minutes. But is that really true if there were no changes? What if your RPO is driven by the amount of data lost rather than the time window? Either way, you can now trigger transaction log backups based on the amount of log generated since the last transaction log backup.

SELECT s.name AS [DatabaseName],
       dls.log_since_last_log_backup_mb,
       dls.log_truncation_holdup_reason,
       dls.active_vlf_count,
       dls.active_log_size_mb
FROM sys.databases s
CROSS APPLY sys.dm_db_log_stats(s.database_id) dls;
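For example, a job that runs every few minutes could skip the log backup entirely when little log has been generated. This is a sketch under assumptions: the 100 MB threshold, the database name, and the backup path are placeholders.

```sql
-- Sketch: only back up the log once ~100 MB has been generated since the last log backup.
DECLARE @LogSinceLastBackupMB float;

SELECT @LogSinceLastBackupMB = log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(N'YourDatabase'));

IF @LogSinceLastBackupMB >= 100
    BACKUP LOG [YourDatabase]
        TO DISK = N'X:\Backups\YourDatabase.trn' WITH COMPRESSION;
```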


This post was written by John Sterrett, CEO & Principal Consultant for Procure SQL.  Sign up for our monthly newsletter to receive free tips.  See below for some great related articles.

Adding a VLDB database to an SQL Server Availability Group

Recently, I was tasked with making a 60 TB database highly available by adding it to an availability group. The following is how I successfully added the very large database (VLDB) to an Availability Group that had two replicas in the primary data center and a third replica in another data center, with no downtime. We wanted to spend the least amount of time in the initial synchronization process, so we leveraged log shipping to prepare the database on each replica.

We will call the three servers DC1Server1, DC1Server2, and DC2Server3.  We will have a pre-created AG called SQL2014AG1. We will say that the 60TB database is named VLDB.

We had previously done a side-by-side upgrade to move VLDB from SQL Server 2012 to SQL Server 2014 on new hardware, and we had also leveraged the new hardware to upgrade a subset of the data and build out the Availability Group for testing and training purposes.

  1. Create Availability Group. This was created in advance to deliver a proof of concept and to provide training for understanding how to use and manage an Availability Group, so there was no reason to break it and recreate it. If needed, you could create the AG as part of this deployment and then add VLDB into it. Creating an Availability Group will not be covered here.
  2. Configure Log Shipping. The primary database in log shipping should be on the instance that is also the read/write replica for the availability group. In this case, VLDB already exists and is configured as primary on DC1Server1. This is also the primary replica for the SQL2014AG1 availability group. If you need examples of how to configure log shipping, check out my article on MSSQLTips.com.
  3. Identify the Log Shipping backup Job. This job usually starts with LSBackup_ for your database. This will be on DC1Server1.
    1. Execute the SQL Agent Job to take a transaction log backup.
    2. Disable the SQL Agent Job when the backup completes.
  4. On the secondary servers (DC1Server2 and DC2Server3) identify the LSCopy_ job for your database named VLDB.
    1. Execute the SQL Agent Job to copy the last log backup that you just created in step three.
    2. Disable the SQL Agent Job when the copy job completes.
  5. On the secondary servers (DC1Server2 and DC2Server3) identify the LSRestore_ job for your database named VLDB.
    1. Execute the SQL Agent Job to restore the pending backups. This should include the backup you copied in step four.
    2. Disable the SQL Agent Job when the restore completes.
  6. Validate that the last log backups were copied and restored. This can be done using sp_help_log_shipping_monitor stored procedure on each instance. We would do this on DC1Server1 to validate the last log backup that was completed. On DC1Server2 and DC2Server3 we can validate that the backup was copied and also restored.
  7. On the primary server DC1Server1 add the VLDB to the existing availability group named SQL2014AG1. You can do this via the GUI.
    1. In SSMS under Availability Groups expand “SQL2014AG1” Availability Group and right click on Availability Databases and select “Add Database” to add VLDB.
    2. Select the “VLDB” database in the checkbox
    3. Select Join Only to start data synchronization, as we used log shipping to keep VLDB in sync across all servers.
    4. Connect to all existing secondary servers (DC1Server2, and DC2Server3)
    5. Validate adding the Database to the Availability Group
    6. Add the database to your Availability Group named SQL2014AG1
  8. Validate that VLDB was successfully added to SQL2014AG1.
    1. Remove log shipping on DC1Server1 for the VLDB database.
    2. Enable your log backup process for all three servers. This replaces log shipping for taking transaction log backups.
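For reference, step seven can also be done in T-SQL rather than the GUI. A sketch using the names from this post:

```sql
-- On the primary replica (DC1Server1): add the log-shipped database to the AG
ALTER AVAILABILITY GROUP [SQL2014AG1] ADD DATABASE [VLDB];

-- On each secondary replica (DC1Server2 and DC2Server3): join the restored copy
-- (equivalent to selecting "Join Only" in the wizard)
ALTER DATABASE [VLDB] SET HADR AVAILABILITY GROUP = [SQL2014AG1];
```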


Cutover 30+ GB databases in 60 seconds with SQL Server 2005/2008

Your kid brother just messed up a database migration.  You now have sixty seconds to migrate a 30 GB database or your kid brother is…

Okay I gave it a shot 🙂

If you are familiar with moving databases across servers, most likely you are using one of the following methods. Today I am going to write about what goes on during these methods, with a focus on speed, and provide another option that might work for you.

  • Detach/Attach
  • Copy Database Wizard
  • Backup/Restore
  • 3rd Party Tools

Detach/Attach

The detach/attach method allows you to detach and reattach databases. To move a database from one server to another, you detach the database, physically move the data and log files (mdf, ndf, ldf) to the new server, and then attach the database. An unforeseen bottleneck with this strategy might be network latency. While this method is very straightforward and simple, how long can the database be offline while you are moving the files?
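A minimal sketch of the detach/attach round trip (the database name and file paths are assumptions):

```sql
-- On the source server: detach the database (it goes offline here)
EXEC sp_detach_db @dbname = N'SalesDB';

-- Physically copy SalesDB.mdf and SalesDB_log.ldf to the new server, then attach:
CREATE DATABASE [SalesDB]
ON (FILENAME = N'D:\Data\SalesDB.mdf'),
   (FILENAME = N'E:\Logs\SalesDB_log.ldf')
FOR ATTACH;
```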

Copy Database Wizard

The Copy Database Wizard is a tool that leverages SQL Server Integration Services (SSIS) to copy a database from one server to another. The account the package runs under must be a member of the sysadmin role on both the source and destination instances. You have two options during the copy process. The first is detach/attach; see the paragraph above for feedback on that approach. The second uses SMO to script database objects, which keeps the source database online during the copy but is much slower than detach/attach. I have not used this second method, so I have no feedback on it. Have you used it? If so, please add your comments.

Backup/Restore

Assuming your database uses the full recovery model, the backup/restore method for moving a database involves taking a full backup, then a transaction log backup of the tail of the log that leaves the database in a restoring state. This takes the source database offline and keeps the data in sync. With restore time being a factor, you could restore the full backup in advance and apply transaction log backups incrementally up to the point of cutover, assuming no full backups occur on your database while you are applying the transaction log backups.
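A sketch of that sequence (names and paths are assumptions; the full backup can be restored well before the cutover window):

```sql
-- On the source, at cutover: back up the tail of the log and leave the database restoring
BACKUP LOG [SalesDB] TO DISK = N'X:\SalesDB_tail.trn' WITH NORECOVERY;

-- On the destination: restore the full backup WITH NORECOVERY, then the tail WITH RECOVERY
RESTORE DATABASE [SalesDB] FROM DISK = N'X:\SalesDB_full.bak'
    WITH NORECOVERY,
         MOVE N'SalesDB'     TO N'D:\Data\SalesDB.mdf',
         MOVE N'SalesDB_log' TO N'E:\Logs\SalesDB_log.ldf';
RESTORE LOG [SalesDB] FROM DISK = N'X:\SalesDB_tail.trn' WITH RECOVERY;
```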

And the winner is…

Drum roll please…… and the winner is Database Mirroring in high availability mode, as it can allow you to fail over and migrate huge databases in less than sixty seconds. Okay, I might be cheating: it will take much more than sixty seconds to configure. The important fact is that it will seem like it only took seconds to the end users. If your application uses the .NET 2.0+ framework and you configure client-side redirect, the end users might not experience an outage at all.

To set up database mirroring, it is highly recommended that the principal and mirror database engine services run under domain service accounts, and they also need access to the ports used by the SQL Server endpoints, and more. Check out the MSDN site for a great guide on database mirroring.
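A rough sketch of the moving parts (server names, port, and database name are assumptions; the mirror must first be seeded from full and log backups restored WITH NORECOVERY):

```sql
-- On both instances: create a mirroring endpoint (port 5022 is a common convention)
CREATE ENDPOINT [Mirroring]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror (new server): point at the principal
ALTER DATABASE [SalesDB] SET PARTNER = N'TCP://OldServer.domain.com:5022';

-- On the principal (old server): point at the mirror
ALTER DATABASE [SalesDB] SET PARTNER = N'TCP://NewServer.domain.com:5022';

-- At cutover, once the mirror is SYNCHRONIZED: manual failover from the principal
ALTER DATABASE [SalesDB] SET PARTNER FAILOVER;
```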