Category Archives: SQL Server

All SQL Server posts that don’t fit in other categories

SQL Saturday Jacksonville – May 14th, 2022

Hi Everyone,

This is your Austin SQL Server Consultant, and I will be speaking at SQL Saturday Jacksonville. I am happy to get back out on the road again. I know COVID has impacted everyone in many different ways. While I have enjoyed my time alone with the people who matter most to me, I am also excited to get back on the road, reconnect with friends, and make new ones as well.

This weekend, I am happy to do my first in-person conference in two years. I will be at SQL Saturday in Jacksonville talking about my favorite topic: Making SQL Server Queries Go Faster! My SQL Server consulting company will be sponsoring as well. Make sure to come on over as we look forward to connecting, sharing, and learning all day long!

SQL Saturday Jacksonville is May 14th, 2022, and I am speaking!

Sample Code from Presentations

Persist Sample Percent in SQL Server IS Fixed!

Hi everyone, this is John Sterrett. I am a SQL Server Consultant in Austin, TX. Last year I blogged about a feature called Persist Sample Percent. It had a nasty bug that could negatively impact performance. I have great news to share: the fix is now rolled into SQL 2016 SP2 CU17 and SQL 2019 CU10. Pedro Lopes let me know that, with the fix also queued for SQL 2017 CU26, it will be fixed in all supported versions.

Breaking News… Persist Sample Percent is coming to SQL 2017 Soon!



Kudos to Pedro Lopes and the MSSQL development team for this update. Make sure you are applying the latest updates so you can leverage all the great enhancements, updates, and fixes.

Persist Sample Percent Matters

Okay, you might be wondering why you should consider utilizing persist sample percent. If you have large tables, auto update statistics might be hurting you instead of helping. Yup, that is not a typo. Also, if you update statistics and don't provide a sample percent, you can have the same problem. Worst case, you have a job that updates statistics with a good sample percent, then data changes and auto update statistics kicks in with a subpar sample percent.

By default, modern versions of SQL Server will utilize a smaller sample percent as your table row count grows. This can potentially give you bad execution plans.

Let's take a look at the example below. It doesn't take a whole lot of rows to get a sample percent under 10%.

If Puff Daddy were a DBA, he would say, "More Rows, More Problems."

If you want to identify whether this is a potential problem in your environment, I am including the script below that we utilize in our SQL Server Health Check.

;WITH cte AS (
    SELECT CAST(((sp.rows_sampled * 1.00) / sp.[rows]) * 100.00 AS NUMERIC(5,2)) AS SamplePCT,
           OBJECT_NAME(s.[object_id]) AS TableName,
           s.[name] AS StatsName,
           sp.*
    FROM sys.stats AS s
    OUTER APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
    JOIN sys.objects AS o
        ON s.[object_id] = o.[object_id]
       AND o.is_ms_shipped = 0
)
SELECT *
FROM cte
WHERE SamplePCT IS NULL
   OR SamplePCT < 10
ORDER BY SamplePCT;

The following is an example of this occurring. The only change we made was updating stats with a fixed sample rate.

Does anyone want to guess when stats were updated?

Persist Sample Percent Is Your Friend

You can utilize persist sample percent as long as you are running one of the cumulative updates (CUs) listed above or a newer CU. Persist sample percent will lock in your sample percent. You no longer need to worry about an index rebuild removing the persisted sample percent and putting you back at the default sample percent.
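For reference, here is a minimal sketch of locking in a sample rate with PERSIST_SAMPLE_PERCENT; the table name is a placeholder for one of your large tables.

-- Lock in a full scan for future statistics updates, including auto update.
-- dbo.BigTable is a hypothetical table name.
UPDATE STATISTICS dbo.BigTable
WITH FULLSCAN, PERSIST_SAMPLE_PERCENT = ON;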

You can follow this demo to test this out on your own.

If you enjoyed this blog post, subscribe to our newsletter so we can send you more free tips and videos.

Allow users to start/stop Azure VMs

Today I wanted to cover how you can grant the least privilege required to stop, start, or restart an Azure VM. This is also a great example of how you can create custom Azure security roles. That's right, we are killing two birds with one stone today.

Why Should You Create a Custom Role?

Where possible, I like to grant security at the resource group level. Therefore, let's assume we have multiple VMs built for the developer group to do some testing, and you want to grant people access to start, restart, or stop any VM in that group. We can grant our custom role access to the resource group, and as VMs come in and out of the resource group they inherit the custom role.

Now, you might be wondering why not just give them the "Contributor" role or the "Virtual Machine Contributor" role and be on our way? Well, if you did that on a resource group, you just gave them access to create VMs and a whole lot more.

Least-privilege roles are your best friend. Today, you will see they are not that hard to create either.

How Do We Create Custom Roles?

Great question. First, you need to identify which tasks the role needs to complete. In our case, you have to be able to see a VM in order to take any action against it, and then we want to start, stop (deallocate), and restart the VM. Digging through IAM, I found the following security actions.

 "Microsoft.Compute/*/read",
 "Microsoft.Compute/virtualMachines/start/action",
 "Microsoft.Compute/virtualMachines/restart/action",
 "Microsoft.Compute/virtualMachines/deallocate/action"

Now we can create the custom JSON text file that we will import using the Azure CLI. Below you will find a sample JSON file to build our custom security role. You will need to add your subscription ID(s), and you can also change the name and description you will see in the Azure Portal.

  "Name": "VM Operator",
      "IsCustom": true,
      "Description": "Can start, restart and stop (deallocate) virtual machines.",
      "Actions": [
        "Microsoft.Compute/*/read",
        "Microsoft.Compute/virtualMachines/start/action",
        "Microsoft.Compute/virtualMachines/restart/action",
        "Microsoft.Compute/virtualMachines/deallocate/action"
      ],
      "NotActions": [


      ],
      "AssignableScopes": [
        "/subscriptions/<Subscription ID Goes Here>"
      ]
    }

How to Import the Custom Security Role

Now that our custom security role is ready in a JSON file, we can utilize the Azure CLI to log in to the tenant and import the role. First, we log in to Azure with the CLI as shown below.

az login --username <myEmailAddress> -t <customerTenantId-or-Domain>

Now we will load our saved JSON file. After a few minutes, we should then see our new security role in the Azure portal.

az role definition create --role-definition IAMRole-VMOperator.json

Now you can grant access to your custom role just like you would with any other role in Azure.
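For example, assigning the new role to a user over the developers' resource group with the CLI might look like the sketch below; the email address and resource group name are placeholders.

az role assignment create --assignee "developer@example.com" --role "VM Operator" --resource-group "rg-dev-vms"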

Previous SQL Server Updates

Hello everyone, this is John, your Austin SQL Server Consultant, and today I am going to answer a question that comes up often, so I wanted to blog about it for everyone. The question of the day is: where can I download previous SQL Server updates?

The History of Updates

Back in the day, when we were young but not kids anymore, there were service packs and cumulative updates. We could download these separately, and all of the updates were easy to find. Today, if you click on a KB article to download an update, you get pointed to the latest update, as shown below.

Current SQL Server KB articles point to only the latest update.

How Far Along Is My SQL Server on Updates?

This is another great question. My favorite place to find the full history of SQL Server updates is the SQL Server Build List Blog. You can cross-reference it against your version by running the query below.

You can use SELECT @@VERSION to get your current version number.
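Here is a minimal sketch of that check; SERVERPROPERTY gives you the build number in a form that is easy to compare against the build list.

-- Returns the full version string plus the raw build number and patch level.
SELECT @@VERSION;
SELECT SERVERPROPERTY('ProductVersion') AS BuildNumber,
       SERVERPROPERTY('ProductLevel') AS PatchLevel;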

I fully understand why Microsoft is trying to point everyone to the latest update. Normally it makes perfect sense, but let's take a look at today, Jan 9th, 2020. I am planning to update SQL Server 2017 to CU17, which has been out for two months. Today CU18 was released, and if I wasn't careful I would have downloaded a different update than expected.

The SQL Server Build List is a great resource for finding a list of all SQL Server updates.

Getting a Previous SQL Server Update

So, on to the solution. It's actually an easy one, but also one that is easy to overlook. Let's go back to the new standard update page for SQL Server updates.

That is right, the Microsoft Update Catalog is your best friend for finding all your updates for Microsoft products, including SQL Server. You can search for the product you want. For example, in this case I am looking for SQL Server 2017 and can see all of its previous updates.

All the SQL Server updates can be found in the Microsoft Update Catalog.

If you enjoyed this tip and found it helpful, make sure to join our newsletter so you can enjoy more free tips, tricks, and video demos.

Free SQL Server Training Videos

Hello everyone! This is your SQL Server Consultant in Austin, TX. Due to some posts on Twitter about SQL PASS recordings costing $999, I wanted to share some of my favorite places to find free SQL Server training videos. I hope this helps make your data fast, secure, and highly available in 2020 and beyond!

Where Is the Good Stuff? Give Me Some More!

Speaking on Migrating to Azure SQL Database at Ignite 2017
Attending conferences is nice but free recorded sessions are priceless!

Those who know me know I love music, and I especially love underground, non-mainstream content. Therefore, my first recommendation is UserGroup.TV. As of December 27th, there are 127 videos tagged as SQL Saturday alone. Shawn goes to almost every tech conference he can find, brings his rig, and records sessions for the community.

Are you in love with the new pop singles? Wish you could hear them before they hit the radio? If you like your tech like your music, then Microsoft Ignite is for you. Every year Microsoft puts on its Ignite conference, which is usually where Microsoft breaks its cutting-edge tech. My favorite thing about the conference is that the content is available online for free. Midway through the page, you can search the massive collection of free recorded sessions.

Next up is the consistent greatest hits, where almost every session is a banger! This reminds me of my favorite Microsoft Data Platform conference: SQLBits. Yes, their video content is also available for FREE.

Finally, here is a collection of some great videos on performance tuning: Make SQL Server Go Faster and SQL Server Performance Improvements with SQL 2019. Another one is 7 Reasons your SQL Server Code is Slow! You can also find more great videos on the Procure SQL YouTube channel.

Azure Backup for SQL Server VMs

I am not sure why, but sometimes I am a glutton for punishment. Maybe that's why I try every backup and restore solution I can get my hands on. While Microsoft has done an amazing job of building the best relational database engine, Azure Backup for SQL Server Virtual Machines has some architecture problems. In this post, I will showcase the things you need to focus on, the problems, and the workarounds for your initial run with Azure Backup for SQL Server VMs.

What’s Azure Backup for SQL Server Virtual Machines (VMs)?

If you take a look at Azure Backup, you will see they added functionality for backing up SQL Server databases inside an Azure VM. This seems like a really cool feature: use the same technology you use to back up your VMs to also back up your databases, a one-stop shop for your disaster recovery needs. It comes with built-in monitoring, and it eliminates the struggle some people have with setting up certificates, encryption, purging old backups in blob storage, and backups and restores from blob storage. It is also really nice to have an experience similar to restoring Azure SQL Databases.

Unfortunately, the product doesn't work as expected at this point in time. I would expect any database backup tool to be able to back up the system databases by default without any customization. Last night I set up my first Azure Backup for SQL Server Virtual Machines in the Backup Vault, and this morning you can see my results below.

Azure Backup for SQL Server VMs gets 0/3 system databases backed up by default

Now we will dig into concerns and initial problems with Azure Backup for SQL Server Virtual Machines (VMs).

Automatically Back Up New Databases

Having the ability to back up new databases automatically is taken for granted. So much so that I was surprised to notice Azure Backup for SQL Server VMs will not automatically back up new databases for you. That's right: make sure you remember to go in, detect, and select your new database every time you add one, or you will not be able to recover it.

Azure Backup for SQL Server VMs has an interesting feature called Autoprotect, which should automatically back up all your databases for you. Unfortunately, it did not work for me. Yes, I double-checked: I enabled Autoprotect for a VM and added a new database. The database didn't get backed up, so I had to add it manually.

Simple Recovery Problems

Looking into the failures for my system database backups, I noticed something interesting in the log for the master database. It looks like you will get errors with the only SQL Server backup policy created by default. The reason is that the policy includes transactional log backups, and as you know, it's impossible to take a transactional log backup if your database utilizes the simple recovery model. Now, most backup tools know how to roll with databases in both simple and full recovery.

It looks like Azure Backup for SQL Server VMs is not one of those tools; it does not easily allow you to mix databases utilizing both the simple and full recovery models.

Yup, the simple recovery model is no bueno.

So, how do we get around this? It is not too hard. Just create a new backup policy that does not include transactional log backups and assign it to the databases that utilize the simple recovery model.
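To find which databases need that policy, a quick query like the sketch below works; it simply lists user databases by recovery model.

-- List user databases by recovery model so the SIMPLE ones can be
-- assigned to a backup policy without transactional log backups.
SELECT [name], recovery_model_desc
FROM sys.databases
WHERE database_id > 4
ORDER BY recovery_model_desc, [name];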

Transactional Log Backup Problems

So, what happens when you try to take a transactional log backup of a database that doesn't have a full backup? It fails. This is by design; if you try to take a log backup in this scenario with T-SQL, it will fail as well. That said, several third-party and open-source backup solutions, like my recommended one, can gracefully handle this for you by taking a full backup instead of the log backup. I have grown to expect this behavior.
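For reference, here is a minimal T-SQL sketch of that by-design failure; the database name is hypothetical.

-- A brand-new database in full recovery with no full backup yet.
CREATE DATABASE NewDb;
ALTER DATABASE NewDb SET RECOVERY FULL;

-- Fails with Msg 4214: BACKUP LOG cannot be performed because
-- there is no current database backup.
BACKUP LOG NewDb TO DISK = N'NUL';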

Here is what you will see in the logs of Azure Backup for SQL Server VMs.

Log backups without a full backup fail. You have to force a full backup or wait until the scheduled full backup occurs. Yuck!

So, the workaround here is simple. You can force a full backup, which starts the process of allowing your scheduled log backups to work as designed. You could also wait until the scheduled full backup runs, but know this means you will not have point-in-time recovery until that full backup completes. There should be an option to perform a full backup instead of a transactional log backup when a full backup does not exist; this would prevent the scheduled transactional log backups from failing.

Things to Know!

Azure Backup for SQL Server VMs is priced on storage consumed as well as the number of protected SQL Server instances. By default, compression is not used for the SQL Server backups; you will most likely want to enable it to save some money.

Azure Backup for SQL Server VMs has a good amount of documentation for troubleshooting common problems.

There are many documented limitations that we didn't cover in this blog post. Some shocking ones to me: SQL Server Failover Cluster Instances are not supported, and you shouldn't configure backup for more than 50 databases in one go.

Looking Back: 2016 was Awesome!

With 2017 starting this week, I would like to take a moment and reflect on 2016. I have to admit that 2016 was a great year for many different reasons, both personally and professionally. It's nice just to stop and acknowledge some significant accomplishments that occurred last year!

2016 in Review

I always try to keep my community contributions up to date here on my blog. I would never have gotten to where I am today without others who helped me along the way, so I am always happy to give back. It's been an honor to host another SQL Saturday in Austin for over 250 attendees, be a leader for the Austin SQL Server User Group, and continue to grow the High Availability and Disaster Recovery Virtual Chapter to 2,145 members.

Professionally, it was also a great year in 2016. I was able to obtain two major goals on my career bucket list. I completed the Microsoft Certified Solution Expert for Data Platform certification (yes, my procrastination took me to the last work day in 2016, but I got it done!). As Tom LaRock says, "They do not hand out certifications like candy. You have to earn them." I was also acknowledged as a Microsoft Data Platform MVP, an award that goes to about 100 people in the USA.

Microsoft Certified Solution Expert on Data Platform

In 2016, I was also able to continue to grow as a speaker. Once again, I was blessed with an opportunity to speak at the prestigious PASS Member Summit; I never take getting accepted lightly. This year, even though I was sick and almost had to cancel, I was able to deliver my session while obtaining my best evaluation score to date (ranked 1 to 3). I gave 15 sessions in 2016; my highest-attended session of the year was at the Data Architecture Virtual Chapter with 457 attendees.

Almost a perfect score at the Superbowl for SQL Server Speakers!

My company, Procure SQL LLC, also achieved some great milestones in 2016. I was able to hire an employee, Angela Tidwell, as our Marketing DBA, and we were able to procure our very first office as well. Finally, after spending several hours working with an excellent health care broker, I was proud to announce that we set up our very first group health plan at Procure SQL. While this isn't required, it was a huge goal of mine to help make sure that our employees are taken care of as we grow in the future.

Personally, I was able to catch up with some great friends and spend some quality time with my family. We took the family to Galveston, TX for our very first beach trip, and I learned that it was Nina's dad's first time in the Gulf as well. Bonus: the weather was so nice in December that my brother and I decided to bring our kids back for a weekend. During my birthday weekend, I was able to see my Pirates and Wheeling Nailers in person in Dallas, both of which are very rare. That weekend I was also able to hang out with one of my best friends from college, and we even made it on Root Sports Pittsburgh raising the Jolly Roger.

Finally, my loyalty to some friendships was tested in 2016. While one could have focused only on themselves, I decided to help my real friends in need. I remember asking myself (yes, I talk to myself), "What would I hope my friends would do for me in my hour of need?" I then tried to do that.

2017 Goals

These days, I am not a huge fan of publicly sharing my goals.  With that said, I wanted to share three goals to help keep me accountable.

Build Two Local Speakers

In the past, I have had the opportunity to help build some new speakers. I cannot think of a more rewarding experience in the SQL community than helping someone grow. While I have more than two people in mind, I hope to have at least two new speakers give their very first SQL Server presentations in 2017. Who knows? Maybe they will love it, and it can change their careers like it did for me.

Fill My Office

While I do plan on getting us a fridge, when I say "fill my office" I'm not just planning on filling the fridge with adult beverages. Our office easily fits one more person, and I hope to find that FTE employee in 2017. I will provide more details later on as we get closer to pulling that trigger.

Building a Training Plan

 

Literally, this was the first thing written on our new six-foot whiteboard.

With the hiring of Angela last year, one of my goals was to build some high-quality training material that could help someone become a SQL Server Database Administrator (DBA). I want this to be on-demand online content that helps as many people as possible get a great start as a DBA. I hope to have this implemented in 2017. It might also force me to get out of my shell and focus on building some great video content, which will be a new challenge.

 

I am a Microsoft Data Platform MVP

My First Microsoft SQL Server MVP Award

Today, I have achieved an SQL Someday moment, and I am excited to share some news: Microsoft has chosen me to receive the Microsoft MVP award. I am completely shocked, as this is my first time obtaining this award.

It is a tremendous honor to be given this award. Words cannot describe how humbling it is just to be nominated for this award by peers within the Data Platform community. I am blessed to be part of the community.  I am thankful to work in a career that supports the ability for everyone to connect, share and learn. I look forward to doing the same as a new member of the MVP community.

Who Are Microsoft MVPs?

The following excerpt comes directly from the Microsoft MVP website.

“Microsoft Most Valuable Professionals, or MVPs, are community leaders who’ve demonstrated an exemplary commitment to helping others get the most out of their experience with Microsoft technologies. They share their exceptional passion, real-world knowledge, and technical expertise with the community and with Microsoft.”

How My Journey Started

I will never forget, and will always be thankful to, Adolph Santorine, my boss at the time, who told me, "John, I think you should get involved in our local Association of Information Technology Professionals (AITP) chapter." He shared how it helped him personally and how he thought it could be beneficial for my career. Adolph was right. I learned so much from some well-experienced IT leaders. The Greater Wheeling Chapter of AITP is where my community involvement started, and I was lucky to start with a community that has been connecting, sharing, and learning since the 1960s. When I decided to get my local community involved in the SQL Server community, my fellow AITP members had my back and helped make our event a success. From that moment, I was introduced to PASS and the SQL Server community. I have been connecting, sharing, and learning ever since.

How Will Things Change?

They will not change, and to me, that is the greatest part of this award. I still plan to connect, share and learn with as many people as possible. The data platform is evolving.  I plan on being involved in the community and building relationships with individuals who are interested in seeing the Microsoft Data Platform grow and succeed.

Special Thanks

I would like to thank my family, friends, and mentors.  Without their help and support, I would never be the person who I am today or tomorrow.

I would also like to thank everyone I have worked with in my career.  Thank you to everyone at Deloitte, Schedule Star, Orrick Herrington & Sutcliffe, RDX, Dell and Linchpin People. I am truly blessed to have a lot of great mentors and friends.   Thank you for giving me the opportunity to learn, grow, and have a career doing what I love to do.

DON'T FORGET TO NOMINATE

Finally, you might not know this, but anyone can nominate someone for a Microsoft MVP Award. You don't have to be an MVP or a Microsoft employee. I have personally been nominating people for years and will continue to do so as an MVP. If you know anyone who you think is deserving of the Microsoft MVP Award, nominate them.

Automatic Seeding Very Large Databases in SQL Server 2016

There are a lot of new features in SQL Server 2016, and Availability Groups alone got several of them. Being the founder of the High Availability and Disaster Recovery Virtual Chapter, I plan on blogging about the new availability group features.

Today, I wanted to write about automatic seeding. Microsoft did an excellent job of explaining how to enable and monitor it, so I wanted to focus this post on my experience utilizing automatic seeding to seed an existing 600 GB database in a test environment and test my endpoint network throughput.

The initial data synchronization easy button.

When you add a database to an availability group, the data must be synchronized between the replicas before the database can be joined on them. In the past, this data initialization has been done with mirroring, log shipping, and backups and restores. Personally, I have been a big fan of log shipping for the initial data synchronization of VLDBs, especially when you need more than two replicas. Here is how I added a 60 TB (yes, TB not GB) database to an availability group that spanned multiple data centers.

Automatic seeding is a feature that has been in Azure SQL Database for a while; it is how the initial data synchronization occurs for geo-replication of Azure SQL Databases. Automatic seeding utilizes a VDI backup to take a copy-only backup, sends it over the endpoint network to seed the replicas, and then joins the databases on the replicas. This eliminates the need to manually take full and log backups from the primary replica and restore them to all the secondary replicas, and it joins the database on the replicas for you.
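For context, enabling it boils down to two statements like the sketch below; the availability group and replica names are placeholders.

-- On the primary: switch a replica to automatic seeding.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON 'SQLNODE2' WITH (SEEDING_MODE = AUTOMATIC);

-- On each secondary: allow the availability group to create the seeded database.
ALTER AVAILABILITY GROUP [MyAG] GRANT CREATE ANY DATABASE;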

Bonus Feature of Automatic Seeding

There is also a bonus for DBAs and IT professionals. Even if you decide not to use automatic seeding, I recommend testing this feature, as automatic seeding can be a great way to stress your endpoint network and validate its throughput.

Background Information

This availability group has been configured with a separate 10Gbps network dedicated to endpoint traffic. Nothing else is active on the network or the replicas during the time of testing.

Setup

I configured the following performance monitor counters.

  • Bytes Received/sec on the secondary replicas
  • Bytes Sent/sec on the primary replica

I also configured the following extended event session to monitor seeding activity on the primary and secondary replicas. We will focus on the “hadr_physical_seeding_progress” event today. We will talk about others in a future blog post.

CREATE EVENT SESSION [AlwaysOn_autoseed] ON SERVER 
ADD EVENT sqlserver.hadr_automatic_seeding_state_transition,
ADD EVENT sqlserver.hadr_automatic_seeding_timeout,
ADD EVENT sqlserver.hadr_db_manager_seeding_request_msg,
ADD EVENT sqlserver.hadr_physical_seeding_backup_state_change,
ADD EVENT sqlserver.hadr_physical_seeding_failure,
ADD EVENT sqlserver.hadr_physical_seeding_forwarder_state_change,
ADD EVENT sqlserver.hadr_physical_seeding_forwarder_target_state_change,
ADD EVENT sqlserver.hadr_physical_seeding_progress,
ADD EVENT sqlserver.hadr_physical_seeding_restore_state_change,
ADD EVENT sqlserver.hadr_physical_seeding_submit_callback
ADD TARGET package0.event_file(SET filename=N'autoseed.xel',max_file_size=(20),max_rollover_files=(4))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=ON)
GO

The following T-SQL script is then used to read the results once your seeding process has completed. We will talk about the results in the results section below.

DECLARE @XFiles VARCHAR(300) = 'S:\MSSQL13.MSSQLSERVER\MSSQL\Log\autoseed*'

;WITH cXEvent
AS (
     SELECT    object_name AS event
              ,CONVERT(XML,event_data) AS  EventXml
     FROM      sys.fn_xe_file_target_read_file(@XFiles, NULL,NULL,NULL)
     where object_name like 'hadr_physical_seeding_progress')

 SELECT 
c1.value('(/event/@timestamp)[1]','datetime') AS time
,c1.value('(/event/@name)[1]','varchar(200)') AS XEventType
,c1.value('(/event/data[@name="database_id"]/value)[1]','int') AS database_id
,c1.value('(/event/data[@name="database_name"]/value)[1]','sysname') AS [database_name]
,c1.value('(/event/data[@name="transfer_rate_bytes_per_second"]/value)[1]','float') AS [transfer_rate_bytes_per_second]
,(c1.value('(/event/data[@name="transfer_rate_bytes_per_second"]/value)[1]','float')*8)/1000000.00 AS [transfer_Mbps]
,c1.value('(/event/data[@name="transferred_size_bytes"]/value)[1]','float') AS [transferred_size_bytes]
,c1.value('(/event/data[@name="database_size_bytes"]/value)[1]','float') AS [database_size_bytes]
,(c1.value('(/event/data[@name="transferred_size_bytes"]/value)[1]','float') / c1.value('(/event/data[@name="database_size_bytes"]/value)[1]','float'))*100.00 AS [PctCompleted]
,c1.value('(/event/data[@name="is_compression_enabled"]/value)[1]','varchar(200)') AS [is_compression_enabled]
,c1.value('(/event/data[@name="total_disk_io_wait_time_ms"]/value)[1]','bigint') AS [total_disk_io_wait_time_ms]
,c1.value('(/event/data[@name="total_network_wait_time_ms"]/value)[1]','int') AS [total_network_wait_time_ms]
,c1.value('(/event/data[@name="role_desc"]/value)[1]','varchar(300)') AS [role_desc]
,c1.value('(/event/data[@name="remote_machine_name"]/value)[1]','varchar(300)') AS [remote_machine_name]
,c1.value('(/event/data[@name="internal_state_desc"]/value)[1]','varchar(300)') AS [internal_state_desc]
,c1.value('(/event/data[@name="failure_code"]/value)[1]','int') AS [failure_code]
,c1.value('(/event/data[@name="failure_message"]/value)[1]','varchar(max)') AS [failure_message]

FROM cXEvent
    CROSS APPLY EventXml.nodes('//event') as t1(c1)

Results

The 600 GB database took about 66 minutes to seed across the network from the primary replica to the secondary replica. As a sanity check, 600 GB in 66 minutes works out to roughly 1.2 Gbps of effective throughput, which lines up with the 1.4 Gbps of consistent network throughput I observed during the seeding process. This makes a lot of sense, as it caps out around what the storage system can deliver in this environment.

The first thing I would look at for benchmarking throughput for network activity would be the bytes sent per second from the primary replica and bytes received per second on the secondary replicas.

AG Seeding VLDB Primary Replica – Bytes Sent per Second

AG Seed VLDB Secondary Replica – Bytes Received per Second

I am seeing an average of around 1.4 Gbps. Normally, just looking at bytes sent and bytes received will be good enough to measure your throughput, especially when nothing else is utilizing the dedicated endpoint network. In the field, I usually do not see dedicated networks for endpoint traffic, so I wanted to take this a step further and monitor with some of the new extended events for automatic seeding.

Here is a look at the raw data from the extended event capture showing the progress and throughput of the seeding.

AG Seed Xevent Transfer

The following is a nice Excel graph showing the throughput in Mbps. I added an extra row with zero throughput just to show the rise and fall of network usage as seeding starts and completes.

AG Seeding Excel Graph

My Thoughts

The initial data synchronization process just got a lot easier. I will definitely use this when adding new databases to availability groups. If you can live with your transactional log not being truncated during the seeding process, I strongly encourage you to use automatic seeding.

I also did not use trace flag 9567 to enable compression during this test, which is why you saw compression not being enabled in the extended event output. If you have the CPU resources, I recommend you test this as well.
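If you want to test it, a minimal sketch is below; trace flag 9567 enables compression of the seeding data stream and should be run on the primary replica before seeding starts.

-- Enable compression for the automatic seeding data stream (global scope).
DBCC TRACEON (9567, -1);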

For more great information on SQL Server, subscribe to my blog and follow me on Twitter.

Photo Credit: Mike Mozart (Creative Commons)

Converting identifiers into SQL Server table variables in Management Studio

If I had a nickel for every time someone gave me a list of identifiers and asked me to pull a result set for them, I would be a very wealthy man. Over the years, this has been an endless request. I used to use several different tools, Excel for example, to convert the list of identifiers into SQL statements. I would then use set-based logic to complete my task inside SQL Server.

Being a SQL Server Database Administrator, I like to use one consistent tool for working with queries. One day I noticed an interesting feature in the search-and-replace functionality inside Management Studio: I could utilize regular expressions for manipulating text within SQL Server Management Studio. This gave me the ability to stay within my preferred tool for SQL Server development work.

If you want to see how you can quickly manipulate the identifiers and convert them into a table variable, check out the three-minute video below. A sketch of the end result follows the video.
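As a rough sketch of the end result, suppose you paste one identifier per line into a query window, then use Find and Replace (Ctrl+H) with "Use Regular Expressions" enabled to wrap each line in a VALUES row; the exact regex syntax varies by SSMS version, and the table and column names below are hypothetical.

-- After the regex replace, the pasted identifiers become a table variable
-- that supports set-based joins instead of a giant IN list.
DECLARE @Ids TABLE (Id INT PRIMARY KEY);

INSERT INTO @Ids (Id)
VALUES (1001), (1002), (1003);

SELECT o.*
FROM dbo.Orders AS o
JOIN @Ids AS i ON i.Id = o.OrderId;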