Austin, Texas: Best Microsoft Technical Training Opportunity

If you’re looking to master SQL Server, Power BI, or Microsoft Fabric, attending the SQL Saturday event in Austin, Texas, is one of the smartest moves you can make for your career.

Note: Austin, Texas, is hosting its event on May 2nd and 3rd.

Here’s why:

Free, High-Quality Training

SQL Saturday events are renowned for offering a full day of technical sessions entirely free of charge (you only pay for lunch). Whether you're a beginner or an experienced professional, you'll find sessions tailored to your skill level, led by Microsoft employees, industry experts, and Microsoft MVPs who are passionate about sharing their knowledge. This includes all-day hands-on workshops (usually a paid add-on) and deep dives into the latest features of SQL Server, Power BI, and Microsoft Fabric, ensuring you stay current with the rapidly evolving Microsoft data platform.

Learn from the Experts

Austin Texas SQL Saturday session

Speakers at SQL Saturday events are practitioners who solve real business problems with these technologies daily. You'll gain practical insights, best practices, and tips you can apply to your job right away. You'll see how other companies and consultants leverage SQL Server, Power BI, and Microsoft Fabric to drive business success.

Networking Opportunities in Austin, Texas

Experts speak at and attend SQL Saturdays because they want to connect, share, and learn together. These connections lead to mentorship, job opportunities, and lasting professional relationships. A SQL Saturday is more than technical content; it's a community gathering. You'll connect with fellow data professionals, speakers, and recruiters. The supportive, grassroots atmosphere makes it easy for newcomers to feel at home and get involved. You never know: your next boss might be sitting next to you in a session.

Career and Community Growth

Attending SQL Saturday is a proven way to invest in your professional development. My company, ProcureSQL, is a living example. We wouldn’t exist without the technical and professional development at SQL Saturdays. It is a key reason why we continue to invest time and money to help these events succeed.

You’ll sharpen your technical skills and gain exposure to leadership and volunteering opportunities that can accelerate your career. Plus, you’ll become part of a global network of data professionals passionate about learning and sharing.

John Sterrett teaching performance tuning
SQL Saturday training class

In short, if you want to learn SQL Server, Power BI, or Microsoft Fabric, SQL Saturday offers an unbeatable combination of free training, expert guidance, and community support. Don’t miss your chance to level up. Join us at SQL Saturday Austin on May 2nd and 3rd, 2025.

PS: If you cannot attend SQL Saturday in Austin and still would like help with your Microsoft Data Platform problems, I am happy to chat one-on-one.

Weekly Content – Nov 4 2024

The following is content I created, links, videos, and other things I found interesting and wanted to share this week.

My Content

Microsoft Fabric Mirroring is Changing the Game for Data Analytics – In this quick write-up, you will learn about a no-code, near-real-time solution for getting data from your applications into your data lake.


Azure Managed Instance: Changing the DNS Prefix

Azure Managed Instances are provisioned by default with a yourname.uniqueid.database.windows.net fully qualified domain name (FQDN). Even on your private virtual network, you still have to use this FQDN.

If you want your Azure Managed Instance to be reachable with your own DNS prefix, like yourname.domain.com, this blog post is for you.

Azure Managed Instance Virtual Network

Your first step is to make sure you are either using the Azure DNS service or providing DNS on your own.

In my example, we have a hybrid setup with Active Directory and DNS servers both in our virtual network and on-premises, so we will be using a VM in the virtual network to provide DNS.

Azure Virtual Network DNS configuration for your Azure Managed Instance network.

DNS CNAME Alias Configuration

To allow private users to connect using your DNS zone, you need to create a CNAME alias in DNS. The alias needs to have the same name as your managed instance. In this example, I created an Azure Managed Instance named “procuresql01mi”. Its FQDN is procuresql01mi.b6d698c00851.database.windows.net. The domain name of my lab is PASS2020.com. I will configure a CNAME alias so all internal requests to procuresql01mi.pass2020.com are routed to procuresql01mi.b6d698c00851.database.windows.net.

DNS CNAME alias for changing the DNS zone when connecting to an Azure Managed Instance

Azure Managed Instance DNS Zone Change Wrap Up

Now all you have to do is connect with the new DNS name used with the alias and you are good to go! If you use Azure AD to connect, make sure to enable “Trust Server Certificate” on your connection; the instance's TLS certificate is issued for the original database.windows.net name, so it will not match your alias.

Trust server certificate for Azure Managed Instance DNS CNAME alias

Here you can see both connections via SSMS.

Azure Managed Instance with your DNS domain.
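
If you would rather verify with a query than a screenshot, here is a quick, hedged sanity check: run it once on each connection (the CNAME alias FQDN and the native FQDN), and both should return the same underlying instance name.

-- Matching results from both connections confirm the CNAME alias
-- resolves to the same Managed Instance.
SELECT @@SERVERNAME AS InstanceName,
       SERVERPROPERTY('ServerName') AS ServerPropertyName;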

If you have any problems and want some help, contact us. Also, if you like these tips, subscribe to our newsletter.

SQL Saturday Jacksonville – May 14th, 2022

Hi Everyone,

This is your Austin SQL Server consultant, and I will be speaking at SQL Saturday Jacksonville. I am happy to get back out on the road again. I know COVID has impacted everyone in many different ways. While I have enjoyed my time with the people who matter the most to me, I am also excited to get back on the road, reconnect with friends, and make new ones.

This weekend, I am happy to do my first in-person conference in the past two years. I will be at SQL Saturday in Jacksonville talking about my favorite topic: making SQL Server queries go faster! My SQL Server consulting company will be sponsoring as well. Make sure to come on over as we look forward to connecting, sharing, and learning all day long!

SQL Saturday Jacksonville is May 14th, 2022, and I am speaking!

Sample Code from Presentations

2021 PASS Member Summit Session Review

This past month, I was honored to build and deliver a session to help anyone get started with performance tuning. If you signed up for free to attend, you can catch the session on-demand at no cost for six months.

How Did the Session Go?

This was one of my favorite sessions, and it looks like it was well attended.

Speaker Survey Results

The following were the survey results from the virtual conference.

Persist Sample Percent in SQL Server IS Fixed!

Hi everyone, this is John Sterrett. I am a SQL Server consultant in Austin, TX. Last year I blogged about a feature called persist sample percent. It had a nasty bug that could negatively impact performance. I have great news to share: the fix is now rolled into SQL 2016 SP2 CU17 and SQL 2019 CU10. Pedro Lopes let me know that, with the fix also queued for SQL 2017 CU26, it will be fixed in all versions.

Breaking News… Persist Sample Percent is coming to SQL 2017 Soon!



Kudos to Pedro Lopes and the MSSQL development team for this update. Make sure you are applying the latest updates so you can leverage all the great enhancements, updates, and fixes.

Persist Sample Percent Matters

Okay, you might be wondering: why should I consider utilizing persist sample percent? If you have large tables, auto update statistics might be hurting you instead of helping. Yup, that is not a typo. Also, if you update statistics and don't provide a sample percent, you can hit the same problem. Worst case, you have a job that updates statistics with a good sample percent, data changes, and auto-update then uses a subpar percent.

By default, modern versions of SQL Server will utilize a smaller sample percent as your table row count grows. This can potentially give you bad execution plans.
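
As a hedged illustration (dbo.BigTable is a hypothetical large table, not an object from this post), compare the two update paths below; they look similar but can behave very differently on big tables:

-- Explicit sample rate: you control the statistics quality.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- No sample rate specified: SQL Server picks one, and on large
-- tables it can drop well below 10 percent of the rows.
UPDATE STATISTICS dbo.BigTable;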

Let's take a look at the example below. It doesn't take a whole lot of rows to get a sample percent under 10%.

If Puff Daddy were a DBA, he would say, “More rows, more problems.”

If you want to identify whether this is a potential problem in your environment, I am including the script below that we utilize in our SQL Server Health Check.

;WITH cte AS (
    SELECT CAST(((rows_sampled * 1.00) / [rows]) * 100.00 AS NUMERIC(5,2)) AS SamplePCT,
           OBJECT_NAME(s.object_id) AS TableName,
           s.name AS StatsName,
           sp.*
    FROM sys.stats AS s
    OUTER APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
    JOIN sys.objects AS o
        ON s.object_id = o.object_id
        AND o.is_ms_shipped = 0
)
SELECT *
FROM cte
WHERE SamplePCT IS NULL OR SamplePCT < 10
ORDER BY SamplePCT;

The following is an example of this occurring. The only change we made was updating stats with a fixed sample rate.

Does anyone want to guess when stats were updated?

Persist Sample Percent Is Your Friend

You can utilize persist sample percent as long as you are running one of the cumulative updates (CUs) listed above or a newer CU. Persist sample percent locks in your sample percent. You no longer need to worry about an index rebuild removing the persisted sample percent and putting you back at the default sample percent.

You can follow this demo to test this out on your own.
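
If you just want the syntax before running the demo, here is a minimal sketch (dbo.BigTable and PK_BigTable are hypothetical placeholders):

-- Lock in a 60 percent sample rate; future updates that omit a
-- sample percent, including auto update statistics, will reuse it.
UPDATE STATISTICS dbo.BigTable PK_BigTable
WITH SAMPLE 60 PERCENT, PERSIST_SAMPLE_PERCENT = ON;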

If you enjoyed this blog post subscribe to our newsletter so we can make sure to send you more free tips and videos.

Replay your Workload in the Cloud

Join Kevin Kline and me on Monday, November 23rd to learn how we do load testing in the cloud.

Want to save money, validate performance, and make sure you do not have errors while migrating to the cloud? Is there a tool that can help with all of that?

There is! It’s time to learn about the SQL Server Database Experimentation Assistant (DEA)! In this session, you will learn how to:

  • Capture your database workload, on-premises or in the cloud
  • Replay your workload on-demand as needed
  • Analyze and compare results, giving you full confidence in the best outcome for Azure migrations and implementing new on-prem hardware

If you want to save money, validate performance, and make sure you do not have errors while migrating to the cloud, then you need to learn about the DEA today. Once you’ve added the DEA to your toolkit, you’ll look like the rockstar while the business saves money and makes the customers happy.

Got Cloud Migration Questions?

Workload Replay for Azure SQL Database and Amazon RDS

Want to make sure you don't have errors, validate performance, and save money while making changes to Azure SQL Database, Azure SQL Managed Instance, or SQL Server RDS on Amazon AWS? In this video, you will learn how to use the Database Experimentation Assistant to perform workload replay and compare your on-premises or cloud SQL Server workloads on demand.

Replaying workloads is your secret weapon to being a rockstar!


Persist Sample Percent in SQL Server IS NOT PERSISTED!

Update: Sept 14, 2021
Persist sample percent is fixed. The fix is now rolled into SQL 2016 SP2 CU17, SQL 2019 CU10, and the future SQL 2017 CU26.


When a bug jumps out and surprises me, I like to share it so others do not run into the same unexpected result. There is no reason for a bug to bite multiple people in the butt. Therefore, I want to show you why persist sample percent IS NOT PERSISTED!

Why should I use Persist Sample Percent?

As your table grows and the rows multiply, the default statistics sample percent used by SQL Server gets smaller and smaller. In theory, persist sample percent lets you update your statistic once, specifying the percent it should use going forward whenever a sample percent is not specified. Unfortunately, this feature is broken, IMHO.

Persist Sample Percent Setup

To set the scene so you can reproduce and learn, below we create a single-column table whose column is an identity and also the primary key. Therefore, an index is created, which also creates statistics on our column.

DROP TABLE IF EXISTS dbo.Test;
CREATE TABLE dbo.Test (ID INT IDENTITY NOT NULL CONSTRAINT TestPK PRIMARY KEY);

INSERT INTO dbo.Test DEFAULT VALUES
GO 10000000 -- 10 million rows

CREATE PROCEDURE dbo.DemoStatsReview
AS
BEGIN
    SELECT CAST(((rows_sampled * 1.00) / [rows]) * 100.00 AS NUMERIC(5,2)) AS SamplePCT,
           sp.*
    FROM sys.stats AS s
    OUTER APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
    WHERE s.[name] = N'TestPK';
END

First, we will rebuild our index, which uses a FULLSCAN. This is expected and normal activity for an index rebuild.

/* Index Rebuild uses 100% rows for sampling */
ALTER INDEX TestPK ON dbo.Test REBUILD WITH (STATISTICS_NORECOMPUTE = OFF)
EXEC dbo.DemoStatsReview 

Now, we will update statistics utilizing the new persist sample percent feature. This should give us two benefits. One, auto stats updates will use this sample rate going forward on this statistic. Two, we no longer need to supply a sample percent when we update statistics manually or with our maintenance jobs.

/* Now let's update stats using PERSIST_SAMPLE_PERCENT */
UPDATE STATISTICS dbo.Test TestPK WITH SAMPLE 60 PERCENT, PERSIST_SAMPLE_PERCENT = ON;
EXEC dbo.DemoStatsReview
We have now persisted the sample percent at 60%. Now that we set it, we can forget it, right?? NOOOOO!

Let's go ahead and update statistics now without any sample percent specified. We will see that the persisted sample percent is applied as expected.

/* Update stats to validate the sample size is persisted */
UPDATE STATISTICS dbo.Test TestPK
EXEC dbo.DemoStatsReview
Persist sample percent holds and the sample percent is still 60%

Let’s see what happens when we rebuild the index. We expect that a FULLSCAN is used to update the statistics behind the index. Did anything else change? OH THE SUSPENSE!

/* What happens if we rebuild an index that has its stats persisted?
Do we still use 100% of the rows for the sample? */

ALTER INDEX TestPK ON dbo.Test REBUILD WITH (STATISTICS_NORECOMPUTE = OFF)
EXEC dbo.DemoStatsReview
BOOM! The persisted sample percent is reset to ZERO on an index rebuild. Large tables will have a 1% or lower sample rate used going forward...
/* Let's update stats again.
Remember, the persisted sample size was 60%. */
UPDATE STATISTICS dbo.Test TestPK
EXEC dbo.DemoStatsReview
We went from our desired 60% sample rate to 1%. This sample rate will only get lower as your data grows!

There you have it. The persisted sample percent not only went away on the index rebuild, but because we then updated statistics without forcing a sample percent on 10 million rows, the sample percent went to 1%. I will add another blog post that focuses on this later. For now, if the late Notorious B.I.G. were a DBA, he would say, “More rows, more problems with stats you get!” If that didn't make any sense: the more rows you have, the lower the sample rate when statistics get updated.

How do we fix this?

This is a bug inside SQL Server. There is a feedback item that hasn't received any response from Microsoft in the two years since the bug was reported. Please upvote it so this gets Microsoft's focus and persist sample percent is actually persisted!
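
Until the fix reaches your build, one hedged workaround, using the dbo.Test table and TestPK statistic from the setup above, is to re-pin the sample rate immediately after any index rebuild:

ALTER INDEX TestPK ON dbo.Test REBUILD;
-- Re-apply the persisted sample rate the rebuild just reset.
UPDATE STATISTICS dbo.Test TestPK
WITH SAMPLE 60 PERCENT, PERSIST_SAMPLE_PERCENT = ON;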

Did you enjoy this post?

If you enjoyed this post let us know in the comments. Also, go ahead and subscribe to our newsletter to get more free tips and videos.

Free SQL Data Compare

I am a consultant in Austin who can help make your data go fast, be secure, and stay highly available. When I am engaged in a performance tuning project, priority #1 isn't to make your data go faster. Priority #1 is to make sure we get the same result sets while making your data go faster.

Free SQL Data Compare with T-SQL?

There are several tools out there that can be used to compare data. Today, I want to share how you can quickly do this on your own with T-SQL!

Let's simplify the process. Our goal is to check two temp tables and validate whether any of the data is different. This includes inserts, updates, and deletes. For this example, I will just dump Sales.SalesOrderDetail in AdventureWorks into two temp tables as shown below.

SELECT * 
INTO #Tmp1
FROM Sales.SalesOrderDetail

SELECT * 
INTO #Tmp2
FROM Sales.SalesOrderDetail

Now we shouldn’t see any differences since we used the same table to create both temp tables. We are going to use two different SQL operators to compare these two temp tables while applying some data changes. We will focus on the UNION ALL and EXCEPT operators.

The Power of EXCEPT

EXCEPT is an underrated and underused SQL operator. In a nutshell, it returns the rows from the first query that do not appear in the second query. So, if any column of a row in #tmp1 is different from #tmp2, or if a row exists in #tmp1 but not in #tmp2, it will get returned.

SELECT * FROM #Tmp1
EXCEPT 
SELECT * FROM #Tmp2

Let’s go ahead and modify a column in #Tmp1 so you can see how this works. We are going to set OrderQty to five when SalesOrderId is 45313 and SalesOrderDetailId is 6210. This will change just one column in one row. We will then select these columns from both temp tables to see the change.

This is how most people would start using T-SQL to identify changes in data.

UPDATE #Tmp1 SET OrderQty = 5 
WHERE SalesOrderID = 45313 
AND SalesOrderDetailID  = 6210

SELECT SalesOrderId, SalesOrderDetailID, 
OrderQty FROM #Tmp1 
WHERE SalesOrderID = 45313 
AND SalesOrderDetailID = 6210

SELECT SalesOrderId, SalesOrderDetailID, 
OrderQty FROM #Tmp2 
WHERE SalesOrderID = 45313 
AND SalesOrderDetailID = 6210

Data compare is easy when we know what changed and not much changed. Just select it.

Finding Data Changes The Easy Way

Selecting from the two tables is easy if we know what change occurred and there aren't many changes. This can get complicated quickly. Therefore, if we just want to quickly know whether we have differences, let's take a look at my go-to method using EXCEPT. To make this example easier to read, instead of using “SELECT *” I will just focus on the columns that are changing. In a real example, I would want to know if any columns changed.

SELECT SalesOrderId, SalesOrderDetailID, OrderQty 
FROM #Tmp1
EXCEPT 
SELECT SalesOrderId, SalesOrderDetailID, OrderQty 
FROM #Tmp2
Data compare using EXCEPT quickly lets us see that we had a data change

If an insert or a column change occurs in #tmp1 we will see it in our EXCEPT SQL statement. This isn’t true if the change is only in #tmp2.

For example, an insert in #tmp2 or delete in #tmp1 would not be shown. To see this we would have to switch the temp tables in the EXCEPT clause as shown below.

INSERT INTO #tmp2 (SalesOrderId, ProductID, SpecialOfferID, OrderQty,
    UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate)
VALUES (45313, 1, 3, 1, 1.25, 0, 1.25 * 1, NEWID(), GETDATE())

DELETE FROM #Tmp1
WHERE SalesOrderID = 45313 
AND SalesOrderDetailID = 6211

SELECT SalesOrderId, SalesOrderDetailID, OrderQty 
FROM #Tmp1
EXCEPT 
SELECT SalesOrderId, SalesOrderDetailID, OrderQty 
FROM #Tmp2
/* We will now see our insert and delete */
SELECT SalesOrderId, SalesOrderDetailID, OrderQty 
FROM #Tmp2
EXCEPT 
SELECT SalesOrderId, SalesOrderDetailID, OrderQty 
FROM #Tmp1
Our first EXCEPT only shows the update that occurred in #tmp1. The delete in #tmp1 and the insert in #tmp2 cannot be seen because that data doesn't exist in #tmp1.
Our second EXCEPT shows the insert in #tmp2, the delete in #tmp1, and the update in #tmp1, because that column value differs in #tmp2.

Our first EXCEPT shows us data in #tmp1 that is not in #tmp2 because the OrderQty column changed in #tmp1. The second EXCEPT shows us data in #tmp2 that isn't in #tmp1: the row we inserted into #tmp2, the row we deleted from #tmp1 (still present in #tmp2), and the original version of the row we updated in #tmp1.

UNION ALL for the Win!

To wrap this up, we can combine the two EXCEPT operations with a UNION ALL. One gotcha: UNION ALL and EXCEPT share the same precedence and evaluate left to right, so wrap each EXCEPT in parentheses or the first EXCEPT's results get folded into the operators that follow and disappear. This gets us any data changes to the selected columns, in both directions.

(SELECT SalesOrderId, SalesOrderDetailID, OrderQty
 FROM #Tmp1
 EXCEPT
 SELECT SalesOrderId, SalesOrderDetailID, OrderQty
 FROM #Tmp2)
UNION ALL
(SELECT SalesOrderId, SalesOrderDetailID, OrderQty
 FROM #Tmp2
 EXCEPT
 SELECT SalesOrderId, SalesOrderDetailID, OrderQty
 FROM #Tmp1)
UNION ALL and EXCEPT for the free data compare win! Quickly shows rows that are different between the two tables.

Typically, I need to verify that the data before and after is the same. This is a quick and easy way to get that answer. Now, I know you might want to take this to the next level. You might be thinking: how do I get just the unique key for the table and the columns that changed? I will leave that as an exercise for you.
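
If you want a head start on that exercise, here is one hedged sketch (the key and column choices are illustrative): wrap each EXCEPT in a derived table and keep only the key columns.

SELECT SalesOrderId, SalesOrderDetailID
FROM (SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp1
      EXCEPT
      SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp2) AS d1
UNION -- plain UNION removes duplicate keys found in both directions
SELECT SalesOrderId, SalesOrderDetailID
FROM (SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp2
      EXCEPT
      SELECT SalesOrderId, SalesOrderDetailID, OrderQty FROM #Tmp1) AS d2;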

If you enjoyed this post, subscribe to get more free SQL tips.