T-SQL Tuesday #096: The Group Who Changed My Career Forever!

T-SQL Tuesday #96: Folks Who Have Made A Difference


Today's blog post is about T-SQL Tuesday. If you haven't seen T-SQL Tuesday before, it's a great monthly call for all SQL Server bloggers to write about a single topic that changes from month to month. In this month's installment, we're focusing on folks who made a difference.

Looking back, I wrote about this subject almost seven years ago. While I covered a lot of great people, one group was left out. I wish I had included them, because they were the first to believe in me, mentor me, and help me become the IT pro I am today.

Being a SQL Server MVP, you might think I would focus on data people. I am actually going to focus on an often-forgotten IT organization: the Association of Information Technology Professionals, or AITP for short. In fact, it's really my local chapter, the Greater Wheeling Chapter of AITP (GWC of AITP). For those who didn't know, I grew up as an adult in Wheeling, WV.

The Greater Wheeling Chapter of AITP taught me how to be the best IT professional I could be. While I attended to eat great food and to learn tech, I learned a whole lot more. I learned that it's the people who matter and that technology will always change. I also started to learn how to lead here. I became a chapter president, and heck, I even became the VP of the region. I picked up a lot of skills there that helped make me the IT person and business owner I am today.

When I wanted to bring SQL Saturday #36 to Wheeling, WV, the whole Greater Wheeling AITP Chapter helped and supported me, even though none of us knew what we were doing.

Earlier this year, Dolph Santorine, the current President of the GWC of AITP, had a speaker cancel. In a bind, he asked if I could do a webinar to help. I refused to do the webinar; instead, I told him I would jump on a flight and deliver the session in person. During that trip, over a Bridgeport brownie, I learned there might be another way I could pay things forward to the GWC of AITP.

How I Will Pay Back Those Who Helped Me

The GWC of AITP could use another great event to raise awareness and drive up membership.  I will be using my knowledge and #sqlfamily to help bring a third SQL Saturday to Wheeling, WV. 

That's right, we're going from SQL Saturday #36 to #717 with a good old Throwback SQL Saturday. Save the date: April 28th. Like the old days, this will be a low-budget, grassroots event that brings some of the best SQL Server training to West Virginia. We are going to prove again that if a SQL Saturday can happen in Wheeling, WV, it can happen anywhere!

I look forward to showing my data friends why Wheeling, WV is one of my favorite places in the world! You will quickly see that Wheeling is very different from your typical American small town.


SQL Server 2017: Making Backups Great Again!

In some DBA circles, backups are just as popular as politicians! However, recoverability is the most important task for database administrators. SQL Server 2017 added many great features like graph support, SQL Server on Linux, and more, but today I want to focus on two small underdog features that might be game changers for how you do backups.

SQL Server Backups are often as popular as politicians.


Smart Differential Backups

Databases are getting bigger, not smaller, and bigger databases need more storage capacity for their backups. Backup compression alone might not solve your storage capacity problem. Today, I am seeing more backup policies that include full and differential backups along with transactional log backups. Differential backups are used to offset the need for daily full backups. Typically, people use time increments as the basis for when backups should occur. It's very common to see automated jobs that do weekly full and daily differential backups to reduce the storage capacity needed for backups.

How often does your data change? Is the rate of change consistent, or does it vary from week to week? Let's assume it's Tuesday and over 80% of your data pages have already changed this week. You are not benefiting from taking daily differentials for the rest of the week. The opposite goes for data that doesn't change that often; maybe you can save a lot of space by doing less frequent full backups.

Leveraging smart differential backups could greatly reduce your storage footprint and potentially reduce the time it takes to recover.

In SQL Server 2017 you can see exactly how many pages have changed since your last full backup. This can be leveraged to determine whether you should take a full or a differential backup. Backup solutions and backup vendors will be able to build smarter logic because of this.

-- Percentage of allocated extents modified since the last full backup (SQL Server 2017+)
select CAST(ROUND((modified_extent_page_count*100.0)/allocated_extent_page_count,2)
as decimal(6,2)) AS 'DiffChangePct'
,modified_extent_page_count
,allocated_extent_page_count
from sys.dm_db_file_space_usage
GO
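Here is a minimal sketch of how that percentage could drive the decision between a full and a differential backup. The 70 percent threshold, the database name, and the backup paths are assumptions you would swap out for your own environment and backup tooling; run it in the context of the database you are backing up.

-- Hypothetical example: pick full vs. differential based on the changed-extent percentage (SQL Server 2017+)
DECLARE @DiffChangePct decimal(6,2)

SELECT @DiffChangePct = CAST(ROUND(SUM(modified_extent_page_count) * 100.0
                        / SUM(allocated_extent_page_count), 2) AS decimal(6,2))
FROM sys.dm_db_file_space_usage

IF @DiffChangePct >= 70.00
    BACKUP DATABASE [YourDatabase]
        TO DISK = N'D:\Backups\YourDatabase_FULL.bak'
        WITH COMPRESSION    -- most of the database changed; a differential would be nearly as large as a full
ELSE
    BACKUP DATABASE [YourDatabase]
        TO DISK = N'D:\Backups\YourDatabase_DIFF.bak'
        WITH DIFFERENTIAL, COMPRESSION    -- little has changed; a differential keeps the backup small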

Smart Transactional Log Backups

The time your users are offline while you recover to the point of failure is critical; it could be the difference between keeping and losing customers. Point-in-time recovery is mandatory for a critical database, and transactional log backups have to be restored in order.

Recovery Point Objectives (RPO) drive how often you take transactional log backups.  If you have a policy that says you can only lose ten minutes of data, you need transactional log backups every ten minutes. Is this really true if there were no changes? What if your RPO is driven by the amount of data loss and not the time of the loss?  Either way, you can now control when transactional log backups occur based on the amount of data that has changed since the last transactional log backup.

-- How much log has been generated since the last log backup (sys.dm_db_log_stats is new in SQL Server 2017)
SELECT name AS 'DatabaseName',dls.log_since_last_log_backup_mb,
dls.log_truncation_holdup_reason, dls.active_vlf_count,
dls.active_log_size_mb
FROM sys.databases s
CROSS APPLY sys.dm_db_log_stats(s.database_id) dls
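Below is a similar sketch for log backups, assuming your RPO is expressed as megabytes of log generated rather than minutes elapsed. The 100 MB threshold, database name, and backup path are placeholders for illustration.

-- Hypothetical example: only take a transactional log backup once enough log has been generated (SQL Server 2017+)
DECLARE @LogSinceLastBackupMB float

SELECT @LogSinceLastBackupMB = dls.log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(N'YourDatabase')) dls

IF @LogSinceLastBackupMB >= 100
    BACKUP LOG [YourDatabase]
        TO DISK = N'D:\Backups\YourDatabase.trn'
        WITH COMPRESSION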


This post was written by John Sterrett, CEO & Principal Consultant for Procure SQL.  Sign up for our monthly newsletter to receive free tips.  See below for some great related articles.

Talking about Migrating to Azure SQL Database at #MSIgnite this week

Speaking on Migrating to Azure SQL Database at Ignite 2017


This week I will have two talks on migrating existing databases to Azure SQL Database at Microsoft Ignite. If you are there and curious about migrating your existing databases, we would love to talk with you.

Whether or not you are attending, you can use our Migration to Azure SQL Database Resource Cheat Sheet.

My talk is Successfully Migrating Existing Databases to Azure SQL Databases.

Monday 4:35 pm in Hyatt Regency Theater – Level One

Wednesday 1:35 pm OCCC South – Expo Theater #8


Why Is My SQL Server Execution Plan Changing and How Do I Fix It?

Speaking at Performance Virtual Chapter


Have you noticed that an important query suddenly takes longer to run? Have you noticed that the execution plan looks completely different from what you normally expect for your query? Today, I spoke at the Performance Virtual Chapter about how to identify plans that change, and I went over options to fix them in SQL Server 2005 through 2016 by forcing plans.

Where is the Code?

As promised, the demo code is attached here.
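If you are on SQL Server 2016 or later, Query Store is the simplest way to force a plan. This is not the demo code from the session, just a minimal sketch; the query_id and plan_id values below are placeholders you would replace with the IDs you identify in your own Query Store.

-- Find queries that have more than one plan captured in Query Store
SELECT qsq.query_id, qsp.plan_id, qsp.last_execution_time
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp ON qsp.query_id = qsq.query_id
ORDER BY qsq.query_id, qsp.plan_id

-- Force the plan you trust for a given query
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7

-- Remove the forced plan later if the data or workload changes
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7

For SQL Server 2005 through 2014, plan guides fill a similar role.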

How Can Query Store Get Better?

I would love to see Query Store gain the ability to capture statistics for AG secondaries that are enabled for read traffic. A bonus would be the ability to force plans on AG secondaries as well.

Blue Screen Your SQL Server On-Demand

These days I do a lot of testing with SQL Server. When I am testing new features or helping clients implement SQL Server High Availability solutions, I want to run several tests, including blue screens.

Simulating blue screens is actually much easier than you might think. For example, you can use Not My Fault from the Sysinternals utilities to create blue screens whenever your heart desires.

Blue Screen with Not My Fault

Do you really want to push that Crash button?

Just remember, if you use Not My Fault and click on the Crash button, It Ain't My Fault.

Immediately after clicking the Crash button, you should see the following blue screen.

SQL Server Blue Screen on Demand


Looking Back: 2016 was Awesome!

With 2017 starting this week, I would like to take a moment to reflect on 2016. I have to admit that 2016 was a great year for many different reasons, both personally and professionally. It's nice just to stop and acknowledge some significant accomplishments that occurred last year!

2016 in Review

I always try to keep my community contributions up to date here on my blog. I would have never gotten to where I am today without others who helped me along the way, so I am always happy to give back. It's been an honor to host another SQL Saturday in Austin for over 250 attendees, be a leader for the Austin SQL Server User Group, and continue to grow the High Availability and Disaster Recovery Virtual Chapter to 2,145 members.

Professionally, it was also a great year in 2016. I was able to achieve two major goals on my career bucket list. I completed the Microsoft Certified Solutions Expert (MCSE): Data Platform certification (yes, my procrastination took me to the last work day of 2016, but I got it done!). As Tom LaRock says, “They do not hand out certifications like candy. You have to earn them.” I was also acknowledged as a Microsoft Data Platform MVP, an award that goes to only about 100 people in the USA.

Microsoft Certified Solutions Expert: Data Platform

In 2016, I was also able to continue to grow as a speaker. Once again, I was blessed with an opportunity to speak at the prestigious PASS Member Summit; I never take getting accepted lightly. This year, even though I was sick and almost had to cancel, I was able to deliver my session while obtaining my best evaluation score to date (on a scale of 1 to 3). I gave 15 sessions in 2016; my highest-attended session of the year was at the Data Architecture Virtual Chapter, with 457 attendees.

Almost a perfect score at the Superbowl for SQL Server Speakers!

My company, Procure SQL LLC, also achieved some great milestones in 2016. I was able to hire an employee, Angela Tidwell, as our Marketing DBA. We were able to procure our very first office as well. Finally, after spending several hours working with an excellent health care broker, I was proud to announce that we set up our very first group health plan at Procure SQL. While this isn't required, it was a huge goal of mine to make sure our employees are taken care of as we grow.

Personally, I was able to catch up with some great friends and spend some quality time with my family. We took the family to Galveston, TX for our very first beach trip as a family. I learned that it was Nina's dad's first time in the Gulf as well. Bonus: the weather was so nice in December that my brother and I decided to bring our kids back for a weekend. During my birthday weekend, I was able to see my Pirates and Wheeling Nailers in person in Dallas, which is very rare for both. That weekend I was also able to hang out with one of my best friends from college. We even made it on Root Sports Pittsburgh raising the Jolly Roger.

Finally, my loyalty to some friendships was tested in 2016. While I could have focused only on myself, I decided to help my real friends in need. I remember asking myself (yes, I talk to myself), “What would I hope my friends would do for me in my hour of need?” I then tried to do that.

2017 Goals

These days, I am not a huge fan of publicly sharing my goals.  With that said, I wanted to share three goals to help keep me accountable.

Build Two Local Speakers

In the past, I have had the opportunity to help build some new speakers. I cannot think of a more rewarding experience in the SQL Community than helping someone grow. While I have more than two different people in mind, I hope to have at least two new speakers give their very first SQL Server presentations in 2017. Who knows? Maybe they will love it, and it can change their careers like it did for me.

Fill My Office

While I do plan on getting us a fridge, when I say “Fill My Office” I am not just planning on filling the fridge with adult beverages. Our office easily fits one more person, and I hope to find that full-time employee in 2017. I will provide more details later as we get closer to pulling that trigger.

Building a Training Plan


Testing out the new office white board.

Literally, this was the first thing written on our new six-foot whiteboard

With the hiring of Angela last year, one of my goals was to build some high-quality training material that could help someone become a SQL Server Database Administrator (DBA). I want this to be on-demand online content that helps as many people as possible get a great start as a DBA. I hope to have it implemented in 2017. It might also force me to get out of my shell and focus on building some great video content, which will be a new challenge.


Three Reasons Why I Am Attending PASS Member Summit 2016

Three reasons why I am attending PASS Member Summit in 2016


Over the past few weeks, I saw on social media that many of my #sqlfamily members were not attending the PASS Member Summit conference this year.  It made me want to blog about why I am attending this year.

Many years ago, I heard that the PASS Member Summit conference could change your career as a data professional. I thought that statement was a great marketing pitch until I attended for the first time in 2011.  These days, I get more excited for other people than myself. With that said, here are three reasons why I am attending this year’s PASS Summit.

Local Grass Roots

Being a chapter leader, I always want to do everything I can to help my local user group members, and I love seeing them succeed and grow. In fact, it's been fun watching some of the members grow professionally and in the community. One of Austin's finest SQL Server presenters, Lance Tidwell, will be doing a full session at 3:30 pm on Friday on Parameter Sniffing: the Good, the Bad, and the Ugly. There is not another session I look forward to seeing more this year (yes, I am also presenting, but luckily not at the same time as Lance).

The Dream I Never Had

After coming back from my very first PASS Summit in 2011, I had all kinds of thoughts about how my career would evolve. I had dreams of making six figures, working from home, and being my own boss. I even had mentors in the PASS community who helped me realize all those dreams were possible. I never imagined that I would hire my first employee directly from conversations I had with a local user group member. I had tried to help Angela Tidwell (yes, Lance's wife) break into the IT field a few times. After several conversations, I learned we could help each other out, so this week Angela became my second employee. Angela is at the PASS Member Summit this week as a first-timer. I hope I can do a good job introducing her to everyone, just like Tom LaRock did for me when I was a first-timer.

If you are at the PASS Member Summit and you see Angela, please say hello. Just please don’t do it during the middle of Lance’s session on Friday.

The Speaker That Almost Never Presented

Many years ago, I had a boss who I knew would be a great speaker in the PASS community. Like most people, he was afraid of public speaking. I had to dare him to come to the local Pittsburgh SQL Server User Group with me and co-present. When I say co-present, I mean he just had to stand next to me and share some real-world stories while I did the demos. Now he is speaking at the PASS Member Summit for the second time in a row. I love being able to say I knew him when. Now he is a superstar, and I look forward to watching him succeed and continue to grow in the SQL Server community.

[Update 9:48 PST]

Why are you attending this year’s PASS Member Summit? I would love to hear your reasons. If you couldn’t make it this year you can still watch parts of the conference live on the internet.

Benchmark Azure SQL Database Wait Stats

Today, I want to focus on how we can monitor wait statistics in an Azure SQL Database. In the past, I blogged about how you should benchmark wait stats with the box product. That process will give you misleading data in Azure SQL Database. You will want to focus on wait stats that are specific to your database, since you are using shared resources in Azure SQL Database.

Finding Database Waits Statistics

Query we are talking to you!


With an instance of SQL Server, whether it runs on IaaS or on-premises, you would want to focus on all the waits occurring on your instance, because the resources are dedicated to you.

With database as a service (DaaS), Microsoft gives you a special DMV that makes troubleshooting performance in Azure easier than with any other competitor. This is the sys.dm_db_wait_stats DMV. It gives us the details behind why our queries are waiting within our database rather than across the shared environment. It is worth repeating: these are wait statistics for our database alone, even though we live in a shared environment.

The following script creates a stored procedure that I use to collect wait statistics for my Azure SQL Databases. I hope it is a helpful benchmarking tool when you need to troubleshoot performance in Azure SQL Database.

The Good Stuff

/***************************************************************************
    Author : John Sterrett, Procure SQL LLC

    File:     AzureSQLDB_WaitStats.sql

    Summary:  The following code creates a stored procedure that can be used
                 to collect wait statistics for an Azure SQL Database.
                        
    Date:     August 2016

    Version:  Azure SQL Database V12 
  
  ---------------------------------------------------------------------------
  
  For more scripts and sample code, check out 
    http://www.johnsterrett.com

  THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF 
  ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED 
  TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
  PARTICULAR PURPOSE.
************************************************************************/

If NOT EXISTS (Select 1 from sys.schemas where name = N'Waits')
        execute sp_executesql @stmt = N'CREATE SCHEMA [Waits] AUTHORIZATION [dbo];'
GO


CREATE TABLE Waits.WaitStats (
    CaptureDataID bigint,
    WaitType varchar(200),
    wait_S decimal(20,5),
    Resource_S decimal(20,5),
    Signal_S decimal(20,5),
    WaitCount bigint,
    Avg_Wait_S numeric(10,6),
    Avg_Resource_S numeric(10,6),
    Avg_Signal_S numeric(10,6),
    CaptureDate datetime
)
CREATE TABLE Waits.BenignWaits (WaitType varchar(200))
CREATE TABLE Waits.CaptureData (
ID bigint identity PRIMARY KEY,
StartTime datetime,
EndTime datetime,
ServerName varchar(500),
PullPeriod int
)

INSERT INTO Waits.BenignWaits (WaitType)
VALUES ('CLR_SEMAPHORE'),
 ('LAZYWRITER_SLEEP'),
 ('RESOURCE_QUEUE'),
 ('SLEEP_TASK'),
 ('SLEEP_SYSTEMTASK'),
 ('SQLTRACE_BUFFER_FLUSH'),
 ('WAITFOR'),
 ('LOGMGR_QUEUE'),
 ('CHECKPOINT_QUEUE'),
 ('REQUEST_FOR_DEADLOCK_SEARCH'),
 ('XE_TIMER_EVENT'),
 ('BROKER_TO_FLUSH'),
 ('BROKER_TASK_STOP'),
 ('CLR_MANUAL_EVENT'),
 ('CLR_AUTO_EVENT'),
 ('DISPATCHER_QUEUE_SEMAPHORE'),
 ('FT_IFTS_SCHEDULER_IDLE_WAIT'),
 ('XE_DISPATCHER_WAIT'),
 ('XE_DISPATCHER_JOIN'),
 ('BROKER_EVENTHANDLER'),
 ('TRACEWRITE'),
 ('FT_IFTSHC_MUTEX'),
 ('SQLTRACE_INCREMENTAL_FLUSH_SLEEP'),
 ('BROKER_RECEIVE_WAITFOR'),
 ('ONDEMAND_TASK_QUEUE'),
 ('DBMIRROR_EVENTS_QUEUE'),
 ('DBMIRRORING_CMD'),
 ('BROKER_TRANSMITTER'),
 ('SQLTRACE_WAIT_ENTRIES'),
 ('SLEEP_BPOOL_FLUSH'),
 ('SQLTRACE_LOCK'),
 ('DIRTY_PAGE_POLL'),
 ('SP_SERVER_DIAGNOSTICS_SLEEP'),
 ('HADR_FILESTREAM_IOMGR_IOCOMPLETION'),
 ('HADR_WORK_QUEUE'),
 ('QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP'),
 ('QDS_PERSIST_TASK_MAIN_LOOP_SLEEP');
GO

--DROP PROCEDURE Waits.GetWaitStats
CREATE PROCEDURE Waits.GetWaitStats 
    @WaitTimeSec INT = 60,
    @StopTime DATETIME = NULL
AS
BEGIN
    DECLARE @CaptureDataID int
    /* Create temp tables to capture wait stats to compare */
    IF OBJECT_ID('tempdb..#WaitStatsBench') IS NOT NULL
        DROP TABLE #WaitStatsBench
    IF OBJECT_ID('tempdb..#WaitStatsFinal') IS NOT NULL
        DROP TABLE #WaitStatsFinal

    CREATE TABLE #WaitStatsBench (WaitType varchar(200), wait_S decimal(20,5), Resource_S decimal (20,5), Signal_S decimal (20,5), WaitCount bigint)
    CREATE TABLE #WaitStatsFinal (WaitType varchar(200), wait_S decimal(20,5), Resource_S decimal (20,5), Signal_S decimal (20,5), WaitCount bigint)

    DECLARE @ServerName varchar(300)
    SELECT @ServerName = convert(nvarchar(128), serverproperty('servername'))
    
    /* Insert master record for capture data */
    INSERT INTO Waits.CaptureData (StartTime, EndTime, ServerName,PullPeriod)
    VALUES (GETDATE(), NULL, @ServerName, @WaitTimeSec)
    
    SELECT @CaptureDataID = SCOPE_IDENTITY()
     
/* Loop through until time expires  */
    IF @StopTime IS NULL
        SET @StopTime = DATEADD(hh, 1, getdate())
    WHILE GETDATE() < @StopTime
    BEGIN

        /* Get baseline */
        
        INSERT INTO #WaitStatsBench (WaitType, wait_S, Resource_S, Signal_S, WaitCount)
        SELECT
                wait_type,
                wait_time_ms / 1000.0 AS WaitS,
                (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
                signal_wait_time_ms / 1000.0 AS SignalS,
                waiting_tasks_count AS WaitCount
            FROM sys.dm_db_wait_stats
            WHERE wait_time_ms > 0.01 
            AND wait_type NOT IN ( SELECT WaitType FROM Waits.BenignWaits)
        

        /* Wait for the requested number of seconds, then take the final snapshot.
           WAITFOR DELAY expects an hh:mm:ss value, so convert the integer seconds first. */
        DECLARE @Delay char(8)
        SET @Delay = CONVERT(char(8), DATEADD(SECOND, @WaitTimeSec, 0), 108)
        WAITFOR DELAY @Delay;

        INSERT INTO #WaitStatsFinal (WaitType, wait_S, Resource_S, Signal_S, WaitCount)
        SELECT
                wait_type,
                wait_time_ms / 1000.0 AS WaitS,
                (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
                signal_wait_time_ms / 1000.0 AS SignalS,
                waiting_tasks_count AS WaitCount
            FROM sys.dm_db_wait_stats
            WHERE wait_time_ms > 0.01
            AND wait_type NOT IN ( SELECT WaitType FROM Waits.BenignWaits)
        
        DECLARE @CaptureTime datetime 
        SET @CaptureTime = getdate()

        INSERT INTO Waits.WaitStats (CaptureDataID, WaitType, Wait_S, Resource_S, Signal_S, WaitCount, Avg_Wait_S, Avg_Resource_S,Avg_Signal_S, CaptureDate)
        SELECT  @CaptureDataID,
            f.WaitType,
            f.wait_S - b.wait_S as Wait_S,
            f.Resource_S - b.Resource_S as Resource_S,
            f.Signal_S - b.Signal_S as Signal_S,
            f.WaitCount - b.WaitCount as WaitCounts,
            CAST(CASE WHEN f.WaitCount - b.WaitCount = 0 THEN 0 ELSE (f.wait_S - b.wait_S) / (f.WaitCount - b.WaitCount) END AS numeric(10, 6))AS Avg_Wait_S,
            CAST(CASE WHEN f.WaitCount - b.WaitCount = 0 THEN 0 ELSE (f.Resource_S - b.Resource_S) / (f.WaitCount - b.WaitCount) END AS numeric(10, 6))AS Avg_Resource_S,
            CAST(CASE WHEN f.WaitCount - b.WaitCount = 0 THEN 0 ELSE (f.Signal_S - b.Signal_S) / (f.WaitCount - b.WaitCount) END AS numeric(10, 6))AS Avg_Signal_S,
            @CaptureTime
        FROM #WaitStatsFinal f
        LEFT JOIN #WaitStatsBench b ON (f.WaitType = b.WaitType)
        WHERE (f.wait_S - b.wait_S) > 0.0 -- Added to not record zero waits in a time interval.
        
        TRUNCATE TABLE #WaitStatsBench
        TRUNCATE TABLE #WaitStatsFinal
 END -- END of WHILE
 
 /* Update Capture Data meta-data to include end time */
 UPDATE Waits.CaptureData
 SET EndTime = GETDATE()
 WHERE ID = @CaptureDataID
END
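Once the objects above exist in your Azure SQL Database, usage looks something like the following sketch; the 30-second interval and 30-minute window in the second call are just example values.

-- Capture wait stats every 60 seconds for the default one-hour window
EXEC Waits.GetWaitStats

-- Or capture every 30 seconds until a specific stop time
DECLARE @StopAt datetime
SET @StopAt = DATEADD(MINUTE, 30, GETDATE())
EXEC Waits.GetWaitStats @WaitTimeSec = 30, @StopTime = @StopAt

-- Review what was collected
SELECT cd.StartTime, cd.EndTime, ws.WaitType, ws.wait_S, ws.WaitCount, ws.Avg_Wait_S
FROM Waits.CaptureData cd
JOIN Waits.WaitStats ws ON ws.CaptureDataID = cd.ID
ORDER BY ws.wait_S DESC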

Special Wait Statistics Types

The following are wait statistics you will want to focus on specifically in Azure SQL Database.  If you made it this far, I strongly encourage you to read how DTU is measured.  That blog post will help you understand exactly why these waits can be signs of DTU pressure.

IO_QUEUE_LIMIT: Occurs when the asynchronous IO queue for the Azure SQL Database has too many IOs pending. Tasks trying to issue another IO are blocked on this wait type until the number of pending IOs drops below the threshold. The threshold is proportional to the DTUs assigned to the database.

LOG_RATE_GOVERNOR: Occurs when the database is waiting for quota to write to the log. Yes, Azure SQL Database caps your transactional log writes to adhere to your DTU limit.

SOS_SCHEDULER_YIELD: Occurs when a task voluntarily yields the scheduler so other tasks can execute. During this wait, the task is waiting in the runnable queue to get a scheduler to run on. If your DTU consumption is driven by CPU usage, you will typically see these waits.
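If you just want a quick ad hoc look at these three wait types without running the full benchmarking procedure above, a query like this sketch works:

-- Cumulative totals for the three DTU-pressure wait types since the last reset
SELECT wait_type,
       wait_time_ms / 1000.0 AS wait_s,
       signal_wait_time_ms / 1000.0 AS signal_s,
       waiting_tasks_count
FROM sys.dm_db_wait_stats
WHERE wait_type IN ('IO_QUEUE_LIMIT', 'LOG_RATE_GOVERNOR', 'SOS_SCHEDULER_YIELD')
ORDER BY wait_time_ms DESC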

Want More Azure Articles?

If you enjoyed this blog post, I think you will also enjoy the following related blog posts.

John Sterrett is a Microsoft Data Platform MVP and a Group Principal for Procure SQL. If you need any help with your on-premise or cloud SQL Server databases, John would love to chat with you. You can contact him directly at john AT ProcureSQL dot com or here.


Calculating DTU in Azure SQL Database

A few months ago, I posted a question over on ask.sqlservercentral.com. In a nutshell, it was: how do you measure DTU? How could you pull the data shown in the Azure Portal graphs with T-SQL? No one answered, so this motivated me to get off my butt and answer my own question. In doing so, I wanted to share the results with all my readers as well.

I started thinking like a detective. If I were DTU, how would I want to be found with T-SQL?

It was the Azure Portal in the Query Store


Knowing that both sys.dm_db_resource_stats and sys.resource_stats hold the data used to calculate DTU, I decided to leverage Query Store on an Azure SQL Database to see if I could quickly find how DTU is calculated. Behold, I was right.


Query Store

How DTU is calculated in Azure Portal

The Secret Sauce

The whole query is below, but right now let's just focus on the secret sauce: how the DTU percentage gets calculated. In a nutshell, the maximum of the CPU, Data IO, and Log Write percentages determines your DTU percentage. What does this mean to you? Your biggest consumer limits you. You could be using only 1% of your IO and still be slowed down, because CPU could be your maximum consumer resource.

ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) as dtu_consumption_percent

DTU Calculating Query

-- Buckets sys.dm_db_resource_stats into @timeGrain-second intervals and takes the max of each metric per bucket
SELECT ((CONVERT(BIGINT, DATEDIFF(day, 0, [end_time])) * 24 * 3600 + DATEDIFF(second, DATEADD(day, DATEDIFF(day, 0, [end_time]), 0), [end_time])) / @timeGrain) * @timeGrain as start_time_interval
                , MAX(cpu_percent) as cpu_percent
                , MAX(physical_data_read_percent) as physical_data_read_percent
                , MAX(log_write_percent) as log_write_percent
                , MAX(memory_usage_percent) as memory_usage_percent
                , MAX(xtp_storage_percent) as xtp_storage_percent
                , MAX(dtu_consumption_percent) as dtu_consumption_percent
                , MAX(workers_percent) as workers_percent
                , MAX(sessions_percent) as sessions_percent
                , MAX(dtu_limit) as dtu_limit
                , MAX(dtu_used) as dtu_used
             FROM
                 (SELECT
                     end_time
                    , ISNULL(avg_cpu_percent, 0) as cpu_percent
                    , ISNULL(avg_data_io_percent, 0) as physical_data_read_percent
                    , ISNULL(avg_log_write_percent, 0) as log_write_percent
                    , ISNULL(avg_memory_usage_percent, 0) as [memory_usage_percent]
                    , ISNULL(xtp_storage_percent, 0) as xtp_storage_percent
                    , ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) as dtu_consumption_percent
                    , ISNULL(max_worker_percent, 0) as workers_percent
                    , ISNULL(max_session_percent, 0) as sessions_percent
                    , ISNULL(dtu_limit, 0) as dtu_limit
                    , ISNULL(dtu_limit, 0) * ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) / 100.0 as dtu_used
                 FROM sys.dm_db_resource_stats 
                 WHERE [end_time] >= @startTime AND [end_time] <= @endTime
                 ) t
             GROUP BY ((CONVERT(BIGINT, DATEDIFF(day, 0, [end_time])) * 24 * 3600 + DATEDIFF(second, DATEADD(day, DATEDIFF(day, 0, [end_time]), 0), [end_time])) / @timeGrain) * @timeGrain
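The query above expects @timeGrain, @startTime, and @endTime to be supplied (the portal passes these in behind the scenes). To run it ad hoc, declare them in the same batch before the query; the 15-second grain and one-hour window below are just example values. Keep in mind that sys.dm_db_resource_stats retains roughly the last hour of data and reports times in UTC.

-- Example parameter values for running the DTU query interactively
DECLARE @timeGrain bigint = 15                                  -- bucket size in seconds
DECLARE @startTime datetime = DATEADD(HOUR, -1, GETUTCDATE())
DECLARE @endTime   datetime = GETUTCDATE()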

A Future Blog Post

Now that we can calculate DTU, we could trend this data and automatically make changes, like moving up and down from the current performance tier. I would love to wrap this into an Azure SQL Database alert, but a process in PowerShell might be a great starting point.

John Sterrett is a Microsoft Data Platform MVP and a Group Principal for Procure SQL. If you need any help with your on-premise or cloud SQL Server databases, he would love to chat with you. You can contact him directly at john AT ProcureSQL dot com or here.

WordPress Community Podcast

I had the pleasure of catching up with John Parkinson, one of my mentors and a great friend of mine. It feels like yesterday that John, the GWC of AITP, and I brought two SQL Saturdays to Wheeling, WV (#36 and #80).

It's fun to see how we have both stayed involved in the IT community over the years: John in the Greater Wheeling Chapter of AITP and WordPress, and me in the SQL Server community.

John reached out to me to do a podcast with him about my SQL Saturday experiences so people in the WordPress community could learn about other all-day training events. The result is the 10-minute video provided below.