Tag Archives: performance tuning

Workload Replay for Azure SQL Database and Amazon RDS

Want to catch errors, validate performance, and save money while making changes to Azure SQL Database, Azure SQL Managed Instance, or SQL Server RDS on Amazon AWS? In this video, you will learn how to use the Database Experimentation Assistant to perform workload replay and compare your on-premises or cloud SQL Server workloads on demand.

Replaying Workloads is your Secret Weapon to being a Rockstar!

Recommended Links

5 Game Changers with SQL Server 2019

Microsoft made SQL Server 2019 generally available this week, so we want to share some videos and code examples of our favorite new features. Most of these will make your code go faster without any code changes!

We have been testing SQL Server 2019 for months and hope you enjoy these features as much as we do!

SQL Server Automatic Tuning in the Real-World

SQL Server 2016 introduced Query Store, and it was a game changer for helping database administrators identify troublesome queries and make them run faster. Microsoft’s marketing team even jumped in to help coin the phrase, “SQL Server: It Just Runs Faster.” With SQL Server 2017, this got even better with Automatic Tuning. Don’t worry, database administrators: Automatic Tuning will enhance your career, not replace it.

SQL Server 2017 Automatic Tuning looks for queries whose execution plans change and whose performance regresses. This feature depends on Query Store being enabled. Note that even if you don’t turn on Automatic Tuning, you still get the benefit of having access to the data. That’s right: Automatic Tuning will tell you what it would have done if it were enabled. Think of this as free performance tuning training. Go look at the DMVs and try to understand why the optimizer would want to lock in an execution plan. We will actually go through a real-world example:

Automatic Tuning with SQL Server 2017

First, let’s take a quick look at the output. You can find the query and the results we will focus on below.

SELECT reason, score,
      script = JSON_VALUE(details, '$.implementationDetails.script'),
      planForceDetails.*,
      estimated_gain = (regressedPlanExecutionCount+recommendedPlanExecutionCount)
                  *(regressedPlanCpuTimeAverage-recommendedPlanCpuTimeAverage)/1000000,
      error_prone = IIF(regressedPlanErrorCount>recommendedPlanErrorCount, 'YES','NO')
--INTO DBA.Compare.Tunning_Recommendations
FROM sys.dm_db_tuning_recommendations
  CROSS APPLY OPENJSON (Details, '$.planForceDetails')
    WITH (  [query_id] int '$.queryId',
            [current plan_id] int '$.regressedPlanId',
            [recommended plan_id] int '$.recommendedPlanId',

            regressedPlanErrorCount int,
            recommendedPlanErrorCount int,

            regressedPlanExecutionCount int,
            regressedPlanCpuTimeAverage float,
            recommendedPlanExecutionCount int,
            recommendedPlanCpuTimeAverage float

          ) as planForceDetails;

I will break the results into two screenshots so they fit well in this blog post.

Free Tuning Recommendations with Automatic Tuning in SQL Server 2017

Free Tuning Recommendations with SQL Server 2017 (1/2)

Automatic Tuning Results in SQL Server 2017

Free Tuning Recommendations with SQL Server 2017 (2/2)

Now we know that in Query Store, query_id 2271 has CPU time changing from 7,235 ms to 26 ms. That’s a big difference. Let’s take that query and look at its history by using the tracked query report inside SSMS.

Query Store find history of a query

Find my Changed Query. Did the plans change?

Here we can see the major difference between the two execution plans. One is averaging over 14 seconds in duration while the other is under a second.

Reviewing Query performance in Query Store

Query Store showing the performance difference between the two plans

Now we can select both plans on the screen above and look at the execution plans side by side inside SSMS. When doing so, we see the common example of the optimizer deciding whether it is better to scan an index or do a seek with a key lookup.

Side by Side Execution Plan Review in SSMS.

Using SSMS to compare auto tuning recommended query.

To complete the example, I want to point out that Automatic Tuning would lock in the index seek plan (plan 2392). In SQL Server 2016 you can do this manually inside Query Store; with SQL Server 2017 it can be done automatically for you with Automatic Tuning. If you have ever woken up to slow performance because an execution plan changed and performance went down the drain, this might be a lifesaver.
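
For reference, here is a minimal sketch of both approaches, using the query_id and plan_id from this example. The first statement manually forces the recommended plan through Query Store (available since SQL Server 2016); the rest enables Automatic Tuning so SQL Server 2017 forces the last good plan for you.

-- Manually force the recommended plan for this example's query (SQL Server 2016+)
EXEC sys.sp_query_store_force_plan @query_id = 2271, @plan_id = 2392;

-- Or let SQL Server 2017 handle it automatically
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- To undo a manual force later
EXEC sys.sp_query_store_unforce_plan @query_id = 2271, @plan_id = 2392;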

If you would like to learn about performance tuning changes in SQL Server 2016 and 2017, sign up for our newsletter or catch me talking about these features at SQL Saturday Denver and SQL Saturday Pittsburgh this month. If you need any help with tuning or upgrading, contact us. We would love to chat with you!

 

Why Is My SQL Server Execution Plan Changing and How Do I Fix It?

Speaking at the Performance Virtual Chapter

Have you noticed an important query suddenly takes longer to run? Have you noticed the execution plan looks completely different from what you normally expect for your query? Today, I spoke at the Performance Virtual Chapter about how to identify plans that change, and I went over options for fixing them in SQL Server 2005 through 2016 by forcing plans.
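
For SQL Server versions before Query Store, a plan guide is one way to pin hints to a problem statement. Below is a hedged sketch; the table, statement text, and guide name are made up for illustration, and the @stmt text must match the submitted statement exactly. On SQL Server 2016 you can instead force a specific plan directly in Query Store.

-- Hypothetical plan guide that pins a MAXDOP hint to one ad-hoc statement
EXEC sp_create_plan_guide
    @name  = N'StabilizeCustomerLookup',
    @stmt  = N'SELECT CustomerID, CustomerName FROM dbo.Customers WHERE Region = 5;',
    @type  = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (MAXDOP 1)';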

Where is the Code?

As promised, the demo code is attached here.

How Can Query Store Get Better?

I would love to see Query Store gain the ability to capture statistics for AG secondaries that are enabled for read traffic. As a bonus, the ability to force plans for AG secondaries as well.

Benchmark Azure SQL Database Wait Stats

Today, I want to focus on how we can monitor wait statistics in an Azure SQL Database. In the past, I blogged about how you should benchmark wait stats with the box product. That process will give you misleading data in Azure SQL Database. You will want to focus on wait stats that are specific to your database, because you are using shared resources in Azure SQL Database.

Finding Database Wait Statistics

Query we are talking to you!

With an instance of SQL Server, regardless of whether you use IaaS or run on-premises, you would want to focus on all the waits occurring in your instance because the resources are dedicated to you.

In database as a service (DBaaS), Microsoft gives you a special DMV that makes troubleshooting performance in Azure easier than any other competitor. This feature is the sys.dm_db_wait_stats DMV. This DMV lets us get the details behind why our queries are waiting within our database and not in the shared environment. Once again, it is worth repeating: these are wait statistics for our database in a shared environment.
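
Before diving into the benchmarking procedure below, a quick ad-hoc look at the DMV is a useful sanity check. Keep in mind these numbers are cumulative since the last reset, which is exactly why the sampled approach that follows is more useful for benchmarking.

-- Quick look at the top cumulative database-scoped waits
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;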

The following script creates a stored procedure I use to collect wait statistics for my Azure SQL Databases. I hope it is a helpful benchmarking tool for you when you need to troubleshoot performance in Azure SQL Database.

The Good Stuff

/***************************************************************************
    Author : John Sterrett, Procure SQL LLC

    File:     AzureSQLDB_WaitStats.sql

    Summary:  The following code creates a stored procedure that can be used
                 to collect wait statistics for an Azure SQL Database.
                        
    Date:     August 2016

    Version:  Azure SQL Database V12 
  
  ---------------------------------------------------------------------------
  
  For more scripts and sample code, check out 
    https://johnsterrett.com

  THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF 
  ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED 
  TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
  PARTICULAR PURPOSE.
************************************************************************/

If NOT EXISTS (Select 1 from sys.schemas where name = N'Waits')
        execute sp_executesql @stmt = N'CREATE SCHEMA [Waits] AUTHORIZATION [dbo];'
GO


CREATE TABLE Waits.WaitStats (CaptureDataID bigint, WaitType varchar(200), wait_S decimal(20,5), Resource_S decimal (20,5), Signal_S decimal (20,5), WaitCount bigint, Avg_Wait_S numeric(10, 6), Avg_Resource_S numeric(10, 6),Avg_Signal_S numeric(10, 6), CaptureDate datetime)
CREATE TABLE Waits.BenignWaits (WaitType varchar(200))
CREATE TABLE Waits.CaptureData (
ID bigint identity PRIMARY KEY,
StartTime datetime,
EndTime datetime,
ServerName varchar(500),
PullPeriod int
)

INSERT INTO Waits.BenignWaits (WaitType)
VALUES ('CLR_SEMAPHORE'),
       ('LAZYWRITER_SLEEP'),
       ('RESOURCE_QUEUE'),
       ('SLEEP_TASK'),
       ('SLEEP_SYSTEMTASK'),
       ('SQLTRACE_BUFFER_FLUSH'),
       ('WAITFOR'),
       ('LOGMGR_QUEUE'),
       ('CHECKPOINT_QUEUE'),
       ('REQUEST_FOR_DEADLOCK_SEARCH'),
       ('XE_TIMER_EVENT'),
       ('BROKER_TO_FLUSH'),
       ('BROKER_TASK_STOP'),
       ('CLR_MANUAL_EVENT'),
       ('CLR_AUTO_EVENT'),
       ('DISPATCHER_QUEUE_SEMAPHORE'),
       ('FT_IFTS_SCHEDULER_IDLE_WAIT'),
       ('XE_DISPATCHER_WAIT'),
       ('XE_DISPATCHER_JOIN'),
       ('BROKER_EVENTHANDLER'),
       ('TRACEWRITE'),
       ('FT_IFTSHC_MUTEX'),
       ('SQLTRACE_INCREMENTAL_FLUSH_SLEEP'),
       ('BROKER_RECEIVE_WAITFOR'),
       ('ONDEMAND_TASK_QUEUE'),
       ('DBMIRROR_EVENTS_QUEUE'),
       ('DBMIRRORING_CMD'),
       ('BROKER_TRANSMITTER'),
       ('SQLTRACE_WAIT_ENTRIES'),
       ('SLEEP_BPOOL_FLUSH'),
       ('SQLTRACE_LOCK'),
       ('DIRTY_PAGE_POLL'),
       ('SP_SERVER_DIAGNOSTICS_SLEEP'),
       ('HADR_FILESTREAM_IOMGR_IOCOMPLETION'),
       ('HADR_WORK_QUEUE'),
       ('QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP'),
       ('QDS_PERSIST_TASK_MAIN_LOOP_SLEEP');
GO

--DROP PROCEDURE Waits.GetWaitStats
CREATE PROCEDURE Waits.GetWaitStats 
    @WaitTimeSec INT = 60,
    @StopTime DATETIME = NULL
AS
BEGIN
    DECLARE @CaptureDataID int
    /* Create temp tables to capture wait stats to compare */
    IF OBJECT_ID('tempdb..#WaitStatsBench') IS NOT NULL
        DROP TABLE #WaitStatsBench
    IF OBJECT_ID('tempdb..#WaitStatsFinal') IS NOT NULL
        DROP TABLE #WaitStatsFinal

    CREATE TABLE #WaitStatsBench (WaitType varchar(200), wait_S decimal(20,5), Resource_S decimal (20,5), Signal_S decimal (20,5), WaitCount bigint)
    CREATE TABLE #WaitStatsFinal (WaitType varchar(200), wait_S decimal(20,5), Resource_S decimal (20,5), Signal_S decimal (20,5), WaitCount bigint)

    DECLARE @ServerName varchar(300)
    SELECT @ServerName = convert(nvarchar(128), serverproperty('servername'))
    
    /* Insert master record for capture data */
    INSERT INTO Waits.CaptureData (StartTime, EndTime, ServerName,PullPeriod)
    VALUES (GETDATE(), NULL, @ServerName, @WaitTimeSec)
    
    SELECT @CaptureDataID = SCOPE_IDENTITY()
     
/* Loop through until time expires  */
    IF @StopTime IS NULL
        SET @StopTime = DATEADD(hh, 1, getdate())
    WHILE GETDATE() < @StopTime
    BEGIN

        /* Get baseline */
        
        INSERT INTO #WaitStatsBench (WaitType, wait_S, Resource_S, Signal_S, WaitCount)
        SELECT
                wait_type,
                wait_time_ms / 1000.0 AS WaitS,
                (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
                signal_wait_time_ms / 1000.0 AS SignalS,
                waiting_tasks_count AS WaitCount
            FROM sys.dm_db_wait_stats
            WHERE wait_time_ms > 0.01 
            AND wait_type NOT IN ( SELECT WaitType FROM Waits.BenignWaits)
        

        /* Wait the configured number of seconds, then take the final snapshot.
           WAITFOR DELAY needs a time value, so convert the seconds first. */
        DECLARE @Delay char(8)
        SET @Delay = CONVERT(char(8), DATEADD(SECOND, @WaitTimeSec, 0), 108)
        WAITFOR DELAY @Delay;

        INSERT INTO #WaitStatsFinal (WaitType, wait_S, Resource_S, Signal_S, WaitCount)
        SELECT
                wait_type,
                wait_time_ms / 1000.0 AS WaitS,
                (wait_time_ms - signal_wait_time_ms) / 1000.0 AS ResourceS,
                signal_wait_time_ms / 1000.0 AS SignalS,
                waiting_tasks_count AS WaitCount
            FROM sys.dm_db_wait_stats
            WHERE wait_time_ms > 0.01
            AND wait_type NOT IN ( SELECT WaitType FROM Waits.BenignWaits)
        
        DECLARE @CaptureTime datetime 
        SET @CaptureTime = getdate()

        INSERT INTO Waits.WaitStats (CaptureDataID, WaitType, Wait_S, Resource_S, Signal_S, WaitCount, Avg_Wait_S, Avg_Resource_S,Avg_Signal_S, CaptureDate)
        SELECT  @CaptureDataID,
            f.WaitType,
            f.wait_S - b.wait_S as Wait_S,
            f.Resource_S - b.Resource_S as Resource_S,
            f.Signal_S - b.Signal_S as Signal_S,
            f.WaitCount - b.WaitCount as WaitCounts,
            CAST(CASE WHEN f.WaitCount - b.WaitCount = 0 THEN 0 ELSE (f.wait_S - b.wait_S) / (f.WaitCount - b.WaitCount) END AS numeric(10, 6))AS Avg_Wait_S,
            CAST(CASE WHEN f.WaitCount - b.WaitCount = 0 THEN 0 ELSE (f.Resource_S - b.Resource_S) / (f.WaitCount - b.WaitCount) END AS numeric(10, 6))AS Avg_Resource_S,
            CAST(CASE WHEN f.WaitCount - b.WaitCount = 0 THEN 0 ELSE (f.Signal_S - b.Signal_S) / (f.WaitCount - b.WaitCount) END AS numeric(10, 6))AS Avg_Signal_S,
            @CaptureTime
        FROM #WaitStatsFinal f
        LEFT JOIN #WaitStatsBench b ON (f.WaitType = b.WaitType)
        WHERE (f.wait_S - b.wait_S) > 0.0 -- Added to not record zero waits in a time interval.
        
        TRUNCATE TABLE #WaitStatsBench
        TRUNCATE TABLE #WaitStatsFinal
 END -- END of WHILE
 
 /* Update Capture Data meta-data to include end time */
 UPDATE Waits.CaptureData
 SET EndTime = GETDATE()
 WHERE ID = @CaptureDataID
END
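
Here is a quick usage sketch once the schema, tables, and procedure above are in place (the 60-second sample size and 30-minute window are just example values):

GO

/* Collect 60-second samples for the next 30 minutes */
DECLARE @Stop datetime = DATEADD(MINUTE, 30, GETDATE());
EXEC Waits.GetWaitStats @WaitTimeSec = 60, @StopTime = @Stop;

/* Review what was captured, worst waits first */
SELECT cd.StartTime, cd.EndTime, ws.CaptureDate, ws.WaitType, ws.Wait_S, ws.WaitCount, ws.Avg_Wait_S
FROM Waits.CaptureData cd
JOIN Waits.WaitStats ws ON ws.CaptureDataID = cd.ID
ORDER BY ws.Wait_S DESC;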

Special Wait Statistics Types

The following are wait statistics you will want to focus on specifically in Azure SQL Database.  If you made it this far, I strongly encourage you to read how DTU is measured.  That blog post will help you understand exactly why these waits can be signs of DTU pressure.

IO_QUEUE_LIMIT :  Occurs when the asynchronous IO queue for the Azure SQL Database has too many IOs pending. Tasks trying to issue another IO are blocked on this wait type until the number of pending IOs drop below the threshold. The threshold is proportional to the DTUs assigned to the database.

LOG_RATE_GOVERNOR: Occurs when the database is waiting for quota to write to the log. Yes, Azure SQL Database caps your transaction log writes to adhere to your DTU limit.

SOS_SCHEDULER_YIELD: Occurs when a task voluntarily yields the scheduler so other tasks can execute. During this wait, the task is waiting in the runnable queue to get a scheduler to run on. If your DTU consumption is driven by CPU usage, you will typically see these waits.
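
If you are using the collection procedure above, a quick way to check your captured samples for these three waits is a simple filter against Waits.WaitStats (this assumes the benchmark has been running and the table is populated):

SELECT CaptureDate, WaitType, Wait_S, WaitCount, Avg_Wait_S
FROM Waits.WaitStats
WHERE WaitType IN ('IO_QUEUE_LIMIT', 'LOG_RATE_GOVERNOR', 'SOS_SCHEDULER_YIELD')
ORDER BY CaptureDate, Wait_S DESC;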

Want More Azure Articles?

If you enjoyed this blog post I think you will also enjoy the following related blog posts.

John Sterrett is a Microsoft Data Platform MVP and a Group Principal for Procure SQL. If you need any help with your on-premise or cloud SQL Server databases, John would love to chat with you. You can contact him directly at john AT ProcureSQL dot com or here.


Calculating DTU in Azure SQL Database

A few months ago, I posted a question over on ask.sqlservercentral.com. In a nutshell, it was: how do you measure DTU? How could you pull the data shown in the Azure Portal graphs with T-SQL? No one answered, so this motivated me to get off my butt and answer my own question. In doing so, I wanted to share the results with all my readers as well.

I started thinking like a detective. If I were DTU, how would I want to be found with T-SQL?

It was the Azure Portal in the Query Store

Knowing that both sys.dm_db_resource_stats and sys.resource_stats hold the data used to calculate DTU, I decided to leverage Query Store on an Azure SQL Database to see if I could quickly see how DTU is calculated. Behold, I was right.

 

Query Store

How DTU is calculated in Azure Portal

The Secret Sauce

The whole query is below. Right now, let’s just focus on the secret sauce: how the DTU percentage gets calculated. In a nutshell, the maximum of the CPU, Data IO, and Log Write percentages determines your DTU percentage. What does this mean to you? Your biggest consumer limits you. So, you can be using 1% of your IO and still be slowed down because CPU could be your maximum consumed resource.

ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) as dtu_consumption_percent
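
As a minimal standalone sketch, you can run that same MAX-of-three expression directly against sys.dm_db_resource_stats, which keeps roughly an hour of 15-second samples:

SELECT end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS d(v)
       ) AS dtu_consumption_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;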

DTU Calculating Query

-- The query below assumes these three variables are declared by the caller; the values here are just examples
DECLARE @timeGrain int = 300;                                   -- aggregation bucket in seconds
DECLARE @startTime datetime = DATEADD(HOUR, -1, GETUTCDATE());  -- sys.dm_db_resource_stats reports UTC times
DECLARE @endTime datetime = GETUTCDATE();

SELECT ((CONVERT(BIGINT, DATEDIFF(day, 0, [end_time])) * 24 * 3600 + DATEDIFF(second, DATEADD(day, DATEDIFF(day, 0, [end_time]), 0), [end_time])) / @timeGrain) * @timeGrain as start_time_interval
                , MAX(cpu_percent) as cpu_percent
                , MAX(physical_data_read_percent) as physical_data_read_percent
                , MAX(log_write_percent) as log_write_percent
                , MAX(memory_usage_percent) as memory_usage_percent
                , MAX(xtp_storage_percent) as xtp_storage_percent
                , MAX(dtu_consumption_percent) as dtu_consumption_percent
                , MAX(workers_percent) as workers_percent
                , MAX(sessions_percent) as sessions_percent
                , MAX(dtu_limit) as dtu_limit
                , MAX(dtu_used) as dtu_used
             FROM
                 (SELECT
                     end_time
                    , ISNULL(avg_cpu_percent, 0) as cpu_percent
                    , ISNULL(avg_data_io_percent, 0) as physical_data_read_percent
                    , ISNULL(avg_log_write_percent, 0) as log_write_percent
                    , ISNULL(avg_memory_usage_percent, 0) as [memory_usage_percent]
                    , ISNULL(xtp_storage_percent, 0) as xtp_storage_percent
                    , ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) as dtu_consumption_percent
                    , ISNULL(max_worker_percent, 0) as workers_percent
                    , ISNULL(max_session_percent, 0) as sessions_percent
                    , ISNULL(dtu_limit, 0) as dtu_limit
                    , ISNULL(dtu_limit, 0) * ISNULL((SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_data_io_percent), (avg_log_write_percent)) AS value(v)), 0) / 100.0 as dtu_used
                 FROM sys.dm_db_resource_stats 
                 WHERE [end_time] >= @startTime AND [end_time] <= @endTime
                 ) t
             GROUP BY ((CONVERT(BIGINT, DATEDIFF(day, 0, [end_time])) * 24 * 3600 + DATEDIFF(second, DATEADD(day, DATEDIFF(day, 0, [end_time]), 0), [end_time])) / @timeGrain) * @timeGrain

A Future Blog Post

Now that we can calculate DTU, we could trend this data and automatically make changes, like moving up and down from the current performance tier. I would love to wrap this into an Azure SQL Database alert, but a process in PowerShell might be a great starting point.

John Sterrett is a Microsoft Data Platform MVP and a Group Principal for Procure SQL. If you need any help with your on-premise or cloud SQL Server databases, he would love to chat with you. You can contact him directly at john AT ProcureSQL dot com or here.

Developers Make Your SQL Server Queries Go Faster!

Since today is my birthday, I thought I would share a nice gift with everyone who follows my blog. Below is a free link to a recorded video of one of my latest presentations, which goes over multiple SQL Server developer anti-patterns I see repeated over and over again in the field. Most of these are patterns that any IT professional can identify and fix just by recognizing the pattern and changing it to another one shown in the video that is optimizer friendly. I promise no brain surgery is required 😉

Make Your SQL Server Queries Go Faster

I hope you enjoy the hour-long video on making your SQL Server queries go faster!

Bonus Material

If you want to play along, feel free to download the sample scripts. If you want to learn more about being a proactive performance tuner, check out my root cause performance analysis blog series, which includes my What Is Running script, disk latency, wait stats monitoring processes, and more.

Extended Event Security Permissions

Recently, I gave my first presentation of the year at the Austin .NET User Group on Writing Faster Queries, and I got a great question about Extended Events. The question was, “What security permissions are required for creating and modifying Extended Event sessions?” I thought it would make a great blog post, as I am sure others might be wondering the same thing.

In SQL Server 2008, to create an Extended Event session you are going to need CONTROL SERVER and ALTER ANY EVENT SESSION permissions on the instance where you need Extended Events created. With SQL Server 2012 and 2014, if you want to create and modify Extended Event sessions, you only need ALTER ANY EVENT SESSION.
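
As a quick sketch, granting the SQL Server 2012+ permission looks like the following (the login name is made up for illustration):

USE master;
GO
-- Replace with the login that should manage Extended Event sessions
GRANT ALTER ANY EVENT SESSION TO [CONTOSO\XeAdmins];
GO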

If you like this post, check out these other related blog posts:

SQL Server Performance Root Cause Analysis in 10 Minutes

This year I was honored to be selected by Dell Software to present a ten-minute session in their booth (#200) at the 2013 SQL PASS Member Summit. I decided to share with the SQL community how I do a SQL Server performance root cause analysis in 10 minutes.

The following is my blog series and it includes all the sample code:

If you are attending the 2013 SQL PASS Member Summit, let’s connect. You can catch my presentation in the Dell Software Theater at booth #200 at the times listed below.

Dates and Times:

  • Wednesday – October 16th @ 11:45am
  • Thursday – October 17th @ 1:45pm
  • Thursday – October 17th @ 3:15pm
  • Friday – October 18th @ 12:45pm

Finding Top Offenders From Cache

When I start a SQL Server performance root cause analysis, I like to find the top waits and then find the queries causing those waits. Next, I like to understand what is running and monitor disk latency. Finally, I like to probe the cache to see what my top offenders have been since their plans were cached.

** DOWNLOAD SCRIPTS **

Today, in this blog post we’re going to focus on the last remaining item: probing the cache to get top offenders. I do this because I would like to know if my current problem is also a long-term problem. If you have stored procedures that get accessed frequently, there is a good chance they will stay in cache. This allows you to take advantage of sys.dm_exec_query_stats to get aggregated information about CPU, reads, writes, and duration for those plans. My favorite two columns in sys.dm_exec_query_stats are query_hash and query_plan_hash.

QUERY_HASH and QUERY_PLAN_HASH

In the field I see a lot of people pulling data from sys.dm_exec_query_stats without grouping by query_hash and/or query_plan_hash. I strongly recommend you group by the hash columns because they identify statements that differ only by literal values as well as statements with similar execution plans. For example, in the real world I have seen stored procedures with duplicate code. Basically, someone did a copy and paste when they created a new stored procedure. Therefore, these stored procedures would have the same query_hash and query_plan_hash even though the code belongs to different stored procedures.
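
Here is a minimal sketch of what that grouping looks like on its own, before the full stored procedures later in this post: aggregating by query_hash rolls up statements that differ only by literal values.

SELECT qs.query_hash,
       SUM(qs.execution_count) AS executions,
       SUM(qs.total_logical_reads + qs.total_logical_writes) AS total_io,
       SUM(qs.total_worker_time) AS total_cpu
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash
ORDER BY total_io DESC;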

How Many Executions?

Personally, I also want to know my top offenders for a few different cases. For example, if my top I/O statement only executed once, it might not be as important as another statement that is the third-highest I/O offender but executed 100,000 times. Therefore, I added a parameter to my stored procedures so I can filter by execution count. This allows me to find my sweet spot for top offenders for a resource versus execution counts. I also added another parameter so I can filter how many statements are returned. This quickly allows me to do TOP 10, TOP 5, or TOP 20 on the fly.

Now, let’s take a look at the code.

Total I/O

/****** Object:  StoredProcedure [dbo].[GetTopStatements_TotalIO]    Script Date: 10/14/2013 10:16:35 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:		John Sterrett (@JohnSterrett)
-- Create date: 6/4/2014
-- Description:	Gets Top IO statements based on execution count
-- Example: exec dbo.GetTopStatements_TotalIO @NumOfStatements = 5, @Executions = 100
-- =============================================
CREATE PROCEDURE [dbo].[GetTopStatements_TotalIO]
	-- Add the parameters for the stored procedure here
	@NumOfStatements int = 25,
	@Executions int = 5
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

    -- Insert statements for procedure here
	--- top 25 statements by IO
	IF OBJECT_ID('tempdb..#TopOffenders') IS NOT NULL
			DROP TABLE #TopOffenders
	IF OBJECT_ID('tempdb..#QueryText') IS NOT NULL
			DROP TABLE #QueryText
	CREATE TABLE #TopOffenders (AvgIO bigint, TotalIO bigint, TotalCPU bigint, AvgCPU bigint, TotalDuration bigint, AvgDuration bigint, [dbid] int, objectid bigint, execution_count bigint, query_hash varbinary(8))
	CREATE TABLE #QueryText (query_hash varbinary(8), query_text varchar(max))

	INSERT INTO #TopOffenders (AvgIO, TotalIO, TotalCPU, AvgCPU, TotalDuration, AvgDuration, [dbid], objectid, execution_count, query_hash)
	SELECT TOP (@NumOfStatements)
			SUM((qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count) as [Avg IO],
			SUM((qs.total_logical_reads + qs.total_logical_writes)) AS [TotalIO],
			SUM(qs.total_worker_time) AS Total_Worker_Time,
			SUM((qs.total_worker_time) / qs.execution_count) AS [AvgCPU],
			SUM(qs.total_elapsed_time) AS TotalDuration,
			SUM((qs.total_elapsed_time)/ qs.execution_count) AS AvgDuration,
		qt.dbid,
		qt.objectid,
		SUM(qs.execution_count) AS Execution_Count,
		qs.query_hash
	FROM sys.dm_exec_query_stats qs
	cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
	GROUP BY qs.query_hash, qs.query_plan_hash, qt.dbid, qt.objectid
	HAVING SUM(qs.execution_count) > @Executions
	ORDER BY [TotalIO] DESC

--select * From #TopOffenders
--ORDER BY TotalIO desc

/* Create cursor to get query text */
DECLARE @QueryHash varbinary(8)

DECLARE QueryCursor CURSOR FAST_FORWARD FOR
select query_hash
FROM #TopOffenders

OPEN QueryCursor
FETCH NEXT FROM QueryCursor INTO @QueryHash

WHILE (@@FETCH_STATUS = 0)
BEGIN

		INSERT INTO #QueryText (query_text, query_hash)
		select MIN(substring (qt.text,qs.statement_start_offset/2, 
				 (case when qs.statement_end_offset = -1 
				then len(convert(nvarchar(max), qt.text)) * 2 
				else qs.statement_end_offset end -    qs.statement_start_offset)/2)) 
				as query_text, qs.query_hash
		from sys.dm_exec_query_stats qs
		cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
		where qs.query_hash = @QueryHash
		GROUP BY qs.query_hash;

		FETCH NEXT FROM QueryCursor INTO @QueryHash
   END
   CLOSE QueryCursor
   DEALLOCATE QueryCursor

		select distinct DB_NAME(dbid) DBName, OBJECT_NAME(objectid, dbid) ObjectName, qt.query_text, o.*
		INTO #Results
		from #TopOffenders o
		join #QueryText qt on (o.query_hash = qt.query_hash)

		SELECT TOP (@NumOfStatements) *
		FROM #Results
		ORDER BY TotalIO desc  

		DROP TABLE #Results
		DROP TABLE #TopOffenders
		DROP TABLE #QueryText
	END

GO

Total CPU

/****** Object:  StoredProcedure [dbo].[GetTopStatements_TotalCPU]    Script Date: 10/14/2013 10:21:32 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:		John Sterrett (@JohnSterrett)
-- Create date: 6/4/2013
-- Description:	Gets statements causing most CPU from cache based on executions.
-- Example: exec dbo.GetTopStatements_TotalCPU @Executions = 5, @NumOfStatements = 25
-- =============================================
CREATE PROCEDURE [dbo].[GetTopStatements_TotalCPU]
	-- Add the parameters for the stored procedure here
	@NumOfStatements int = 25,
	@Executions int = 5
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

    -- Insert statements for procedure here
	--- top 25 statements by IO
IF OBJECT_ID('tempdb..#TopOffenders') IS NOT NULL
		DROP TABLE #TopOffenders
IF OBJECT_ID('tempdb..#QueryText') IS NOT NULL
		DROP TABLE #QueryText
CREATE TABLE #TopOffenders (AvgIO bigint, TotalIO bigint, TotalCPU bigint, AvgCPU bigint, TotalDuration bigint, AvgDuration bigint, [dbid] int, objectid bigint, execution_count bigint, query_hash varbinary(8))
CREATE TABLE #QueryText (query_hash varbinary(8), query_text varchar(max))

INSERT INTO #TopOffenders (AvgIO, TotalIO, TotalCPU, AvgCPU, TotalDuration, AvgDuration, [dbid], objectid, execution_count, query_hash)
SELECT TOP (@NumOfStatements)
        SUM((qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count) as [Avg IO],
        SUM((qs.total_logical_reads + qs.total_logical_writes)) AS [TotalIO],
        SUM(qs.total_worker_time) AS [TotalCPU],
        SUM((qs.total_worker_time) / qs.execution_count) AS [AvgCPU],
        SUM(qs.total_elapsed_time) AS TotalDuration,
		SUM((qs.total_elapsed_time)/ qs.execution_count) AS AvgDuration,
    qt.dbid,
    qt.objectid,
    SUM(qs.execution_count) AS Execution_Count,
    qs.query_hash
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
GROUP BY qs.query_hash, qs.query_plan_hash, qt.dbid, qt.objectid
HAVING SUM(qs.execution_count) > @Executions
ORDER BY [TotalCPU] DESC

--select * From #TopOffenders
--ORDER BY TotalIO desc

/* Create cursor to get query text */
DECLARE @QueryHash varbinary(8)

DECLARE QueryCursor CURSOR FAST_FORWARD FOR
select query_hash
FROM #TopOffenders

OPEN QueryCursor
FETCH NEXT FROM QueryCursor INTO @QueryHash

WHILE (@@FETCH_STATUS = 0)
BEGIN

		INSERT INTO #QueryText (query_text, query_hash)
		select MIN(substring (qt.text,qs.statement_start_offset/2, 
				 (case when qs.statement_end_offset = -1 
				then len(convert(nvarchar(max), qt.text)) * 2 
				else qs.statement_end_offset end -    qs.statement_start_offset)/2)) 
				as query_text, qs.query_hash
		from sys.dm_exec_query_stats qs
		cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
		where qs.query_hash = @QueryHash
		GROUP BY qs.query_hash;

		FETCH NEXT FROM QueryCursor INTO @QueryHash
   END
   CLOSE QueryCursor
   DEALLOCATE QueryCursor

		select distinct DB_NAME(dbid) DBName, OBJECT_NAME(objectid, dbid) ObjectName, qt.query_text, o.*
		INTO #Results
		from #TopOffenders o
		join #QueryText qt on (o.query_hash = qt.query_hash)

		SELECT TOP (@NumOfStatements) *
		FROM #Results
		ORDER BY TotalCPU desc  

		DROP TABLE #Results
		DROP TABLE #TopOffenders
		DROP TABLE #QueryText
	END

GO

Total Duration

/****** Object:  StoredProcedure [dbo].[GetTopStatements_TotalDuration]    Script Date: 10/14/2013 10:18:49 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:		John Sterrett (@JohnSterrett)
-- Create date: 6/4/2013
-- Description:	Get total duration from cache based on executions.
-- Example: exec dbo.GetTopStatements_TotalDuration @NumOfStatements = 25, @Executions = 5
-- =============================================
CREATE PROCEDURE [dbo].[GetTopStatements_TotalDuration]
	-- Add the parameters for the stored procedure here
	@NumOfStatements int = 25,
	@Executions int = 5
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

    -- Insert statements for procedure here
	--- top 25 statements by IO
IF OBJECT_ID('tempdb..#TopOffenders') IS NOT NULL
		DROP TABLE #TopOffenders
IF OBJECT_ID('tempdb..#QueryText') IS NOT NULL
		DROP TABLE #QueryText
CREATE TABLE #TopOffenders (AvgIO bigint, TotalIO bigint, TotalCPU bigint, AvgCPU bigint, TotalDuration bigint, AvgDuration bigint, [dbid] int, objectid bigint, execution_count bigint, query_hash varbinary(8))
CREATE TABLE #QueryText (query_hash varbinary(8), query_text varchar(max))

INSERT INTO #TopOffenders (AvgIO, TotalIO, TotalCPU, AvgCPU, TotalDuration, AvgDuration, [dbid], objectid, execution_count, query_hash)
SELECT TOP (@NumOfStatements)
        SUM((qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count) as [Avg IO],
        SUM((qs.total_logical_reads + qs.total_logical_writes)) AS [TotalIO],
        SUM(qs.total_worker_time) AS Total_Worker_Time,
        SUM((qs.total_worker_time) / qs.execution_count) AS [AvgCPU],
        SUM(qs.total_elapsed_time) AS TotalDuration,
		SUM((qs.total_elapsed_time)/ qs.execution_count) AS AvgDuration,
    qt.dbid,
    qt.objectid,
    SUM(qs.execution_count) AS Execution_Count,
    qs.query_hash
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
GROUP BY qs.query_hash, qs.query_plan_hash, qt.dbid, qt.objectid
HAVING SUM(qs.execution_count) > @Executions
ORDER BY [TotalDuration] DESC

--select * From #TopOffenders
--ORDER BY TotalIO desc

/* Create cursor to get query text */
DECLARE @QueryHash varbinary(8)

DECLARE QueryCursor CURSOR FAST_FORWARD FOR
select query_hash
FROM #TopOffenders

OPEN QueryCursor
FETCH NEXT FROM QueryCursor INTO @QueryHash

WHILE (@@FETCH_STATUS = 0)
BEGIN

		INSERT INTO #QueryText (query_text, query_hash)
		select MIN(substring (qt.text,qs.statement_start_offset/2, 
				 (case when qs.statement_end_offset = -1 
				then len(convert(nvarchar(max), qt.text)) * 2 
				else qs.statement_end_offset end -    qs.statement_start_offset)/2)) 
				as query_text, qs.query_hash
		from sys.dm_exec_query_stats qs
		cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
		where qs.query_hash = @QueryHash
		GROUP BY qs.query_hash;

		FETCH NEXT FROM QueryCursor INTO @QueryHash
   END
   CLOSE QueryCursor
   DEALLOCATE QueryCursor

		select distinct DB_NAME(dbid) DBName, OBJECT_NAME(objectid, dbid) ObjectName, qt.query_text, o.*
		INTO #Results
		from #TopOffenders o
		join #QueryText qt on (o.query_hash = qt.query_hash)

		SELECT TOP (@NumOfStatements) *
		FROM #Results
		ORDER BY TotalDuration desc  

		DROP TABLE #Results
		DROP TABLE #TopOffenders
		DROP TABLE #QueryText
	END

GO

Average I/O

/****** Object:  StoredProcedure [dbo].[GetTopStatements_AvgIO]    Script Date: 10/14/2013 10:23:01 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:		John Sterrett (@JohnSterrett)
-- Create date: 6/4/2013
-- Description:	Get statements from cache causing most average IO based on executions.
-- Example: exec dbo.GetTopStatements_AvgIO @Executions = 5, @NumOfStatements = 25
-- =============================================
CREATE PROCEDURE [dbo].[GetTopStatements_AvgIO]
	-- Add the parameters for the stored procedure here
	@NumOfStatements int = 25,
	@Executions int = 5
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

    -- Insert statements for procedure here
	IF OBJECT_ID('tempdb..#TopOffenders') IS NOT NULL
			DROP TABLE #TopOffenders
	IF OBJECT_ID('tempdb..#QueryText') IS NOT NULL
			DROP TABLE #QueryText
	CREATE TABLE #TopOffenders (AvgIO bigint, TotalIO bigint, TotalCPU bigint, AvgCPU bigint, TotalDuration bigint, AvgDuration bigint, [dbid] int, objectid bigint, execution_count bigint, query_hash varbinary(8))
	CREATE TABLE #QueryText (query_hash varbinary(8), query_text varchar(max))

	INSERT INTO #TopOffenders (AvgIO, TotalIO, TotalCPU, AvgCPU, TotalDuration, AvgDuration, [dbid], objectid, execution_count, query_hash)
	SELECT TOP (@NumOfStatements)
			SUM((qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count) as [Avg IO],
			SUM((qs.total_logical_reads + qs.total_logical_writes)) AS [TotalIO],
			SUM(qs.total_worker_time) AS [TotalCPU],
			SUM((qs.total_worker_time) / qs.execution_count) AS [AvgCPU],
			SUM(qs.total_elapsed_time) AS TotalDuration,
			SUM((qs.total_elapsed_time)/ qs.execution_count) AS AvgDuration,
		qt.dbid,
		qt.objectid,
		SUM(qs.execution_count) AS Execution_Count,
		qs.query_hash
	FROM sys.dm_exec_query_stats qs
	cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
	GROUP BY qs.query_hash, qs.query_plan_hash, qt.dbid, qt.objectid
	HAVING SUM(qs.execution_count) > @Executions		
	ORDER BY [Avg IO] DESC

		--select * From #TopOffenders
		--ORDER BY AvgIO desc

		/* Create cursor to get query text */
		DECLARE @QueryHash varbinary(8)

		DECLARE QueryCursor CURSOR FAST_FORWARD FOR
		select query_hash
		FROM #TopOffenders

		OPEN QueryCursor
		FETCH NEXT FROM QueryCursor INTO @QueryHash

		WHILE (@@FETCH_STATUS = 0)
		BEGIN

				INSERT INTO #QueryText (query_text, query_hash)
				select MIN(substring (qt.text,qs.statement_start_offset/2, 
						 (case when qs.statement_end_offset = -1 
						then len(convert(nvarchar(max), qt.text)) * 2 
						else qs.statement_end_offset end -    qs.statement_start_offset)/2)) 
						as query_text, qs.query_hash
				from sys.dm_exec_query_stats qs
				cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
				where qs.query_hash = @QueryHash
				GROUP BY qs.query_hash;

				FETCH NEXT FROM QueryCursor INTO @QueryHash
		   END
		   CLOSE QueryCursor
		   DEALLOCATE QueryCursor

		select distinct DB_NAME(dbid) DBName, OBJECT_NAME(objectid, dbid) ObjectName, qt.query_text, o.*
		INTO #Results
		from #TopOffenders o
		join #QueryText qt on (o.query_hash = qt.query_hash)

		SELECT TOP (@NumOfStatements) *
		FROM #Results
		ORDER BY AvgIO desc  

		DROP TABLE #Results
		DROP TABLE #TopOffenders
		DROP TABLE #QueryText
	END

GO

Average CPU

/****** Object:  StoredProcedure [dbo].[GetTopStatements_AvgCPU]    Script Date: 10/14/2013 10:31:47 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:		John Sterrett (@JohnSterrett)
-- Create date: 6/4/2013
-- Description:	Get Statements from cache that have highest average cpu utilization by executions
-- Example: exec dbo.GetTopStatements_AvgCPU @Executions = 5, @NumOfStatements = 25
-- =============================================
CREATE PROCEDURE [dbo].[GetTopStatements_AvgCPU]
	-- Add the parameters for the stored procedure here
	@NumOfStatements int = 25,
	@Executions int = 5
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

    -- Insert statements for procedure here
	--- top 25 statements by IO
IF OBJECT_ID('tempdb..#TopOffenders') IS NOT NULL
		DROP TABLE #TopOffenders
IF OBJECT_ID('tempdb..#QueryText') IS NOT NULL
		DROP TABLE #QueryText
CREATE TABLE #TopOffenders (AvgIO bigint, TotalIO bigint, TotalCPU bigint, AvgCPU bigint, TotalDuration bigint, AvgDuration bigint, [dbid] int, objectid bigint, execution_count bigint, query_hash varbinary(8))
CREATE TABLE #QueryText (query_hash varbinary(8), query_text varchar(max))

INSERT INTO #TopOffenders (AvgIO, TotalIO, TotalCPU, AvgCPU, TotalDuration, AvgDuration, [dbid], objectid, execution_count, query_hash)
SELECT TOP (@NumOfStatements)
        SUM((qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count) as [Avg IO],
        SUM((qs.total_logical_reads + qs.total_logical_writes)) AS [TotalIO],
        SUM(qs.total_worker_time) AS Total_Worker_Time,
        SUM((qs.total_worker_time) / qs.execution_count) AS [AvgCPU],
        SUM(qs.total_elapsed_time) AS TotalDuration,
		SUM((qs.total_elapsed_time)/ qs.execution_count) AS AvgDuration,
    qt.dbid,
    qt.objectid,
    SUM(qs.execution_count) AS Execution_Count,
    qs.query_hash
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
GROUP BY qs.query_hash, qs.query_plan_hash, qt.dbid, qt.objectid
HAVING SUM(qs.execution_count) > @Executions
ORDER BY [AvgCPU] DESC

--select * From #TopOffenders
--ORDER BY TotalIO desc

/* Create cursor to get query text */
DECLARE @QueryHash varbinary(8)

DECLARE QueryCursor CURSOR FAST_FORWARD FOR
select query_hash
FROM #TopOffenders

OPEN QueryCursor
FETCH NEXT FROM QueryCursor INTO @QueryHash

WHILE (@@FETCH_STATUS = 0)
BEGIN

		INSERT INTO #QueryText (query_text, query_hash)
		select MIN(substring (qt.text,qs.statement_start_offset/2, 
				 (case when qs.statement_end_offset = -1 
				then len(convert(nvarchar(max), qt.text)) * 2 
				else qs.statement_end_offset end -    qs.statement_start_offset)/2)) 
				as query_text, qs.query_hash
		from sys.dm_exec_query_stats qs
		cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
		where qs.query_hash = @QueryHash
		GROUP BY qs.query_hash;

		FETCH NEXT FROM QueryCursor INTO @QueryHash
   END
   CLOSE QueryCursor
   DEALLOCATE QueryCursor

		select distinct DB_NAME(dbid) DBName, OBJECT_NAME(objectid, dbid) ObjectName, qt.query_text, o.*
		INTO #Results
		from #TopOffenders o
		join #QueryText qt on (o.query_hash = qt.query_hash)

		SELECT TOP (@NumOfStatements) *
		FROM #Results
		ORDER BY AvgCPU desc  

		DROP TABLE #Results
		DROP TABLE #TopOffenders
		DROP TABLE #QueryText
	END

GO

Average Duration

/****** Object:  StoredProcedure [dbo].[GetTopStatements_AvgDuration]    Script Date: 10/14/2013 10:30:17 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:		John Sterrett (@JohnSterrett)
-- Create date: 6/4/2013
-- Description:	Get statements from cache causing most average duration by executions
-- Example: exec dbo.GetTopStatements_AvgDuration @NumOfStatements = 25, @Executions = 5
-- =============================================
CREATE PROCEDURE [dbo].[GetTopStatements_AvgDuration]
	-- Add the parameters for the stored procedure here
	@NumOfStatements int = 25,
	@Executions int = 5
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

    -- Insert statements for procedure here
	--- top 25 statements by IO
IF OBJECT_ID('tempdb..#TopOffenders') IS NOT NULL
		DROP TABLE #TopOffenders
IF OBJECT_ID('tempdb..#QueryText') IS NOT NULL
		DROP TABLE #QueryText
CREATE TABLE #TopOffenders (AvgIO bigint, TotalIO bigint, TotalCPU bigint, AvgCPU bigint, TotalDuration bigint, AvgDuration bigint, [dbid] int, objectid bigint, execution_count bigint, query_hash varbinary(8))
CREATE TABLE #QueryText (query_hash varbinary(8), query_text varchar(max))

INSERT INTO #TopOffenders (AvgIO, TotalIO, TotalCPU, AvgCPU, TotalDuration, AvgDuration, [dbid], objectid, execution_count, query_hash)
SELECT TOP (@NumOfStatements)
        SUM((qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count) as [Avg IO],
        SUM((qs.total_logical_reads + qs.total_logical_writes)) AS [TotalIO],
        SUM(qs.total_worker_time) AS Total_Worker_Time,
        SUM((qs.total_worker_time) / qs.execution_count) AS [AvgCPU],
        SUM(qs.total_elapsed_time) AS TotalDuration,
		SUM((qs.total_elapsed_time)/ qs.execution_count) AS AvgDuration,
    qt.dbid,
    qt.objectid,
    SUM(qs.execution_count) AS Execution_Count,
    qs.query_hash
FROM sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
GROUP BY qs.query_hash, qs.query_plan_hash, qt.dbid, qt.objectid
HAVING SUM(qs.execution_count) > @Executions
ORDER BY [AvgDuration] DESC

--select * From #TopOffenders
--ORDER BY TotalIO desc

/* Create cursor to get query text */
DECLARE @QueryHash varbinary(8)

DECLARE QueryCursor CURSOR FAST_FORWARD FOR
select query_hash
FROM #TopOffenders

OPEN QueryCursor
FETCH NEXT FROM QueryCursor INTO @QueryHash

WHILE (@@FETCH_STATUS = 0)
BEGIN

		INSERT INTO #QueryText (query_text, query_hash)
		select MIN(substring (qt.text,qs.statement_start_offset/2, 
				 (case when qs.statement_end_offset = -1 
				then len(convert(nvarchar(max), qt.text)) * 2 
				else qs.statement_end_offset end -    qs.statement_start_offset)/2)) 
				as query_text, qs.query_hash
		from sys.dm_exec_query_stats qs
		cross apply sys.dm_exec_sql_text (qs.sql_handle) as qt
		where qs.query_hash = @QueryHash
		GROUP BY qs.query_hash;

		FETCH NEXT FROM QueryCursor INTO @QueryHash
   END
   CLOSE QueryCursor
   DEALLOCATE QueryCursor

		select distinct DB_NAME(dbid) DBName, OBJECT_NAME(objectid, dbid) ObjectName, qt.query_text, o.*
		INTO #Results
		from #TopOffenders o
		join #QueryText qt on (o.query_hash = qt.query_hash)

		SELECT TOP (@NumOfStatements) *
		FROM #Results
		ORDER BY AvgDuration desc  

		DROP TABLE #Results
		DROP TABLE #TopOffenders
		DROP TABLE #QueryText
	END

GO
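
Once the procedures are deployed, usage looks like this (the parameter values are just illustrative):

-- Top 10 statements by total I/O that have run more than 100 times
EXEC dbo.GetTopStatements_TotalIO @NumOfStatements = 10, @Executions = 100;

-- Top 5 statements by average CPU with more than 50 executions
EXEC dbo.GetTopStatements_AvgCPU @NumOfStatements = 5, @Executions = 50;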

If you enjoyed this blog post, please check out my related blog posts like benchmarking your top waits, finding queries that cause your top waits, what is running, and benchmarking disk latency.