Tag Archives: sqlpass

Three Reasons Why I Am Attending PASS Member Summit 2016

Three reasons why I am attending PASS Member Summit in 2016

Over the past few weeks, I saw on social media that many of my #sqlfamily members were not attending the PASS Member Summit conference this year.  It made me want to blog about why I am attending this year.

Many years ago, I heard that the PASS Member Summit conference could change your career as a data professional. I thought that statement was a great marketing pitch until I attended for the first time in 2011.  These days, I get more excited for other people than myself. With that said, here are three reasons why I am attending this year’s PASS Summit.

Local Grass Roots

Being a chapter leader, I always want to do everything I can to help my local user group members, and I love seeing them succeed and grow.  In fact, it’s been fun watching some of the members grow professionally and in the community. One of Austin’s finest SQL Server presenters, Lance Tidwell, will be doing a full session at 3:30 pm on Friday on Parameter Sniffing: the Good, the Bad, and the Ugly. There is not another session I look forward to seeing more this year (yes, I am also presenting, but luckily not at the same time as Lance).

The Dream I Never Had

After coming back from my very first PASS Summit in 2011, I had all kinds of thoughts on how my career would evolve. I had dreams of making six figures, working from home, and being my own boss.  I even had mentors in the PASS community who helped me realize all those dreams were possible.  I never imagined that I would hire my first employee directly from conversations I had with a local user group member. I tried to help Angela Tidwell (yes, Lance’s wife) break into the IT field a few times.  After several conversations, we learned we could help each other out, so this week Angela became my second employee.  Angela is at the PASS Member Summit this week as a first timer.  I hope I can do as good a job introducing her to everyone as Tom LaRock did for me when I was a first timer.

If you are at the PASS Member Summit and you see Angela, please say hello. Just please don’t do it during the middle of Lance’s session on Friday.

The Speaker That Almost Never Presented

Many years ago, I had a boss who I knew would be a great speaker in the PASS community. Like most people, he was afraid of public speaking. I had to dare him to go to the local Pittsburgh SQL Server User Group with me and co-present. When I say co-present, I mean he just had to stand next to me and share some real-world stories while I did the demos.  Now he is speaking at the PASS Member Summit for the second time in a row. I love being able to say I knew him when. He is a superstar now, and I look forward to watching him succeed and continue to grow in the SQL Server community.

[Update 9:48 PST]

Why are you attending this year’s PASS Member Summit? I would love to hear your reasons. If you couldn’t make it this year, you can still watch parts of the conference live on the internet.

3 Reasons to Attend SQL Saturday Austin on Jan 30th

The Austin SQL Server User Group will host its third SQL Saturday on Saturday, January 30th.

SQL Saturday Austin on January 30th, 2016

SQLSaturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. Admission to this event is free ($15 for lunch); all costs are covered by donations and sponsorships. This all-day training event includes multiple tracks of SQL Server training from professional trainers, consultants, MCMs, Microsoft employees, and MVPs.

Here are three reasons why I am excited to attend the SQL Saturday in Austin.

PreCons

While the SQL Saturday is free, there are also two separate all-day classes on Friday, January 29th that are dirt cheap compared to the cost of attending similar classes at your local training center.

Have you ever wanted to learn how to make SQL Server go faster?  In a single day, Robert Davis will show you Performance Tuning like a Boss.

Have you wondered how you can keep your data highly available when your servers go bump in the night?  Ryan Adams will be teaching a class on Creating a High Availability and Disaster Recovery Plan.  Having a solid recovery plan can make you a Rockstar DBA and also help keep your company in business.

Sessions

In Austin, we are blessed to have some of the best teachers come to town to share their knowledge.  We will have Conor Cunningham from the SQL Server product team talking about the new features coming in SQL Server 2016, and several MVPs and MCMs sharing their knowledge.  If you want to learn about SQL Server, there is not a better venue to do so than a local SQL Saturday.

Networking

Are you the only DBA or data professional working at your company?  Are you interested in meeting people who are as passionate as you are about data? If so, SQL Saturday is a great place to meet and network with some of the best data professionals.  I will never forget my first SQL Saturday. I found some vendors with tools that made my job easier, and I built friendships that have helped me throughout my career.

Benchmark SQL Server Disk Latency

Typically, I am a big advocate of Performance Monitor, but one place I commonly see it misused is benchmarking disk counters for SQL Server.  You will often see people applying best practices like dedicating spindles to transaction log files and dedicating spindles to data files.  Even so, multiple database files and/or transaction log files often end up collocated on the same logical or physical drive(s). Therefore, when you are looking at disk latency counters like average disk seconds per read or write, it can be almost impossible to determine which data file(s) are causing the latency. You just know which physical or logical drive has latency issues.

Meet the sys.dm_io_virtual_file_stats DMV

Starting with SQL Server 2005, DBAs were granted access to the sys.dm_io_virtual_file_stats dynamic management view. This DMV gives you insight into how many physical I/O operations occurred, how much latency has occurred, how much data was written, and more.  The key detail is that this data is tracked for each individual database file, and that it is cumulative since the last time the instance started, so we need to keep that in mind. Ideally, we want to capture this data, wait for a period of time, capture it again, and then compare the results. That is exactly what the code below does. For this example, we will wait five minutes between captures.

DECLARE @WaitTimeSec int
SET @WaitTimeSec = 300 -- seconds between samples.

/* If temp tables exist drop them. */
IF OBJECT_ID('tempdb..#IOStallSnapshot') IS NOT NULL
BEGIN
DROP TABLE #IOStallSnapshot
END

IF OBJECT_ID('tempdb..#IOStallResult') IS NOT NULL
BEGIN
DROP TABLE #IOStallResult
END

/* Create temp tables for capture baseline */
CREATE TABLE #IOStallSnapshot(
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
num_of_bytes_written bigint,
num_of_reads bigint,
num_of_writes bigint,
database_id int,
file_id int
)

CREATE TABLE #IOStallResult(
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
num_of_bytes_written bigint,
num_of_reads bigint,
num_of_writes bigint,
database_id int,
file_id int
)

/* Get baseline snapshot of stalls */
INSERT INTO #IOStallSnapshot (CaptureDate,
read_per_ms,
write_per_ms,
num_of_bytes_written,
num_of_reads,
num_of_writes,
database_id,
[file_id])
SELECT getdate(),
a.io_stall_read_ms,
a.io_stall_write_ms,
a.num_of_bytes_written,
a.num_of_reads,
a.num_of_writes,
a.database_id,
a.file_id
FROM sys.dm_io_virtual_file_stats (NULL, NULL) a
JOIN sys.master_files b ON a.file_id = b.file_id
AND a.database_id = b.database_id

/* Wait a few minutes and get the final snapshot.
   WAITFOR DELAY needs a time value, not an int, so convert the seconds. */
DECLARE @Delay datetime
SET @Delay = DATEADD(SECOND, @WaitTimeSec, 0)
WAITFOR DELAY @Delay

INSERT INTO #IOStallResult (CaptureDate,
read_per_ms,
write_per_ms,
num_of_bytes_written,
num_of_reads,
num_of_writes,
database_id,
[file_id])
SELECT getdate(),
a.io_stall_read_ms,
a.io_stall_write_ms,
a.num_of_bytes_written,
a.num_of_reads,
a.num_of_writes,
a.database_id,
a.[file_id]
FROM sys.dm_io_virtual_file_stats (NULL, NULL) a
JOIN sys.master_files b ON a.[file_id] = b.[file_id]
AND a.database_id = b.database_id

/* Get differences between captures */
SELECT
inline.CaptureDate
,CASE WHEN inline.num_of_reads =0 THEN 0
ELSE inline.io_stall_read_ms / inline.num_of_reads END AS read_per_ms
,CASE WHEN inline.num_of_writes = 0 THEN 0
ELSE inline.io_stall_write_ms / inline.num_of_writes END AS write_per_ms
,inline.io_stall_read_ms
,inline.io_stall_write_ms
,inline.num_of_reads
,inline.num_of_writes
,inline.num_of_bytes_written
,(inline.num_of_reads + inline.num_of_writes) / @WaitTimeSec AS iops
,inline.database_id
,inline.[file_id]
FROM (
SELECT r.CaptureDate
,r.read_per_ms - s.read_per_ms AS io_stall_read_ms
,r.num_of_reads - s.num_of_reads AS num_of_reads
,r.write_per_ms - s.write_per_ms AS io_stall_write_ms
,r.num_of_writes - s.num_of_writes AS num_of_writes
,r.num_of_bytes_written - s.num_of_bytes_written AS num_of_bytes_written
,r.database_id AS database_id
,r.[file_id] AS [file_id]

FROM #IOStallSnapshot s
JOIN #IOStallResult r
ON (s.database_id = r.database_id and s.[file_id] = r.[file_id])
) inline

The next few questions you might have after capturing these metrics include: how do I automate capturing these disk metrics, similar to perfmon? Can I set up a parameter for the wait period between samples and also supply an interval for how long I would like to capture data to establish my baseline? Or better yet, can I capture data when a workload is not performing as expected and compare it to the baseline established when the workload performance was good?

The answer to these questions is YES! Below is the code for my stored procedure to capture disk latency, IOPS, and bytes written.

Download SQL Server Disk Latency Stored Procedure.

/*
Author: John Sterrett (http://johnsterrett.com)
NOTICE: This code is provided as-is run it at your own risk. John Sterrett assumes no responsibility
for you running this script.

GOAL: Get latency and IOPS for each data file, keep meta data in lookup table, results in another table.

PARAM: @WaitTime - time in seconds to wait between baselines
@Length - Amount of time to baseline, if null then don't stop

VERSION:
1.0 - 01/03/2012 - Original release
Includes two lookup tables for datafiles and runs
1.1 - 02/08/2012 - Includes computed column to get IOPs per datafile.
1.2 - 11/1/2013 - Changes IOPs so its not computed and has right data.
Missing Features: If you would like something added please follow up at http://johnsterrett.com/contact
-- Code to pull and update file path as needed
*/

/* Create tables */
CREATE SCHEMA DiskLatency
GO

CREATE TABLE DiskLatency.DatabaseFiles (
[ServerName] varchar(500),
[DatabaseName] varchar(500),
[LogicalFileName] varchar(500),
[Database_ID] int,
[File_ID] int
)

CREATE CLUSTERED INDEX idx_DiskLatency_DBID_FILE_ID ON DiskLatency.DatabaseFiles (Database_ID, File_ID)

CREATE TABLE DiskLatency.CaptureData (
ID bigint identity PRIMARY KEY,
StartTime datetime,
EndTime datetime,
ServerName varchar(500),
PullPeriod int
)

CREATE TABLE DiskLatency.CaptureResults (
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
io_stall_read int,
io_stall_write int,
num_of_reads int,
num_of_writes int,
num_of_bytes_written bigint,
iops int,
database_id int,
file_id int,
CaptureDataID bigint
)

CREATE CLUSTERED INDEX [idx_CaptureResults_CaptureDate] ON [DiskLatency].[CaptureResults]
( [CaptureDate] DESC)

CREATE NONCLUSTERED INDEX idx_CaptureResults_DBID_FileID ON DiskLatency.CaptureResults (database_id, file_id)

CREATE NONCLUSTERED INDEX idx_CaptureResults_CaptureDataID ON DiskLatency.CaptureResults (CaptureDataId)

ALTER TABLE DiskLatency.CaptureResults ADD CONSTRAINT FK_CaptureResults_CaptureData FOREIGN KEY
( CaptureDataID) REFERENCES DiskLatency.CaptureData
( ID )
GO


CREATE PROCEDURE DiskLatency.usp_CollectDiskLatency
-- Add the parameters for the stored procedure here
@WaitTimeSec INT = 60,
@StopTime DATETIME = NULL
AS
BEGIN

DECLARE @CaptureDataID int
/* Check that stopdate is greater than current time. If not, throw error! */

/* If temp tables exist drop them. */
IF OBJECT_ID('tempdb..#IOStallSnapshot') IS NOT NULL
BEGIN
DROP TABLE #IOStallSnapshot
END

IF OBJECT_ID('tempdb..#IOStallResult') IS NOT NULL
BEGIN
DROP TABLE #IOStallResult
END

/* Create temp tables for capture baseline */
CREATE TABLE #IOStallSnapshot(
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
num_of_bytes_written bigint,
num_of_reads bigint,
num_of_writes bigint,
database_id int,
file_id int
)

CREATE TABLE #IOStallResult(
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
num_of_bytes_written bigint,
num_of_reads bigint,
num_of_writes bigint,
database_id int,
file_id int
)

DECLARE @ServerName varchar(300)
SELECT @ServerName = convert(nvarchar(128), serverproperty('servername'))

/* Insert master record for capture data */
INSERT INTO DiskLatency.CaptureData (StartTime, EndTime, ServerName,PullPeriod)
VALUES (GETDATE(), NULL, @ServerName, @WaitTimeSec)

SELECT @CaptureDataID = SCOPE_IDENTITY()

/* Do lookup to get property data for all database files to catch any new ones if they exist */
INSERT INTO DiskLatency.DatabaseFiles ([ServerName],[DatabaseName],[LogicalFileName],[Database_ID],[File_ID])
SELECT @ServerName, DB_NAME(database_id), name, database_id, [FILE_ID]
FROM sys.master_files mf
WHERE NOT EXISTS
(
SELECT 1
FROM DiskLatency.DatabaseFiles df
WHERE df.Database_ID = mf.database_id AND df.[File_ID] = mf.[File_ID]
)

/* Loop through until time expires */
IF @StopTime IS NULL
SET @StopTime = DATEADD(hh, 1, getdate())
WHILE GETDATE() < @StopTime
BEGIN

/* Get baseline snapshot of stalls */
INSERT INTO #IOStallSnapshot (CaptureDate,
read_per_ms,
write_per_ms,
num_of_bytes_written,
num_of_reads,
num_of_writes,
database_id,
[file_id])
SELECT getdate(),
a.io_stall_read_ms,
a.io_stall_write_ms,
a.num_of_bytes_written,
a.num_of_reads,
a.num_of_writes,
a.database_id,
a.file_id
FROM sys.dm_io_virtual_file_stats (NULL, NULL) a
JOIN sys.master_files b ON a.file_id = b.file_id
AND a.database_id = b.database_id

/* Wait between samples and get the final snapshot.
   WAITFOR DELAY needs a time value, not an int, so convert the seconds. */
DECLARE @Delay datetime
SET @Delay = DATEADD(SECOND, @WaitTimeSec, 0)
WAITFOR DELAY @Delay

INSERT INTO #IOStallResult (CaptureDate,
read_per_ms,
write_per_ms,
num_of_bytes_written,
num_of_reads,
num_of_writes,
database_id,
[file_id])
SELECT getdate(),
a.io_stall_read_ms,
a.io_stall_write_ms,
a.num_of_bytes_written,
a.num_of_reads,
a.num_of_writes,
a.database_id,
a.file_id
FROM sys.dm_io_virtual_file_stats (NULL, NULL) a
JOIN sys.master_files b ON a.file_id = b.file_id
AND a.database_id = b.database_id

INSERT INTO DiskLatency.CaptureResults (CaptureDataID,
CaptureDate,
read_per_ms,
write_per_ms,
io_stall_read,
io_stall_write,
num_of_reads,
num_of_writes,
num_of_bytes_written,
iops,
database_id,
[file_id])
SELECT @CaptureDataID
,inline.CaptureDate
,CASE WHEN inline.num_of_reads =0 THEN 0 ELSE inline.io_stall_read_ms / inline.num_of_reads END AS read_per_ms
,CASE WHEN inline.num_of_writes = 0 THEN 0 ELSE inline.io_stall_write_ms / inline.num_of_writes END AS write_per_ms
,inline.io_stall_read_ms
,inline.io_stall_write_ms
,inline.num_of_reads
,inline.num_of_writes
,inline.num_of_bytes_written
,(inline.num_of_reads + inline.num_of_writes) / @WaitTimeSec
,inline.database_id
,inline.[file_id]
FROM (
SELECT r.CaptureDate
,r.read_per_ms - s.read_per_ms AS io_stall_read_ms
,r.num_of_reads - s.num_of_reads AS num_of_reads
,r.write_per_ms - s.write_per_ms AS io_stall_write_ms
,r.num_of_writes - s.num_of_writes AS num_of_writes
,r.num_of_bytes_written - s.num_of_bytes_written AS num_of_bytes_written
,r.database_id AS database_id
,r.[file_id] AS [file_id]

FROM #IOStallSnapshot s
INNER JOIN #IOStallResult r ON (s.database_id = r.database_id and s.file_id = r.file_id)
) inline

TRUNCATE TABLE #IOStallSnapshot
TRUNCATE TABLE #IOStallResult
END -- END of WHILE

/* Update Capture Data meta-data to include end time */
UPDATE DiskLatency.CaptureData
SET EndTime = GETDATE()
WHERE ID = @CaptureDataID

END
GO

Now that we have our stored procedure ready to go, here is a simple block of code that you can run or embed in a SQL Agent job to collect the counters needed to measure disk latency for each individual database file. For this example, we're going to collect for an hour and wait a minute between collections.

DECLARE @EndTime datetime, @WaitSeconds int
SELECT @EndTime = DATEADD(hh, 1, getdate()),
@WaitSeconds = 60

EXEC DiskLatency.usp_CollectDiskLatency
@WaitTimeSec = @WaitSeconds,
@StopTime = @EndTime

I hope you enjoyed this blog post on capturing the metrics needed to single out which database files you should focus on when you notice disk latency. Please check out my next blog post, where I will focus on some queries I might use to review the disk latency data collected here.
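As a rough sketch of the kind of review you can do in the meantime, here is a rollup against the DiskLatency.CaptureResults and DiskLatency.DatabaseFiles tables created above. This is illustrative only: the CaptureDataID filter value is an assumption, so substitute the run ID you want to review from DiskLatency.CaptureData.

```sql
-- Worst average latency and typical IOPS per database file for one capture run.
-- CaptureDataID = 1 is a placeholder; pick your run from DiskLatency.CaptureData.
SELECT df.DatabaseName,
       df.LogicalFileName,
       MAX(cr.read_per_ms)  AS max_read_latency_ms,
       MAX(cr.write_per_ms) AS max_write_latency_ms,
       AVG(cr.iops)         AS avg_iops
FROM DiskLatency.CaptureResults cr
JOIN DiskLatency.DatabaseFiles df
  ON df.Database_ID = cr.database_id
 AND df.[File_ID] = cr.file_id
WHERE cr.CaptureDataID = 1
GROUP BY df.DatabaseName, df.LogicalFileName
ORDER BY max_read_latency_ms DESC
```

Sorting by the worst read latency first makes it easy to spot which file on a busy drive is actually driving the stalls.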

Performance Tuning Texas Style!

If you're in Texas and interested in learning some tips to help you do SQL Server performance tuning with free tools, then I highly suggest that you attend one of these presentations coming to a city near you!

If you cannot make it out, no worries; you can catch a recorded version of my presentation.

SQL Server Performance Tuning with Free Tools!

The following is a recording of my Performance Tuning for Pirates session, recorded by UserGroup.TV at SQL Saturday 125 in Oklahoma City.  I recommend that you check out UserGroup.TV, as they provide some great content for free.  It was an honor to be selected for this recording.  I hope you enjoy the video.  If you have any questions and need some help, feel free to contact me.

If the video doesn’t render correctly above try watching the video here (UserGroup.TV) or here (SQL PASS Performance Virtual Chapter).

Links to all tools and scripts plus reference material can be found at my SQL Server Performance Tuning with Free Tools page.

12 Steps to Workload Tuning – Winter 2012 Performance Palooza!

 

[UPDATE] Video recording from PASS Virtual Performance Chapter’s Winter 2012 Performance Palooza can be found here.

I am a huge fan of the PASS Virtual Performance Chapter and I am excited that they pinged me to speak at their Winter 2012 Performance Palooza event tomorrow. This event is similar to 24 Hours of PASS but it will focus on Performance.

I will be sharing my 12 Steps to Workload Tuning at 1 PM Central Time (1900 GMT). We will focus on the methodology, and we will use RML Utilities, a free tool provided by Microsoft CSS to help you replay workloads, add additional stress, and compare results.  If you want to improve your performance tuning skills, I strongly recommend you check out the schedule and attend as many sessions as possible.

5 recommendations for attending #sqlpass #summit12

The following are five recommendations I would like to share with anyone attending the #sqlpass member summit. I hope you have a blast!

GuideBook

If you have a smartphone, I highly recommend grabbing the GuideBook application for your mobile device. It gives you direct access to the session schedule, general information, and more. It is a very helpful tool for staying flexible and changing which session you attend on the fly.

Buy the conference sessions

There are so many great sessions that it's impossible to catch them all. I strongly recommend buying the USB drive that includes all the sessions, so the pressure is off to see every one in person.  This allows you to enjoy the conference without worrying about missing the best session. Personally, it lets me focus more on networking.

Networking

The SQL Server community is awesome. There are so many people in the community willing to help you accomplish your goals.  All you have to do is meet them. There really isn't a better place to do that than the PASS Member Summit. I would never be where I am today without some of the connections I have made at past PASS conferences.

SQL Server Clinic

Got a problem and don't know how to solve it?  You're at the right place; not only are there tons of great DBAs around to help, you also have the SQL Server team willing to help for free. This year I brought a couple of interesting problems with me, and I cannot wait to review them with the Microsoft support team to get their thoughts.

Vendors

Do you need more hours in the day? Do you wish there was a way to make a process easier? If so, most likely one of the many vendors at the conference has a solution for it. Time is money, so if one of these tools on display can free up your time, wouldn't you want to use it? All the major SQL Server vendors will be at the conference; hopefully spending a few minutes with them will save you hours of time next year.

Is #SQLPASS helping their speakers?

I love the SQL community because it usually is a great environment to connect, share, and learn. With that said, I am noticing that we can do a much better job of helping the people who share learn how to share better.  For every Grant Fritchey, Thomas LaRock, Andy Leonard, Brent Ozar, Mike Walsh, or Allen White (I could keep going), there are several DBAs who speak in the community who don't get the feedback they need to get to the next level. The SQL PASS community does a great job of providing opportunities for people to speak, but we fail as a group at giving speakers the proper feedback that is needed to help them succeed.

Being a speaker and regional mentor, I have attended many user group meetings and SQL Saturdays in the past few years. I have seen a lot of great changes in the community.  Recently, I motivated some friends into giving their first presentations at the local SQL Saturday. I was able to attend their sessions so I could give feedback directly. It makes me wonder how many speakers only get the feedback provided on the evaluation forms. How is it possible to use the limited information on these forms to make a presentation better? That is, if you're lucky enough to be presenting at a user group that uses speaker evaluation forms. I admit I was guilty of this while running the WVPASS user group.  From firsthand knowledge, I know running a user group can require a lot of time and dedication, so I completely see how this important feedback is missed when there are several other important pieces to the user group leader puzzle.

My call to action (this is where the rant ends): what is the answer? I wish I knew, but I can definitely provide some suggestions. It would be nice to have a consistent evaluation process during a SQL Saturday and other PASS events, including virtual chapters and user group meetings. I would like to see open-ended questions that provide constructive feedback to help speakers improve as they grow, instead of hoping attendees write feedback on the back of a form.  I think it would also be nice to give attendees a simple online tool that allows them to provide feedback during the session.  Once again, these are just suggestions. Maybe they're all wrong, as they are just ideas on how the process could be improved, based on my experience as a chapter leader, speaker, SQL Saturday organizer, and regional mentor.

In closing, I look forward to hopefully finding the answer with some friends in my #sqlfamily.  Every year at the PASS Member Summit there is a meeting where the community can meet and ask questions of the Board of Directors. This year, I plan to attend and ask, “As a volunteer, how can I be involved in improving our current system for providing speakers with better feedback to help them improve their public speaking skills and get to the next level?”

First SQL Server UG Meeting in Harrisburg, PA

If you work with SQL Server and live in the Harrisburg area, I have some great news for you. The newest SQLPASS user group has been born in your neck of the woods. The Central PA SQL Server User Group (CPSSUG) will be having its first official meeting tomorrow, July 10, from 5:30 – 7:30 pm at HACC's Midtown Campus 2 facility in Room 105.

My fellow #sqlfamily and #sqlpeeps, let's help CPSSUG get this party started. If you can help with sponsorship, or are within driving distance of Harrisburg and would like to speak at an upcoming meeting, please contact Dustin Jones (dwjones@riteaid.com).

July Agenda:

Meet and Greet

Greg Seidel from Microsoft will be doing a presentation on some of the new features in SQL 2012.

Brian Charles from Rite Aid will be covering a real world performance tuning exercise on mismatched data types.


When Will CPSSUG Meet?

Meetings will be held on the second Tuesday of every month at 7:30 pm. Make sure you go to the CPSSUG website and jump on the mailing list so you can be notified of future events.

Where will CPSSUG Meet?

The meeting location is HACC Midtown Campus #2, room 105.  You can park on the street or in the parking lot across the street from the Mid T building. http://www.hacc.edu/Harrisburg/Midtown/Directions.cfm.

Free Training: Performance Tuning with Free Tools!

This week I have two presentations on my schedule. I get to give my Performance Tuning for Pirates presentation twice this week.

Pittsburgh SQL User Group Recap

On Tuesday, I presented my Performance Tuning for Pirates presentation at the Pittsburgh SQL Server User Group. I actually made a little tweak that went well. One of my friends who does a lot of tuning has always been interested in doing a presentation, so I had him jump on stage and present with me. I think it was a great success, and I think we will be seeing some really cool presentations from him in the future.

Performance Virtual Chapter

Today at 2 PM (EST), I am also giving my Performance Tuning for Pirates presentation at the SQLPASS Virtual Performance Chapter. This will be done via LiveMeeting and is free for all PASS members. If you're not a member, I have good news for you: PASS membership is free, so sign up and join in on the fun. Also, if you are not able to make it today, make sure you come back to the virtual chapter, as this session should be recorded for replay.

If you are looking for the resources and tools used in the Performance Tuning for Pirates presentation, you can find them here.