Tag Archives: SQL Server

Why We’re Organizing SQL Saturday in Wheeling, WV on April 28th

Some of my friends know I am a huge fan of the song “Havana” by Camila Cabello. They also know I like to remix songs, and if I were to remix this one I would just change the word “Havana” to “Wheeling, WV,” because half of my heart is in the Ohio Valley.

Why?

Wheeling is where I grew up as an adult, and to this day it is one of the special happy places I like to visit. On April 28th, Wheeling will host its third SQL Saturday. Anyone can attend for FREE! I look forward to sharing my favorite city with the SQL Community and my SQL Family.

Free SQL Server Training in Wheeling, WV

Procure SQL will be teaming up with the Wheeling Chapter of AITP (the only IT group in the Ohio Valley) to bring expert SQL Server training from MVPs, MCTs, and community experts to the Ohio Valley. I hope Data Platform professionals in nearby cities like Columbus, Pittsburgh, Harrisburg, Cleveland and Washington DC join us in the fun as well.

Things to Do?

Check out this quick five-minute video to find out some of the great things you can do in Wheeling, WV and why I fell in love with Wheeling!

Food

Colemans Fish Market – It is #1 on TripAdvisor for a reason. The best fried fish sandwiches.
Ye Old Alpha – The Bridgeport brownie is legendarily good.
DeCarlos Pizza – Wheeling’s special version of pizza. If you get it, make sure to eat it quickly. Locals typically will eat it on the hood of their cars.
Undos – My personal favorite Italian restaurant.

Places to See:

Good Mansion Wine – If you like wine, the selection here is fantastic. They will also host an open wine tasting event on April 27th at 6 pm. If you are looking for something fun to do the night before the event, I would recommend this.
Suspension Bridge – If you like history, you have to check out one of the oldest suspension bridges in the USA. You can still walk and drive across it.
Wheeling Artisan Center – A great small tour of the history of Wheeling, WV.
Center Market – Historic part of town with a lot of shops and places to eat. It’s an easy walk from the SQL Saturday venue.
Oglebay Resort – Depending on the weather, the driving range or ski lift will be open. Seriously, a great five-star resort with epic holiday events including Christmas lights, Oglebayfest, and the 4th of July.
Wheeling Island Casino – If you like to play cards and win money, it’s a great location. I used to go a lot on lunch breaks.

Benchmark SQL Server Disk Latency

Typically, I am a big advocate of performance monitor, but one place I commonly see it misused is benchmarking disk counters for SQL Server. You will often see people applying best practices like having dedicated spindles for transaction log files and dedicated spindles for data files. Even so, multiple database files and/or transaction log files frequently end up collocated on the same logical or physical drive(s). Therefore, when you're looking at disk latency counters like average seconds per read or average seconds per write, it can be almost impossible to determine which data file(s) are causing the latency. You just know which physical or logical drive has latency issues.

Meet the sys.dm_io_virtual_file_stats DMV

Starting with SQL Server 2005, DBAs were granted access to the sys.dm_io_virtual_file_stats dynamic management view. This DMV shows how many physical I/O operations occurred, how much latency has accumulated, how much data was written, and more. The key details are that the numbers are tracked per database file and that they accumulate from the last time the instance started, so we need to keep that in mind. Ideally, we want to capture this data, wait for a period of time, capture it again, and then compare the results. That is exactly what the code below does. For this example we will wait five minutes between captures.

DECLARE @WaitTimeSec int
SET @WaitTimeSec = 300 -- seconds between samples.

/* WAITFOR DELAY expects a time string (hh:mm:ss), not an int,
so convert the sample period to that format up front. */
DECLARE @Delay varchar(8)
SET @Delay = CONVERT(varchar(8), DATEADD(SECOND, @WaitTimeSec, 0), 108)

/* If temp tables exist drop them. */
IF OBJECT_ID('tempdb..#IOStallSnapshot') IS NOT NULL
BEGIN
DROP TABLE #IOStallSnapshot
END

IF OBJECT_ID('tempdb..#IOStallResult') IS NOT NULL
BEGIN
DROP TABLE #IOStallResult
END

/* Create temp tables for capture baseline */
CREATE TABLE #IOStallSnapshot(
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
num_of_bytes_written bigint,
num_of_reads bigint,
num_of_writes bigint,
database_id int,
file_id int
)

CREATE TABLE #IOStallResult(
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
num_of_bytes_written bigint,
num_of_reads bigint,
num_of_writes bigint,
database_id int,
file_id int
)

/* Get baseline snapshot of stalls */
INSERT INTO #IOStallSnapshot (CaptureDate,
read_per_ms,
write_per_ms,
num_of_bytes_written,
num_of_reads,
num_of_writes,
database_id,
[file_id])
SELECT getdate(),
a.io_stall_read_ms,
a.io_stall_write_ms,
a.num_of_bytes_written,
a.num_of_reads,
a.num_of_writes,
a.database_id,
a.file_id
FROM sys.dm_io_virtual_file_stats (NULL, NULL) a
JOIN sys.master_files b ON a.file_id = b.file_id
AND a.database_id = b.database_id

/* Wait for the sample period and get the final snapshot */
WAITFOR DELAY @Delay

INSERT INTO #IOStallResult (CaptureDate,
read_per_ms,
write_per_ms,
num_of_bytes_written,
num_of_reads,
num_of_writes,
database_id,
[file_id])
SELECT getdate(),
a.io_stall_read_ms,
a.io_stall_write_ms,
a.num_of_bytes_written,
a.num_of_reads,
a.num_of_writes,
a.database_id,
a.[file_id]
FROM sys.dm_io_virtual_file_stats (NULL, NULL) a
JOIN sys.master_files b ON a.[file_id] = b.[file_id]
AND a.database_id = b.database_id

/* Get differences between captures */
SELECT
inline.CaptureDate
,CASE WHEN inline.num_of_reads =0 THEN 0
ELSE inline.io_stall_read_ms / inline.num_of_reads END AS read_per_ms
,CASE WHEN inline.num_of_writes = 0 THEN 0
ELSE inline.io_stall_write_ms / inline.num_of_writes END AS write_per_ms
,inline.io_stall_read_ms
,inline.io_stall_write_ms
,inline.num_of_reads
,inline.num_of_writes
,inline.num_of_bytes_written
,(inline.num_of_reads + inline.num_of_writes) / @WaitTimeSec AS iops
,inline.database_id
,inline.[file_id]
FROM (
SELECT r.CaptureDate
,r.read_per_ms - s.read_per_ms AS io_stall_read_ms
,r.num_of_reads - s.num_of_reads AS num_of_reads
,r.write_per_ms - s.write_per_ms AS io_stall_write_ms
,r.num_of_writes - s.num_of_writes AS num_of_writes
,r.num_of_bytes_written - s.num_of_bytes_written AS num_of_bytes_written
,r.database_id AS database_id
,r.[file_id] AS [file_id]

FROM #IOStallSnapshot s
JOIN #IOStallResult r
ON (s.database_id = r.database_id and s.[file_id] = r.[file_id])
) inline

The next few questions you might have after capturing these metrics include: how do I automate capturing these disk metrics, similar to perfmon? Can I set up a parameter for the wait period between samples and another for how long to capture data, so I can establish my baseline? Or better yet, can I capture data when a workload is not performing as expected and compare it to the baseline established when the workload performance was good?

The answer to these questions is YES! Below is the code for my stored procedure to capture disk latency, IOPS and bytes written.

Download SQL Server Disk Latency Stored Procedure.

/*
Author: John Sterrett (https://johnsterrett.com)
NOTICE: This code is provided as-is run it at your own risk. John Sterrett assumes no responsibility
for you running this script.

GOAL: Get latency and IOPS for each data file, keep meta data in lookup table, results in another table.

PARAM: @WaitTimeSec - time in seconds to wait between samples
@StopTime - when to stop collecting; if null, collect for one hour

VERSION:
1.0 - 01/03/2012 - Original release
Includes two lookup tables for datafiles and runs
1.1 - 02/08/2012 - Includes computed column to get IOPs per datafile.
1.2 - 11/1/2013 - Changed IOPs so it's not computed and has the right data.
Missing Features: If you would like something added please follow up at https://johnsterrett.com/contact
-- Code to pull and update file path as needed
*/

/* Create tables */
CREATE SCHEMA DiskLatency
Go

CREATE TABLE DiskLatency.DatabaseFiles (
[ServerName] varchar(500),
[DatabaseName] varchar(500),
[LogicalFileName] varchar(500),
[Database_ID] int,
[File_ID] int
)

CREATE CLUSTERED INDEX idx_DiskLatency_DBID_FILE_ID ON DiskLatency.DatabaseFiles (Database_ID, File_ID)

CREATE TABLE DiskLatency.CaptureData (
ID bigint identity PRIMARY KEY,
StartTime datetime,
EndTime datetime,
ServerName varchar(500),
PullPeriod int
)

CREATE TABLE DiskLatency.CaptureResults (
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
io_stall_read int,
io_stall_write int,
num_of_reads int,
num_of_writes int,
num_of_bytes_written bigint,
iops int,
database_id int,
file_id int,
CaptureDataID bigint
)

CREATE CLUSTERED INDEX [idx_CaptureResults_CaptureDate] ON [DiskLatency].[CaptureResults]
( [CaptureDate] DESC)

CREATE NONCLUSTERED INDEX idx_CaptureResults_DBID_FileID ON DiskLatency.CaptureResults (database_id, file_id)

CREATE NONCLUSTERED INDEX idx_CaptureResults_CaptureDataID ON DiskLatency.CaptureResults (CaptureDataId)

ALTER TABLE DiskLatency.CaptureResults ADD CONSTRAINT FK_CaptureResults_CaptureData FOREIGN KEY
( CaptureDataID) REFERENCES DiskLatency.CaptureData
( ID )
GO


CREATE PROCEDURE DiskLatency.usp_CollectDiskLatency
-- Add the parameters for the stored procedure here
@WaitTimeSec INT = 60,
@StopTime DATETIME = NULL
AS
BEGIN

DECLARE @CaptureDataID int

/* WAITFOR DELAY expects a time string (hh:mm:ss), so convert the
seconds parameter to that format once up front. */
DECLARE @Delay varchar(8)
SET @Delay = CONVERT(varchar(8), DATEADD(SECOND, @WaitTimeSec, 0), 108)

/* Check that the stop time is greater than the current time. If not, throw an error. */
IF @StopTime IS NOT NULL AND @StopTime <= GETDATE()
BEGIN
RAISERROR('@StopTime must be in the future.', 16, 1)
RETURN
END

/* If temp tables exist drop them. */
IF OBJECT_ID('tempdb..#IOStallSnapshot') IS NOT NULL
BEGIN
DROP TABLE #IOStallSnapshot
END

IF OBJECT_ID('tempdb..#IOStallResult') IS NOT NULL
BEGIN
DROP TABLE #IOStallResult
END

/* Create temp tables for capture baseline */
CREATE TABLE #IOStallSnapshot(
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
num_of_bytes_written bigint,
num_of_reads bigint,
num_of_writes bigint,
database_id int,
file_id int
)

CREATE TABLE #IOStallResult(
CaptureDate datetime,
read_per_ms float,
write_per_ms float,
num_of_bytes_written bigint,
num_of_reads bigint,
num_of_writes bigint,
database_id int,
file_id int
)

DECLARE @ServerName varchar(300)
SELECT @ServerName = convert(nvarchar(128), serverproperty('servername'))

/* Insert master record for capture data */
INSERT INTO DiskLatency.CaptureData (StartTime, EndTime, ServerName,PullPeriod)
VALUES (GETDATE(), NULL, @ServerName, @WaitTimeSec)

SELECT @CaptureDataID = SCOPE_IDENTITY()

/* Do lookup to get property data for all database files to catch any new ones if they exist */
INSERT INTO DiskLatency.DatabaseFiles ([ServerName],[DatabaseName],[LogicalFileName],[Database_ID],[File_ID])
SELECT @ServerName, DB_NAME(database_id), name, database_id, [FILE_ID]
FROM sys.master_files mf
WHERE NOT EXISTS
(
SELECT 1
FROM DiskLatency.DatabaseFiles df
WHERE df.Database_ID = mf.database_id AND df.[File_ID] = mf.[File_ID]
)

/* Loop through until time expires */
IF @StopTime IS NULL
SET @StopTime = DATEADD(hh, 1, getdate())
WHILE GETDATE() < @StopTime
BEGIN

/* Get baseline snapshot of stalls */
INSERT INTO #IOStallSnapshot (CaptureDate,
read_per_ms,
write_per_ms,
num_of_bytes_written,
num_of_reads,
num_of_writes,
database_id,
[file_id])
SELECT getdate(),
a.io_stall_read_ms,
a.io_stall_write_ms,
a.num_of_bytes_written,
a.num_of_reads,
a.num_of_writes,
a.database_id,
a.file_id
FROM sys.dm_io_virtual_file_stats (NULL, NULL) a
JOIN sys.master_files b ON a.file_id = b.file_id
AND a.database_id = b.database_id

/* Wait for the sample period and get the final snapshot */
WAITFOR DELAY @Delay

INSERT INTO #IOStallResult (CaptureDate,
read_per_ms,
write_per_ms,
num_of_bytes_written,
num_of_reads,
num_of_writes,
database_id,
[file_id])
SELECT getdate(),
a.io_stall_read_ms,
a.io_stall_write_ms,
a.num_of_bytes_written,
a.num_of_reads,
a.num_of_writes,
a.database_id,
a.file_id
FROM sys.dm_io_virtual_file_stats (NULL, NULL) a
JOIN sys.master_files b ON a.file_id = b.file_id
AND a.database_id = b.database_id

INSERT INTO DiskLatency.CaptureResults (CaptureDataID,
CaptureDate,
read_per_ms,
write_per_ms,
io_stall_read,
io_stall_write,
num_of_reads,
num_of_writes,
num_of_bytes_written,
iops,
database_id,
[file_id])
SELECT @CaptureDataID
,inline.CaptureDate
,CASE WHEN inline.num_of_reads =0 THEN 0 ELSE inline.io_stall_read_ms / inline.num_of_reads END AS read_per_ms
,CASE WHEN inline.num_of_writes = 0 THEN 0 ELSE inline.io_stall_write_ms / inline.num_of_writes END AS write_per_ms
,inline.io_stall_read_ms
,inline.io_stall_write_ms
,inline.num_of_reads
,inline.num_of_writes
,inline.num_of_bytes_written
,(inline.num_of_reads + inline.num_of_writes) / @WaitTimeSec
,inline.database_id
,inline.[file_id]
FROM (
SELECT r.CaptureDate
,r.read_per_ms - s.read_per_ms AS io_stall_read_ms
,r.num_of_reads - s.num_of_reads AS num_of_reads
,r.write_per_ms - s.write_per_ms AS io_stall_write_ms
,r.num_of_writes - s.num_of_writes AS num_of_writes
,r.num_of_bytes_written - s.num_of_bytes_written AS num_of_bytes_written
,r.database_id AS database_id
,r.[file_id] AS [file_id]

FROM #IOStallSnapshot s
INNER JOIN #IOStallResult r ON (s.database_id = r.database_id and s.file_id = r.file_id)
) inline

TRUNCATE TABLE #IOStallSnapshot
TRUNCATE TABLE #IOStallResult
END -- END of WHILE

/* Update Capture Data meta-data to include end time */
UPDATE DiskLatency.CaptureData
SET EndTime = GETDATE()
WHERE ID = @CaptureDataID

END
GO

Now that we have our stored procedure ready to go, here is a simple block of code that you can run, or embed in a SQL Agent Job, to collect the counters that measure disk latency for each individual database file. For this example, we're going to collect for an hour and wait a minute between collections.

DECLARE @EndTime datetime, @WaitSeconds int
SELECT @EndTime = DATEADD(hh, 1, getdate()),
@WaitSeconds = 60

EXEC DiskLatency.usp_CollectDiskLatency
@WaitTimeSec = @WaitSeconds,
@StopTime = @EndTime
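If you would rather schedule the collection than run it by hand, a SQL Agent job can wrap the same call. The block below is a minimal sketch, not a finished implementation: the job name and schedule are arbitrary, and it assumes the DiskLatency objects live in a database named DBA.

USE msdb
GO

/* Minimal sketch of a daily 8 AM job that samples every 60 seconds
for one hour. Job name, schedule, and the DBA database name are
assumptions -- adjust them for your environment. */
EXEC dbo.sp_add_job @job_name = N'Collect Disk Latency'

EXEC dbo.sp_add_jobstep @job_name = N'Collect Disk Latency',
@step_name = N'Run usp_CollectDiskLatency',
@subsystem = N'TSQL',
@database_name = N'DBA', -- assumed home of the DiskLatency schema
@command = N'DECLARE @EndTime datetime
SET @EndTime = DATEADD(hh, 1, GETDATE())
EXEC DiskLatency.usp_CollectDiskLatency @WaitTimeSec = 60, @StopTime = @EndTime'

EXEC dbo.sp_add_jobschedule @job_name = N'Collect Disk Latency',
@name = N'Daily 8 AM',
@freq_type = 4, -- daily
@freq_interval = 1,
@active_start_time = 080000

EXEC dbo.sp_add_jobserver @job_name = N'Collect Disk Latency', @server_name = N'(local)'
GO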

I hope you enjoyed this blog post on capturing the metrics needed to single out which database files to focus on when you notice disk latency. Please check out my next blog post, where I will focus on some queries I might use to review the disk latency data collected here.
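In the meantime, here is a hedged sketch of the kind of review query I mean, run against the tables created above; the 20 ms threshold is just an illustrative assumption, not a hard rule.

/* Sketch: average latency and peak IOPS per database file across all
samples, filtered to files averaging more than an assumed 20 ms per write */
SELECT df.DatabaseName,
df.LogicalFileName,
AVG(cr.read_per_ms) AS avg_read_latency_ms,
AVG(cr.write_per_ms) AS avg_write_latency_ms,
MAX(cr.iops) AS peak_iops
FROM DiskLatency.CaptureResults cr
INNER JOIN DiskLatency.DatabaseFiles df
ON cr.database_id = df.Database_ID
AND cr.[file_id] = df.[File_ID]
GROUP BY df.DatabaseName, df.LogicalFileName
HAVING AVG(cr.write_per_ms) > 20 -- assumed threshold
ORDER BY avg_write_latency_ms DESC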

12 Steps to Workload Tuning – Winter 2012 Performance Palooza!


[UPDATE] Video recording from PASS Virtual Performance Chapter’s Winter 2012 Performance Palooza can be found here.

I am a huge fan of the PASS Virtual Performance Chapter and I am excited that they pinged me to speak at their Winter 2012 Performance Palooza event tomorrow. This event is similar to 24 Hours of PASS but it will focus on Performance.

I will be sharing my 12 Steps to Workload Tuning at 1 PM Central Time (1900 GMT). We will focus on the methodology, and we will use RML Utilities, a free tool provided by Microsoft CSS that helps you replay workloads, add additional stress and compare results. If you want to improve your performance tuning skills, I strongly recommend you check out the schedule and attend as many sessions as possible.

Is #SQLPASS helping their speakers?

I love the SQL Community because it usually is a great environment to connect, share and learn. With that said, I am noticing that we can do a much better job of helping the people who share learn how to share better. For every Grant Fritchey, Thomas LaRock, Andy Leonard, Brent Ozar, Mike Walsh, or Allen White (I could keep going…) there are several DBAs who speak in the community who don't get the feedback they need to get to the next level. The SQL PASS community does a great job of providing opportunities for people to speak, but we fail as a group at giving speakers the proper feedback that is needed to help them succeed.

Being a speaker and regional mentor, I have attended many user group meetings and SQL Saturdays in the past few years, and I have seen a lot of great changes in the community. Recently, I motivated some friends into giving their first presentation at the local SQL Saturday, and I was able to attend their sessions so I could give feedback directly. It makes me wonder how many speakers only get the feedback provided on the evaluation forms. How is it possible to use the limited information on these forms to make a presentation better? That is, if you're lucky enough to be presenting at a user group that uses speaker evaluation forms at all. I admit I was guilty of this while running the WVPASS User Group. From firsthand knowledge I know running a user group can require a lot of time and dedication, so I completely see how this important feedback gets missed among the several other important pieces of the user group leader puzzle.

My call to action (this is where the rant ends). What is the answer? I wish I knew, but I can definitely provide some suggestions. It would be nice to have a consistent evaluation process across SQL Saturdays and other PASS events, including virtual chapters and user group meetings. I would like to see open-ended questions that provide constructive feedback to help speakers improve as they grow, instead of hoping attendees write something on the back of a form. I think it would also be nice to give attendees a simple online tool that allows them to provide feedback during the session. Once again, these are just suggestions. Maybe they're all wrong, as they are just ideas on how the process could be improved, drawn from my experience as a chapter leader, speaker, SQL Saturday organizer and regional mentor.

In closing, I look forward to hopefully finding the answer with some friends in my #sqlfamily. Every year at the PASS Member Summit there is a meeting where the community can meet and ask questions of the Board of Directors. This year, I plan to attend and ask, “As a volunteer, how can I be involved in improving the current system so speakers get better feedback that helps them improve their public speaking skills and get to the next level?”

Dallas, TX to Pittsburgh, PA A #sqlsat171 Story!

First SQL Saturday in Pittsburgh

It was an honor to speak at the first SQL Saturday in Pittsburgh. The company I currently work for is based out of Pittsburgh, so I flew in a week early and traded my work-from-home lifestyle for a week of working in the office. It was fun to do some training and mentoring in person. One of my coworkers went all out and hosted a great BBQ dinner, while my boss picked me up at 1:30 am on a Monday morning and let me crash with him all week. I am very lucky to work with some great people.

We even got some professional development done while I was in the office.

For this event I had two goals. One, help Gina any way possible with finding speakers and sponsors. Two, encourage and help some great local DBAs in Pittsburgh give their first #sqlpass presentation. So, months before this event I gave myself a personal challenge: target three great DBAs in Pittsburgh and try to get them to deliver their first presentation locally at the SQL Saturday. I was very happy to help, and to see two of the three potential speakers give great presentations. They even filled all the seats and had standing room only. One of them has even caught the speaking bug and submitted abstracts to SQL Saturday in DC.

Speakers Dinner…

The SQL Saturday committee did a great job picking a venue for the speakers dinner. We met up at the Rivers Casino across the river from downtown and had a great view of Mt. Washington and the Point. I wish I could have stayed longer, but I had a work item that needed to be completed.

Day of the Event…

I started the morning by giving my Performance for Pirates presentation (code, tools and videos provided with the link). Next, I got the opportunity to catch Eric Kepes dive into ORMs that make DBAs happy. I was surprised when Eric showed one that allowed developers and DBAs to build the SQL statements. Finally, I caught Konstantin Melamud, who presented an “Introduction to Execution Plans” session. He did a fantastic job; most of the people in the room wouldn't have known it was his first #sqlpass presentation.

Kon did a great job for his first presentation, with the room being packed!

One of the most important parts of attending SQL Saturdays is to catch up with and extend your network. I love catching up with my #SQLFamily and extending it, too. I had a great time catching up with Karla Landrum, Joey Dantoni, Rick Heiges, Craig Purnell, Eric Kepes and more (too many to list them all!). After some networking I had to cut out early to catch my flight. I was able to hitch a ride to the airport with Joey and get a sneak peek into traveling like a rockstar, as he got me through priority check-in and the US Airways club. Thanks Joey!

The After Event Buzz…

Finally, it's now close to a week after the event and I am still hearing good news about it. We were able to round up about ten co-workers to attend. They are still talking about the event and cannot wait for the next one. Many of them mentioned that they are going to start attending the Pittsburgh SQL Server User Group meetings, and a few are considering going to the SQL Saturday in DC in December. If this isn't #SQLWinning, I don't know what is!

Presenting at OKC SQL Server User Group Tonight!

Tonight, October 8th at 6:30pm, I am virtually speaking at the OKC SQL Server User Group. I had a blast there earlier this year when I presented at their SQL Saturday event. Tonight I will be talking about Performance Tuning for Pirates! There will be free knowledge and food for all who can make it down. I hope you can make it, and I look forward to meeting you virtually!

Here are the meeting details!

The title of the talk is “Performance Tuning for Pirates” and it will cover:

If you follow baseball you will notice that the Pittsburgh Pirates (this speaker's favorite team) have to do more with less to be successful. This trend can also exist within IT shops. If you need to improve the performance of your SQL Server instances and cannot purchase 3rd party tools, this session is for you. Learn to improve your performance tuning process by using several free tools. This session will cover wait stats, dynamic management objects, perfmon, PAL, SQL Trace, ClearTrace, SQL Nexus and SQLDiag.

When: Tonight! Monday, October 8th

Time: 6:00 p.m.- 8:00 p.m.

Food and Drinks will be provided by GDH

Location: GDH Offices – Follow this link for information on location and parking!

1111 N Walker Ave
Oklahoma City, OK 73103

Directions link: Click Here
Our Website/MailingList: http://okcsql.org
Twitter/Facebook: @OKCSQL and http://facebook.com/okcsql

Speaking at Pittsburgh Powershell User Group

Tomorrow, May 15th at 6pm, I will be joining the script club and giving a presentation on how DBAs can leverage PowerShell to do their jobs quicker and automate processes. It's not too late to sign up. If you will be near the North Shore I highly recommend checking it out. You can also follow the Pittsburgh PowerShell User Group to check out their future meetings.


Let’s Meetup [Updated]!

I have a confession to make. I like to meet people who also work with SQL Server. There are so many different ways to use SQL Server, and I love to hear how people are using it differently than I am. One of the easiest ways to accomplish this is to connect with people and start a conversation at SQL Saturdays, user group meetings and PASS events like SQL Rally, coming up in Dallas in May. Once upon a time, I would attend similar training events and was so shy I didn't connect with anyone. Why? Mainly because I was afraid. If this is you, you are not alone; I was in the same place as you. In fact, let's meet up at one of the events listed below.

NOTE: The following schedule was updated on May 17th, 2012.

SQL Saturday #107 (Houston, TX) on April 21st

I will be presenting my Evaluating Your Daily Checklist with PBM and CMS presentation. I always look forward to SQL Saturdays in Houston, as they give me the opportunity to visit my family and my #sqlfamily in the same trip. If you're in Houston, it's not too late to sign up.

WVPASS (Wheeling, WV) May 24th

I will be presenting “DBA’s use RML Utilities to help get your next raise!” If you're in the Pittsburgh or Morgantown area, make the drive out. Anyone can still sign up for this free event. Not only will you learn how to prove that your new indexes or application upgrade will improve performance, but you will also get free pizza. As a bonus, you will get to meet some cool DBAs in Wheeling, WV.

SQL Saturday #117 in Columbus, OH on June 2nd

I am looking forward to my first SQL Saturday in Columbus, OH, where I will be giving my Performance Tuning for Pirates presentation. Columbus is only a two-hour drive, and I hope to bring #babybeluga and the wife out with me. I think they would have a blast at the Columbus Zoo.

SQL Saturday #121 in Philadelphia, PA on June 9th

Do we really need to say more? I cannot wait for SQL Saturday in the city of brotherly love. The schedule has been posted, and I will be giving my Performance Tuning for Pirates presentation. I look forward to learning, connecting and sharing.

Pittsburgh SQL Server User Group (Pittsburgh, PA) on June 26th

I will be presenting, “Performance Tuning for Pirates!” in Pittsburgh. Pittsburgh has a great SQL Server User Group and it’s always an honor to present for this group.

Performance Virtual Chapter – June 28th

I will be presenting “Performance Tuning for Pirates!” Being that this is a virtual chapter, I will not be presenting in person, but that doesn't mean we cannot connect and chat. We always have Twitter, so send me a shout out before or after the session.

24 Hours of PASS – Get your free downloads!

I am speaking at 24 Hours of PASS

Thank you to everyone who attended my #24HOP session on Performance Tuning for Pirates! You can now download my slide deck and T-SQL queries, and view my reference links. If you have any questions about the tools presented, feel free to contact me and I will try to help you out or point you in the right direction.


8 steps to moving database users from SQL 2005 to SQL 2008

Have you ever had to complete an upgrade where you had to uninstall SQL Server 2005 and install SQL Server 2008 on the same box? If so, how do you make sure the new install has the same security objects? For me, the answer was to be safe rather than sorry: script out all the existing server and database security objects so they can be verified and reapplied if needed.

Not only is this a good practice for migrating your servers, but the following post could also be useful for a security audit. I also want to mention that if you are allowed to install tools on the box, you can download SQLPermissions, a free tool provided by Idera, to generate a script to move security objects during a migration.
[Update 1/28/2012 – USER_NAME() was replaced with SCHEMA_NAME() in the Explicit Permissions section]
** Run scripts at your own risk. I don't assume any responsibility for these scripts being run in your environment. As always, I recommend testing in development before running in production 🙂 **

Logins

The very first step to connecting to an instance of SQL Server is to log in, and in order to log in you must have a login. Therefore, our first step will be to copy the logins. I know, you might already be wondering: how do I get the passwords moved over for SQL-authenticated logins? Microsoft has a KB article (918922) that walks you through this process. Make sure you check out the remarks section before running the code on a production box.
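As a hedged sketch of what that looks like: KB 918922 has you create a procedure called sp_help_revlogin in master on the old server, and generating the transfer script is then a one-liner.

/* Assumes sp_help_revlogin has already been created from KB 918922.
It prints CREATE LOGIN statements that preserve the original
password hashes and SIDs, ready to run on the new instance. */
USE master
GO
EXEC sp_help_revlogin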

Server Level Security

Logins are nice, but they are useless unless they are allowed to connect. Therefore, we want to make sure these logins keep their permission to connect. You might also want to make sure a subset of the accounts keeps access to view the DMVs. These and other server-level permissions can be scripted out with the code below.


SELECT sp.state_desc,
	sp.permission_name,
	principal_name = QUOTENAME(spl.name),
	spl.type_desc,
	sp.state_desc + N' ' + sp.permission_name + N' TO ' + cast(QUOTENAME(spl.name COLLATE DATABASE_DEFAULT) as nvarchar(256)) AS "T-SQL Script"
FROM sys.server_permissions sp
inner join sys.server_principals spl on (sp.grantee_principal_id = spl.principal_id)
where spl.name not like '##%' -- skip PBM accounts
and spl.name not in ('dbo', 'sa', 'public')
order by sp.permission_name, spl.name
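For example, if one of those accounts needs DMV access on the new instance, the "T-SQL Script" column will contain a line like the one below (the login name here is a placeholder).

/* Illustrative output: lets a hypothetical monitoring login query the DMVs */
GRANT VIEW SERVER STATE TO [MonitoringLogin]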

Server Level Roles

Now that we can log in, we might want to take this a step further and actually perform some actions. If your instance includes third-party application databases, there is a good chance one of them depends on its login having sysadmin access. Yes, in case you were wondering, this is poor security design, but the point is your logins may depend on server-level roles for access to database objects. Here is a script you can use to script out server-level role memberships.

SELECT  DISTINCT
QUOTENAME(sp.name) AS "ServerRoleName",
sp.type_desc AS "RoleDescription",
QUOTENAME(m.name) AS "PrincipalName",
m.type_desc AS "LoginDescription",
'EXEC master..sp_addsrvrolemember @loginame = N''' + m.name + ''', @rolename = N''' + sp.name + '''' AS "T-SQL Script"
FROM sys.server_role_members AS srm
inner join sys.server_principals sp on (srm.role_principal_id = sp.principal_id)
inner join sys.server_principals m on (srm.member_principal_id = m.principal_id)
where sp.is_disabled = 0
and m.is_disabled = 0
and m.name not in ('dbo', 'sa', 'public')
and m.name <> 'NT AUTHORITY\SYSTEM'

Database Level Security

Now that your logins can connect, you need to make sure that they can access the databases you migrated over. Remember those third-party apps that skipped doing a security design? They rely on sysadmin access, which already covers database-level permissions; hopefully your applications are not among them. Just like server-level security, you can also have database-level security. Make sure you run the script below and save your results.

SELECT  dp.state_desc,
dp.permission_name,
QUOTENAME(dpl.name)  AS 'principal_name',
 dpl.type_desc,
 dp.state_desc + N' ' + dp.permission_name + N' TO ' + cast(QUOTENAME(dpl.name COLLATE DATABASE_DEFAULT) as nvarchar(500))  AS "T-SQL Script"
FROM sys.database_permissions AS dp
INNER JOIN sys.database_principals AS dpl ON (dp.grantee_principal_id = dpl.principal_id)
WHERE dp.major_id = 0
and dpl.name not like '##%' -- excludes PBM accounts
and dpl.name not in ('dbo', 'sa', 'public')
ORDER BY dp.permission_name ASC, dp.state_desc ASC

Database Level Roles

Just like the SQL Server instance, your database also has roles. Many times I see applications that don't require sysadmin on the instance but do require db_owner access in the database. This is also a bad security design practice, as db_owner can do anything against that database. Hopefully, your application has object-level explicit permissions as needed. Once again, I recommend pulling this information in case it's needed after your migration.

SELECT DISTINCT
QUOTENAME(drole.name) as "DatabaseRoleName",
drole.type_desc,
QUOTENAME(dp.name) as "PrincipalName",
dp.type_desc,
'EXEC sp_addrolemember @membername = N''' + dp.name COLLATE DATABASE_DEFAULT + ''', @rolename = N''' + drole.name + '''' AS "T-SQL Script"
FROM sys.database_role_members AS drm
inner join sys.database_principals drole on (drm.role_principal_id = drole.principal_id)
inner join sys.database_principals dp on (drm.member_principal_id = dp.principal_id)
where dp.name not in ('dbo', 'sa', 'public')

Database Level Explicit Permissions

Finally, we get to the area of security where I hope to see many entries. If someone tasked me with designing the security model for their application, I would make sure stored procedures controlled the data access and that application users could only execute those stored procedures. This way there is very minimal risk of data loss, as the users wouldn't have direct access to the tables; they would only access data through the business logic provided in the database objects. You can pull this information with the code below, and see a sketch of what this model looks like after it.

SELECT    dp.state_desc AS "StateDescription" ,
dp.permission_name AS "PermissionName" ,
SCHEMA_NAME(obj.schema_id) AS [Schema Name],
obj.NAME AS [Object Name],
QUOTENAME(SCHEMA_NAME(obj.schema_id)) + '.' + QUOTENAME(obj.name)    + CASE WHEN col.column_id IS NULL THEN SPACE(0)           ELSE '(' + QUOTENAME(col.name COLLATE DATABASE_DEFAULT) + ')'      END AS "ObjectName" ,
QUOTENAME(dpl.name COLLATE database_default) AS "UserName" ,
dpl.type_Desc AS "UserRoleType" ,
obj.type_desc AS "ObjectType" ,
dp.state_desc + N' ' + dp.permission_name + N' ON '    + QUOTENAME(SCHEMA_NAME(obj.schema_id)) + '.' + QUOTENAME(obj.name)    + N' TO ' + QUOTENAME(dpl.name COLLATE database_default) AS "T-SQL Script"
FROM    sys.database_permissions AS dp
INNER JOIN sys.objects AS obj ON ( dp.major_id = obj.[object_id] )
INNER JOIN sys.database_principals AS dpl ON ( dp.grantee_principal_id = dpl.principal_id )
LEFT JOIN sys.columns AS col ON ( col.column_id = dp.minor_id  AND col.[object_id] = dp.major_id)
WHERE    obj.name NOT LIKE 'dt%'
AND obj.is_ms_shipped = 0
AND dpl.name NOT IN ( 'dbo', 'sa', 'public' )
ORDER BY    dp.permission_name ASC ,    dp.state_desc ASC
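For contrast, here is a hedged sketch of what the stored-procedure-only model described above produces; the user name is hypothetical, and granting EXECUTE at the schema level is one common way to implement it.

/* Hypothetical example: the application user can execute procedures in
the dbo schema but has no direct SELECT/INSERT/UPDATE/DELETE on tables */
GRANT EXECUTE ON SCHEMA::dbo TO [AppUser]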

Orphaned users

If logins are not brought over correctly, you may notice that they are not mapped correctly to the user accounts that exist in the database (the Logins section above should catch this). The following explanation comes from BOL:

A database user for which the corresponding SQL Server login is undefined or is incorrectly defined on a server instance cannot log in to the instance. Such a user is said to be an orphaned user of the database on that server instance.

You can find and fix orphaned users by following the instructions provided here.
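For reference, here is a minimal sketch of detecting and re-mapping an orphaned user with sp_change_users_login, which ships with SQL Server 2005 and 2008; the user and login names are placeholders.

/* List database users whose SIDs don't match any server login */
EXEC sp_change_users_login 'Report'

/* Re-map an orphaned user to an existing login (placeholder names).
On SQL Server 2005 SP2 and later, ALTER USER ... WITH LOGIN works too. */
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppLogin'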

Validate User Accounts

Finally, when possible, let's make sure the user accounts actually work. I know this might be common sense, but you don't want to cut something over without testing and verifying connectivity.
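One low-effort way to smoke-test an account without knowing its password is impersonation. This sketch assumes you have IMPERSONATE permission on the login (sysadmin covers it), and the login and database names are placeholders.

/* Impersonate the migrated login and confirm it can reach its objects */
EXECUTE AS LOGIN = 'AppLogin' -- placeholder login
SELECT SUSER_SNAME() AS connected_as
SELECT TOP 1 name FROM YourAppDatabase.sys.objects -- placeholder database
REVERT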