Azure Managed Instances are provisioned by default with a fully qualified domain name (FQDN) in the form yourname.uniqueid.database.windows.net. Even on your private virtual network you still have to use this FQDN.
If you want to connect to your Azure Managed Instance with your own DNS name, like yourname.domain.com, this blog post is for you.
Azure Managed Instance Virtual Network
Your first step is to make sure you are either using the Azure DNS service or providing DNS on your own.
In my example, we have a hybrid setup with Active Directory and DNS servers both in our virtual network and on-premises, so we will be utilizing a VM in the virtual network to provide DNS.
DNS CNAME Alias Configuration
To allow private users to connect utilizing your DNS zone, you need to create a CNAME alias in DNS. The alias needs to have the same name as your managed instance. In this example, I created an Azure Managed Instance named “procuresql01mi”. Its FQDN is procuresql01mi.b6d698c00851.database.windows.net. The domain name of my lab is PASS2020.com. I will configure a CNAME alias so all internal requests to procuresql01mi.pass2020.com are routed to procuresql01mi.b6d698c00851.database.windows.net.
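If you are running Windows DNS like I am in this lab, the alias can be created with the DnsServer PowerShell module. The following is a minimal sketch assuming the PASS2020.com zone already exists and you run it on (or remote into) one of your DNS servers; swap in your own zone, alias, and managed instance FQDN.
# Create the CNAME so procuresql01mi.pass2020.com resolves to the managed instance FQDN
Add-DnsServerResourceRecordCName -ZoneName "PASS2020.com" `
    -Name "procuresql01mi" `
    -HostNameAlias "procuresql01mi.b6d698c00851.database.windows.net"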
Azure Managed Instance DNS Zone Change Wrap Up
Now all you have to do is connect with the new DNS name used by the alias and you are good to go! If you use Azure AD to connect, make sure to enable “Trust Server Certificate” on your connection.
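As a quick sanity check, here is one way I might test the alias from a VM inside the virtual network using sqlcmd; -G uses Azure AD authentication and -C trusts the server certificate. The database and query are just placeholders.
sqlcmd -S procuresql01mi.pass2020.com -d master -G -C -Q "SELECT @@SERVERNAME;"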
This is your Austin SQL Server Consultant, and I will be speaking at SQL Saturday Jacksonville. I am happy to get back out on the road again. I know COVID has impacted everyone in multiple different ways. While I have enjoyed my time alone with the people who matter the most to me, I am also excited to get back out on the road, reconnect with friends, and make new ones as well.
This weekend, I am happy to do my first in-person conference in the past two years. I will be at SQL Saturday in Jacksonville talking about my favorite topic: making SQL Server queries go faster! My SQL Server consulting company will be sponsoring as well. Make sure to come on over as we look forward to connecting, sharing, and learning all day long!
Hi Everyone, this is John Sterrett. I am a SQL Server Consultant in Austin, TX. Last year I blogged about a feature called Persist Sample Percent. It had a nasty bug that could negatively impact performance. I have great news to share. The fix is now rolled into SQL 2016 SP2 CU17 and SQL 2019 CU10. Pedro Lopes let me know that with the fix now queued for SQL 2017 CU26, this becomes fixed in all versions.
Kudos to Pedro Lopes and the MSSQL Development team for this update. Make sure you are applying the latest updates so you can leverage all great enhancements, updates, and fixes.
Persist Sample Percent Matters
Okay, you might be wondering why you should consider utilizing persist sample percent. If you have large tables, auto update statistics might be hurting you instead of helping. Yup, that is not a typo. Also, if you update statistics and don’t provide a sample percent, you can have the same problem. Worst case, you have a job that updates statistics with a good sample percent, then data changes and auto-update uses a subpar percent.
By default, modern versions of SQL Server will utilize a smaller sample percent as your table row count grows. This can potentially give you bad execution plans.
Let’s take a look at the example below. It doesn’t take a whole lot of rows to get a sample percent under 10%.
If you want to identify if this is a potential problem in your environment I am including the script below that we utilize in our SQL Server Health Check.
;WITH cte AS (
    SELECT CAST(((rows_sampled * 1.00) / [rows]) * 100.00 AS NUMERIC(5,2)) AS SamplePCT,
           OBJECT_NAME(s.object_id) AS TableName,
           s.name AS StatsName,
           sp.*
    FROM sys.stats AS s
    OUTER APPLY sys.dm_db_stats_properties (s.[object_id], s.stats_id) AS sp
    JOIN sys.objects o ON s.object_id = o.object_id AND o.is_ms_shipped = 0
    WHERE 1=1
)
SELECT *
FROM cte
WHERE SamplePCT IS NULL OR SamplePCT < 10
ORDER BY SamplePCT;
The following is an example of this occurring. The only change we made was updating stats with a fixed sample rate.
Persist Sample Percent Is Your Friend
You can utilize persist sample percent as long as you are running one of the cumulative updates (CUs) listed above or a newer CU. Persist sample percent will lock in your sample percent. You will no longer need to worry about an index rebuild removing the persisted sample percent and putting you back at the default sample percent.
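For reference, locking in the sample rate is a single statement. The table name, statistic name, and 60 percent below are placeholders for your own objects and sampling rate.
UPDATE STATISTICS dbo.YourBigTable YourStatisticName
WITH SAMPLE 60 PERCENT, PERSIST_SAMPLE_PERCENT = ON;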
Want to make sure you don’t have errors, validate performance, and save money while making changes with Azure SQL Database, Azure SQL Managed Instance, or SQL Server on Amazon RDS? In this video, you will learn how to use the Database Experimentation Assistant to perform workload replay and compare your on-premises or cloud SQL Server workloads on demand.
When a bug jumps out and surprises me I like to share it so others do not run into the same unexpected result. I don’t think there is any reason for a bug to bite multiple people in the butt. Therefore, I want to show you why persist sample percent IS NOT PERSISTED!
Why should I use Persist Sample Percent?
When your table grows and the rows multiply, the default statistics sample percent used by SQL Server gets smaller and smaller. In theory, persist sample percent lets you update your statistic once, specifying the percent it should use going forward whenever a sample percent is not specified. Unfortunately, this feature is broken, IMHO.
Persist Sample Percent Setup
To set the scene so you can reproduce and learn along, below we create a single-column table with an identity column that is also the primary key. The primary key creates an index, which in turn creates statistics on our column.
DROP TABLE IF EXISTS dbo.Test;
CREATE TABLE dbo.Test (ID INT IDENTITY NOT NULL CONSTRAINT TestPK PRIMARY KEY);
INSERT INTO dbo.Test DEFAULT VALUES
GO 100000000 -- 100 million rows
CREATE PROCEDURE dbo.DemoStatsReview
AS BEGIN
SELECT CAST(((rows_sampled * 1.00)/ [rows] )*100.00 AS NUMERIC(5,2)) AS SamplePCT,
sp.*
FROM sys.stats AS s
OUTER APPLY sys.dm_db_stats_properties (s.[object_id], s.stats_id) AS sp
WHERE s.[name] = N'TestPK';
END
First, we will rebuild our index utilizing a fullscan. This is expected and normal activity for an index rebuild.
/* Index Rebuild uses 100% rows for sampling */
ALTER INDEX TestPK ON dbo.Test REBUILD WITH (STATISTICS_NORECOMPUTE = OFF)
EXEC dbo.DemoStatsReview
Now, we will update statistics utilizing the new persist sample percent feature. This should give us two benefits. One, auto stats updates will use this sample rate going forward on this statistic. Two, we no longer need to supply a sample percent if we update statistics manually or with our maintenance jobs.
/* Now lets update stats by using the PERSIST_SAMPLE_PERCENT */
UPDATE STATISTICS dbo.Test TestPK WITH SAMPLE 60 PERCENT, PERSIST_SAMPLE_PERCENT = ON;
EXEC dbo.DemoStatsReview
Let’s go ahead and update statistics now without any sample percent specified. We will see that the persist sample percent is applied as expected.
/* Update stats to validate sample size is persisted */
UPDATE STATISTICS dbo.Test TestPK
EXEC dbo.DemoStatsReview
Let’s see what happens when we rebuild the index. We expect that a FULLSCAN is used to update the statistics behind the index. Did anything else change? OH THE SUSPENSE!
/* What happens if we rebuild an index that has its stats persisted?
Do we still use 100% rows for sample? */
ALTER INDEX TestPK ON dbo.Test REBUILD WITH (STATISTICS_NORECOMPUTE = OFF)
EXEC dbo.DemoStatsReview
/* Let's update stats again.
Remember, the persisted sample size is 60%. */
UPDATE STATISTICS dbo.Test TestPK
EXEC dbo.DemoStatsReview
There you have it. Not only did the persist sample percent go away on the index rebuild, but because we then updated statistics without forcing a sample percent on 100 million rows, the sample percent dropped to 1%. I will add another blog post that focuses on this later. For now, if the late Notorious B.I.G. was a DBA, he would say, “More rows, more problems with stats you get!” If that didn’t make any sense: the more rows you have, the lower the sample rate when statistics get updated.
How do we fix this?
This is a bug inside of SQL Server. There is a feedback item that hasn’t received any response from Microsoft in the two years since the bug was reported. Please upvote it so this gets Microsoft’s attention and persist sample percent is actually persisted!
I am a consultant in Austin who can help make your data go fast, be secure, and be highly available. When I am engaged in a performance tuning project, priority #1 isn’t to make your data go faster. Priority #1 is to make sure we get the same result sets while making your data go faster.
Free SQL Data Compare with T-SQL?
There are several tools out there that can be used to compare data. Today, I want to share how you can quickly do this on your own with T-SQL!
Let’s simplify the process. Our goal is to check two temp tables and validate if any of the data is different. This would include inserts, updates, and deletes. For this example, I will just do a dump of Sales.SalesOrderDetail in AdventureWorks into two temp tables as shown below.
SELECT *
INTO #Tmp1
FROM Sales.SalesOrderDetail
SELECT *
INTO #Tmp2
FROM Sales.SalesOrderDetail
Now we shouldn’t see any differences since we used the same table to create both temp tables. We are going to use two different SQL operators to compare these two temp tables while applying some data changes. We will focus on the UNION ALL and EXCEPT operators.
The Power of EXCEPT
EXCEPT is an underrated and underused SQL operator. In a nutshell, it gives you the results of the first query that are different from the next query. So, if the data in any column of #Tmp1 is different from #Tmp2, or if a row exists in #Tmp1 but not in #Tmp2, it will get returned.
SELECT * FROM #Tmp1
EXCEPT
SELECT * FROM #Tmp2
Let’s go ahead and modify a column in #Tmp1 so you can see how this works. We are going to set OrderQty to five when SalesOrderId is 45313 and SalesOrderDetailId is 6210. This will change just one column in one row. We will then select these columns from both temp tables to see the change.
This is how most people would start using T-SQL to identify changes in data.
UPDATE #Tmp1 SET OrderQty = 5
WHERE SalesOrderID = 45313
AND SalesOrderDetailID = 6210
SELECT SalesOrderId, SalesOrderDetailID,
OrderQty FROM #Tmp1
WHERE SalesOrderID = 45313
AND SalesOrderDetailID = 6210
SELECT SalesOrderId, SalesOrderDetailID,
OrderQty FROM #Tmp2
WHERE SalesOrderID = 45313
AND SalesOrderDetailID = 6210
Finding Data Changes The Easy Way
Selecting from the two tables is easy if we know what change occurred and there aren’t many changes. This can get complicated quickly. Therefore, if we just want to quickly know whether we have differences, let’s take a look at my go-to method using EXCEPT. To make this example easier to read, instead of using “SELECT *” I will just focus on the columns that are changing. In a real example, I would want to know if any columns changed.
SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp1
EXCEPT
SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp2
If an insert or a column change occurs in #Tmp1, we will see it in our EXCEPT statement. This isn’t true if the change is only in #Tmp2.
For example, an insert into #Tmp2 or a delete from #Tmp1 would not be shown. To see those, we have to switch the temp tables in the EXCEPT clause as shown below.
INSERT INTO #Tmp2 (SalesOrderId, ProductID,
    SpecialOfferID, OrderQty, UnitPrice,
    UnitPriceDiscount, LineTotal,
    rowguid, ModifiedDate)
VALUES (45313, 1, 3, 1, 1.25, 0,
    1.25 * 1, NEWID(), GETDATE())
DELETE FROM #Tmp1
WHERE SalesOrderID = 45313
AND SalesOrderDetailID = 6211
SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp1
EXCEPT
SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp2
/* We will now see our insert and delete */
SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp2
EXCEPT
SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp1
Our first EXCEPT shows us data in #Tmp1 that is not in #Tmp2 because the OrderQty column changed in #Tmp1. The second EXCEPT shows us data in #Tmp2 that isn’t in #Tmp1: the row we inserted into #Tmp2, plus the row we deleted from #Tmp1, which is now found only in #Tmp2.
UNION ALL for the Win!
To wrap this up, we can combine the two EXCEPT operations with a UNION ALL. Wrapping each EXCEPT in parentheses makes sure each difference is evaluated before the UNION ALL combines them. This gets us any data changes to the columns selected from the temp tables.
(SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp1
EXCEPT
SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp2)
UNION ALL
(SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp2
EXCEPT
SELECT SalesOrderId, SalesOrderDetailID, OrderQty
FROM #Tmp1)
Typically, I need to verify that the data before and after a change is the same. This is a quick and easy way to get that answer. Now I know you might want to take this to the next level. You might be thinking, how do I get just the unique key for the table and the columns that changed? I will leave that as an exercise for you.
UPDATE 3/16/2020: The first script below was updated to set the Query Store to “Read_Write” instead of “Read_Only”. Martin, thank you for pointing this out in your comment below.
This month’s T-SQL Tuesday is hosted by Tracy Boggiano. Tracy invites us all to write about adopting Query Store. Today, I wanted to share my favorite but a very unique way I use the Query Store for Workload Replays.
You can read more about the invite in detail by clicking on the T-SQL Tuesday logo in this post.
Today, I wanted to talk about my least favorite part of replaying workloads: having an extended event or server-side trace running during a workload replay only so we can compare the results at a query level when the replay is finished. Now, this might seem like a trivial thing, but when you have workloads over 10k batch requests/sec this can consume terabytes of data quickly. The worst part is waiting to read all the data and then slicing and dicing it for analysis.
Starting with SQL Server 2016 there is a better and faster way to go! You can replace your extended event or server-side trace with Query Store captured data. Today, I will show you how to use the Query Store for the same purpose.
Different Settings
Keep in mind our goal here is very different from the typical use case for using the Query Store. We want to capture metrics for all the queries executed during a workload replay. Nothing more and nothing less.
If we have the runtime results for multiple replays we can then easily compare the workload performance between the workload replays.
Most of our changes from the regular Query Store best practices are shown below:
Max Size (MB) – Need to make sure there is enough space to capture your whole workload. This size will vary by how much workload is being replayed.
Query Store Capture Mode set to All. Normally, not ideal, but remember we want to capture metrics for our whole workload being replayed.
Size Based Cleanup Mode set to Off – Yup, we don’t want to lose any captured workload data before we persist it in our ideal form. More on this later.
The Capture Process
Now, this is where you would use the Database Experimentation Assistant (DEA), Distributed Replay, or some other process to replay a consistent workload in an isolated non-production environment. We will cover this subject in a future post. For now, we will just have two replays called “Baseline” and “Change”. This simulates a baseline replay with no schema changes and then another replay with a change introduced in the schema.
To capture our workload, we just enable the Query Store with the settings mentioned above and clear out the Query Store right before our workload replay starts to help ensure we are only capturing our workload.
USE [master]
GO
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON
GO
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (OPERATION_MODE = READ_WRITE,
MAX_STORAGE_SIZE_MB = 10000,
QUERY_CAPTURE_MODE = ALL,
SIZE_BASED_CLEANUP_MODE = OFF)
GO
ALTER DATABASE [YourDatabase] SET QUERY_STORE CLEAR
GO
Stop Capturing Query Store Data
Once your replay is finished, we want to stop the Query Store from writing new data. We want the least amount of non-workload data inside the Query Store when we are using it for the sole purpose of comparing workloads.
USE [master]
GO
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (OPERATION_MODE = READ_ONLY)
GO
Prepare Query Store Data for Long-Term Analysis
Now, for smaller workloads, one might be happy with utilizing DBCC CLONEDATABASE to get a schema-only copy of their database with the Query Store data persisted. This is perfectly fine. With bigger workloads being captured, I have noticed there are ways to improve the performance of analyzing the Query Store data. For example, clustered columnstore indexes can be very helpful for performance and capacity. Therefore, I like to have a schema for each replay and import the data. The following is a quick example of setting up a schema for a “baseline” replay and a “change” replay.
CREATE DATABASE [DBA]
GO
use [DBA]
GO
CREATE SCHEMA Baseline;
GO
CREATE SCHEMA Compare;
GO
Next, we will import our captured data from our baseline replay that’s in our read-only query store database. I also like to have a baked-in aggregate of metrics for reads, writes, duration and CPU at the query level.
use [YourDatabase]
GO
/* Load Data */
SELECT * INTO DBA.Baseline.query_store_runtime_stats
FROM sys.query_store_runtime_stats;
SELECT * INTO DBA.Baseline.query_store_runtime_stats_interval
from sys.query_store_runtime_stats_interval;
select * INTO DBA.Baseline.query_store_plan
from sys.query_store_plan;
select * INTO DBA.Baseline.query_store_query
from sys.query_store_query;
select * INTO DBA.Baseline.query_store_query_text
from sys.query_store_query_text;
/* Addition for SQL 2017 */
select * INTO DBA.Baseline.query_store_wait_stats
from sys.query_store_wait_stats
use [DBA]
GO
SELECT SUM(Count_executions) AS TotalExecutions,
SUM(Count_executions*avg_duration) AS TotalDuration,
SUM(Count_executions*avg_logical_io_reads) AS TotalReads,
SUM(Count_executions*avg_logical_io_writes) AS TotalWrites,
SUM(count_executions*avg_cpu_time) AS TotalCPU,
query_hash
INTO Baseline.QueryResults
FROM Baseline.query_store_runtime_stats rs
JOIN Baseline.query_store_plan p ON rs.plan_id = p.plan_id
JOIN Baseline.query_store_query q ON p.query_id = q.query_id
GROUP BY q.query_hash
Next, we would reset the database to our starting position, add our Query Store settings as mentioned above in this blog post, and replay our workload again. This time, we would dump our data into the “Compare” schema for the change replay.
use [YourDatabase]
GO
/* Load Data */
SELECT * INTO DBA.Compare.query_store_runtime_stats
FROM sys.query_store_runtime_stats;
SELECT * INTO DBA.Compare.query_store_runtime_stats_interval
from sys.query_store_runtime_stats_interval;
select * INTO DBA.Compare.query_store_plan
from sys.query_store_plan;
select * INTO DBA.Compare.query_store_query
from sys.query_store_query;
select * INTO DBA.Compare.query_store_query_text
from sys.query_store_query_text;
select * INTO DBA.Compare.query_store_wait_stats
from sys.query_store_wait_stats
use [DBA]
GO
SELECT SUM(Count_executions) AS TotalExecutions,
SUM(Count_executions*avg_duration) AS TotalDuration,
SUM(Count_executions*avg_logical_io_reads) AS TotalReads,
SUM(Count_executions*avg_logical_io_writes) AS TotalWrites,
SUM(count_executions*avg_cpu_time) AS TotalCPU,
query_hash
INTO Compare.QueryResults
FROM Compare.query_store_runtime_stats rs
JOIN Compare.query_store_plan p ON rs.plan_id = p.plan_id
JOIN Compare.query_store_query q ON p.query_id = q.query_id
GROUP BY q.query_hash
Comparing Workload Results
Now that we have both workloads imported, we can compare how the workload changed per query. I will break this down into two quick steps. First, get the deltas per query. Second, get totals for how many query_ids in the Query Store share the same query hash. More on this a little later in the post.
/* Query Store Results */
use [DBA]
GO
SELECT DISTINCT
c.TotalExecutions - b.TotalExecutions AS ExecutionDelta,
c.TotalExecutions AS CompareExecutions,
b.TotalExecutions AS BaselineExecutions,
c.TotalDuration - b.TotalDuration AS DurationDelta,
c.TotalCPU - b.TotalCPU AS CPUDelta,
c.TotalReads - b.TotalReads AS ReadDelta,
c.TotalWrites - b.TotalWrites AS WriteDelta,
c.TotalReads AS CompareReads,
b.TotalReads AS BaselineReads,
c.TotalCPU AS CompareCPU,
b.TotalCPU AS BaselineCPU,
c.TotalDuration AS CompareDuration,
b.TotalDuration AS BaselineDuration,
c.query_hash
--q.query_sql_text
INTO #CTE
FROM Baseline.QueryResults b
JOIN Compare.QueryResults c ON b.query_hash = c.query_hash
select COUNT(query_sql_text) AS QueryCount,
MAX(query_sql_text) query_sql_text,
MIN(query_id) MinQueryID,
qsq.query_hash
INTO #Compare
from Compare.query_store_query qsq
JOIN Compare.query_store_query_text q
ON qsq.query_text_id = q.query_text_id
where qsq.is_internal_query = 0
GROUP BY query_hash
select COUNT(query_sql_text) AS QueryCount,
MAX(query_sql_text) query_sql_text,
MIN(query_id) MinQueryID,
qsq.query_hash
INTO #Baseline
from Baseline.query_store_query qsq
JOIN Baseline.query_store_query_text q
ON qsq.query_text_id = q.query_text_id
where qsq.is_internal_query = 0
GROUP BY query_hash
select cte.*
, a.QueryCount AS Compare_QueryCount
, b.QueryCount AS Baseline_QueryCount
, a.MinQueryID AS Compare_MinQueryID
, b.MinQueryID AS Baseline_MinQueryID
, a.query_sql_text
FROM #CTE cte
JOIN #Compare a on cte.query_hash = a.query_hash
JOIN #Baseline b on cte.query_hash = b.query_hash
WHERE 1=1
AND ExecutionDelta = 0
ORDER BY ReadDelta ASC
Query Store for Workload Replays
Lessons Learned Along the Way!
Initially, when working with the Query Store, I thought query_id was going to be my best friend. I quickly learned that my old friend query_hash is more helpful for multiple reasons. One, I can easily compare queries between different replays. That’s right, not all workload replays get you the same query_id even when the exact same workload is being replayed. Two, I can compare them across different databases as well. Finally, query_hash is very helpful with ad-hoc workloads, as I can aggregate all the different query_ids that have the same query hash.
I recently spoke at a conference and was asked what is the easiest way to import databases to Azure SQL Database. Therefore, I wanted to share how I do this with DBATools.io. You can use the same code to just export if you need a local copy of an Azure SQL database as well.
Import-Module dbatools -Force
<# Variables #>
$BackupPath = "C:\Demo\AzureSQL\Bacpac" #folder location for backups
$SourceInstance = "sql2019\sql2016"
$DBName = "AdventureWorksLT2012"
$AzureDestInstance = "procuresqlsc.database.windows.net"
$DBNameDest = $DBName
<# bacpac options for import and export #>
$option = New-DbaDacOption -Type Bacpac -Action Export
$option.CommandTimeout = 0
$option2 = New-DbaDacOption -Type Bacpac -Action Publish
$option2.CommandTimeout = 0
<#
The following assumes the Azure SQL Database exists and is empty.
Azure will create the database by default if it doesn't exist.
#>
$bacpac = Export-DbaDacPackage -Type Bacpac -DacOption $option -Path `
$BackupPath -SqlInstance $SourceInstance -Database $DBName
Publish-DbaDacPackage -Type Bacpac -SqlInstance `
$AzureDestInstance -Database $DBNameDest -Path $bacpac.path `
-DacOption $option2 -SqlCredential username
What Is my Performance Tier?
Great question. As of 3/3/2020, if the database does not exist in Azure SQL Database, it will be created. When it is created, the database uses the default performance tier, which is General Purpose (Gen5) with 2 vCores.
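If you want to double-check what tier your imported database landed on, one quick option is to ask the database itself after the import; this query is a simple sketch you can run against the new Azure SQL Database.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS Edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS ServiceObjective;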
How to create cheaper databases
Great question. You can import databases to Azure SQL Database more cheaply using PowerShell. It is as simple as using the Azure PowerShell module. In the example below, I use my existing Azure SQL Database server and end up creating the new database with the “S0” tier.
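Here is a rough sketch of pre-creating the destination database at the “S0” tier with the Az PowerShell module before publishing the bacpac. The resource group name is a placeholder, and the server name is the Azure SQL Database server from the script above without the .database.windows.net suffix.
# Pre-create the destination database at the cheaper "S0" tier
# so the import does not land on the default General Purpose tier
Connect-AzAccount
New-AzSqlDatabase -ResourceGroupName "YourResourceGroup" `
    -ServerName "procuresqlsc" `
    -DatabaseName $DBNameDest `
    -RequestedServiceObjectiveName "S0"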
You now have all the nuts and bolts to generate a script that can migrate all the databases on an instance. Then you can import databases to Azure SQL Database in one loop, as sketched below.
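A minimal sketch of that loop, assuming the variables and bacpac options from the script above are already defined, could look like the following. Get-DbaDatabase with -ExcludeSystem keeps the system databases out of the migration.
# Export and publish every user database on the source instance
foreach ($db in Get-DbaDatabase -SqlInstance $SourceInstance -ExcludeSystem) {
    $bacpac = Export-DbaDacPackage -Type Bacpac -DacOption $option -Path $BackupPath `
        -SqlInstance $SourceInstance -Database $db.Name
    Publish-DbaDacPackage -Type Bacpac -SqlInstance $AzureDestInstance `
        -Database $db.Name -Path $bacpac.Path -DacOption $option2 -SqlCredential username
}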
Today I wanted to cover how you can grant the least privilege required to stop, start, or restart an Azure VM. This is also a great example of how you can create custom Azure security roles. That’s right, we are killing two birds with one stone today.
Why Should you create a custom Role?
Where possible, I like to grant security at the resource group level. Therefore, let’s assume we have multiple VMs built for the developer group to do some testing. You want to grant people access to start, restart, or stop any VM in that group. We can then grant our custom role access to the resource group. As VMs come in and out of the resource group, they would inherit our custom role.
Now, you might be wondering, why don’t I just give them the “Contributor” role or the “Virtual Machine Contributor” role and be on our way? Well, if you were to do this on a resource group, you just gave them access to create VMs and a whole lot more.
Least privileged roles are your best friend. Today, you will see they are also not that hard to create either.
How do we create custom roles?
Great question. First, you need to identify what tasks the role needs to complete. In our case, you have to be able to see a VM in order to take any action against it. Then we want to start, stop (deallocate), and restart the VM. Digging through IAM, I found the security options shown below.
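The actions that cover those tasks are most likely the following Microsoft.Compute operations (read to see the VM, plus start, restart, and deallocate):
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/start/action
Microsoft.Compute/virtualMachines/restart/action
Microsoft.Compute/virtualMachines/deallocate/action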
Now, we can create our custom JSON text file that we will then import using Azure CLI. Below you will find a sample JSON file to build our custom security role. You will need to add your subscription id(s). You can also change the name and description you will see in the Azure portal.
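A sketch of what that IAMRole-VMOperator.json file could look like is below. The role name and description are my own placeholders, and <your-subscription-id> must be replaced with your subscription id.
{
    "Name": "Virtual Machine Operator",
    "IsCustom": true,
    "Description": "Can start, restart, and stop (deallocate) virtual machines.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Compute/virtualMachines/start/action",
        "Microsoft.Compute/virtualMachines/restart/action",
        "Microsoft.Compute/virtualMachines/deallocate/action"
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/subscriptions/<your-subscription-id>"
    ]
}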
Now that we are ready to go with our custom security role in a JSON file. We can then utilize Azure CLI to log in to the tenant and import our security role. First, we will log in to Azure with CLI as shown below.
az login --username <myEmailAddress> -t <customerTenantId-or-Domain>
Now we will load our saved JSON file. After a few minutes, we should then see our new security role in the Azure portal.
az role definition create --role-definition IAMRole-VMOperator.json
Now you can grant access to your custom role just like you would with any other role in Azure.
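For example, an assignment scoped to the developer resource group with Azure CLI might look like the following; the user, role name, and resource group name are placeholders.
az role assignment create --assignee "developer@yourdomain.com" --role "Virtual Machine Operator" --resource-group "DevTestVMs"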