Friday, 12 August 2016

My new book hit No 1 hot new release

I am very excited that my new book, Expert Scripting and Automation for SQL Server DBAs, has hit the No. 1 Hot New Release spot.

Even more pleasing is that today I have seen the print copy for the first time!

Saturday, 6 August 2016

Expert Scripting & Automation

I'm very pleased to announce that my new book, Expert Scripting and Automation for SQL Server DBAs, has now been published.

It is already available as both an e-book and a print book, and will hit the UK shelves in the coming week.

I don't believe that there is any other book quite like this one on the market, so I really hope you all enjoy the read!

Tuesday, 12 July 2016

Using Row-Level Security with HierarchyID

When I first heard about Row-Level Security (RLS), one of the first use cases I thought of was satisfying queries against a hierarchical table. For example, imagine that you had a table of employee details; you could use RLS to limit the rows that each employee could view, to include only those employees who report to them.

So let's have a look at how you could achieve this, using the HumanResources.Employee table in the AdventureWorks2016CTP3 database.

In order to implement RLS, we need two objects. The first is a Security Predicate: an inline table-valued function that determines whether a row should be accessible. The second is a Security Policy. The Security Policy is a new artifact type in SQL Server 2016, which binds the Security Predicate to a table.

The script below creates the Security Predicate in a schema named Security.

CREATE FUNCTION Security.fn_securitypredicate(@OrganizationNode HIERARCHYID)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_securitypredicate_result
FROM HumanResources.Employee e1
WHERE @OrganizationNode.IsDescendantOf(e1.OrganizationNode) = 1
AND e1.LoginID = 'adventure-works\' + SUSER_SNAME() ;

The function accepts a single parameter, of the type HIERARCHYID. This is a complex data type, implemented through SQLCLR, which provides a number of methods that can be used to determine position within a hierarchy, as well as hierarchy ancestry. A full method reference for the data type can be found in Microsoft's documentation.

Using SCHEMABINDING on the function means that columns referenced by the Security Predicate cannot be altered, but it also simplifies security administration, as the user who implicitly calls the function does not need permissions on any tables or functions that the Security Predicate references.

The query uses the IsDescendantOf method against the @OrganizationNode parameter (which represents the OrganizationNode column of each row within the Employee table) to find all descendants of the row whose LoginID column corresponds to the user ID of the user running the query. The concatenation of 'adventure-works\' makes the value returned by the SUSER_SNAME() function match the values stored in the table, where the domain of the user is also recorded. 1 is returned for each row that matches the criteria, which tells the Security Policy that the row can be accessed.
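To picture what the predicate is doing, it helps to remember that a HIERARCHYID is essentially a materialised path, and IsDescendantOf is, conceptually, a prefix test on that path. The Python sketch below mimics the filtering logic with invented rows (the logins and paths are illustrative only, not the real AdventureWorks data):

```python
# Model hierarchyid as a materialised path such as '/1/2/'.
# A node is a descendant of an ancestor (or is the ancestor itself)
# when its path starts with the ancestor's path.
def is_descendant_of(node_path, ancestor_path):
    return node_path.startswith(ancestor_path)

# Invented sample rows: (login_id, organization_node)
employees = [
    ("adventure-works\\ken0",     "/"),
    ("adventure-works\\terri0",   "/1/"),
    ("adventure-works\\roberto0", "/1/1/"),
    ("adventure-works\\david0",   "/2/"),
]

def visible_rows(current_login):
    # Find the caller's own node, then keep only their descendants,
    # just as the Security Predicate filters the Employee table.
    anchor = next(n for l, n in employees if l == current_login)
    return [l for l, n in employees if is_descendant_of(n, anchor)]

print(visible_rows("adventure-works\\terri0"))
# → ['adventure-works\\terri0', 'adventure-works\\roberto0']
```

The root login sees everybody, while a mid-level manager sees only their own subtree, which is exactly the behaviour the filter predicate produces.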

The script below creates the Security Policy.

CREATE SECURITY POLICY Security.EmployeeSecurityPolicy
ADD FILTER PREDICATE Security.fn_securitypredicate(OrganizationNode) ON HumanResources.Employee
WITH (STATE = ON, SCHEMABINDING = ON) ;

The Security Policy is also created in the Security schema. Creating RLS objects in a separate schema is a best practice, as it simplifies security administration. The ADD FILTER PREDICATE syntax performs several functions.

Firstly, it specifies that the predicate should be used to silently filter rows, as opposed to a BLOCK predicate, which stops DML statements being issued against rows and returns an error message.

Secondly, it binds the Security Predicate to the HumanResources.Employee table.

Thirdly, it passes the OrganizationNode column, from the HumanResources.Employee table, to the Security Predicate function.

The WITH clause specifies that both STATE and SCHEMABINDING are ON. STATE determines whether the Policy is enabled on creation. SCHEMABINDING determines whether a Security Predicate must use SCHEMABINDING, or whether it is optional.

I talk more about RLS in my upcoming book, Securing SQL Server, which will be published by Apress in early 2017.

Thursday, 7 July 2016

Regular Expressions in SQL Server

In my last post, DDL Triggers as a Work Around for Policy Based Management Limitations, I discussed how triggers can be used to ensure that database settings are not changed. This cannot be achieved through Policy Based Management, because the Database facet does not support the On Change: Log or On Change: Prevent modes.

I also noted that I was not happy with the pattern matching that I originally used within the trigger, to resolve the issue. Therefore, in this post, I will talk about how we can use CLR to gain the benefits of regular expressions in SQL Server.

So, the challenge is: we want a trigger that ensures that no statement can turn off the AUTO_CREATE_STATISTICS, AUTO_UPDATE_STATISTICS or AUTO_UPDATE_STATISTICS_ASYNC settings of any database within the instance. The pattern matching offered by the LIKE predicate in T-SQL works well for changes made through the GUI, but if a change is scripted, then a developer may have used an extra space, a tab, a line break, etc.

The first thing that we need to do to work around this is create a CLR function that accepts the SQL statement from the EVENTDATA() function as a parameter and checks the pattern against the regex libraries. Below is the C# method to achieve this.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Text.RegularExpressions;

public partial class UserDefinedFunctions
{
    [SqlFunction]
    public static int CheckMatch(SqlString statement)
    {
        // Tolerate any run of spaces, tabs or line breaks before OFF
        Regex rgxUpdate = new Regex("AUTO_UPDATE_STATISTICS(_ASYNC)?[ \r\n\t]*OFF", RegexOptions.IgnoreCase | RegexOptions.Singleline);

        Int16 retcounter = 0;
        string stringstatement = Convert.ToString(statement);

        Match matchUpdate = rgxUpdate.Match(stringstatement);
        if (matchUpdate.Success)
        { retcounter++; }

        Regex rgxCreate = new Regex("AUTO_CREATE_STATISTICS[ \r\n\t]*OFF", RegexOptions.IgnoreCase | RegexOptions.Singleline);

        Match matchCreate = rgxCreate.Match(stringstatement);
        if (matchCreate.Success)
        { retcounter++; }

        if (retcounter > 0)
        { return 1; }
        else
        { return 0; }
    }
}


There are a few things to pay particular attention to in this code. Firstly, notice that we are importing the System.Data.SqlTypes, Microsoft.SqlServer.Server and System.Text.RegularExpressions base libraries. The first of these is used because we are working with the SqlString data type; the second is required because we are creating a function that will be used by SQL Server; and the third is required to use the regex classes.

The next thing to note in this code is the regex options. We use them to treat the SQL statement as a single line and to make the pattern match case insensitive.
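To see why this pattern tolerates scripted variations, it can be exercised outside SQL Server with any regex engine. The Python sketch below uses an equivalent, simplified pattern (a stand-in for the C# one, with the same case-insensitive and single-line options) to test a few statement variants:

```python
import re

# Case-insensitive pattern tolerating any run of spaces, tabs or
# line breaks between the setting name and OFF.
pattern = re.compile(r"AUTO_UPDATE_STATISTICS(_ASYNC)?[ \r\n\t]*OFF",
                     re.IGNORECASE | re.DOTALL)

statements = [
    "ALTER DATABASE Sales SET AUTO_UPDATE_STATISTICS OFF",     # GUI-style
    "ALTER DATABASE Sales SET auto_update_statistics  off",    # extra space
    "ALTER DATABASE Sales SET AUTO_UPDATE_STATISTICS\n\tOFF",  # line break
    "ALTER DATABASE Sales SET AUTO_UPDATE_STATISTICS ON",      # should not match
]

for s in statements:
    print(bool(pattern.search(s)))
# → True, True, True, False
```

A plain LIKE '%AUTO_UPDATE_STATISTICS OFF%' comparison would miss the second and third variants, which is exactly the gap the regex closes.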

Once we have built the project, we will need to create an assembly within SQL Server. This can be created using the CREATE ASSEMBLY DDL statement, as demonstrated below.

CREATE ASSEMBLY CheckStats
FROM 'c:\sql_assemblies\checkstats.dll'
WITH PERMISSION_SET = SAFE ;

The script assumes that we have built the project as checkstats.dll in the c:\sql_assemblies folder.

Now that the assembly has been built, we can create the CLR function, which will call the C# method within the assembly. The script below demonstrates how to create the function.

CREATE FUNCTION dbo.CheckMatch (@Statement NVARCHAR(MAX))
RETURNS INT
AS EXTERNAL NAME CheckStats.UserDefinedFunctions.CheckMatch ;

Notice that we reference the method using the three-part name Assembly.Class.Method.

Finally, we can create the server scoped DDL trigger that will respond to ALTER DATABASE statements and call the CLR function. This is demonstrated below.

CREATE TRIGGER StatisticsManagement
ON ALL SERVER
FOR ALTER_DATABASE
AS
BEGIN
    IF (SELECT dbo.CheckMatch(UPPER(EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)')))) = 1
    BEGIN
        PRINT 'Auto Stats Settings MUST Be Configured As ON' ;
        ROLLBACK ;
    END
END

You can read more about DDL Triggers in my book, Pro SQL Server Administration, available from Amazon and most book shops.

Tuesday, 5 July 2016

DDL Triggers as a Work Around for Policy Based Management Limitations

A colleague recently asked for my help in solving a problem with SQL Server Policy Based Management. He had a requirement to enforce a policy that prevents the auto update and auto create statistics settings from being changed on production databases during code deployments. The problem is that these settings are in the Database facet, and this facet does not support On Change: Prevent or On Change: Log.

You can determine which actions are supported by which facets, by using the below query:

SELECT
    name ,
    'Yes' AS on_demand,
    CASE
        WHEN (CONVERT(BIT, execution_mode & 4)) = 1
            THEN 'Yes'
        ELSE 'No'
    END AS on_schedule,
    CASE
        WHEN (CONVERT(BIT, execution_mode & 2)) = 1
            THEN 'Yes'
        ELSE 'No'
    END AS on_change_log,
    CASE
        WHEN (CONVERT(BIT, execution_mode & 1)) = 1
            THEN 'Yes'
        ELSE 'No'
    END AS on_change_prevent
FROM msdb.dbo.syspolicy_management_facets ;
Instead, I suggested that he use a DDL trigger to ensure that these settings are not changed, and helped him write the below trigger definition.

CREATE TRIGGER DBA_StatisticsManagement
ON ALL SERVER
FOR ALTER_DATABASE
AS
BEGIN
       IF (SELECT UPPER(EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'))) LIKE '%AUTO_UPDATE_STATISTICS_ASYNC OFF%'
       BEGIN
             PRINT 'Auto Update Stats Async MUST be configured as ON' ;
             ROLLBACK ;
       END
       IF (SELECT UPPER(EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'))) LIKE '%AUTO_CREATE_STATISTICS OFF%'
       BEGIN
             PRINT 'Auto Create Stats MUST be configured as ON' ;
             ROLLBACK ;
       END
       IF (SELECT UPPER(EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'))) LIKE '%AUTO_UPDATE_STATISTICS OFF%'
       BEGIN
             PRINT 'Auto Update Stats MUST be configured as ON' ;
             ROLLBACK ;
       END
END
This trigger has been created on the instance, which means that it will reside in the Master database and is configured to fire when an ALTER DATABASE statement is executed. There is no need for it to fire when a CREATE DATABASE statement is executed, because even when using the GUI to create a database, the auto update and auto create statistics settings are configured with ALTER DATABASE statements, after the initial database creation.

The trigger pulls the executed statement from the EVENTDATA() function and then uses the LIKE predicate to pattern match the strings that would turn off the auto update and auto create statistics options. The event data is returned as an XML document; therefore, we need to use the value() XQuery method to extract the data.
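If you want to see the extraction step in isolation, the same lookup can be reproduced with any XML tooling. Below is a Python sketch, using a hand-built, abbreviated copy of the EVENT_INSTANCE document (the sample XML and database name are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Abbreviated, hand-built example of the EVENT_INSTANCE document
# that EVENTDATA() returns for an ALTER DATABASE statement.
event_data = """
<EVENT_INSTANCE>
  <EventType>ALTER_DATABASE</EventType>
  <TSQLCommand>
    <CommandText>ALTER DATABASE Sales SET AUTO_CREATE_STATISTICS OFF</CommandText>
  </TSQLCommand>
</EVENT_INSTANCE>
"""

# The same path the trigger queries with value():
# /EVENT_INSTANCE/TSQLCommand/CommandText
root = ET.fromstring(event_data)
command = root.find("./TSQLCommand/CommandText").text
print(command.upper())
```

The trigger does exactly this, then compares the upper-cased command text against the LIKE patterns.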

After writing this trigger, it played on my mind. Whilst the trigger will always work for changes made through the GUI, there is a chance that it will not work if the change is made through a script, because of the way the LIKE predicate pattern matches the statement. For example, what if a script included an additional space before OFF?

In my next post, I will talk about how I resolved this by using a CLR function.

I talk more about DDL triggers and Policy Based Management in my book, Pro SQL Server Administration, available from Amazon and most book shops.

Tuesday, 7 June 2016

How to Calculate Binary Bit Masks

Ok, so I know this post is a bit geeky, but there is nothing wrong with a little geeky fun from time to time.

SQL Server uses bit masks for various things, such as CPU affinity and Server Agent operators' pager availability, but how do you work with bit masks? Well, it is a case of calculating the integer representation of the bit mask value. This is made more complex because the INT data type is a 32-bit signed integer, meaning that some of the representations will be negative numbers.

Using CPU affinity as an example, the table below shows the processor number, bit mask and signed integer representation for processors 0 - 31.

Processor Number Bit Mask                                                               Signed Integer Representation
0                             0000 0000 0000 0000 0000 0000 0000 0001      1
1                             0000 0000 0000 0000 0000 0000 0000 0010      2
2                             0000 0000 0000 0000 0000 0000 0000 0100      4
3                             0000 0000 0000 0000 0000 0000 0000 1000      8
4                             0000 0000 0000 0000 0000 0000 0001 0000     16
5                             0000 0000 0000 0000 0000 0000 0010 0000     32
6                             0000 0000 0000 0000 0000 0000 0100 0000     64
7                             0000 0000 0000 0000 0000 0000 1000 0000     128
8                             0000 0000 0000 0000 0000 0001 0000 0000     256
9                             0000 0000 0000 0000 0000 0010 0000 0000     512
10                           0000 0000 0000 0000 0000 0100 0000 0000     1024
11                           0000 0000 0000 0000 0000 1000 0000 0000     2048
12                           0000 0000 0000 0000 0001 0000 0000 0000     4096
13                           0000 0000 0000 0000 0010 0000 0000 0000     8192
14                           0000 0000 0000 0000 0100 0000 0000 0000     16384
15                           0000 0000 0000 0000 1000 0000 0000 0000     32768
16                           0000 0000 0000 0001 0000 0000 0000 0000     65536
17                           0000 0000 0000 0010 0000 0000 0000 0000     131072
18                           0000 0000 0000 0100 0000 0000 0000 0000     262144
19                           0000 0000 0000 1000 0000 0000 0000 0000     524288
20                           0000 0000 0001 0000 0000 0000 0000 0000     1048576
21                           0000 0000 0010 0000 0000 0000 0000 0000     2097152
22                           0000 0000 0100 0000 0000 0000 0000 0000     4194304
23                           0000 0000 1000 0000 0000 0000 0000 0000     8388608
24                           0000 0001 0000 0000 0000 0000 0000 0000     16777216
25                           0000 0010 0000 0000 0000 0000 0000 0000     33554432
26                           0000 0100 0000 0000 0000 0000 0000 0000     67108864
27                           0000 1000 0000 0000 0000 0000 0000 0000     134217728
28                           0001 0000 0000 0000 0000 0000 0000 0000     268435456
29                           0010 0000 0000 0000 0000 0000 0000 0000     536870912
30                           0100 0000 0000 0000 0000 0000 0000 0000     1073741824
31                           1000 0000 0000 0000 0000 0000 0000 0000     -2147483648

On a 32-core server, there are 2^32 - 1 possible non-empty combinations for processor affinity, but a few examples are included below.

Aligned Processors Bit Mask Signed                                                  Integer Representation
0 and 1                    0000 0000 0000 0000 0000 0000 0000 0011      3
0, 1, 2, and 3           0000 0000 0000 0000 0000 0000 0000 1111      15
8 and 9                    0000 0000 0000 0000 0000 0011 0000 0000      768
8, 9, 10, and 11       0000 0000 0000 0000 0000 1111 0000 0000      3840
30 and 31                1100 0000 0000 0000 0000 0000 0000 0000      -1073741824
28, 29, 30, and 31   1111 0000 0000 0000 0000 0000 0000 0000      -268435456

Because an integer has a maximum range of 2^32, the Affinity Mask can only accommodate values for up to 32 CPUs. If your server has between 33 and 64 processors, then SQL Server relies on a second value, the Affinity64 Mask, to store the details of the next 32 CPUs.
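The signed values in the tables above can be computed rather than memorised. Below is a small Python sketch that builds the unsigned mask from a set of processor numbers and reinterprets the 32-bit pattern as the signed integer that the affinity mask setting expects:

```python
import struct

def affinity_mask(processors):
    """Build the unsigned 32-bit mask, then reinterpret it as signed."""
    unsigned = 0
    for p in processors:
        if not 0 <= p <= 31:
            raise ValueError("processor number must be 0-31")
        unsigned |= 1 << p       # set the bit for each aligned processor
    # Reinterpret the same 32-bit pattern as a signed integer.
    return struct.unpack("<i", struct.pack("<I", unsigned))[0]

print(affinity_mask({0, 1}))          # → 3
print(affinity_mask({8, 9, 10, 11}))  # → 3840
print(affinity_mask({30, 31}))        # → -1073741824
```

The outputs match the rows in the tables above, including the negative values that appear once bit 31 is set.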

There are a number of calculators available online for calculating the signed integer representation of a bit mask.

SQL Enterprise Edition vs SQL Enterprise Core Based Licensing Edition

There is a common point of confusion when DBAs decide which edition of SQL Server to install. Until SQL Server 2012, if you needed Enterprise Edition, then things were straightforward. When SQL Server 2012 was released, however, things changed, with the introduction of core-based licensing.

Consequently, SQL Server Enterprise Edition should not be installed in most cases. It is designed as a "tide-me-over" edition, to see companies through as they move from CAL-based to core-based licences.

What a lot of people do not realize is that there is actually a technical difference between the two editions. The two products have different SKUs, and SQL Server Enterprise Edition will only support a maximum of 20 cores.

If you have a server large enough to test this out, you can see for yourself, by running the following query:

SELECT COUNT(*) FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE'

If you are using Enterprise Edition, then the maximum value returned will be either 20, or 40 if the server has hyper-threading turned on.

Check out my books:

Pro SQL Server Administration

SQL Server AlwaysOn Revealed

Expert Scripting and Automation for SQL Server DBAs 

Use SQL 2016 Query Store to Remove Ad-hoc Plans

Ad-hoc query plans consume memory and can be of limited use. It is a good idea to remove ad-hoc query plans if they are not being reused. The query below demonstrates how to use Query Store metadata to identify and remove unwanted ad-hoc query plans from the cache.


DECLARE @SQL NVARCHAR(MAX) ;

SET @SQL = (
        SELECT 'EXEC sp_query_store_remove_query ' 
                + CAST(qsq.query_id AS NVARCHAR(6)) + ';' AS [data()]
        FROM sys.query_store_query_text AS qsqt
        JOIN sys.query_store_query AS qsq
               ON qsq.query_text_id = qsqt.query_text_id
        JOIN sys.query_store_plan AS qsp
               ON qsp.query_id = qsq.query_id
        JOIN sys.query_store_runtime_stats AS qsrs
               ON qsrs.plan_id = qsp.plan_id
        GROUP BY qsq.query_id
        HAVING SUM(qsrs.count_executions) = 1
               AND MAX(qsrs.last_execution_time) < DATEADD (HH, -24, GETUTCDATE())
        ORDER BY qsq.query_id
        FOR XML PATH('')
) ;

EXEC(@SQL) ;


You can find out more about the Query Store in SQL Server 2016 in my new book, Expert Scripting and Automation for SQL Server DBAs, available from Apress and Amazon.

Navigating a SQL Server Instance with PowerShell

As well as navigating a folder structure with commands such as get-childitem and set-location, PowerShell can also be used to navigate the SQL Server object hierarchy of an instance. You can connect PowerShell to the SQL Server database engine provider by using set-location to navigate to SQLSERVER:\SQL. The information returned by get-childitem depends on the current location in the object hierarchy. The table below shows what information is returned from each level of the hierarchy.

Location                                          Information Returned
SQLSERVER:\SQL                                    The name of the local machine
SQLSERVER:\SQL\<machine>                          The names of the database engine instances installed on the local machine
SQLSERVER:\SQL\<machine>\<instance>               Instance level object types
Lower levels                                      Object types or objects contained within the current location

Once you have navigated to the level of the hierarchy in which you are interested, you can use PowerShell to perform basic operations against objects at that level. For example, the script below navigates to the Tables namespace within the AdventureWorks2016 database and renames the dbo.DatabaseLog table to dbo.DatabaseLogPS. The dir commands display the original name and new name of the table.

set-location SQLSERVER:\SQL\<machine>\DEFAULT\Databases\AdventureWorks2016\Tables

dir | where{$_.Name -like "*DatabaseLog*"}

rename-item -LiteralPath dbo.DatabaseLog -NewName DatabaseLogPS

dir | where{$_.Name -like "*DatabaseLog*"}

More complex tasks that do not have a specific cmdlet associated with them can be performed by using the invoke-sqlcmd cmdlet.

You can also start PowerShell from within SSMS. This is done by selecting "Start PowerShell" from the context menu of an object folder, within Object Explorer. This will cause the PowerShell CLI to be invoked, with the initial location being set to the object folder that you used to invoke the CLI.

My new book, Expert Scripting and Automation for SQL Server DBAs, contains extensive details and examples of how to use PowerShell to administer SQL Server. The book is available from Apress and Amazon.

Monday, 6 June 2016

Running Administrative Queries Across Multiple Servers

As DBAs, we are sometimes asked questions such as "What is the total size of the databases used by Application A?" or "Does Application A have any databases with a mismatched collation?"

If Application A has a single database server, then these questions are straightforward to answer. If Application A has databases dispersed across multiple instances, however, then the question becomes more time consuming to answer. The temptation is to connect to each instance individually and copy and paste the query.

It is possible to make use of Central Management Servers, to simplify answering this kind of question, however. For example, if you are supporting a large, dispersed data-tier application, you can create a server group, containing each of the instances that support the application.

For example, in the image below, you can see that I have created two folders: one for the EMEA region and another for the US.

We can now register our servers and instances under the relevant group. Below, you can see that I use the New Server Registration dialog box to add servers. In this example, I will add two servers, named ESPROD1 and ESPROD2, to the Production | NTAM server group.

Imagine a scenario where you have a number of SQL Server instances, where the database files are hosted on a SAN (Storage Area Network). You come into work one morning, to find that there has been an issue with the production SAN and you need to check all databases, on all instances, for corruption. This can be easily achieved, using central management servers.
Because you have a separate SAN for each environment in each data center, you will only want to check the servers in the NTAM server group, as this is where the SAN issue occurred. Therefore, select New Query from the context menu of the NTAM server group. This causes a new query window to be displayed, and we will use DBCC CHECKDB to check the consistency of all databases on both instances. This is demonstrated below:

EXEC sp_MSforeachDB 'DBCC CHECKDB (?)' ;

This command executes the undocumented sp_MSForeachDB system stored procedure. We pass the command that we want to run against each database, into this procedure and use ? as a placeholder for the database name. This is automatically populated by sp_MSForeachDB, for each database in turn.
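Conceptually, sp_MSforeachDB just substitutes each database name for the ? placeholder and executes the resulting statement against each database in turn. A minimal Python sketch of the substitution step (the database names are invented for illustration):

```python
def foreach_db(command_template, databases):
    # Substitute each database name for the ? placeholder, as
    # sp_MSforeachDB does, and return the generated statements.
    return [command_template.replace("?", db) for db in databases]

for stmt in foreach_db("DBCC CHECKDB ([?]);", ["master", "AdventureWorks2016"]):
    print(stmt)
# → DBCC CHECKDB ([master]);
# → DBCC CHECKDB ([AdventureWorks2016]);
```

When the query is run against a Central Management Server group, this expansion happens independently on every instance in the group.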

Below is a partial output. You can easily see which results relate to each server, along with the execution context. You may also notice that the query status bar is pink instead of yellow and displays the name of the server group against which we ran the command.

To learn much more about Central Management Servers, including further examples of useful queries to run across instances, full details of how to configure them, and a demonstration of how to evaluate Policies against Central Management Servers or Server Groups, please refer to my new book, Expert Scripting and Automation for SQL Server DBAs.

Expert Scripting and Automation for SQL Server DBAs

My new book, Expert Scripting and Automation for SQL Server DBAs, is now available to pre-order on Amazon.


It is worth noting that the expected release date on Amazon is not correct. Publication is actually expected in August.

Monday, 18 January 2016

SSIS Toolbox in SSDT for Visual Studio 2015

I am currently writing my latest book, Expert Scripting and Automation for SQL Server DBAs, which Apress will publish later this year. The book is based on SQL Server 2016, and I am currently working on a chapter about SSIS.

I needed to use SSDT for Visual Studio 2015, so that I could use the pre-release version of the Azure Feature Pack, for SSIS 2016. When creating a project, however, I was rather disturbed to find that I had no controls in my Toolbox!

Apparently, there are two Toolbox windows, and SSDT defaults to the wrong one. You should navigate to View | Other Windows and open the SSIS Toolbox window. Normality will at once be restored.

Find my book, Pro SQL Server Administration, on Amazon.
