Monday, 11 July 2011

Finding Object Dependencies In SQL Server

In a sprawling database application, it can sometimes be a nightmare to alter the definition of a table, for fear that you will break an object that references it that you were unaware of. (Although this will all be detailed in your documentation, of course! ;-) )

But recently I found a great DMF, which is documented on MSDN but which I had never come across. Given the name of a table, it will list all of the Procedures, Functions, Views and Triggers that reference it, and you can join it back to other system objects, such as sys.sql_modules, to pull back other useful information, such as whether the object is schema-bound (which will potentially stop you altering the table, and throw an error when you run the ALTER script).

The DMF is called sys.dm_sql_referencing_entities, and the example below will list the details of all objects that depend on a table called mySchema.myTable, along with whether the object is schema-bound, and even the object's definition (providing it is not encrypted).

SELECT referencing_schema_name, referencing_entity_name, is_schema_bound, [definition]
FROM sys.dm_sql_referencing_entities ('mySchema.myTable', 'OBJECT') r
INNER JOIN sys.sql_modules m
                ON r.referencing_id = m.object_id
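As a practical extension of the query above (the table name mySchema.myTable is just an illustration, as before), you can filter on is_schema_bound to see only the objects that would actually block your ALTER TABLE:

```sql
-- List only the schema-bound referencing objects;
-- these are the ones that will cause an ALTER TABLE to fail
SELECT r.referencing_schema_name, r.referencing_entity_name
FROM sys.dm_sql_referencing_entities('mySchema.myTable', 'OBJECT') r
INNER JOIN sys.sql_modules m
                ON r.referencing_id = m.object_id
WHERE m.is_schema_bound = 1
```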

Find my book, Pro SQL Server Administration on Amazon -


United Kingdom

SSIS - Execute Process Task Hangs When Run From Server Agent

I was faced with an interesting problem recently. I had an SSIS Package that was stored in the Package Store on an ETL Server, and run from a Server Agent job on a Clustered SQL Instance.

If the package was run in Debug mode, it worked perfectly. If it was run from Windows Scheduled Tasks on the Cluster, it also ran perfectly, but when run from Server Agent on the Cluster, it failed.
Or to be more accurate, it didn't fail! In fact, it didn't do anything! It reached an Execute Process Task that was calling a decompression utility, and sat there: not working, not failing, not doing anything!

Well, I tried the usual things, such as:

Adding SSIS Server to Local Intranet Zone
Turning on Allow Unsafe Files
Using "God Account" to run job
Using "God Account" to Run As on the execute process
Making the owner of the .exe the same account as the execution context of the process
Disable attachment manager

I knew the package and executable worked, so I was sure that it was an OS-level setting, but what? Well, I struggled with this, with the aid of an infrastructure team and also MS Technical Support, but the answer finally came from the MSDN Forums, where it was suggested that I create an Environment Variable called SEE_MASK_NOZONECHECKS and set its value to 1.

This worked perfectly. It was an interesting situation, and I wanted to share the resolution with you all.


Sunday, 24 April 2011

Including Current Date In a Flat File Name - SSIS

There was a thread on MSDN Forums which, paraphrased, asked "How can I include the current Year and Month in a file name when I create a Flat File in SSIS?"

Although this is a relatively straightforward task, because SSIS is so visual it can be difficult to explain how to do things in Forum threads, so I decided to produce this short walk-through video. Hope you enjoy it, and find it useful...


Find a User's Effective Permissions

There was a recent post on the MSDN forums that, paraphrased, asked how to find out what permissions a User has on all Objects, at all levels, including those granted by their role memberships, etc.

Now, I am assuming that this is a fairly common security auditing task, so I scanned Google to dig out a script that would do it, but to my amazement I could not find one, so I decided to write one. It is a pretty simple script, using the fn_my_permissions function and a couple of well-known system tables.

In order to run it, you will need to be able to impersonate the login that you are running it for, and of course, you will also need the appropriate permissions to the system tables.

Feel free to modify the script, but please give due credit. This will work with 2005 and 2008, but I actually wrote it in Denali, so it is definitely forward-compatible. Enjoy...

 SELECT t.name, a.subentity_name, a.permission_name
 FROM sys.objects t
 CROSS APPLY fn_my_permissions(QUOTENAME(t.name), 'OBJECT') a

 SELECT d.name, a.subentity_name, a.permission_name
 FROM sys.databases d
 CROSS APPLY fn_my_permissions(QUOTENAME(d.name), 'DATABASE') a

 SELECT @@SERVERNAME, a.subentity_name, a.permission_name
 FROM fn_my_permissions(NULL, 'SERVER') a
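To run the checks for a different user, you can wrap the script in impersonation, something along these lines (the login name here is purely hypothetical):

```sql
-- Impersonate the login being audited, run the permission checks, then revert
EXECUTE AS LOGIN = 'MyDomain\AuditedUser';

SELECT @@SERVERNAME, a.subentity_name, a.permission_name
FROM fn_my_permissions(NULL, 'SERVER') a;

REVERT;
```

Remember that, as mentioned above, you need the IMPERSONATE permission on the login for this to work.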



Saturday, 23 April 2011

Head In The Clouds

In Paul Randal's recent SQLskills Insider e-mail, he talked briefly about SQL Azure, and how he had not seen his Clients express a large appetite for it "yet". That inspired me to write this post (which I have been meaning to write for a couple of weeks), to share my thoughts on Azure.

So firstly, I just want to state my position on Cloud Computing in general, which for the most part is very positive. The opportunity to access your applications from any part of the world, with consistent performance and a massively reduced Total Cost of Ownership, has to be exciting, and there are many applications, such as Office and even Exchange, that I feel are a perfect fit for the Cloud.

SQL, on the other hand, is a different matter. Does it really fit in the Cloud, and if it does, what place does it hold? Will it ever be able to host Enterprise-class systems, or will it fall into the same category as virtualization, which is great for the most part, but does not fit well for SQL, especially for Enterprise solutions?

Well, because of these concerns, and because of the current immaturity of Azure, until recently I have shied away from playing around with it, and kept my focus on traditional SQL, until...

I recently spent several months working on a project to plan the up-scaling of an established 2TB local system to a global 32TB Enterprise-class system. The obvious choice for this system was a FastTrack Data Warehouse appliance, and I had a fantastic opportunity to do some POC work at Microsoft HQ (which I will blog about soon). The POC was successful, and I thought it was a done deal, but then an instruction came from a very senior level that we should be investigating putting the application in the Cloud.

Now my gut reaction here was wow! An opportunity to build an Enterprise-class database in the Cloud! But within seconds (and luckily before speaking), my professional head suppressed my geek head, and I articulated the answer that it would not be practical because of the 50GB size limit in Azure. This led to the suggestion that we could shard the data across multiple instances, which is true, but it seemed like trying to fit a square peg into a round hole. I knew at that point that the idea was unlikely to go further, and that we would probably stay with the FastTrack solution, but I felt it prudent to subscribe to Azure and have a play with it.

So when playing with Azure, the thing that surprised me was how limited the product currently is. A lot of the limitations surround Administration, and make perfect sense; for example, when your database is in the Cloud, why would you need to worry about High Availability? There is already a 4 9s SLA with MS, just by signing up. Some of the other limitations, however, are just a bit strange. For example, you cannot create heap tables, only tables with clustered indexes. Don't get me wrong, I'm all in favour of having a CI on all tables, but it just felt like a strange limitation. I also found that quite a few useful tools, such as Service Broker, are not supported, and even my favourite %%PhysLoc%% query would not compile!

According to a contact of mine, there are currently only about 20 people in the Azure product team, and as I already mentioned, the offering is still fairly immature, so I fully expect that a lot of functionality will appear over the next few months and years, so from a technical perspective, I don't feel that we are quite ready to put many systems in the Cloud "yet", but we may be soon. I also suspect that the size restrictions will be relaxed as bandwidth improves.

My main question mark around Azure (thinking medium-term) is more political. If you are storing client data, especially if it is from a sensitive sector, such as public service data, I cannot imagine those Clients, when asking the regular question "Where exactly are you storing my data?", liking the answer "Oh, it's in the Cloud..."

Now, I know that this stance is silly, because I would be willing to bet large amounts of money that data is much more secure in Microsoft's Cloud than it is in the vast majority of Data Centres, but what I'm not sure of is how long it will take, or how to even bring about the cultural change that would be required, especially as the people who will need to be convinced are likely to be non-technical.

My other question mark is a rather self-centred one, and revolves around (if I'm honest with myself) humans not liking change, because from a corporate point of view, it is actually a positive...

According to my contact, Microsoft's medium-term SQL Server policy will be to release features first to the Cloud, because they can easily be rolled back if problematic (a concern in itself, because the Cloud will always be the most up-to-date, but possibly also the most unstable, platform); then to release to Appliances (FastTrack is already on v3, PDW is out, and OLTP and BI appliances are on the way); and then finally to release to the traditional market.

Now this leads me to assume that, within the medium term, large SQL projects will either be hosted in the Cloud or on Appliances, and both of these environments are largely pre-configured and pre-optimized, so where does that leave skilled SQL professionals, who make a living out of performing those complex optimizations that will no longer be required?

Again, this is probably a silly stance, because there are bound to be a whole different set of issues that will require highly skilled professionals. For example, just because Windows 7 is a damn sight easier to work with and configure than DOS 6.1, does not mean that there is no market for Desktop Support Technicians or Network Engineers, but it does mean that more people are able to support it, which means that market rates are dramatically lower than they were 15 years ago.

Maybe I'm an eternal pessimist, but it is certainly food for thought. If you have any thoughts on the subject, please feel free to leave comments...


Wednesday, 13 April 2011

SQL Server Southampton User Group 13/04/2011

I was speaking at the Southampton SQL UG tonight, and as promised, here are links to the code samples that I used, plus some more in depth articles on the subjects I spoke about. Thanks to everybody for a good night...!

Shrinking Databases

Why not to shrink a database here
The world's worst maintenance plan? here

Using %%physloc%%

How to find what page your data is stored on here
Logical Order NOT Physical Order here

Allocation Units

What Are Allocation Units here
Querying Allocation Units here


Wednesday, 6 April 2011

The Curious Case Of The Phantom Last Log Backup!

I was extremely "busy" at work the other day. So much so that I was sat reading Paul Randal's blog when I came across an interesting article on how to tell if your database is really in Full Recovery; see here.

The general premise is that when you switch from the Simple to the Full recovery model, the model does not actually change until the log backup chain has been established with a backup. This is called pseudo-simple mode, and it can be recognised by a database that is in the Full recovery model showing a NULL value for last_log_backup_lsn in sys.database_recovery_status.

I was a little bored, so I thought that I would have a look and make sure that our OLTP databases were really in the Full Recovery model, and they were, but much to my surprise, I found that some of our VLDBs that were in the Simple Recovery model were showing a non-NULL value in the last_log_backup_lsn column of the sys.database_recovery_status catalogue view.

This confused me, and I asked Paul Randal if there was a pseudo-full mode, but there is not, so why was there a last_log_backup_lsn? Well, intrigued, I kept an eye on the transaction logs of the databases involved, by using SELECT * FROM sys.fn_dblog(NULL, NULL), which shows all log records in the active VLFs.

Day 1 - 7436 entries, day 2 - 762 entries.

This was good news! It meant that the log had been truncated, so the databases were behaving as if they were in the Simple recovery model. This was a relief, as my initial fear was that they were behaving as though they were in Full, and of course we were not taking Transaction Log backups, as we thought they were in Simple!

So the next test - Was this record of a last LSN only in the sys.database_recovery_status catalogue view or was it embedded more internally? To check this, I ran DBCC PAGE on the boot page of the database. The database ID happened to be 8, so the syntax was

DBCC PAGE(8,1,9,3)

Here, I turned on trace flag 3604, so that the DBCC output would go to the Messages window, rather than the error log. The parameters for DBCC PAGE represent the Database ID, the file number, the page number, and the level of detail. The database boot page is always page 9 of file 1; see here for more info on system pages.
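Put together, the sequence I ran looked like this (the database ID of 8 is specific to my system, as above):

```sql
DBCC TRACEON(3604);     -- redirect DBCC output to the Messages window
DBCC PAGE(8, 1, 9, 3);  -- database ID 8, file 1, page 9 (the boot page), full detail
```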

The results showed that the boot page also contained the LSN and time stamp of the last log backup, so the mystery continued. Where did it come from? Unfortunately, it was a production system that I was looking at, so I was not able to play around with it too much, but after a few hours of trying different things that I knew had occurred on the production system, I managed to reproduce the symptom, with the following script:

USE [master]
CREATE DATABASE [lsn_test] -- created with the Full recovery model

The results were as expected...

name recovery_model_desc last_log_backup_lsn
lsn_test FULL NULL
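The result sets shown throughout this post came from a check along these lines (a sketch; the exact query I used is not reproduced here):

```sql
SELECT d.name, d.recovery_model_desc, r.last_log_backup_lsn
FROM sys.databases d
INNER JOIN sys.database_recovery_status r
    ON d.database_id = r.database_id
WHERE d.name = 'lsn_test';
```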

So now...

BACKUP DATABASE [lsn_test]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup\lsn_test.bak'
WITH NOFORMAT, NOINIT, NAME = N'lsn_test-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10

Again, results as expected...

name recovery_model_desc last_log_backup_lsn
lsn_test FULL 25000000006200064

Next, was to switch the Database to Simple...

USE [master]
ALTER DATABASE [lsn_test] SET RECOVERY SIMPLE

Once more, predictable results...

name recovery_model_desc last_log_backup_lsn
lsn_test SIMPLE NULL

So now lets backup the database again...

BACKUP DATABASE [lsn_test] TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup\lsn_test.bak'
WITH NOFORMAT, NOINIT, NAME = N'lsn_test-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10

Once more, the correct results...

name recovery_model_desc last_log_backup_lsn
lsn_test SIMPLE NULL

So finally, what happens if we restore the database...?

RESTORE DATABASE [lsn_test] FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Backup\lsn_test.bak'

Ah, this doesn't look so straight forward...!

name recovery_model_desc last_log_backup_lsn
lsn_test SIMPLE 25000000011900001

So basically, SQL has picked up the last log backup LSN from the restore, but it is meaningless, and essentially a bug. So in the end, it turned out not to be a major issue, but it was certainly interesting tracking it down, and a big thanks to Paul Randal for helping me get my head around it!


Saturday, 2 April 2011

How To Query Allocation Units

Following my post explaining Allocation Units (see here), I thought I would do a quick follow-up post demonstrating how you can interrogate these structures. There are two ways of doing this.

The first (and documented) way is to use the catalogue view called sys.allocation_units. This view displays one row per allocation unit, for every table and index. It can be joined back to sys.tables, via sys.partitions, in order to retrieve the table name, as demonstrated below...

SELECT t.name, au.* FROM sys.allocation_units au
INNER JOIN sys.partitions p
     ON au.container_id = p.partition_id
     INNER JOIN sys.tables t
          ON p.object_id = t.object_id

...A portion of the output (run against AdventureWorks2008) is detailed below...

 ...This query is interesting, and shows you how large each of the allocation units is, in terms of pages. What is even more interesting, however, is the undocumented way to interrogate allocation units. This is with the use of sys.system_internals_allocation_units. Let's re-run the last query, using this catalogue view...

SELECT t.name, au.* FROM sys.system_internals_allocation_units au
INNER JOIN sys.partitions p
     ON au.container_id = p.partition_id
     INNER JOIN sys.tables t
          ON p.object_id = t.object_id

...Lets have a look at the results of this query...

...The results of this query are even more interesting. Here, alongside the page counts, we can also see the location of the first IAM page in the IAM chain, the index's root page, and the first page of the index. We can crack these values open by using the sys.fn_PhysLocFormatter() function, as below...

SELECT t.name,
     sys.fn_PhysLocFormatter(au.first_page) AS 'First Page',
     sys.fn_PhysLocFormatter(au.root_page) AS 'Root Page',
     sys.fn_PhysLocFormatter(au.first_iam_page) AS 'First IAM Page'
FROM sys.system_internals_allocation_units au
INNER JOIN sys.partitions p
     ON au.container_id = p.partition_id
     INNER JOIN sys.tables t
          ON p.object_id = t.object_id

...The results are shown below...

...Pretty cool hey? Be warned, however, that the sys.system_internals_allocation_units catalogue view is both undocumented and unsupported! This means that I can not vouch that the page locations it gives you will always be accurate. It is, however, a useful and interesting starting point.


What Are Allocation Units?

I found out this week that a couple of my colleagues were unfamiliar with Allocation Units, so I thought I would knock-up a quick post explaining what they were, and how you can use them to optimize performance. So first off - what are they?

Well, traditionally, SQL Server has had a fixed limit of 8060-Bytes for a row and 8000-Bytes for a column, as a single row cannot span more than one page. So, in more recent versions of SQL Server, it has been possible to push some of this data off-row, in order to let developers overcome this fixed limitation.

So now, every index, including the clustered index or heap, can have its data split across multiple allocation units. There are 3 types of allocation unit. The first is for your standard data, and this is called the IN_ROW_DATA Allocation Unit. This is the allocation unit that will hold the majority of your data, in most cases. The second Allocation Unit is known as the LOB_DATA allocation unit. This is used to store the older-type LOBs (Large Objects) such as Text, NText and Image. The final Allocation Unit, which is the most recent addition, is the ROW_OVERFLOW_DATA Allocation Unit. Also known as the SLOB (Small-LOB) Allocation Unit, this is used to store newer large values, such as Varchar, NVarchar, SQL_Variant (although this should be avoided!) and the .NET-based data-types, such as the geospatial data-types.

So, are there three allocation units per table? Oh no! There are a lot more than that! Every index can be partitioned, with a maximum of 1,000 partitions, and in SQL Server 2008 you can have up to 1,000 indexes, and the Allocation Units propagate down to the partition level, so it is actually possible to have 3 million Allocation Units. In SQL Server 2005, the limit is 750,000 Allocation Units, because you are limited to 250 indexes per table.

By default, LOBs (Text, NText and Image) are stored out of row. You can move them in row, by changing the value of 'Text In Row' by using sp_tableoption. Obviously, this will only bring in LOBs that will physically fit on the page. (i.e. They still can not be more than 8000-Bytes each, and 8060-Bytes in total), and you can also specify the maximum size for values to bring in-row. If you do not set this limit, the default is 256-Bytes.

By default, SLOBs are stored in-row (providing they will fit in the row, using the same calculations as above), but you can push them off-row by setting the 'Large Value Types Out Of Row' option using sp_tableoption. Unlike the 'Text In Row' option, it is not possible to specify a custom size limit. Either all values that will fit are stored in-row, or all values are stored out of row. The size limit is 8000-Bytes, and this cannot be changed.
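As a sketch of the two options above (the table name is assumed purely for illustration), both are set with sp_tableoption:

```sql
-- Store Text/NText/Image values of up to 1000 bytes in-row
-- (if no limit is given, the default is 256 bytes)
EXEC sp_tableoption 'dbo.myTable', 'text in row', '1000';

-- Push SLOB values (Varchar, NVarchar, etc.) off-row; no custom limit is possible
EXEC sp_tableoption 'dbo.myTable', 'large value types out of row', '1';
```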

So the next question, is when should you push data off-row? Well, before we tackle that question, we need to think about how much overhead is caused by pushing data off-row, and this calculation is fairly simple. If LOBs are stored off row, a 16-Byte pointer is stored in the row. If SLOBs are pushed off-row, then a 24-Byte pointer is stored in-row. This can get slightly confusing however, because if you specify the (MAX) operator for a Varchar or NVarchar, then SQL treats it as a LOB, rather than a SLOB, and only requires a 16-Byte pointer.

So the first, and most obvious, considerations (from an SLOB perspective) are: 1) Are the majority of SLOBs less than 24-Bytes? If so, there would be no benefit in pushing them off-row; in fact, it would be detrimental. 2) Are the SLOBs more than 8000-Bytes? If so, you have no choice but to store them off-row. 3) Do your SLOBs add up to more than 8060-Bytes? If so, again, you have no choice; they must be off-row.

But what about the cases in the middle? What about if your SLOBs are all between 24 and 8000-Bytes, and together they all fit within the 8060-Byte limit? How should you store them? Well, it depends on how you are using your data. If these SLOBs are going to be frequently queried, then you should store them in-row. If you do not, then you are increasing IO, and also forcing that IO to be random, as opposed to sequential, and random IO is not your friend!

If, on the other hand, the SLOBs are essentially just a data store, and you are only querying them infrequently, but the other columns in the table are being queried, then you should store the SLOBs off-row. This is because you will be able to fit more rows on a page, and this gives you two big advantages. Firstly, queries that need to read this table will have to read fewer pages (because there will be fewer pages to read). And the second reason is simply that you will use less memory. If you read fewer pages, then fewer pages will be read into the Buffer Cache, and memory pressure will be reduced.

Tuesday, 29 March 2011

The Five Horsemen Of The SQL Server Apocalypse!

Ok, so it should be four horsemen, but there was a two-way tie for fourth place. What can I say?

Thank you all for your votes in the What Is The WORST Performing Feature Of SQL Server poll (see this post).
You can see the results and percentages in a nice little chart at this link, and they are certainly interesting, so let's look at the winners in a bit more detail...

The clear winner is Auto Shrink / Scheduled Shrink. But why is this so bad? Well, basically, it kills performance, both during the shrink operation and, worse still, after the operation has finished. When you shrink a file, SQL Server starts at the end of the file and moves backwards, taking each page and moving it to the earliest available space within the file.

This is a resource intensive operation, and can cause performance issues whilst it is running, but also causes your indexes to become highly fragmented. I have already blogged about this in detail, so I won't repeat myself too much here, but please check out this post.

Next in the ranking is Autogrow logs by 1MB. Why is this so bad? Well, once again, there is a two-fold problem. Firstly, if your log is coming under space pressure, and therefore needing to grow, it can cause huge performance problems, as SQL needs to grow the Log file (in some circumstances) for almost every transaction. Remember, depending on the action you are performing, SQL may need to record before and after images of the data. I have also experienced issues in the past where, during large operations, your database suddenly ends up in Recovery, because it has not been able to keep up with the number of log grows required. In my case, this eventually resulted in the log becoming corrupt, which (the DB was nearly 1TB) meant over an hour of downtime, and of course the transaction being rolled back. See this post for some tips on Log File Optimization.

The other issue is Log File Fragmentation. Log fragmentation? What on earth is that? Well, basically, if you grow your log file in tiny chunks, then you end up with a massive number of VLFs (Virtual Log Files). On the VLDB I mentioned above, the client ended up with over 2000. The rule-of-thumb recommendation is not to have more than 50! See this post for more info.
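You can check how many VLFs a log currently has with DBCC LOGINFO, which returns one row per VLF:

```sql
-- Run in the context of the database you want to check;
-- the number of rows returned is the number of VLFs
DBCC LOGINFO;
```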

Very close behind, in 3rd place, came Functions in the Select list. Why not put a function in a SELECT list? That is what they are there for, isn't it? No! No, no, no, no, no, no, no! NO! :) If you put a function in a SELECT list, it needs to be evaluated separately, for every single row of the query, causing a "cursor like" effect, and often it cannot be included in the same execution plan. Let's have a look at an example, and how we could perform the task better...

Using the AdventureWorks2008DW database, I have created the following example, which is a reproduction of one of the worst SQL implementations I have seen. (The code has been changed to protect the guilty!) :)

The first thing I am going to do, is create some "dirty data" in the FactResellerSales table, to demonstrate why the code existed, with the following script...

  UPDATE [AdventureWorksDW2008R2].[dbo].[FactResellerSales]
  SET OrderQuantity = 0
  WHERE ProductKey = 351

  UPDATE [AdventureWorksDW2008R2].[dbo].[FactResellerSales]
  SET UnitPrice = 0
  WHERE ProductKey = 223

...Now to create the Function, which I will call SaftyDiv, and which will basically stop a divide-by-zero error occurring...

CREATE FUNCTION dbo.SaftyDiv(@FirstNumber DECIMAL, @SecondNumber DECIMAL)
RETURNS DECIMAL
AS
BEGIN
     DECLARE @ReturnValue DECIMAL

     IF (@FirstNumber = 0)
          SET @ReturnValue = 0

     ELSE IF (@SecondNumber = 0)
          SET @ReturnValue = 0

     ELSE
          SET @ReturnValue = @FirstNumber / @SecondNumber

     RETURN @ReturnValue
END

...So, there are so many things wrong about this function, I will not patronise you, or drive myself insane by listing them all, but lets see how it performs when we use it to select values from the table...

  SELECT [dbo].[SaftyDiv] ([OrderQuantity], [UnitPrice])
  FROM [AdventureWorksDW2008R2].[dbo].[FactResellerSales]

 SQL Server Execution Times:
   CPU time = 438 ms,  elapsed time = 1559 ms.

...If you look at the Execution plan for this, it looks very clean, but that is in fact, because SQL was unable to include the function in the same plan, and actually had to create a second plan for it!

So lets see what performance we get, if we tear down the buffer cache, and then rewrite this query to use a CASE statement in the Select list. To be honest, there are more elegant ways of doing it than this, but it's getting late, and this demonstrates the point...

  SELECT CASE WHEN ([OrderQuantity] = 0 OR [UnitPrice] = 0) THEN (SELECT 0) ELSE (SELECT [OrderQuantity] / [UnitPrice]) END

  FROM [AdventureWorksDW2008R2].[dbo].[FactResellerSales]

 SQL Server Execution Times:
   CPU time = 94 ms,  elapsed time = 1177 ms.

...So you can see, we got a 78.5% improvement in processor time (because it only needed to compile 1 plan, instead of 2), and we also got a 24.5% improvement in execution time. Remember, in this example we are only dealing with approx. 65,000 rows. Imagine if we started scaling that up to millions of rows!

Tied for fourth place were Encrypt all data with cell-level encryption and Cursors. Now, I must be honest, this was a surprise for me; I thought that cursors would be right up there with Auto/Scheduled Shrink. Why did I think that? Well, they are the bane of my life! With no disrespect whatsoever meant to .NET developers, you can always tell when somebody who is experienced in writing .NET code, but less so in SQL, has been writing SQL code. That is because they love cursors! And it makes perfect sense; in .NET languages, looping is often the best way to achieve your goals, but T-SQL is a set-based language, meaning it is optimized for performing an operation on multiple rows at the same time, as opposed to looping over a set of rows, which is exactly what a Cursor does.

There is a place in T-SQL for the use of Cursors, but these days it is a very small, limited place, and it basically only shows itself in situations where you need to iterate over a series of DDL objects, such as looping over indexes in a dynamic rebuild scenario.

For almost all other purposes, we have a better way of doing things. For example, if we need to produce a cross-tabulated query, then we have the Pivot and UnPivot operators. If we need to implement recursive logic, then we have Recursive CTEs. For concatenating rows into a string, or vice versa, we have tricks we can use with XML (see this post for an example), and for very complex logic, or string manipulation, we have CLR integration.
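For example, the XML trick for concatenating rows into a string, which would otherwise be a classic cursor job, can be sketched like this:

```sql
-- Build a comma-separated list of database names, no cursor required
SELECT STUFF((SELECT ',' + name
              FROM sys.databases
              ORDER BY name
              FOR XML PATH('')), 1, 1, '') AS DatabaseList;
```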

Let's look at an example of using a simple Cursor, versus a simple set-based solution to perform the same task, and see how they perform...

CREATE TABLE #Sales (
SalesAmount DECIMAL,
RunningTotal DECIMAL
)

DECLARE @SalesAmount DECIMAL,
        @RunningTotal DECIMAL

SET @RunningTotal = 0

DECLARE myCursor CURSOR FOR
SELECT SalesAmount
FROM FactResellerSales

OPEN myCursor
FETCH NEXT FROM myCursor INTO @SalesAmount
WHILE @@FETCH_STATUS = 0
BEGIN
      SET @RunningTotal = @RunningTotal + @SalesAmount
      INSERT #Sales VALUES (@SalesAmount, @RunningTotal)
      FETCH NEXT FROM myCursor INTO @SalesAmount
END

CLOSE myCursor
DEALLOCATE myCursor

SELECT SalesAmount, RunningTotal
FROM #Sales
ORDER BY RunningTotal

...To be honest, I don't know how long this would take to complete, because I got bored of watching it run after about 19 minutes (remember, there are only 65,000 rows in this table) and killed it, so I could run my set-based version!

So, how long did the set-based version take? Drum roll...

SELECT a.SalesAmount,
       SUM(b.SalesAmount) AS RunningTotal
FROM FactResellerSales a
INNER JOIN FactResellerSales b
ON (a.SalesOrderNumber = b.SalesOrderNumber
        AND a.SalesOrderLineNumber = b.SalesOrderLineNumber)
GROUP BY a.SalesAmount
ORDER BY RunningTotal

...Less than 1 Second! I rest my case! :)
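As an aside, Denali extends the windowed aggregates, so a true running total can be expressed directly, without the self-join; a sketch of what that looks like:

```sql
SELECT SalesAmount,
       SUM(SalesAmount) OVER (ORDER BY SalesOrderNumber, SalesOrderLineNumber
                              ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM FactResellerSales;
```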

So finally, what is wrong with using cell-level encryption? Well, nothing, if it is used in moderation, and it is often a necessary evil in order to meet regulatory requirements, but if you over-indulge, then you have a problem.

If you need to encrypt a CreditCardNumber column to meet a regulatory requirement, then this is fine. Use a symmetric key, and avoid encrypting that key with an asymmetric key, and that with a certificate, and so on! But do not go right ahead and encrypt the entire table when there is no requirement to do so, other than a manager "thinks it might be a good idea". As a technical person, it is your responsibility to point out the limitations of technology to the business. So if you, as I have, had a Director telling you that they want the salaries encrypted, to avoid the slightest risk that somebody might find out what a disgustingly large bonus they get, the correct answer is something along the lines of my standard reply... "Yes Sir, certainly, you are the customer, and if that's what you want, then I can make it happen. However, please be warned that depending on the encryption algorithms used, this can cause a performance degradation of up to 45-50% when accessing that data, and can cause data-bloat of up to 4000%!" That is normally enough to make people see it your way! ;-)
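For the "used in moderation" case, here is a minimal sketch of encrypting a single column with a symmetric key (the key name, table, columns and password are all illustrative):

```sql
CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'Str0ng_P@ssw0rd!';

OPEN SYMMETRIC KEY CardKey
    DECRYPTION BY PASSWORD = 'Str0ng_P@ssw0rd!';

-- Encrypt on the way in...
UPDATE dbo.Customers
SET CreditCardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CreditCardNumber);

-- ...and decrypt on the way out
SELECT CONVERT(VARCHAR(25), DecryptByKey(CreditCardNumberEnc)) AS CreditCardNumber
FROM dbo.Customers;

CLOSE SYMMETRIC KEY CardKey;
```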

I hope you find these results as interesting as I did.

Find my book, Pro SQL Server Administration on Amazon -


United Kingdom

Sunday, 20 March 2011

What Is The WORST Feature Of SQL Server?

I am planning a post on "How To Kill Your SQL Server Without Even Trying!" and I would like to know what you think the worst features of SQL Server are (From a PERFORMANCE perspective ONLY!) I will do another post on worst practices (from a non-performance perspective) at a later date. If your favourite option is not listed, leave a comment.

Once I have your votes, I will include the most popular answers in a post, with examples.


Sexy SSIS In Denali - Usability Enhancements

Following the success of my last post on SSIS in Denali (see here), I have been wanting to do a post for some time, giving an overview of some of the nice new usability enhancements for Developers. Unfortunately, it is predominantly visual, so I have been putting it off, as really it requires a webcast, and 1) I hate the sound of my own voice! 2) I do not have any professional screen capture software. Today, however, I bit the bullet and recorded a short piece, that gives an overview of some of my favourite new features for developers. Most of them seem like small changes, but together, they make the product much easier to use.

I hope you enjoy it, but please bear the previous two caveats in mind!!! ;-)

If you enjoy this, and would like to see more webcasts, then please leave a comment to let me know. It's always good to know how to spend my blogging time!


How Do You Create A Database?

There was a question on MSDN, that, paraphrased, asked what steps you need to go through to create a database. Now, of course the exact steps will be different in every situation, but there are certain things that people often miss, so I thought I would compile a 15-point list of the high level steps that you should take -

1) Consult the business to decide on what data you need to store, estimated volumes, how it will be used, how many users, etc. (You should try to predict 3 years into the future)

2) Go through conceptual/logical design. i.e. Normalization, ERD diagrams, etc.

3) Decide what technology will suit your application best. Is this SQL Server (normally, in my biased opinion! ;-) ), or is it Oracle, MySQL, etc.

4) Decide what hardware spec you will require to support the database application, and what software versions you need. i.e. Windows, SQL editions, etc.

5) Design the physical table structure, including data types, compression, etc.

6) Design how you will get your data in and how you will get your data out of the database. This will involve logical steps agreed with the business, and may include physical technologies, such as SSIS, Stored Procs, Functions, Endpoints, Linked Servers, BCP, etc, etc.

7) Create physical database, specifying files, filegroups, etc.

8) Create physical structures, such as tables, programmable objects, etc.

9) Design security policies, and ensure the principle of least privilege is followed.

10) Agree SLAs with the business owners

11) Design HA and DR strategies for the database, so you can set the appropriate Recovery Model, configure Mirroring, etc.

12) Go through SAT cycle on Dev environment. Test code functionality, performance, HA strategy, recovery times, etc.

13) Promote database to UAT environment through Backup/Restore, Scripts, or Copy Database wizard, etc.

14) Make sure the business fully tests and signs-off functionality.

15) Promote to Live.
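As an illustration of step 7, a skeleton CREATE DATABASE statement might look something like this (the database name, file paths and sizes are entirely hypothetical - yours should come out of steps 1 and 4)...

```sql
-- Hypothetical database, with data and log on separate drives,
-- plus a secondary filegroup for older data
CREATE DATABASE MyDatabase
ON PRIMARY
    (NAME = MyDatabase_Data,
     FILENAME = 'D:\Data\MyDatabase.mdf',
     SIZE = 512MB, FILEGROWTH = 128MB),
FILEGROUP Archive
    (NAME = MyDatabase_Archive,
     FILENAME = 'E:\Data\MyDatabase_Archive.ndf',
     SIZE = 1GB, FILEGROWTH = 256MB)
LOG ON
    (NAME = MyDatabase_Log,
     FILENAME = 'L:\Logs\MyDatabase.ldf',
     SIZE = 256MB, FILEGROWTH = 128MB)
```

...Pre-sizing the files, rather than relying on autogrowth, avoids a lot of fragmentation pain later on.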


Sunday, 13 March 2011

How Do Join Operators In SQL Server Work?

I was asked a question this week, that I could only answer at a very high level, so of course my instant reaction was to research a more detailed answer and blog about it! The question was "How do the join operators in SQL Server work?"

So first for a little background... In SQL Server, if you join two tables together, in the old days, there was only one way that the optimizer could perform this task, and that was with an operation called Nested Loops (or loop join). In more recent versions however, in order to support larger databases, with larger tables, two additional operators have been added. These are Merge Join and Hash Match (hash join). But how do these operators work?

Well, first up is Nested Loops. This operator will use statistics to decide which is the smallest of the two tables (in terms of rows) and will mark this table as the "outer table", and mark the larger table as the "inner table". For each row in the outer table, it will loop through every row in the inner table, looking for a matching key, based on the join column(s), and join the tables based on this value.

Next is Hash Match. This operator works by choosing one of the tables and running a Hash Function against the key column(s). It then puts these hashed values into an in-memory hash table, although if this becomes too large, it can be written to TempDB. It then runs the Hash Function against the second table, and joins the rows based on a match of the two hashed values.

Lastly is Merge Join. This operator relies on both of the tables being sorted by the join key. Because both keys are sorted in the same way, SQL can look at the first value in the first table and the first value in the second table. If they match, then it can output them to the result set; if not, then it discards the row with the lower value, reads the next row from that table, and repeats the process, until all of the rows have been processed.

So when is each operator best? Well Nested Loops is generally better for smaller tables, and also needs to be able to perform an index seek on the "inner table", to ensure any kind of performance.

Hash Match can be better for larger tables, because it requires less IO. This is because each key only needs to be read once, rather than multiple times. However, the hash function uses a fair bit of CPU, and the hash table can require a lot of memory, and if you are coming under memory pressure, it may need to write the table to TempDB, which can cause more IO, as well as the memory pressure, making it a less desirable option.

Merge Join is generally regarded as the most efficient of the operators for larger datasets, because it only needs to read the keys once, and also does not require a lot of CPU cycles. However, it does require both tables to be sorted. If they are (especially by the Clustered Index) then the operator is very fast, but if they are not, then SQL needs to perform a sort operation on them, before they enter the join operation. This, of course, is very inefficient.

It is possible to "override" the optimizer's choice of operator, by using OPTION (LOOP JOIN), OPTION (HASH JOIN) or OPTION (MERGE JOIN) at the end of your query. However, remember that the optimizer is very smart, and does not make that many mistakes. Updating Statistics and rebuilding indexes is usually the best way to ensure the optimizer uses the correct option. If you really do need to use these hints, however, then you should always work with SQL, rather than against it.

What do I mean by that? Well, if we take the example that SQL is deciding to use a Merge Join, but we know that this will never be the most effective option, because one of the tables (for some reason) will never be sorted by the join key, then instead of telling the optimizer that it must use a Hash Match, we should tell it to use anything it wants except a Merge Join. We can do this, by comma separating the list of operators that it can use. In this example, we would say OPTION (HASH JOIN, LOOP JOIN)
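For example, against two hypothetical tables, the hint below tells the optimizer that it may pick either a Hash Match or Nested Loops, whichever it estimates is cheaper, but never a Merge Join...

```sql
-- dbo.Orders and dbo.Customers are hypothetical tables
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders o
INNER JOIN dbo.Customers c
    ON o.CustomerID = c.CustomerID
OPTION (HASH JOIN, LOOP JOIN)
```

...This leaves the optimizer some freedom, rather than forcing a single operator on it.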


Sunday, 6 March 2011

System Pages In SQL Server

As you know, SQL Server's data files are made up of 8KB pages, that are organized into 64KB Extents. There are several different page types, for Data, Indexes, LOB data and IAMs (Index Allocation Maps), which are the root pages for partition/allocation unit combinations, but there are also several other pages, that SQL Server maintains internally, in order to operate quickly and correctly.

The first of these page types, is the Database Boot Page. This is always Page 9, in file 1 of your database, and includes information such as the current and original database versions, the LSN (Log Sequence Number) of the last Checkpoint, and the Database Mirroring ID.

The second page type is the File Header Page, and this is always page 0 in every file. This also contains a raft of useful information, with some of the highlights being the auto growth size, the minimum file size, the HDD sector size and the LSNs/GUIDs required for restore operations.

Page 1 of every file is a PFS (Page Free Space) page, that is a byte map, detailing how full each page is. Obviously, there is a limited number of pages that can have their space recorded within an 8060-byte byte map, so this page is repeated on every 8,088th page. The information stored for each page recorded in the byte-map includes:

The amount of free space on a page
Are there Ghost records (deleted rows) on the page?
Is the page an Index Allocation Map?
Is the page part of a mixed extent?
Has the page been allocated?
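Incidentally, if you want to look at one of these pages for yourself, the undocumented (and unsupported - so sandbox servers only, please!) DBCC PAGE command will dump it, providing you first redirect its output to the client with trace flag 3604...

```sql
-- Send DBCC PAGE output to the client, rather than the error log
DBCC TRACEON (3604)

-- DBCC PAGE (database, file id, page id, print option)
-- Page 1 of file 1 is the first PFS page; 'MyDatabase' is a hypothetical name
DBCC PAGE ('MyDatabase', 1, 1, 3)
```

...Print option 3 gives the most detailed output, including the per-page allocation status.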

On pages 2 & 3 we begin the GAM Interval. This consists of a GAM (Global Allocation Map) and a SGAM (Shared Global Allocation Map) page. These two pages are bit-maps, that record the status of the following 64000 extents, which equates to around 4GB of data.

The GAM page is a record of which extents have been allocated. 0 = Allocated, 1 = Unallocated.

Then, the SGAM page is a record of if there are free pages within a mixed extent or not. 0 = The extent is either full of mixed pages, or is a uniform extent, 1 = There is one or more pages free in the mixed extent.

It is then possible for SQL to combine the bits from each of these bit-maps to determine if an extent is uniform, mixed or unallocated, so that it knows which mixed extents can be used for new or small objects and which extents are free to be assigned uniformly to larger objects.

Just taking a step back, SQL Server (by default) uses both mixed extents (where different pages are assigned to different objects) and uniform extents (where all pages within the extent are assigned to the same object). The rule it uses, is that when an object is first created, it will use a mixed extent, but once the object has grown to a point where it would fill an entire extent, it switches to uniform extents. It never switches back however, even if you truncate a table.

It is possible to change this behaviour, by turning on Trace Flag 1118. This is generally used to alleviate contention for system pages in TempDB, and will force SQL to only use uniform extents, even when a small object is first created.

Anyway, back to the point - The last two system pages of note, are the DCM and MLM pages. These pages can be found at pages 6 and 7 within a file respectively, and contain information that helps with Disaster Recovery. Like the other bitmap pages, they are repeated, every 511,232nd page.

The MLM (Minimally Logged Map) page, also known as the BCM (Bulk Changed Map) page, is a bitmap, that specifies whether a page has been modified by any minimally logged transactions, since the last transaction log backup. This helps SQL quickly determine what changes have occurred since the last transaction log backup.

Finally, the DCM (Differential Change Map) page is a bitmap that specifies if a page has changed since the last full backup. This means that when you take a Differential backup, SQL can easily determine which pages it needs to backup.

Denormalizing A Column Into A String

If you wanted to take query results and turn them into a string, how would you go about it? Traditionally, you may use a dreaded cursor/while loop, or at best, a complex, poorly performing UDF. But since the introduction of SQL 2005 and native XML, there is a nice little trick you can use.

As an example, I am going to use a scenario from a question I answered on MSDN. Basically, the requirement was to take a list of values, turn them into a string, and then insert the average value into the middle. So the first thing I am going to do, is use a Table Variable to generate the list...

DECLARE @values table(col1 float)

INSERT INTO @values VALUES(2), (2), (3), (4), (5)

...Next, I am going to find the average value, and insert this into the list. For simplicity, I will use a second table variable...

DECLARE @values2 table(col1 float)

INSERT INTO @values2
SELECT AVG(col1) AS col1 FROM @values
UNION ALL
SELECT col1 FROM @values

...So, now I have the list of values I need in a table variable, it's time for the interesting bit. Basically, I am going to use the data() XQuery function, which returns a typed value for each item specified, inside a sub-query, with a FOR XML clause. The outer query will have no FOR XML clause, so the results will be returned as relational data...

SELECT (SELECT CAST(col1 AS decimal(3,2)) AS [data()]
        FROM @values2
        ORDER BY col1
        FOR XML PATH('')) AS ValueList

...The results are as follows. I have highlighted the average value in bold...

2.00 2.00 3.00 3.20 4.00 5.00

...Very cool eh?

Thursday, 24 February 2011

SQL Server Schedulers and CPU Performance

For most applications, Windows handles (as you would expect) the scheduling of work onto each CPU. This uses a preemptive scheduling approach, which means that Windows will give each process a priority based on a combination of resource usage, interactivity, activity and user preference.

Once a process has reached its turn, Windows assigns it to a core and it begins, but the preemptive approach means that Windows is able to forcibly make this process relinquish the CPU and let another process take over.

This works well for most applications, but not so well for SQL Server. This is because Windows treats SQL Server's worker threads with the same priority algorithms as any other processes within Windows, and this can lead to lots of context switching.

To get around this issue, SQLOS has its own CPU Scheduler, called the SOS Scheduler. This allows SQL Server to schedule its own processor workload, using a cooperative scheduling algorithm. This means that instead of forcing a thread to yield, it allows threads to yield voluntarily.

But hold on a moment, surely, this could lead to a situation where a runaway thread just takes control of the entire system and does not let anything else get a look in, right? Thankfully, no! Instead, SQL has a process called the Scheduler Monitor, which is a thread that will periodically check to make sure that active schedulers have yielded since the last time it checked. Unlike other SQL threads, this thread works in preemptive mode, so it cannot cause the above issue itself.

So how many schedulers do you have? Is there a one-to-one mapping with CPU cores? Well, not exactly. There will be one scheduler for each CPU core in the server (even if SQL has been configured not to use them, by setting processor affinity), there will be one scheduler for the DAC (so that even if something has gone badly wrong with the schedulers, you can still connect) and then one scheduler for each Resource Monitor.

Resource Monitor is a process that helps control memory pressure in the cache, and there will be one for every NUMA Node within the system, regardless of whether you have hard or soft NUMA. (Non-Uniform Memory Access is a way of improving performance by splitting memory between processors or groups of processors, to avoid cache misses - but I digress further and further, so I will talk about this another time!)

You can see your schedulers by querying the sys.dm_os_schedulers DMV. This will give you lots of information, but there are some columns that are especially worthy of note.

First up, is a column called Status. Here you may see various options made up of ONLINE/OFFLINE and VISIBLE/HIDDEN. If a scheduler is Online, it means the SQL Instance has access to it. Offline, means that you have set CPU Affinity, and SQL is not allowed to use it. The reason it is set to Offline, rather than removed, is because it allows you to change your CPU affinity, without the need to restart the Instance. (Very cool!)

If a scheduler is Visible, it means that it is mapping directly to a CPU Core and will accept user requests. If it is set to hidden, it means it is associated with a Resource Monitor. The scheduler associated with the DAC is easy to identify, as it ends with (DAC) ! ;-)
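For example, a query along these lines will show you each scheduler, its status, and how busy it is...

```sql
SELECT scheduler_id,
       cpu_id,
       status,
       is_online,
       current_tasks_count,
       runnable_tasks_count,
       work_queue_count,
       pending_disk_io_count,
       preemptive_switches_count
FROM sys.dm_os_schedulers
```

...A consistently high runnable_tasks_count across the visible schedulers is a classic sign of CPU pressure.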

The next interesting column is preemptive_switches_count. Now, you will remember me telling you earlier that SQL threads are cooperative, rather than preemptive, so why do we have this counter? Well, if SQL needs to run anything outside of the SQL Server environment (such as distributed queries, or certain CLR objects, etc), then it must switch to preemptive mode, so that Windows can schedule it to run.

runnable_tasks_count brings some context to our understanding of SQL Waits. A request in SQL can be in one of three states: RUNNING, RUNNABLE or SUSPENDED. Running is fairly obvious. Suspended means that it is waiting for a resource to become available (this is where you see wait types associated with disc, such as PAGEIOLATCH_*, network waits, such as ASYNC_NETWORK_IO, and transaction log bottlenecks, such as WRITELOG - see this post, etc). Runnable means that it is ready to run, but waiting for a scheduler to become available.

work_queue_count shows you the number of tasks that are assigned to the scheduler and waiting to run. pending_disk_io_count shows the number of IOs that the scheduler has that are waiting to be completed. These can be helpful for diagnosing resource pressure.

Whilst we are on the subject of resource pressure, it is quite common in multi-processor servers to see a large number of CXPACKET Waits. These waits can often be a symptom of badly tuned queries, and can be resolved by introducing proper indexing strategies, etc, but sometimes they can occur as a result of parallelism being a bottleneck. But parallelism is good right? Well not for OLTP systems. It would be nice if there was a way to tell SQL the type of workload to expect, but there isn't. If your queries are tuned, and you are still getting lots of CXPACKET Waits, then basically a query is running in parallel, and one core has already finished its work, and is waiting to receive a packet from another core, so that it can synchronise, etc. The way to get around this, is to reduce the MAXDOP setting for the query (or the whole instance in some cases).

If however, you reduce your MAXDOP setting, then you can suddenly see your CXPACKET Waits being replaced by SOS_SCHEDULER_YIELD Waits. This is because now you have reduced the parallelism, you are coming under CPU pressure on the cores that are still being used. Basically it is telling you that a thread has voluntarily yielded, so that another thread can take over, and is now waiting for its next turn on the Scheduler.
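As an illustration (the table name is hypothetical), MAXDOP can be reduced per-query with a hint, or instance-wide with sp_configure...

```sql
-- Per-query: restrict this statement to a single scheduler
SELECT ProductID, SUM(Quantity) AS TotalQuantity
FROM dbo.OrderDetails
GROUP BY ProductID
OPTION (MAXDOP 1)

-- Instance-wide: cap parallelism at 4 for all queries
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE
```

...The query hint is usually the safer option, as the instance-wide setting affects every workload on the box.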

So if you are coming under CPU pressure, what can you do about it? Well, that is another long blog for another time, but here are a few suggestions.

1) Check your benchmarks to see if you can establish a usage increase pattern (You are taking regular benchmarks, right?) You may find that it is a natural curve, or you may be able to correlate a sharp spike to a particular release, and then go troubleshooting...

2) Run SELECT * FROM sys.dm_exec_query_stats a CROSS APPLY sys.dm_exec_query_plan(a.Plan_Handle) ORDER BY total_worker_time DESC You can then click through to the query plans of the most expensive queries to see if they can be optimized.

3) Run a Perfmon trace simultaneously with a Profiler trace. You can then correlate the two traces in Profiler to identify the queries causing CPU pressure and investigate.

I hope you enjoy reading this post as much as I enjoyed writing it. After a week spent playing with a very large FastTrack at a Microsoft Lab, I felt the overwhelming urge to write something geeky! :)

Monday, 21 February 2011

Is Dynamic SQL ALWAYS Bad? - Follow Up

Last week, after a chat with Kimberly Tripp, I wrote this post. Kimberly has turned this conversation into two SQLMag articles, which probably explain the situation slightly more eloquently than I did...

See here and here. :)

Tuesday, 15 February 2011

SSIS In Denali. Sexy! But Substance?

In Denali, SSIS has been given a face lift, and it looks pretty sexy, that's a given. But for my first post on the subject, I wanted to look instead at some of the new functionality to see how it performed, and one of the biggest new features in terms of functionality, is the Integration Services Database, so...what does it give us?

Well, I started by creating a simple SSIS package in the VS 2008 Shell. (We have been promised VS 2010 shell for RTM!) I used the new Source Assistant and Destination Assistant to create a data flow that just pumps rows from one table to another.

Then, in SSMS, I created an Integration Services Catalog, by right clicking on the Integration Services folder. Two points of note here were that it made me enable CLR, and it enforced a complex password...

 ...Then I created an organisational folder, under the catalog...

...I then tried to import the package from the context menu of the folder. It did not work, but to be fair, I am using CTP1! So I went back to BIDS and used the new Deployment Wizard (I'll blog about this soon, because it is quite good, and very, very different!) and deployed the project to the SSIS database. Denali uses the concept of deploying projects, much more rigorously than previous versions, as many of the new features (such as parameters) are based at the project level.

After deployment, The project and package appeared under my folder. Right clicking the package allowed me to run a validation process, shown below. I could also have done this at the project level...

...Everything seemed Ok, so I ran the package. This gave me a jazzed up "Progress Window", which furnished me with status information in real-time...

...After package execution, a right click on the Catalog, allowed me to click the operations button, which gave me a history of all the actions that I had performed so far...

...I then found a whole raft of extra information in the SSISDB Database, that was automatically created when I created the catalog. Some of the most interesting tables included Executions table, the Catalog_Property table and the Data_Type_Mapping table, all shown below...

...Finally, a quick look at the Properties of the Project, revealed the Previous Versions Window. This allows us to see, and easily roll back to, a previous version of the package, in the event that you had not tested your code properly! (Not that that would ever happen of course!)...

...So in conclusion, there will be many more Denali SSIS posts to come, but I'm sure you can tell from this post alone, that DBAs are going to love it!!!

When Is Data Written To Disc?

When you run a query in SQL, the first thing that happens, is SQL checks to see if the data pages that it needs are currently in the Buffer Cache or not. It does this by reading an in-memory hash table, that records the dbid-fileno-pageno of each page that is in the cache at that time. You can see for yourself what pages are currently in your cache, by querying sys.dm_os_buffer_descriptors. If the pages are not in the cache, then it reads them in from disc.
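For example, something like this will show you how much of the Buffer Cache each database is currently using (each page is 8KB)...

```sql
SELECT DB_NAME(database_id) AS DatabaseName,
       COUNT(*) AS PagesInCache,
       COUNT(*) * 8 / 1024 AS CacheUsedMB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY PagesInCache DESC
```

...Be a little careful running this on a server with a very large Buffer Pool, as there is one row in the DMV per cached page.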

Once all the required pages are in cache, SQL will begin an Implicit Transaction (if an Explicit Transaction is not already open for the statement). It will then perform the required operations on the page(s). If this was a DML operation, it will also mark them as being dirty. (This just means that they have changed in memory, without being written to disc). It also writes the transaction records to the log cache.

When the transaction (either implicit or explicit) commits, all transaction records, up to and including the commit, will be written to the log. There is a caveat here however; It is not just the log records from this specific transaction that are written. In order to make the transaction durable, SQL will also have to flush other log records that are intermingled with this transaction. It is also worthy of note, that SQL never writes individual log records, it always writes in blocks of between 512 bytes and 60KB. This is called the "Log Block" and its size is managed dynamically, internally.

This process is called write-ahead logging, or WAL, and ensures database consistency, but it can cause a bottleneck, because SQL caps the amount of transaction log that can be "in-flight". (This means that the Log Manager has issued a write, but has not yet received an acknowledgement that the write has completed). The cap is different depending on your version of SQL, but in 2008, it is 3840KB, regardless of whether you are using a 32-bit or 64-bit version.

If this is causing a bottleneck, then you will notice lots of WRITELOG waits, when you query sys.dm_os_wait_stats. If this is the case, then you should try to optimize your transaction log. Details can be found in this post.
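A quick check of the cumulative WRITELOG waits since the instance last started looks something like this...

```sql
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG'
```

...Remember that these counters are cumulative, so it is the rate of change between two samples, rather than the raw totals, that really tells the story.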

Once the log records have been committed to disc, then SQL is free to flush the "Dirty Pages" to disc. It does not do this straight away however. These dirty data pages are written to disc when the Lazy Writer kicks in, or when a Checkpoint occurs.

A checkpoint occurs in a number of circumstances. Firstly, and most obviously, when you manually issue a CHECKPOINT statement; but also when a backup starts, when the server is shut down (gracefully), or when an ALTER DATABASE statement is run. In addition, if you are in Simple Recovery Model, a checkpoint will occur when your log is 70% full, and a checkpoint will also run periodically, depending on your setting for recovery interval, which allows you to specify a target recovery time.

The Lazy Writer process is a SQLOS thread, and its role in life is to keep the Buffer Pool clean, with enough space to add new pages. It does this by periodically scanning the buffer cache and keeping track of how often a page is referenced. If it decides that it is no longer useful to keep a page in the cache, (using an algorithm that tracks how often a page is used and even what type of page it is) it checks to see if the page is "Dirty"; if it is, it writes it to the data file. It then marks the page as being "free" in a linked list. When a worker thread needs a page, it will choose one from this list.


Saturday, 12 February 2011

Is Dynamic String Execution ALWAYS bad?

I have always worked on the principle that DSE is always evil, and should only be used as a very last resort. However, I have been having a really interesting conversation with Kimberly Tripp (find her Blog here) this weekend, and I may have changed my mind...

So first off, let's set the scene... Imagine that you have a multi-use stored procedure, that accepts multiple parameters, and different queries pass in widely different values, or even similar values with highly variable selectivity.

The first time you run the stored procedure, or the first time it needs to recompile, (due to stats updates, being kicked out of the cache, etc) SQL will use "parameter sniffing". This is where it builds the plan based on the first set of values you pass in. If, the next time you run the procedure, you give a value with a very different level of selectivity (i.e. The first time, you pass in 'Banana Custard' and the second time, you pass in 'B%') then SQL will use the original execution plan, which will obviously not be the optimal one.

To get around this issue, you can use local variables in the stored procedure. The problem with this, is that the local variables are unknown at compilation, which means that SQL will use statistics that are based on an overall average of values, rather than from the histogram. This means that once again, you can end up with a sub-optimal execution plan.

So, to get around this, we could use OPTION(RECOMPILE), which means that we force SQL to use a "one-use" plan, and you will get a new plan every time it runs. The issue here, is that OPTION(RECOMPILE) has several issues of its own, including (depending on service pack level) the potential to return incorrect query results.
So to get around this, (and yes, we are finally there...!) use dynamic string execution with EXEC(@Query). Although it is possible for this plan to be reused, it is pretty unlikely, and it is essentially treated like an ad-hoc query.

Using this technique comes with its own challenges, not least the risk of SQL Injection, but you can mitigate these risks with simple techniques, such as ensuring you use the "principle of least privilege", using the EXECUTE AS clause in your procedures, using the QUOTENAME() function on parameters you pass into the procedure, etc.
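To illustrate (the table and procedure names are hypothetical), those mitigations might look something like this. QUOTENAME() with a single-quote delimiter wraps the value as a literal and doubles any embedded quotes, which closes off the classic injection route (note that QUOTENAME() only accepts inputs of up to 128 characters)...

```sql
CREATE PROCEDURE dbo.SearchProducts
    @ProductName NVARCHAR(50)
WITH EXECUTE AS OWNER -- callers only need EXECUTE on the procedure
AS
BEGIN
    DECLARE @Query NVARCHAR(MAX)

    -- Embed the value as a safely quoted literal, so each level of
    -- selectivity gets its own "one-off" plan
    SET @Query = N'SELECT ProductID, ProductName, Price
                   FROM dbo.Products
                   WHERE ProductName LIKE ' + QUOTENAME(@ProductName, '''')

    EXEC (@Query)
END
```

...So the caller gets the benefit of an ad-hoc style plan for each search term, without being granted any rights on the underlying table.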

So in conclusion, yes there are times, when dynamic string execution is the best method. You learn something new every day!

A big thank you to Kimberly for helping me get my head around this one!

Sunday, 6 February 2011

Deleting Duplicate Rows

Myself and two of my colleagues were lucky enough to be invited to meet Denise Drapper (Head of Data Integration at Microsoft) last week, and amongst other things, she gave us a tour of the upcoming DQS (Data Quality Services) that is set to ship with Denali. Basically, it supports the kind of functionality that previously we would have to do manually, by using the Fuzzy Lookup component, etc in SSIS, but this is using different algorithms and is all wrapped up into a nice easy to use GUI, that makes suggestions about possible data quality issues, with a similarity score, and then asks you to confirm or deny the duplicate. It also learns from the work you have done previously, which is pretty cool.

Unfortunately, it doesn't look like I'm going to get my hands on this tech, to play around with it until CTP3 in the Autumn, however, so for now, we are stuck with the tools we have. And that brings me nicely to the point of this post. Basically, if you scan the Net for ways of deleting rows which are duplicated, but have different keys, there are several solutions, most of them derived from the Microsoft solution, which is not pretty, and involves moving data into # tables, etc.

In SQL 05 and 08, however, providing that there is a way of uniquely identifying each row on a series of columns, other than the unique key (even if this consists of every column in the table) there is a method that can delete duplicates in one statement, without any # tables. (Although it will still create structures behind the scenes in TempDB).

Lets assume that we have a table created with the following structure...

CREATE TABLE DeleteDuplicates
(
    Col1 INT,
    Col2 VARCHAR(20),
    Col3 VARCHAR(20)
)

...Now lets populate it with a few rows...

INSERT INTO DeleteDuplicates(Col1, Col2, Col3)
VALUES (1, 'Peter', 'Carter-Greenan'),
       (2, 'Peter', 'Carter-Greenan'),
       (3, 'Charles', 'Feddersen')

...Ok, so if we assume that in this table, business rules allow us to identify duplicates using Col2 and Col3, we can see that we have one duplicate. So now, we can run the following query to remove the duplicate row...

DELETE Dupes
FROM
(
    SELECT Col1, Col2, Col3,
        ROW_NUMBER() OVER(PARTITION BY Col2, Col3 ORDER BY Col2, Col3) AS RowNumber
    FROM DeleteDuplicates
) Dupes
WHERE Dupes.RowNumber > 1
...So there you have it. Dupes removed, no temp tables, one statement. What more is there to say?


Salvage The Tail End Of The Log

In the event of disaster, the first thing that any experienced DBA will do, is try to back up the tail-end of the log, in an attempt to salvage DML operations that have been made since the last log backup, but what if the server is inaccessible?
Well, the fix is "dirty" but effective. You basically need to "fudge" the log onto another server, in order to perform the backup.

First, on a second instance or server, create a shell database with the same name as the original. In this case...

CREATE DATABASE FudgeLog

...Now, set this database to be offline...

ALTER DATABASE FudgeLog SET OFFLINE

...Next, delete the data files and the log file that exist for the database on the second server. Now, copy the log file from the original server to the folder that the new database's files were in (before you deleted them).

If you now try to bring the database back on-line, you will receive an error (obviously, considering you have no data files and an incorrect log file!), but you will be able to perform a tail-log backup, with the following syntax...

BACKUP LOG FudgeLog TO DISK = 'D:\SQLServerDownAndDirty\FudgeLog\FudgeLog_Log.bck' WITH INIT, NO_TRUNCATE

...You can now restore this backup at the end of your log chain, on your original server. You will lose no data, keep your job, and sleep happily at night! ;-)
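
As a sketch of that final step on the original server (the full backup path here is an assumption for illustration; substitute your own backup chain), the restore might look like this:

```sql
-- Restore the most recent full backup (and any log backups), leaving the database
-- able to accept further log restores
RESTORE DATABASE FudgeLog
FROM DISK = 'D:\Backups\FudgeLog_Full.bck'
WITH NORECOVERY

-- Apply the salvaged tail-log backup last, then bring the database online
RESTORE LOG FudgeLog
FROM DISK = 'D:\SQLServerDownAndDirty\FudgeLog\FudgeLog_Log.bck'
WITH RECOVERY
```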


Optimizing Your Log File

Optimizing your transaction log could easily fill a 50 page white paper, or a week's consultancy, but here are a few points to get you started:

How many physical log files?
This one is easy...One! Exactly one, always one, the answer is one! There is an urban myth that having multiple log files will improve performance, but this is just a myth. Log files are written sequentially, so if you have multiple files, it fills one, then fills the next one, etc. Pointless!

How many virtual log files?
This one is a bit more complicated. Your physical log file is managed internally as a series of virtual log files (VLFs). See this post for more details. If you have too few, then the active portion of your log may be too large, leading to longer backup times. If you have too many VLFs, however, this leads to "log fragmentation" and can cause performance issues.

The general rule of thumb is to have VLFs sized at around 500MB. To achieve this, grow your log in 8000MB chunks; a growth increment of that size creates 16 VLFs, giving you 500MB each. If you already have log fragmentation, the way to resolve it is to shrink your log file (thus deleting the existing VLFs) and then grow it again, using the desired chunk size.
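
A minimal sketch of that shrink-and-regrow process follows. The database and logical file names are assumptions; substitute your own, and repeat the final ALTER in 8000MB increments until you reach your target size:

```sql
-- Count the VLFs in the current database (DBCC LOGINFO is undocumented, but widely used;
-- one row is returned per VLF)
DBCC LOGINFO

-- Shrink the log as far as possible, removing the existing fragmented VLFs
DBCC SHRINKFILE (MyDatabase_Log, 1)

-- Grow it back in 8000MB chunks, creating 16 VLFs of roughly 500MB per chunk
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_Log, SIZE = 8000MB)
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_Log, SIZE = 16000MB)
```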

I/O Subsystem
This breaks down into two parts. Firstly, make sure that you put the log file on a separate physical disc to your data files. Otherwise you will get I/O contention.

The second part is a little more complex: what RAID level should you use? Well, this depends on which features of SQL Server you are using. Before we begin, it may be beneficial to add some context by looking at the standard RAID levels and how many physical IOs per disc they generate for a given number of read (R) and write (W) operations.

RAID 1 - IOs per disc = (R + (W*2)) / 2
RAID 5 - IOs per disc = (R + (W*4)) / No. of discs
RAID 10 - IOs per disc = (R + (W*2)) / No. of discs
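
As a quick worked example (the figures are made up purely for illustration), take a log workload of 100 reads and 400 writes, on a 2-disc RAID 1 mirror versus 4-disc RAID 5 and RAID 10 arrays:

```sql
DECLARE @R INT = 100, @W INT = 400

SELECT (@R + (@W * 2)) / 2.0 AS Raid1IOsPerDisc,   -- (100 + 800)  / 2 = 450
       (@R + (@W * 4)) / 4.0 AS Raid5IOsPerDisc,   -- (100 + 1600) / 4 = 425
       (@R + (@W * 2)) / 4.0 AS Raid10IOsPerDisc   -- (100 + 800)  / 4 = 225
```

Note how the RAID 5 write penalty almost cancels out the benefit of the two extra discs, while RAID 10 roughly halves the per-disc load.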

If you are just using SQL Server at a basic level, then RAID 1 is a good choice. This is because almost all activity on the log will be write activity; the only read activity will be backups and restores. In this scenario, RAID 1 is better than RAID 5 because for every write, there will be two physical writes to the volume with RAID 1, as opposed to four writes with RAID 5. It is also better than RAID 10 in this scenario, because RAID 1 will perform the operations as sequential IO, whereas RAID 10 will perform them as random IO.

However, this recommendation changes if you are using some of the more advanced features of SQL Server, especially the HA features. If you are using Database Mirroring, Replication or Log Shipping, then you are liable to have a much higher read-to-write ratio, and hence need better read performance, whilst maintaining write performance. In this scenario, RAID 5 is still a no-go, because you will double the number of physical writes for every write operation, from two to four, as discussed above, so your best choice would be RAID 10. RAID 10 will maintain the two writes per write operation ratio, but also give you improved (and more scalable) read performance. This is because the overall IOs will be divided by the number of discs in both RAID 1 and RAID 10, but in RAID 1 you always have exactly two discs, whereas with RAID 10 you will have a minimum of four, and as many as you require.

Log Manager Bottlenecks
One reason that your log can become a bottleneck is that, in SQL Server 2008, the Log Manager will only allow you to have 3840KB of log data that has been sent to be written, but for which no acknowledgement of the write has yet been received. Before SQL Server 2008, this limit was even lower.

Monitoring Log Performance
Within SQL Server, keep an eye on the sys.dm_os_wait_stats DMV. If you see lots of WRITELOG waits, then the log is probably coming under pressure.
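
A minimal check along these lines might look like the following (the counters are cumulative since the last instance restart, so trend them over time rather than reading them once):

```sql
-- Pull out the WRITELOG wait statistics accumulated since the instance started
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG'
```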

In Perfmon, you should be monitoring the obvious disc counters for your log drive, such as Avg. Disk Queue Length, Disk Writes/sec, etc., but you should also keep track of the log-specific SQL Server counters. In particular, pay attention to Log Flushes/sec and Log Flush Wait Time.

Together, these counters should give you an all-round idea of how your log is performing. From a performance aspect, the Log tends to be the forgotten relative, but be careful, as forgotten relatives have a nasty habit of turning up unexpectedly, and making a nuisance of themselves!
