Sunday 30 January 2011

How Clever Is your Index?

If you seek an index, would you expect SQL to seek the index once for each query, once for each operator, or once for each value? You would hope that it was once for each query, but is that true? Well, it depends. I've created a rather contrived example to demonstrate SQL's behavior.

Firstly, I created and populated a table, and then created an index on it, using the following script...

CREATE TABLE LogicalReadsTbl (Col1 INT)
DECLARE @i INT
SET @i = 1


WHILE @i < 100000
BEGIN
     INSERT INTO LogicalReadsTbl
     VALUES(@i)
     SET @i = @i + 1
END


CREATE NONCLUSTERED INDEX IX1 ON LogicalReadsTbl(col1)

...I have created a nonclustered index, but for this particular example, the behavior would be identical with a clustered index (although I will let you prove that for yourself).

Next, I need to make sure that my queries are going to span rows that are stored in multiple pages, so I ran a simple query, and included %%physloc%% (see this post) to see how my rows were organised.
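For anybody who wants to follow along, this is the sort of thing I mean - a minimal sketch using %%physloc%% and the undocumented sys.fn_PhysLocFormatter function to show the file:page:slot that each row lives on:

SELECT Col1, sys.fn_PhysLocFormatter(%%physloc%%) AS [File:Page:Slot] --undocumented, for investigation only
FROM LogicalReadsTbl
ORDER BY Col1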

Based on this, I chose to use values between 619 and 626, as the page break fell between 623 and 624.

Next, I ran sys.dm_db_index_physical_stats against my table, to double check the number of levels in the index. As suspected (because of size), there were 2 levels.
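If you want to check this yourself, a sketch along these lines will do it; index_depth gives the total number of levels, and the DETAILED mode returns one row per level:

SELECT index_depth, index_level, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('LogicalReadsTbl'), NULL, NULL, 'DETAILED')
WHERE index_id > 0 --just the nonclustered index, not the heap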

So the next thing to do was to turn on IO statistics. (Please bear in mind that because I ran a query including %%physloc%%, I am already expecting all pages of this table to be in the buffer cache, and hence I should see no physical reads, only logical reads.) Just for fun, I also turned on Include Actual Execution Plan, then I ran the following query and checked out the results...
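(For anybody following along, "turning on IO statistics" just means running this in the same session before the query below:)

SET STATISTICS IO ON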

SELECT COL1
FROM LogicalReadsTbl
WHERE Col1 BETWEEN 619 AND 626


Table 'LogicalReadsTbl'. Scan count 1, logical reads 2

...Perfect! SQL performed an index seek, and then, as the pages were next to each other, it read them as one block.

Ok, so now let's look at another query. This query will only return 2 rows, not 8, and read exactly the same pages, so we should have an identical query plan and identical IO stats...right???

SELECT
COL1
FROM LogicalReadsTbl
WHERE Col1 = 619 OR Col1 = 626


Table 'LogicalReadsTbl'. Scan count 2, logical reads 4


...Ok, that's not so good. Despite the same execution plan and the same environmental conditions, and despite the fact that this time we are reading a quarter of the number of rows, the IO has doubled. SQL has performed a separate seek for each value it required, rather than one seek for the whole query.

So be careful, sometimes SQL is not as clever as we like to think it might be!!!

Do you find this kind of thing interesting? Please let me know, as it's good to know how to use my blogging time!!!


Thursday 27 January 2011

Livelocks - Do they exist?

I have been involved in an interesting discussion over the last couple of days around livelocks, and whether or not they exist in SQL Server. The question arose because somebody had been asked the difference between a deadlock and a livelock in a job interview and had not been able to answer the question, and in fact, if you Google livelocks in SQL Server, 90% of the results are sites offering standard SQL Server interview questions.

So first off, what is a livelock? Basically, it is where a request for an exclusive lock on a resource is blocked by a shared lock, but then another shared lock is granted before the exclusive lock, and then another, and another, and so on, meaning that the exclusive lock is never granted.

But do they exist? Well, my initial reaction was no, because SQL's lock manager queues lock requests and grants them in order, and there are no scenarios where locks can "jump the queue". But I never like to state something as a fact unless I can prove it, and it is very difficult to prove a negative.

However, I thought that I would do my best to replicate this behavior and see what happened. So the first thing I did was create a table and populate it with the following script...

CREATE TABLE livelock (col1 int)
INSERT INTO livelock
VALUES(1)
GO 10000


...The next step was to try and create a livelock, so I opened three query windows, and in the first, I ran the following query...

BEGIN TRANSACTION
SELECT col1
FROM livelock WITH (TABLOCK, HOLDLOCK)


...I used TABLOCK to reduce the number of locks, to make it easier to monitor and HOLDLOCK to create a queue.

Then in the second window I ran...

BEGIN TRANSACTION
DELETE FROM livelock
COMMIT TRANSACTION


...And then in the third I ran the original query...

BEGIN TRANSACTION
SELECT col1
FROM livelock WITH (TABLOCK, HOLDLOCK)


...Now, in a separate query window, I ran a query against the DMV sys.dm_tran_locks, which looks at the lock manager and lets you see the lock waits in progress...
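A sketch along these lines returns the columns shown below; filtering on resource_database_id keeps locks from other databases out of the way:

SELECT resource_type, resource_associated_entity_id, request_mode, request_type, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()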

resource_type   resource_associated_entity_id   request_mode   request_type   request_status
OBJECT          1927677915                      S              LOCK           GRANT
OBJECT          1927677915                      IX             LOCK           WAIT
OBJECT          1927677915                      S              LOCK           WAIT

...As you can see, the second shared lock request is queued behind the IX request from the DELETE, rather than being granted ahead of it, so the exclusive request is not starved. Following this, MVP Tibor Karaszi confirmed that livelocks had been removed back in version 6 of SQL Server, so in this context, they no longer exist.

This was where I was planning to end the post, but whilst writing this article, I stumbled across this post by Bob Ward at Microsoft, which talks about the relaxing of the strict queuing within SQL Server from 2005 onwards. However, it also mentions that the logic has been designed to avoid starvation.

So in conclusion, SQL has logic to avoid Livelocks. It may be possible to force one to happen, but fundamentally they are very unlikely to occur.


Monday 24 January 2011

Nested Transaction? No such thing!

My post about transaction isolation levels has prompted me to have a good old rant about one part of the T-SQL syntax that drives me mad: the ability to create nested transactions, despite the fact that no such thing exists in SQL! Don't believe me? Check this out...

CREATE TABLE NestedTransactions
(Col1 int)

BEGIN TRAN --Outer Transaction
     INSERT INTO NestedTransactions
     VALUES(1)
     BEGIN TRAN --Inner Transaction
          INSERT INTO NestedTransactions
          VALUES(2)
     COMMIT TRAN --Inner Transaction
ROLLBACK --Outer Transaction

SELECT * FROM NestedTransactions

...So in the script above, we have committed the inner transaction, so even though we roll back the outer transaction, the NestedTransactions table should still include the value 2 from the inner transaction, right? Wrong...



...Ok, so what about if we try this script...

BEGIN TRAN --Outer Transaction
     INSERT INTO NestedTransactions
     VALUES(1)
     BEGIN TRAN --Inner Transaction
          INSERT INTO NestedTransactions
          VALUES(2)
     ROLLBACK --Inner Transaction
COMMIT --Outer Transaction

SELECT * FROM NestedTransactions

...This time, we have rolled back the inner transaction, but committed the outer transaction, so the NestedTransactions table should have one value, of 1, right? Wrong...



...And check out the message I received...

(1 row(s) affected)
(1 row(s) affected)
Msg 3902, Level 16, State 1, Line 9
The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
(0 row(s) affected)

...As you can see, when it rolled back the inner transaction, it rolled back everything, so when I tried to commit the outer transaction, there was nothing to commit.
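What is actually happening under the covers is that the "inner" BEGIN TRAN simply increments a counter, which you can watch with @@TRANCOUNT; a quick sketch:

BEGIN TRAN
SELECT @@TRANCOUNT --1
     BEGIN TRAN
     SELECT @@TRANCOUNT --2
     COMMIT TRAN --only decrements the counter, nothing is actually committed
     SELECT @@TRANCOUNT --1
ROLLBACK --rolls back everything and sets the counter straight to 0
SELECT @@TRANCOUNT --0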

The bottom line is that nested transactions do not exist!


Understanding Transactional Anomalies & Isolation Levels

This isn't going to be a particularly technical post, but I have seen some confusion lately about transactional anomalies, and the isolation levels that protect against them, so I first wanted to cover what the three main anomalies are, and then talk about how isolation levels protect against them.

The worst anomaly is a Dirty Read. This is where a transaction reads data that is subsequently rolled back by another transaction, and hence never existed in the database, as in the example below. This should never be allowed if business decisions are going to be made based on the data.


DIRTY READ

Transaction 1                              Transaction 2

BEGIN TRAN

INSERT INTO myTbl(Col1,Col2)
VALUES (1,2)
                                           BEGIN TRAN

                                           SELECT Col1, Col2
                                           FROM myTbl
                                           WHERE Col1 = 1 AND Col2 = 2

                                           COMMIT TRAN
ROLLBACK



The next anomaly is a non-repeatable read. This is where, within the context of one transaction, the same query is issued twice, but the second time it runs, the results are different, due to an UPDATE statement in a different transaction, as in the example below. This can cause an issue if you are performing complex calculations, such as compound interest.

NON-REPEATABLE READ

Transaction 1                              Transaction 2

BEGIN TRAN

SELECT Col1, Col2
FROM myTbl
WHERE Col1 = 1 AND Col2 = 2
                                           BEGIN TRAN

                                           UPDATE myTbl
                                           SET Col1 = 2
                                           WHERE Col1 = 1 AND Col2 = 2

                                           COMMIT TRAN
SELECT Col1, Col2
FROM myTbl
WHERE Col1 = 1 AND Col2 = 2

COMMIT TRAN



A phantom read is similar to a non-repeatable read, but relates to the other transaction inserting or deleting data, as opposed to updating it, such as in the example below. This will cause problems if you are performing very complex calculations, such as actuarial calculations.

PHANTOM READ

Transaction 1                              Transaction 2

BEGIN TRAN

SELECT Col1, Col2
FROM myTbl
WHERE Col1 = 1 AND Col2 = 2
                                           BEGIN TRAN

                                           INSERT INTO myTbl(Col1,Col2)
                                           VALUES (1,2)

                                           COMMIT TRAN
SELECT Col1, Col2
FROM myTbl
WHERE Col1 = 1 AND Col2 = 2

COMMIT TRAN



So how can we protect against these anomalies? Well, the main tool we have at our disposal is transaction isolation levels. Traditionally, SQL Server has supported the following isolation levels:
Read Uncommitted
Read Committed (DEFAULT)
Repeatable Read
Serializable
But since SQL Server 2005, the following isolation levels have also been provided:
Read Committed Snapshot (Becomes DEFAULT if turned on)
Snapshot
Read Uncommitted does not protect against any of the aforementioned anomalies, and works by not taking out any locks for read operations. It is known as an "optimistic" concurrency level.
Read Committed will protect against dirty reads, but will allow the other anomalies to occur. This level is the default, and works by taking out a lock for read operations, but then immediately releasing it. This is also an "optimistic" concurrency level.
Repeatable Read is a "pessimistic" concurrency level. It prevents all of the anomalies except phantom reads, and works by taking out locks for read operations and holding them for the duration of the transaction.
Serializable is also a "pessimistic" concurrency level, and additionally prevents phantom reads; it works by locking either an index range (if possible) or the entire table.
It goes without saying that if you choose either of the two pessimistic isolation levels, then you will encounter higher levels of contention, meaning more waits and worse...deadlocks!
So to get around this issue, Microsoft introduced Snapshot and Read Committed Snapshot. These isolation levels offer the same level of protection as Serializable and Read Committed respectively, but without taking out any locks!
So that's fantastic isn't it? Let's just use Snapshot for everything, right? Wrong! Like everything, there is always a trade-off, and in this case, it relates to how these two isolation levels work. They both work by taking advantage of an area of TempDB called the Version Store.
If you use Snapshot isolation, then at the beginning of every transaction, SQL takes a "snapshot" of all of the required data and puts it in the version store, and satisfies the queries from there. This has two issues. Firstly, you are generating a lot more I/O, so you must ensure that TempDB is properly sized and scaled, and that the I/O sub-system is man enough to handle the extra workload. The second issue is that this could potentially lead to "lost updates", which are not supported by SQL Server (if they were, this isolation level would be aptly called Chaos). This means that if another transaction updates the data in the base table while your snapshot transaction is updating its versioned copy of the same data, your snapshot transaction will throw an error and be rolled back.
Read Committed Snapshot also uses the version store, but this time it takes the snapshot at the beginning of every statement within the transaction, rather than at the beginning of the transaction itself. Worse than that, if you turn Read Committed Snapshot on, you are turning off Read Committed, which means that Read Committed Snapshot becomes the default isolation level, and is used for any transactions where an isolation level is not explicitly stated...Watch your server die!
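For completeness, both of the newer levels are enabled at the database level; a minimal sketch, assuming a hypothetical database called MyDatabase:

ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON   --allows SET TRANSACTION ISOLATION LEVEL SNAPSHOT
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON    --replaces Read Committed as the default behaviour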
Quite often, if you need a pessimistic isolation level, you just need it for part of a transaction, or for one or two specific tables. If this is the case, rather than using a transaction isolation level, you can use query hints, which give you much more granular control. More information about query hints and how to use them is available in books online.
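As a sketch of the difference (reusing the myTbl example from above), the first statement changes the isolation level for the whole session, whereas the table hint applies the same pessimistic behaviour to a single table in a single statement:

--Session-wide: every statement that follows uses Repeatable Read
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

--Granular: only this reference to myTbl behaves as Repeatable Read
SELECT Col1, Col2
FROM myTbl WITH (REPEATABLEREAD)
WHERE Col1 = 1 AND Col2 = 2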
I hope this cleared things up for anybody who was not familiar with this topic!


Friday 21 January 2011

Why Not To Shrink A Database

In my last post, about maintenance plans (see here), I spoke about the fragmentation caused by shrinking a database, and promised a follow-up post with a demo of the issue. Well, this is it!

The problem arises from the way the shrink occurs, and this applies to DBCC SHRINKFILE, DBCC SHRINKDATABASE and, of course, auto shrink. Basically, what happens is that SQL Server goes to the end of the file, picks up pages one-by-one and moves them to the first available space in the file. This can essentially reverse the order of your pages, taking perfectly defragmented indexes and perfectly fragmenting them.

For this demo, I am going to use a table I created in my post here about why not to use GUIDs as Primary Keys, as it gives me a near perfectly fragmented set of indexes that I can defrag, and then mess up again!

So here is the fragmentation (found using sys.dm_db_index_physical_stats) before I have done anything...
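(For reference, this is the same check as in the GUID post - a sketch, assuming the database is still called GUIDFragmentation:)

SELECT OBJECT_NAME(object_id), index_type_desc, avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('GUIDFragmentation'),OBJECT_ID('FragWithNewID'),NULL,NULL,'DETAILED')
WHERE index_level = 0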

(No column name)   index_type_desc       avg_fragmentation_in_percent   avg_page_space_used_in_percent
FragWithNewID      CLUSTERED INDEX       98.80952381                    72.04514455
FragWithNewID      NONCLUSTERED INDEX    96.96969697                    78.59694836
FragWithNewID      NONCLUSTERED INDEX    98                             64.22041018


...Almost perfect fragmentation, so let's rebuild these indexes, and check the stats again...

ALTER INDEX [PK_Table_1] ON [dbo].[FragWithNewID] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, SORT_IN_TEMPDB = OFF, DATA_COMPRESSION = NONE )
GO
USE [GUIDFragmentation]
GO
ALTER INDEX [IX_Col1] ON [dbo].[FragWithNewID] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, SORT_IN_TEMPDB = OFF, DATA_COMPRESSION = NONE )
GO
USE [GUIDFragmentation]
GO
ALTER INDEX [IX_Col2] ON [dbo].[FragWithNewID] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON, SORT_IN_TEMPDB = OFF, DATA_COMPRESSION = NONE )
GO


(No column name)   index_type_desc       avg_fragmentation_in_percent   avg_page_space_used_in_percent
FragWithNewID      CLUSTERED INDEX       0                              99.2190140845071
FragWithNewID      NONCLUSTERED INDEX    1.88679245283019               97.881504818384
FragWithNewID      NONCLUSTERED INDEX    3.03030303030303               97.3163825055597


...Ahhh, that's better. Now let's shrink the database and try again...

DBCC SHRINKDATABASE(N'GUIDFragmentation' )
GO

(No column name)   index_type_desc       avg_fragmentation_in_percent   avg_page_space_used_in_percent
FragWithNewID      CLUSTERED INDEX       73.7704918032787               99.2190140845071
FragWithNewID      NONCLUSTERED INDEX    45.2830188679245               97.881504818384
FragWithNewID      NONCLUSTERED INDEX    72.7272727272727               97.3163825055597


...Uh, oh dear! Let's not do that again!

The World's Worst Maintenance Plan


Below is the world's worst Maintenance Plan. Please try to ignore the fact that it is in Denali, and hence looks sexy!


This is a pretty standard maintenance plan. You might even have one like it in your environment, so what's so bad about it? Well, there are 3 issues, and they get progressively worse.

1) The first is the fact that the Index Rebuild Task in SQL Maintenance Plans is "dumb". Or in other words, it blindly goes through and rebuilds every index, regardless of whether it needs to be rebuilt or not. This is OK if you have a nice big maintenance window, but if you are in something approaching a 24/7 environment, as lots of us are these days, then you are likely to want to reduce your maintenance window as much as possible. Also, if you have Database Mirroring set up (especially in SQL 2005) you might be flooding the network with your log stream. Even if you don't have either of these considerations, I am still a great believer in not using more resources than you need to (see the sketch after this list for the kind of check I mean).

2) The second issue relates to the Update Stats Task. Now, not many people realise this, but when you rebuild your indexes, SQL Server takes advantage of having read 100% of the data, and updates the index stats based on a 100% sample. So at best, your Update Stats Task will be set to use a 100% sample, and you are merely doing the work twice. In the worst case, you will be using a 10% or 20% sample to update your stats, and not only will you be doing the work twice, but you are going to end up with worse stats than you had originally.

3) Shrink is evil - do not use it! I mean it. The smallest problem with shrinking your database is that you should be managing your space pro-actively, and shrinking the files will just mean they have to grow again. The much worse side effect is that shrinking your files will cause huge amounts of fragmentation, thus undoing all of the work your Index Rebuild has done.
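As a sketch of the kind of check I mean in point 1 (there are far more complete scripts freely available online), this generates REBUILD or REORGANIZE commands only for indexes that are actually fragmented; the 5%/30% thresholds and 100 page cut-off are the commonly quoted rules of thumb rather than gospel, and schema names are omitted for brevity:

SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(OBJECT_NAME(ps.object_id)) +
       CASE WHEN ps.avg_fragmentation_in_percent > 30 THEN ' REBUILD'
            ELSE ' REORGANIZE' END AS maintenance_command
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
INNER JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 5  --below this, leave it alone
  AND ps.index_id > 0                      --skip heaps
  AND ps.page_count > 100                  --tiny indexes are not worth the effort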

In my next post, I will do a demo of the fragmentation caused by shrinking a database.


To GUID Or Not To GUID?

Sometimes, I come across people who are in the habit of using GUIDs as Primary Keys (and building a clustered index on that primary key).

Well, the idea of a Primary Key is that it uniquely identifies records in a table, and the idea of a GUID is that it's unique, so that's fine, right? Wrong!

There are a couple of reasons for this, the first of which is fragmentation. If you create your GUID values by using a DEFAULT constraint that calls the NEWID() function, then the 16-byte values are generated in a random order, which means that in order to maintain the logical order of the rows (remember that a clustered index does NOT maintain the physical order of rows - see this post), SQL has to slot each new value in amongst the existing ones. This can lead to page splits, and thus fragmentation.

But it gets worse than this if you have non-clustered indexes on the table, because each of these indexes will need to have their pointers to the new CI pages updated, leading to extra overhead, and fragmentation of those indexes as well. You know I like to prove things, so let's take a look at an example...

The first thing I did, was create a database, with one table using the following script...

CREATE TABLE FragWithNewID
 (
 ID uniqueidentifier NOT NULL,
 Col1 nchar(10) NOT NULL,
 Col2 int NOT NULL
 )  ON [PRIMARY]
GO
ALTER TABLE FragWithNewID ADD CONSTRAINT
 DF_Table_1_ID DEFAULT NEWID() FOR ID
GO
ALTER TABLE FragWithNewID ADD CONSTRAINT
 PK_Table_1 PRIMARY KEY CLUSTERED
 (
 ID
 )

CREATE NONCLUSTERED INDEX [IX_Col1] ON [dbo].[FragWithNewID]
(
 [Col1] ASC
)

CREATE NONCLUSTERED INDEX [IX_Col2] ON [dbo].[FragWithNewID]
(
 [Col2] ASC
)


 ...Next, a simple script to insert some values into the table...

INSERT INTO FragWithNewID (Col1, Col2)
VALUES('aaaaaa', 1)
GO 10000


...So now, let's check out the fragmentation. To do this, I will use the DMV sys.dm_db_index_physical_stats (yes, I know it's actually a DMF, but like everybody else, I tend to use the term DMV collectively)...

SELECT OBJECT_NAME(object_id), index_type_desc, avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('GUIDFragmentation'),OBJECT_ID('FragWithNewID'),NULL,NULL,'DETAILED')
WHERE index_level = 0


...The results are staggering! Almost perfect fragmentation of all 3 indexes...

(No column name)   index_type_desc       avg_fragmentation_in_percent   avg_page_space_used_in_percent
FragWithNewID      CLUSTERED INDEX       98.80952381                    72.04514455
FragWithNewID      NONCLUSTERED INDEX    96.96969697                    78.59694836
FragWithNewID      NONCLUSTERED INDEX    98                             64.22041018



...So what about if you use NEWSEQUENTIALID(), rather than NEWID()? Well, it is the lesser of the two evils, as it will not cause anything like as much fragmentation, because the GUIDs will be generated in order, but you are still using a 16-byte value, rather than a 4-byte INT column with the IDENTITY property set! This means that your rows will take up more space on the page, meaning you will have more pages, and thus worse IO performance. Also, remember that the clustered key is not just stored in your CI, it is stored in your non-clustered indexes as well, so all indexes on the table will be less efficient.
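For reference, a sketch of switching the demo table's default over to NEWSEQUENTIALID() (it can only be used in a DEFAULT constraint, so it is bound to the column in exactly the same way as NEWID() was):

ALTER TABLE FragWithNewID DROP CONSTRAINT DF_Table_1_ID
GO
ALTER TABLE FragWithNewID ADD CONSTRAINT
 DF_Table_1_ID DEFAULT NEWSEQUENTIALID() FOR ID
GO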

I'm not saying UNIQUEIDENTIFIERs don't have their place, they certainly do, but that place is not in a clustered index! I saw one instance where a third-party application insisted that the PK was a GUID. The recommendation I made was to have the PK constraint on the UNIQUEIDENTIFIER, and then create the clustered index on a different column with a UNIQUE constraint. That column was an INT with the IDENTITY property set!
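A minimal sketch of that shape, using hypothetical table and constraint names - the GUID keeps the PK (enforced by a nonclustered index), while the clustered index sits on a narrow, ever-increasing IDENTITY column:

CREATE TABLE GuidPkTable
 (
 ID uniqueidentifier NOT NULL CONSTRAINT DF_GuidPkTable_ID DEFAULT NEWID(),
 ClusterKey int IDENTITY(1,1) NOT NULL,
 Col1 nchar(10) NOT NULL,
 CONSTRAINT PK_GuidPkTable PRIMARY KEY NONCLUSTERED (ID),
 CONSTRAINT UQ_GuidPkTable_ClusterKey UNIQUE CLUSTERED (ClusterKey)
 )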

There is always a way around things! :)

Tuesday 11 January 2011

How many Virtual Log Files do I have?

As you may know, a physical log file is made up of multiple Virtual Log Files (or VLFs). This allows SQL to manage space and transactions effectively. In fact, when you truncate a log, you are not actually physically deleting any data; instead, you are simply marking VLFs as inactive, so that they can be reused, and if you shrink a log, the smallest size you can shrink it to is the size of the active VLFs. By active VLFs, we mean VLFs that still contain open transactions.

This has the effect that when the end of the transaction log is reached, SQL moves back to the start of the log (the first inactive VLF) and begins to recycle. There is a misconception that if the loop of VLFs catches up with itself, you will receive a 9002 (log full) error. This is incorrect. SQL will actually grow the log, thus creating additional VLFs, and "skip" the active VLFs. When the loop next goes round, the "skipped" VLFs will be reused as normal (providing they have now been marked as inactive). The only exception to this rule is quite an obvious one...if the maximum size you have set for the log file has been reached, or the drive is full. Obviously, if either of those conditions is met, you will receive a 9002 error, as you would expect.

So, back to the point of this post...How many VLFs do you have? Well, this depends on the initial size of the log when you created it, how many times it has grown and by what size, and also whether you have shrunk the log file.

The math for the number of VLFs is as follows:

When the log is first created, if the log is less than 64MB, there will be 4 VLFs, if the log is larger than 64MB and smaller than 1GB then there will be 8 VLFs, and if the log is larger than 1GB, then there will be 16 VLFs.

Then, every time the log grows, the same rules will be applied. If you grow the log by less than 64MB, SQL will create an additional 4 VLFs, if you grow the log by between 64MB and 1GB, then SQL will add 8 new VLFs, and growing the log by more than 1GB, will result in 16 VLFs being added.

As a performance consideration, it is generally recommended that you have fewer than 50 VLFs, and that you grow your log file in 8000MB chunks, in order to keep your VLFs at around 500MB in size. You should specifically not grow your log by exactly 8GB (or exactly 8192MB), as this exposes a bug with growth increments that are multiples of 4GB: the size is calculated incorrectly, the log grows less than 1MB at a time, and it kills your database!
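As a sketch of what growing in 8000MB chunks looks like in practice (the database and log file names are taken from the CREATE DATABASE script further down; MODIFY FILE takes the new total size, so a 1GB log growing by 8000MB becomes 9024MB):

ALTER DATABASE LogInfoTest
MODIFY FILE (NAME = N'instantinittest_log', SIZE = 9024MB)  --1024MB existing + 8000MB growth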

So, the big question...Is it possible to actually see how many VLFs you currently have, and how many are active, etc? Well, of course it is! You can do exactly that by using DBCC LOGINFO.

Firstly, I created the database with the following script...

CREATE DATABASE LogInfoTest
ON  PRIMARY
( NAME = N'instantinittest', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.DENALI\MSSQL\DATA\loginfotest.mdf')

 LOG ON
( NAME = N'instantinittest_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.DENALI\MSSQL\DATA\loginfotest_log.ldf' ,   SIZE = 1GB)


...If we then examine DBCC LOGINFO



RecoveryUnitId  FileId  FileSize    StartOffset  FSeqNo  Status  Parity  CreateLSN
0               2       134152192   8192         34      2       64      0
0               2       134152192   134160384    0       0       0       0
0               2       134152192   268312576    0       0       0       0
0               2       134152192   402464768    0       0       0       0
0               2       134152192   536616960    0       0       0       0
0               2       134152192   670769152    0       0       0       0
0               2       134152192   804921344    0       0       0       0
0               2       134668288   939073536    0       0       0       0


...As you can see, 8 VLFs have been created, as you would expect. In this output, FileId is the ID of the physical log file, FileSize is the size of the VLF, StartOffset marks the beginning of the VLF in relation to the beginning of the physical log, FSeqNo is the sequence number of the VLF, Status is set to 0 for inactive and 2 for active (so if you were troubleshooting a log that would not truncate, one port of call would be to look in this column for lots of 2s!), Parity is a parity bit column, and CreateLSN is the first LSN of the VLF. You may notice that a description of RecoveryUnitId is conspicuous by its absence. This is because it has only appeared in Denali, and tonight is the first time I have seen it, so more details to follow, but I suspect it ties in with the new HADR features!?! Watch this space for updates!

Anyway, I digress...If we now increase the log size by an additional 8GB (this will not expose the bug for me, as I am running Denali, and the bug is fixed in this release; if I were running on SQL 05 or 08, I would have increased by 8000MB), it will add 16 new VLFs, giving us a total of 24. This means that we see the following results...



RecoveryUnitId  FileId  FileSize    StartOffset  FSeqNo  Status  Parity  CreateLSN
0               2       134152192   8192         34      2       64      0.00
0               2       134152192   134160384    0       0       0       0.00
0               2       134152192   268312576    0       0       0       0.00
0               2       134152192   402464768    0       0       0       0.00
0               2       134152192   536616960    0       0       0       0.00
0               2       134152192   670769152    0       0       0       0.00
0               2       134152192   804921344    0       0       0       0.00
0               2       134668288   939073536    0       0       0       0.00
0               2       536870912   1073741824   0       0       0       34000000006600000.00
0               2       536870912   1610612736   0       0       0       34000000006600000.00
0               2       536870912   2147483648   0       0       0       34000000006600000.00
0               2       536870912   2684354560   0       0       0       34000000006600000.00
0               2       536870912   3221225472   0       0       0       34000000006600000.00
0               2       536870912   3758096384   0       0       0       34000000006600000.00
0               2       536870912   4294967296   0       0       0       34000000006600000.00
0               2       536870912   4831838208   0       0       0       34000000006600000.00
0               2       536870912   5368709120   0       0       0       34000000006600000.00
0               2       536870912   5905580032   0       0       0       34000000006600000.00
0               2       536870912   6442450944   0       0       0       34000000006600000.00
0               2       536870912   6979321856   0       0       0       34000000006600000.00
0               2       536870912   7516192768   0       0       0       34000000006600000.00
0               2       536870912   8053063680   0       0       0       34000000006600000.00
0               2       536870912   8589934592   0       0       0       34000000006600000.00
0               2       536870912   9126805504   0       0       0       34000000006600000.00


...Finally, if we try to shrink the log file back down to 950MB, because a shrink can only remove whole inactive VLFs, we will end up with our original 8 VLFs, with the same original sizes, totaling 1GB, not 950MB...

RecoveryUnitId  FileId  FileSize    StartOffset  FSeqNo  Status  Parity  CreateLSN
0               2       134152192   8192         34      2       64      0.00
0               2       134152192   134160384    0       0       0       0.00
0               2       134152192   268312576    0       0       0       0.00
0               2       134152192   402464768    0       0       0       0.00
0               2       134152192   536616960    0       0       0       0.00
0               2       134152192   670769152    0       0       0       0.00
0               2       134152192   804921344    0       0       0       0.00
0               2       134668288   939073536    0       0       0       0.00


...Hope you found this useful, and I will post an update with the meaning of the RecoveryUnitId column as soon as I have it, unless of course you already know, in which case, please post a comment!

