
Halloween Protection

I thought it would be appropriate today, on the 40th anniversary of the discovery of the problem, to address the issue of Halloween protection. Describing the problem is nothing new, of course, but nonetheless, I thought it would be interesting to illustrate it in my own words.

Reportedly, the Halloween problem was identified on October 31, 1976 by three researchers, and the name stuck. Imagine that we have a table of employees with salary information. We’ll create such a table and make the employee name the primary key and the clustering key, and then put a nonclustered index on salary. Here is such a table with some random data.

create table Employee
(
	FirstName nvarchar(40) not null,
	Salary money not null,
	constraint pk_Employee primary key clustered (FirstName)
);
 
create nonclustered index ix_Employee__Salary on Employee (Salary);
 
insert Employee (FirstName, Salary)
values ('Hector', 22136), ('Hannah', 21394), ('Frank', 28990), ('Eleanor', 29832), ('Maria', 28843),
       ('Maxine', 23603), ('Sylvia', 26332), ('Jeremy', 20230), ('Michael', 27769), ('Dean', 24496);

The task at hand is to give all employees with a salary of less than 25,000 a 10% raise. This can be accomplished with a simple query.

update Employee
set Salary = Salary * 1.10
where Salary < 25000.00;

Now imagine that the SQL Server optimizer decides to use the nonclustered index to identify the rows to update. The index initially contains this data (bearing in mind that the index implicitly contains the clustering key).

[Image: initial contents of ix_Employee__Salary (Salary plus the FirstName clustering key)]

Without Halloween protection, we can imagine that SQL walks through the rows in the index and makes the updates. So the Jeremy row is updated first, and the new salary is 22,253.

[Image: index after the Jeremy row is updated]

However, the index is keyed by Salary and should be sorted on that column, so SQL Server will need to move the row in the index to the appropriate location.

[Image: index after the Jeremy row is moved to its new sorted position]

Next, SQL updates the Hannah row and moves it within the index.

[Image: index after the Hannah update and move]

And then for Hector:

[Image: index after the Hector update and move]

However, at this point, the next row is Jeremy again. So the database updates the Jeremy row for a second time.

[Image: index after the second Jeremy update and move]

Hannah again.

[Image: index after the second Hannah update and move]

Then Maxine.

[Image: index after the Maxine update and move]

Finally Hector for a second time, followed by Jeremy for a third time, and then Dean.

[Image: index after the second Hector update, third Jeremy update, and Dean update]

At this point the next row is for Hannah again. However, her salary is greater than or equal to 25,000 and therefore doesn’t qualify for update based on the query predicate. Since none of the remaining rows meet the WHERE clause condition, processing stops and query execution is complete.

That, in a nutshell, is the Halloween problem: some rows get processed multiple times because they are physically moved within the index that is being used to drive the query. For comparison, the (incorrect) results of the processing described above are on the left, and the correct results are on the right.

[Images: incorrect results from the above processing (left) and correct results (right)]

In this particular case, the process results in every employee having a salary of at least 25,000. However, imagine if the query didn’t have the WHERE clause but still chose to use the Salary index to process. The result would be an infinite loop where each employee’s salary would be repeatedly increased by 10% until an overflow condition occurred.
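The unprotected behavior described above can be sketched in a small simulation (Python here rather than T-SQL, purely as an illustration; the data and the 25,000 threshold are from the example table above). The nonclustered index is modeled as a list kept sorted by salary, and the forward cursor is simplified to "always process the lowest qualifying entry," so a row that moves within the index can be seen again. This reproduces the exact update sequence from the walkthrough: Jeremy, Hannah, Hector, Jeremy, Hannah, Maxine, Hector, Jeremy, Dean.

```python
import bisect

# Salaries from the example table above.
employees = {
    'Hector': 22136.0, 'Hannah': 21394.0, 'Frank': 28990.0, 'Eleanor': 29832.0,
    'Maria': 28843.0, 'Maxine': 23603.0, 'Sylvia': 26332.0, 'Jeremy': 20230.0,
    'Michael': 27769.0, 'Dean': 24496.0,
}

def naive_index_update(salaries, threshold=25000.0, raise_pct=0.10):
    """Walk a salary-sorted index while updating rows in place (no Halloween
    protection): an updated row moves within the index and can be revisited."""
    index = sorted((sal, name) for name, sal in salaries.items())
    result = dict(salaries)
    updates = {name: 0 for name in salaries}
    while index and index[0][0] < threshold:
        sal, name = index.pop(0)                # cursor sits on the lowest entry
        new_sal = round(sal * (1 + raise_pct), 2)
        result[name] = new_sal
        updates[name] += 1
        bisect.insort(index, (new_sal, name))   # row moves within the index
    return result, updates

wrong, update_counts = naive_index_update(employees)
# Jeremy is processed three times, Hannah and Hector twice each, and every
# updated employee ends up with a salary of at least 25,000.
```

Removing the `threshold` test (the WHERE clause) from the loop condition would make it the infinite loop described above: the lowest salary always requalifies after its raise.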

So how does SQL Server protect against the Halloween problem? There are a variety of mechanisms in place for this very purpose.

  • SQL Server may choose an alternate index. In fact, this is precisely what SQL will do in our example by reading from the clustered index to drive the updates. Because the clustered index isn’t sorted by Salary in any way, updates during the processing phase don’t affect the row location and the problem does not arise. This separation of the read and write portions avoids the problem.

    [Image: query plan showing the alternate (clustered) index being chosen]

  • But what if scanning the clustered index is relatively expensive? When I load 1,000,000 rows into the Employee table and only 5 employees have salaries below 25,000, SQL Server will naturally favor the nonclustered index.

    [Image: query plan using the nonclustered Salary index, with a Sort operator]

    Note particularly the highlighted Sort operator. Sort is a blocking operation; that is, the entire input set must be read (and then sorted) before any output is generated. Once again, this separates the read and write phases of query processing.

  • Suppose that the Salary index is the only means of accessing the data. For instance, if I drop the nonclustered Salary index and the primary key on the table, and then create a clustered index on Salary, SQL Server is forced to use the Salary index. In this case, SQL Server will insert an eager spool into the plan, once again separating the reading and writing.

    [Image: query plan containing an eager spool]

  • Memory-optimized (Hekaton) tables avoid the Halloween problem by their very nature. Rows in a memory-optimized table are read-only, and “updates” to a row cause the current version of the row to be invalidated and a new version of the row to be written. Once again, read and write separation happens, in this case organically.

I don’t intend this to be a comprehensive list of protections in SQL Server, but it covers the most common scenarios.
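All of these protections boil down to the same idea: finish reading before writing. A minimal sketch of that phase separation (Python again, as an illustration; the materialized list below plays the role of the eager spool):

```python
def spooled_update(salaries, threshold=25000.0, raise_pct=0.10):
    """Eager-spool-style processing: materialize the full set of qualifying
    rows first (read phase), then apply the updates (write phase)."""
    spool = [name for name, sal in salaries.items() if sal < threshold]  # read
    result = dict(salaries)
    for name in spool:                                                   # write
        result[name] = round(result[name] * (1 + raise_pct), 2)
    return result

# Salaries from the example table above.
employees = {
    'Hector': 22136.0, 'Hannah': 21394.0, 'Frank': 28990.0, 'Eleanor': 29832.0,
    'Maria': 28843.0, 'Maxine': 23603.0, 'Sylvia': 26332.0, 'Jeremy': 20230.0,
    'Michael': 27769.0, 'Dean': 24496.0,
}
correct = spooled_update(employees)
# Each qualifying employee gets exactly one 10% raise, e.g. Jeremy 22253.0.
```

Because the qualifying set is fixed before any write happens, a row's movement within an index can no longer put it back in front of the cursor.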

To me, the key takeaway here is that Halloween protection is an integral and crucial aspect of query processing that the optimizer simply must take into account. This most certainly affects the plan generation process and likely adds overhead to the plan to ensure correct results.

What Records Are Included in a Transaction Log Backup?

I had an interesting situation come up recently, and it boiled down to this. Suppose I start a transaction log backup at 10:00:00 and it runs for one minute, until 10:01:00. Is a transaction that happened at 10:00:30 included in the backup? Let’s do an experiment.

I have a sample database containing customers and orders, and an application that fires up a bunch of threads and inserts random data into the database. So I kicked off the application and let it run for several hours so as to generate a bunch of log records. I disabled the normal log backups for the duration of this test.

With the application still running, I then kicked off a transaction log backup, introduced a brief delay, followed by a second log backup.

backup log CorpDB to disk = 'S:\Backup\SQL2014\Log\log1.trn' with init, compression;
waitfor delay '0:00:10';
backup log CorpDB to disk = 'S:\Backup\SQL2014\Log\log2.trn' with init, compression;

Now I want to look in the header of my backup files.

restore headeronly from disk = 'S:\backup\sql2014\Log\log1.trn';
restore headeronly from disk = 'S:\backup\sql2014\Log\log2.trn';

[Image: RESTORE HEADERONLY output for the two log backups]

I see that the first log backup started at 7:30:17 pm and ended at 7:31:37, and the second backup started and ended within the same second, at 7:31:47. Unfortunately, these times are only stored with one-second resolution, but we can see that the first backup ran for about 80 seconds.

So the question now is which backup contains a transaction that occurred at, say, 7:30:30?

I’m going to load the transaction log records into a couple of temp tables.

select row_number() over (order by [Current LSN]) rn, *
into #log1
from sys.fn_dump_dblog(null, null, 'disk', 1, 'S:\backup\sql2014\Log\log1.trn',
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default);
 
create clustered index ix1_log1 on #log1 (rn);
 
select row_number() over (order by [Current LSN]) rn, *
into #log2
from sys.fn_dump_dblog(null, null, 'disk', 1, 'S:\backup\sql2014\Log\log2.trn',
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default, default, default, default, 
default, default, default, default, default, default, default, default);
 
create clustered index ix1_log2 on #log2 (rn);

Now let’s take a look at the last 10 records in the first log file, the first 10 records in the second log, and the last 10 records in the second log.

select 1 log_nbr, rn, Operation, Context, [Current LSN], [Transaction ID], [Begin Time], [End Time]
from #log1
where rn >= (select max(rn) from #log1) - 9
union all
select 2 log_nbr, rn, Operation, Context, [Current LSN], [Transaction ID], [Begin Time], [End Time]
from #log2
where rn <= 10
union all
select 2 log_nbr, rn, Operation, Context, [Current LSN], [Transaction ID], [Begin Time], [End Time]
from #log2
where rn >= (select max(rn) from #log2) - 9
order by log_nbr, rn;

[Image: last 10 records of the first log backup and first and last 10 records of the second]

The first backup contains log records through 7:30:17.873, and the second backup starts with log records a few milliseconds later at 7:30:17.887 and ends at 7:31:47.327. In other words, the second log contains the records covering the 90 seconds between the start times of the two log backups.

To answer my specific question I can also find a transaction that happened at 7:30:30 pm in the second log file:

select top 1 rn, Operation, Context, [Current LSN], [Transaction ID], [Begin Time], [End Time]
from #log2
where [Begin Time] >= '2016/04/29 19:30:30'
order by [Begin Time];

[Image: the first transaction at or after 7:30:30 pm, found in the second log backup]

From this, I conclude that a log backup includes transactions up to approximately the BackupStartDate contained in the header, and not the BackupEndDate.
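That conclusion can be expressed as a small rule of thumb (a Python sketch; the start times below are the ones from the backup headers above, and the boundary is only approximate, since the log records showed the cutover happening within a few milliseconds of the start time):

```python
from bisect import bisect_right
from datetime import datetime

def containing_log_backup(txn_time, backup_start_times):
    """Return the 1-based number of the log backup that contains a transaction,
    assuming backup N holds records from (approximately) the start of backup
    N-1 up to the start of backup N. Returns None if the transaction falls
    after the last backup's start time (i.e., it isn't backed up yet)."""
    i = bisect_right(backup_start_times, txn_time)
    return i + 1 if i < len(backup_start_times) else None

starts = [datetime(2016, 4, 29, 19, 30, 17),   # log1.trn BackupStartDate
          datetime(2016, 4, 29, 19, 31, 47)]   # log2.trn BackupStartDate

# A transaction at 7:30:30 pm lands in the second backup, even though the
# first backup was still running at that moment.
which = containing_log_backup(datetime(2016, 4, 29, 19, 30, 30), starts)
```

Under this model, the BackupStartDate values partition the log stream, and BackupEndDate tells you only how long the backup took to run.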

Caveat: I only tried this on SQL Server 2014 SP1 Developer Edition. It is entirely possible that other versions / service packs / editions behave differently.