Hekaton concurrent updates: Large vs. small transactions

Here is another experiment that I performed with In-Memory OLTP (Hekaton) on CTP2 of SQL Server 2014.  I started by creating a table and a sequence:

create table Employee
(
	EmployeeId bigint not null
		primary key nonclustered hash with (bucket_count = 20000000),
	Salary money
) with (memory_optimized = on, durability = schema_only);
 
create sequence EmployeeSequence as bigint
start with 1 increment by 1;

Next, I added roughly 10 million rows to the table (216³ = 10,077,696, courtesy of the triple cross join below). The EmployeeId was generated from the sequence object, and the Salary was a random value between 25,000 and 75,000:

with Records as (select top (216) status from master.dbo.spt_values)
insert	Employee (EmployeeId, Salary)
select	next value for EmployeeSequence as EmployeeId,
	cast(cast(25000.00 + 50000.00 * (abs(cast(binary_checksum(newid()) as int)))
		* 1.0 / 2147483648.0 as int) as money) as Salary
from	Records r1 cross join Records r2 cross join Records r3;
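
A quick count sanity-checks the load (the expected total is just 216 cubed):

-- expect 216 * 216 * 216 = 10,077,696 rows
select count(*) as TotalRows from Employee;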

The first few rows looked like this:

[Screenshot: the first ten rows of the Employee table]
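
That view of the data comes from a simple ordered select along these lines:

select top (10) EmployeeId, Salary
from   Employee
order by EmployeeId;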

I don’t know enough about the internals of Hekaton to say how the records in the Employee table are accessed during a table scan, or even whether the access order is deterministic, but my guess is that the scan proceeds in EmployeeId order. I set up the following queries in two separate connections:

Connection 1 – Give all employees a 5% raise:

update Employee
set Salary *= 1.05;

Connection 2 – Set a specific salary for a specific employee:

update Employee
set Salary = 80000.00
where EmployeeId = 9999999;

Before running anything, row 9,999,999 looked like this:

[Screenshot: EmployeeId 9999999 with its original randomly generated Salary]
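
That before-and-after check is just a point lookup:

select EmployeeId, Salary
from   Employee
where  EmployeeId = 9999999;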

I then fired off the query in connection 1 and as quickly as possible switched to connection 2 and executed the statement there. Query 2 completed quickly. The first query ran for a few seconds and then generated an error:

Msg 41302, Level 16, State 110, Line 1
The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted.
The statement has been terminated.

The bottom line is that even though query 1 started first and had almost surely done far more work (in terms of the number of updates performed before it hit the record for employee 9,999,999), it gets the short end of the stick and is the “victim” in this case. At this point, the target employee has this value:

[Screenshot: EmployeeId 9999999 with Salary 80000.00, the value written by connection 2]

This behavior of simply killing off the second transaction involved in a concurrent update is the price that is paid for having no locking. My concern is that it will limit the usefulness of Hekaton tables for OLTP purposes. I can certainly envision a scenario where a batch update repeatedly fails because small one-off updates taking place across the same set of records repeatedly cause the larger transaction to be terminated. And batch updates are just a part of life, at least in my experience, for most OLTP systems. That being said, I still envision great utility in using the In-Memory tables in situations where concurrent updates won’t be likely, such as for staging tables.
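
One mitigation is to wrap the large update in retry logic, so that a write conflict triggers another attempt rather than an outright failure. Here is a minimal sketch; the retry count of 5 is arbitrary, and only error 41302 (the write conflict seen above) is retried:

declare @retries int = 5;

while @retries > 0
begin
	begin try
		update Employee
		set Salary *= 1.05;

		break;	-- the update succeeded; leave the retry loop
	end try
	begin catch
		-- anything other than the write-conflict error is unexpected: re-raise it
		if error_number() <> 41302
			throw;

		set @retries -= 1;	-- conflict: loop around and try again (gives up silently at zero)
	end catch
end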

Hekaton error message is slightly misleading

Let’s take a look at the error message that we receive when we do a concurrent update on an In-Memory OLTP (Hekaton) table. I start by opening a connection and running:

begin transaction
 
update CheckingAccount with (snapshot)
set    Balance = 100.00
where  OwnerId = 123;

This update runs fine. Now, after this first statement completes, I open a second connection and run another update against the same table:

update CheckingAccount
set    Balance = 200.00
where  OwnerId = 123;

Since I have an open transaction in the first connection that has updated the same row, I expect the second attempt to fail. It does, and the error message is:

Msg 41302, Level 16, State 110, Line 1
The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted.
The statement has been terminated.

The wording of this error message is a bit misleading, however. Clearly the first update did not happen after the second transaction started, particularly given that the second connection had not yet even been established at that time!
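
Incidentally, the first connection still has its transaction open at this point, so resetting for further experimentation just means returning to that connection and running:

rollback transaction;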

Hekaton – Classic deadlock scenario

Ever since CTP2 was released a few weeks back, I have been interested in the new In-Memory OLTP (aka Hekaton) feature currently being built for SQL Server 2014. Once I learned that Hekaton uses lock-free structures, one of the first things I wondered about was how Hekaton handles a classic deadlock scenario. After downloading and installing the CTP, I created a Hekaton-enabled database and then added a couple of tables:

create table CheckingAccount
(
	OwnerId int not null primary key nonclustered hash with (bucket_count = 1000),
	Balance money
) with (memory_optimized = on, durability = schema_and_data);

create table SavingsAccount
(
	OwnerId int not null primary key nonclustered hash with (bucket_count = 1000),
	Balance money
) with (memory_optimized = on, durability = schema_and_data);
 
insert CheckingAccount (OwnerId, Balance) values (123, 1000.00);
insert SavingsAccount (OwnerId, Balance) values (123, 2000.00);

Now in connection 1, I run the following:

begin transaction
 
update SavingsAccount with (snapshot)
set    Balance -= 300.00
where  OwnerId = 123;

In connection 2:

begin transaction
 
update CheckingAccount with (snapshot)
set    Balance -= 200.00
where  OwnerId = 123;

Back in connection 1:

update CheckingAccount with (snapshot)
set    Balance += 300.00
where  OwnerId = 123;

Msg 41302, Level 16, State 110, Line 1
The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted.
Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
The statement has been terminated.

So right there we have the answer to how Hekaton handles this scenario: rather than blocking and eventually picking a deadlock victim, it immediately fails the transaction that makes the second attempt to update the same row.

Completing the scenario, switch back to connection 2 and run:

update SavingsAccount with (snapshot)
set    Balance += 200.00
where  OwnerId = 123;
 
commit transaction

Now check the state of the tables:

select ca.OwnerId, ca.Balance as CheckingBalance, sa.Balance as SavingsBalance
from   CheckingAccount ca
       inner join SavingsAccount sa on ca.OwnerId = sa.OwnerId
where  ca.OwnerId = 123;

[Screenshot: OwnerId 123 with CheckingBalance 800.00 and SavingsBalance 2200.00]

So the tables are transactionally consistent: connection 1’s debit was rolled back in its entirety, while connection 2’s transfer of 200.00 from checking to savings committed. Bear in mind that in the classic version of this scenario (using disk-based tables), connection 2 would typically have been chosen as the deadlock victim. It’s OK that the opposite happens here in the Hekaton world, because the rules of consistency don’t dictate who is the “winner” and who is the “loser,” only that the end result “balances out.”
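
For comparison, with disk-based tables this same interleaving produces a true deadlock, and the losing connection receives the familiar error 1205 (the process ID varies):

Transaction (Process ID NN) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.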