SQL Server 2016 CTP2 Initial Impressions: Temporal Table Performance

One of the interesting new features I came across in CTP2 of SQL Server 2016 is Temporal Tables, which essentially give you the ability to see your records from a point-in-time perspective. My hope was for something with an easier-to-use, Change Data Capture-like feel, and specifically with the sort of performance that CDC gives you. What I found was not quite that.
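To give a sense of what that point-in-time view looks like, here is a quick sketch of a system-time query against the Temporal table created below (the timestamp is just an arbitrary example):

SELECT id, col1, col2, col3
FROM testtemporal
FOR SYSTEM_TIME AS OF '2015-07-01 12:00:00'
ORDER BY id;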

Here is the setup. First, I create a baseline table. This table will not be Temporal; it exists only so we can compare results between a Temporal table and a normal one.

CREATE TABLE nottemporal
(
id INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED
,col1 VARCHAR(200)
,col2 DATETIME2
,col3 INT
);

Next, I create my Temporal table. It has the same structure as the non-Temporal table, plus the required fields that make a table Temporal.

CREATE TABLE testtemporal
(
id INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED
,col1 VARCHAR(200)
,col2 DATETIME2
,col3 INT
,SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL
,SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL
,PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
) WITH (SYSTEM_VERSIONING = ON);

The syntax is a bit different from what we would normally use in SQL Server, and it is required to make Temporal Tables work. It is also worth noting that a Temporal table looks a bit different in Management Studio: the table is flagged as System-Versioned, and when you expand it there is a reference to the default History table that is created for us to hold the versioned data. You can create your own history table (so that it is named nicely); a quick sketch of the syntax follows, though exploring it further is for another time.
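Naming the history table yourself is done through the HISTORY_TABLE option; a minimal sketch (testtemporal2 and its history table name are just examples):

CREATE TABLE testtemporal2
(
id INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED
,col1 VARCHAR(200)
,SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL
,SysEndTime DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL
,PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.testtemporal2_History));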

Temporal Table Object Explorer

Now I want to populate this table. Before doing that, we need a numbers table to simplify things; besides, Adam Machanic tells us that we require one! If you create your numbers table with 1 million rows, just as Adam describes, my scripts will work for you without any changes (a sketch follows below).
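If you do not have a numbers table handy, here is a minimal sketch of one way to build it; this is a common cross-join approach rather than Adam's exact script, so treat it as an example:

-- Build a Numbers table with 1 million sequential values.
-- sys.all_objects cross-joined with itself easily yields enough rows.
SELECT TOP (1000000)
number = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
INTO numbers
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

CREATE UNIQUE CLUSTERED INDEX IX_numbers_number ON numbers (number);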

For reference, I am running all of these tests on an Azure VM created from the SQL Server 2016 CTP2 image, with 4 cores and 3.5 GB of RAM. Pretty puny, but it will do for this.

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

INSERT INTO testtemporal (col1, col2, col3)
SELECT 'test data ' + CAST(number AS VARCHAR(10))
,DATEADD(ss, number, GETDATE())
,number
FROM numbers;
GO

INSERT INTO nottemporal (col1, col2, col3)
SELECT 'test data ' + CAST(number AS VARCHAR(10))
,DATEADD(ss, number, GETDATE())
,number
FROM numbers;
GO

From a time perspective these two ran in roughly the same amount of time, and the Temporal table shows slightly higher IO, as you would expect given its two extra columns, if nothing else.

For the Non-Temporal Table:

Table 'nottemporal'. Scan count 0, logical reads 28318, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Numbers'. Scan count 1, logical reads 1615, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 7812 ms, elapsed time = 57097 ms.

For the Temporal Table:

Table 'testtemporal'. Scan count 0, logical reads 38198, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Numbers'. Scan count 1, logical reads 1615, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 7844 ms, elapsed time = 58777 ms.

But it is when we start updating that things get interesting. In this case, I am going to update all 1 million rows. Perhaps not a real-world scenario, but it is the best way to demonstrate what I see as a concern.

UPDATE testtemporal
SET col1 = 'Modified Data ' + CAST(col3 AS VARCHAR(10));
GO
UPDATE nottemporal
SET col1 = 'Modified Data ' + CAST(col3 AS VARCHAR(10));

For the Non-Temporal Table:

Table 'nottemporal'. Scan count 1, logical reads 61298, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 8766 ms, elapsed time = 42353 ms.

For the Temporal Table:

Table 'MSSQL_TemporalHistoryFor_565577053'. Scan count 0, logical reads 15993, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'testtemporal'. Scan count 1, logical reads 83217, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 31109 ms, elapsed time = 68449 ms.

This is where it gets interesting from an IO/time perspective. Granted, elapsed time is not something you can bank on, but the jump here is noticeable, though obviously YMMV: 68 seconds to update the Temporal table versus 42 seconds for the non-Temporal table. I would say it is comparable to having a trigger on the table (without real data to back that up… just a feeling).

The reads are higher, but not significantly so … not yet, anyway.

Seeing how poorly that first update performed, I figured it was worth checking what would happen if I updated those records again, back to their original values.

UPDATE testtemporal
SET col1 = 'test data ' + CAST(col3 AS VARCHAR(10));
GO
UPDATE nottemporal
SET col1 = 'test data ' + CAST(col3 AS VARCHAR(10));

And the IO/Time data for this gets even crazier.

For the Non-Temporal Table:

Table 'nottemporal'. Scan count 1, logical reads 11165, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 5063 ms, elapsed time = 15502 ms.

For the Temporal Table:

Table 'MSSQL_TemporalHistoryFor_565577053'. Scan count 0, logical reads 3242447, physical reads 0, read-ahead reads 3, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'testtemporal'. Scan count 1, logical reads 15156, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 66359 ms, elapsed time = 165560 ms.

This is where it went nuts, in my opinion. The IO on the History table jumps from roughly 16,000 logical reads on the first update to over 3 million on the second (and, as I later found out, on a number of updates after that), and the elapsed time is more than ten times that of the non-Temporal update. Obviously, from a performance perspective this could be problematic.

Digging into the why of this, the answer is pretty clear from the structure of the History table that SQL Server creates for us. The clustered index is built on our own Primary Key, but with the SysStartTime and SysEndTime datetime2 columns appended to it.

Temporal Table Clustering Keys
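You can confirm the index definition through the catalog views; a quick sketch (substitute the history table name generated in your own database):

SELECT i.name AS index_name
,c.name AS column_name
,ic.key_ordinal
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
ON ic.object_id = i.object_id
AND ic.index_id = i.index_id
JOIN sys.columns AS c
ON c.object_id = ic.object_id
AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.MSSQL_TemporalHistoryFor_565577053')
AND ic.key_ordinal > 0
ORDER BY ic.key_ordinal;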

And running a quick DBCC SHOWCONTIG to see how things are laid out after those updates, we see a fair amount of fragmentation, as would be expected for a clustering key of this nature.
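For reference, this is the command, using the auto-generated history table name from my database:

DBCC SHOWCONTIG ('MSSQL_TemporalHistoryFor_565577053');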

DBCC SHOWCONTIG scanning 'MSSQL_TemporalHistoryFor_565577053' table...
Table: 'MSSQL_TemporalHistoryFor_565577053' (597577167); index ID: 1, database ID: 5
TABLE level scan performed.
- Pages Scanned................................: 12149
- Extents Scanned..............................: 1520
- Extent Switches..............................: 7212
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 21.06% [1519:7213]
- Logical Scan Fragmentation ..................: 99.99%
- Extent Scan Fragmentation ...................: 0.79%
- Avg. Bytes Free per Page.....................: 2256.4
- Avg. Page Density (full).....................: 72.12%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

In summary, Temporal tables look like they could be a very cool feature for SQL Server 2016 and future releases, but they will definitely come with a price.

UPDATE 7/20/2015: Tested on CTP 2.1 with identical results.

UPDATE 9/30/2015: Tested on CTP 2.4 with similar results.
