
Columnstore Indexes – part 76 (“Compression Delay”)


This is a continuation of the previous 75 parts; the whole series can be found at http://www.nikoport.com/columnstore/.

While playing with SQL Server 2016 CTP 3.2, I found an interesting feature mentioned inside the engine. The strange thing was called Compression Delay, and it was an option for Columnstore Indexes. I was surprised. I checked the internet and the Microsoft documentation pages, but there was nothing to be found.

After joining all the bits together, contacting Microsoft and running experiments, I can now show you in this blog post the benefits and the importance of Compression Delay.

This blog post is about the Compression Delay feature of Columnstore Indexes in SQL Server 2016. It is a feature that you can enable on your indexes in order to delay the compression of the Delta-Stores (both the open and the closed ones), that is, the conversion of Rowstore data into compressed Row Groups, the true Columnstore format.

The Intro

Why is there such a need to delay the data conversion from the Delta-Stores into the compressed Row Groups in the first place?

In SQL Server 2016 the OLTP Systems have received a significant improvement – support for the Columnstore Indexes (disk-based Nonclustered Columnstore & In-memory based Clustered Columnstore).

In both cases the base is the underlying OLTP-style table, with a Delta-Store object (or Tail Row Group for InMemory tables) that holds the new data being inserted or updated by the end users. Data that is frequently updated in OLTP-style systems is called Hot Data, and data that has just been inserted into your table is definitely Hot Data.
The important moment for the table is when the data becomes Cold, that is, mostly infrequently read-accessed, meaning that it can be compressed into the Columnstore format.

That moment of conversion from Hot into Cold Data can be very different on different systems, and even within a specific system different tables might have very different needs.

By default, as of now, we have the following delays pre-configured:

– Disk-Based Columnstore: 0 minutes. This means that once you have inserted data into a Delta-Store and it has reached its maximum size (which can vary under different circumstances; you can read more in Columnstore Indexes – part 73 (“Big Delta-Stores with Nonclustered Columnstore”)), the data will be compressed by the next invocation of the Tuple Mover (by default it runs every 5 minutes at most, or you can trigger it with the Alter Index Reorganize command).

– In-Memory Columnstore: 0 minutes. (New!) For the In-Memory Columnstore Index the situation is the same since the latest SQL Server 2016 CTP, removing the difference that existed before. Once the Tail Row Group has over 1,048,576 rows, the background compression process will pick 1,048,576 rows at a time and convert them into a compressed Row Group. Alternatively, of course, you can execute the migration process manually with the help of a stored procedure, and then the data will be migrated instantly.
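As a quick sketch, invoking the Tuple Mover on demand for a disk-based Columnstore Index, instead of waiting for the next background cycle, looks like this (the table and index names here are just placeholders, not objects from this post):

```sql
-- Run the Tuple Mover on demand: closed Delta-Stores of this index
-- are compressed immediately instead of on the next 5-minute cycle.
ALTER INDEX IX_MyFact_Columnstore
    ON dbo.MyFactTable
    REORGANIZE;
```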

In SQL Server 2016 CTP 3.2 there is already another way of controlling the compression delay for disk-based Columnstore Indexes (and hopefully the in-memory support will be there by RTM).

The best part is that this feature is enabled at all important levels of columnstore index table operations:

– Table Definition/Creation

– Create Columnstore Index command

– Alter Index command

This means that you can take control over when your data is compressed at any point in time.

The syntax of this feature is very simple:

COMPRESSION_DELAY = [0 | <duration>] [MINUTES]

meaning that we can define a non-negative compression delay value in minutes for our table.

At this point we should definitely try it out. Here is a simple structure for a table with a Nonclustered Columnstore Index that has the compression delay option set to 5 minutes:

create table dbo.ncci_test_inline
(c1 int,
index PK_NCCi_test_inline nonclustered columnstore(c1) 
with (COMPRESSION_DELAY = 5 Minutes)
);

To my surprise, I received the following error message:

Msg 102, Level 15, State 1, Line 8
Incorrect syntax near 'COMPRESSION_DELAY'.

At this point I tried to experiment with the other options, defining the Columnstore Index outside of the table definition or redefining it with the help of the Alter Index command – but all in vain.

On the other hand, I am in contact with some wonderful people on the development team, who shared the workaround with me and even authorised sharing it with the community through this blog.

You need to enable this specific trace flag to be able to play with the feature here and now, hopefully before it becomes enabled by default:

dbcc traceon(10213, -1);

After enabling the trace flag I re-executed my test script, and this time the table was created successfully:

create table dbo.ncci_test_inline
(c1 int,
index PK_NCCi_test_inline nonclustered columnstore(c1) 
with (COMPRESSION_DELAY = 5 Minutes)
);

Now this table will function in a different way, delaying the compression of any data that is ready to be compressed by 5 minutes.

We can always change this setting by using the Alter Index command; for example, in the following script I am setting it to 60 minutes, the value previously used by InMemory tables:

alter index PK_NCCi_test_inline
on dbo.ncci_test_inline
set (COMPRESSION_DELAY = 60 Minutes);

Brilliant stuff!
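For completeness, the third place where this option should fit is the standalone Create Columnstore Index command mentioned above. A minimal sketch, assuming a simple test table (the table and index names are hypothetical):

```sql
-- A heap with a single integer column, created without any Columnstore Index
CREATE TABLE dbo.ncci_test_separate (c1 INT);

-- Nonclustered Columnstore Index created separately from the table,
-- with compression of its Delta-Stores delayed by 5 minutes
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_test_separate
    ON dbo.ncci_test_separate (c1)
    WITH (COMPRESSION_DELAY = 5 MINUTES);
```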

The Test

For the test I will again use the free ContosoRetailDW database from Microsoft, which I will restore from the downloaded backup copied into the C:\Install folder:

USE [master]
 
alter database ContosoRetailDW
    set SINGLE_USER WITH ROLLBACK IMMEDIATE;
 
RESTORE DATABASE [ContosoRetailDW] 
    FROM  DISK = N'C:\Install\ContosoRetailDW.bak' WITH  FILE = 1,  
        MOVE N'ContosoRetailDW2.0' TO N'C:\Data\ContosoRetailDW.mdf', 
        MOVE N'ContosoRetailDW2.0_log' TO N'C:\Data\ContosoRetailDW.ldf',  
        NOUNLOAD,  STATS = 5;
 
alter database ContosoRetailDW
    set MULTI_USER;
GO
ALTER DATABASE [ContosoRetailDW] SET COMPATIBILITY_LEVEL = 130
GO
ALTER DATABASE [ContosoRetailDW] MODIFY FILE ( NAME = N'ContosoRetailDW2.0', SIZE = 2000000KB , FILEGROWTH = 128000KB )
GO
ALTER DATABASE [ContosoRetailDW] MODIFY FILE ( NAME = N'ContosoRetailDW2.0_log', SIZE = 400000KB , FILEGROWTH = 256000KB )
GO

I have also added a Clustered Columnstore on FactOnlineSales table, just because I love it so much :p

use ContosoRetailDW;
ALTER TABLE dbo.[FactOnlineSales] DROP CONSTRAINT [PK_FactOnlineSales_SalesKey];
create clustered columnstore Index PK_FactOnlineSales
on dbo.FactOnlineSales;

Let’s create a sample table for the Nonclustered Columnstore Index, setting the Compression Delay instantly to 30 Minutes.

CREATE TABLE [dbo].[FactOnlineSales_NCCI](
[OnlineSalesKey] [int]  NOT NULL,
[DateKey] [datetime] NOT NULL,
[StoreKey] [int] NOT NULL,
[ProductKey] [int] NOT NULL,
[PromotionKey] [int] NOT NULL,
[CurrencyKey] [int] NOT NULL,
[CustomerKey] [int] NOT NULL,
[SalesOrderNumber] [nvarchar](20) NOT NULL,
[SalesOrderLineNumber] [int] NULL,
[SalesQuantity] [int] NOT NULL,
[SalesAmount] [money] NOT NULL,
[ReturnQuantity] [int] NOT NULL,
[ReturnAmount] [money] NULL,
[DiscountQuantity] [int] NULL,
[DiscountAmount] [money] NULL,
[TotalCost] [money] NOT NULL,
[UnitCost] [money] NULL,
[UnitPrice] [money] NULL,
[ETLLoadID] [int] NULL,
[LoadDate] [datetime] NULL,
[UpdateDate] [datetime] NULL,
index NCCI_Test Nonclustered Columnstore (OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey, SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity, ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice, ETLLoadID, LoadDate, UpdateDate)
) ON [PRIMARY]
GO
alter index NCCI_Test
on dbo.FactOnlineSales_NCCI
     set ( COMPRESSION_DELAY = 30 Minutes );

Now let’s load 1.100.000 rows from the FactOnlineSales table into our test table, so that we have enough data to fill a whole Delta-Store, but in a way that will not trigger the Bulk Load API, which would compress the data automatically into a compressed Row Group (loading fewer than 102.400 rows at a time is flying under the radar). In this case I simply loaded 100.000 rows 11 times:

insert into dbo.FactOnlineSales_NCCI
select top 100000 OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey, SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity, ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice, ETLLoadID, LoadDate, UpdateDate
from dbo.FactOnlineSales;
GO 11

I will use CISL, my free and open-source library of Columnstore Indexes scripts, for monitoring the behaviour, executing it twice with an interval of 5 minutes, thus allowing the automated Tuple Mover to finish its execution in the background (it runs every 5 minutes):

exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
waitfor delay '00:05';
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'

Compression Delay - CISL RowGroupsDetails results with Delay

You can see that even after the 5 minute delay the closed Delta-Store has not changed from a Closed to a Compressed Row Group. This is because the Compression Delay setting did not allow the automated Tuple Mover to convert the data from the Delta-Store.

To prove it, let’s truncate the table and reset the Compression Delay to 0:

alter index NCCI_Test
on dbo.FactOnlineSales_NCCI
     set ( COMPRESSION_DELAY = 0 Minutes );
Truncate table dbo.FactOnlineSales_NCCI;
GO
insert into dbo.FactOnlineSales_NCCI
select top 100000 OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey, SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity, ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice, ETLLoadID, LoadDate, UpdateDate
from dbo.FactOnlineSales;
GO 11

Let’s monitor again with the help of the cstore_GetRowGroupsDetails stored procedure from CISL:

exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
waitfor delay '00:05';
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'

Compression Delay - CISL RowGroupsDetails results without Delay

This time we have a compressed Row Group, since nothing prevented the automated Tuple Mover from converting the closed Delta-Store, and so it worked as expected.

InMemory Columnstore

For the InMemory Columnstore you can simply use the script below to create the memory-optimised filegroup and then the test table, where I load 1.1 million rows directly in one go, since for In-Memory tables there is no automated Bulk Load API optimisation that converts the data into a compressed Row Group:

USE master;
ALTER DATABASE [ContosoRetailDW] 
ADD FILEGROUP [ContosoRetailDW_Hekaton] 
CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE [ContosoRetailDW]
ADD FILE(NAME = ContosoRetailDW_HekatonDir, 
FILENAME = 'C:\Data\xtp') 
TO FILEGROUP [ContosoRetailDW_Hekaton];
GO
use ContosoRetailDW;
CREATE TABLE [dbo].[FactOnlineSales_Hekaton](
[OnlineSalesKey] [int] NOT NULL,
[DateKey] [datetime] NOT NULL,
[StoreKey] [int] NOT NULL,
[ProductKey] [int] NOT NULL,
[PromotionKey] [int] NOT NULL,
[CurrencyKey] [int] NOT NULL,
[CustomerKey] [int] NOT NULL,
[SalesOrderNumber] [nvarchar](20) NOT NULL,
[SalesOrderLineNumber] [int] NULL,
[SalesQuantity] [int] NOT NULL,
[SalesAmount] [money] NOT NULL,
[ReturnQuantity] [int] NOT NULL,
[ReturnAmount] [money] NULL,
[DiscountQuantity] [int] NULL,
[DiscountAmount] [money] NULL,
[TotalCost] [money] NOT NULL,
[UnitCost] [money] NULL,
[UnitPrice] [money] NULL,
[ETLLoadID] [int] NULL,
[LoadDate] [datetime] NULL,
[UpdateDate] [datetime] NULL,
Constraint PK_FactOnlineSales_Hekaton PRIMARY KEY NONCLUSTERED ([OnlineSalesKey]),
INDEX NCCI_FactOnlineSales_Hekaton 
CLUSTERED COLUMNSTORE 
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- Insert 1.100.000 Rows 
insert into dbo.FactOnlineSales_Hekaton
select top 1100000 *
from dbo.FactOnlineSales
order by OnlineSalesKey desc

But then the interesting part starts: in SQL Server 2016 CTP 3.2 you cannot execute the ALTER INDEX command:

alter index NCCI_FactOnlineSales_Hekaton
on FactOnlineSales_Hekaton 
set ( COMPRESSION_DELAY = 0 Minutes );

Msg 10794, Level 16, State 14, Line 53
The operation 'ALTER INDEX' is not supported with memory optimized tables.

But in SQL Server 2016 RC1 you can already drop and then add a Columnstore Index on a Memory-Optimised Table:

alter table FactOnlineSales_Hekaton 
drop index NCCI_FactOnlineSales_Hekaton;
alter table dbo.FactOnlineSales_Hekaton
add INDEX NCCI_FactOnlineSales_Hekaton
CLUSTERED COLUMNSTORE 
with (COMPRESSION_DELAY = 0 MINUTES);

Naturally, the main idea would be to fall back to defining the option at table creation time:

CREATE TABLE [dbo].[FactOnlineSales_Hekaton](
[OnlineSalesKey] [int] NOT NULL,
[DateKey] [datetime] NOT NULL,
[StoreKey] [int] NOT NULL,
[ProductKey] [int] NOT NULL,
[PromotionKey] [int] NOT NULL,
[CurrencyKey] [int] NOT NULL,
[CustomerKey] [int] NOT NULL,
[SalesOrderNumber] [nvarchar](20) NOT NULL,
[SalesOrderLineNumber] [int] NULL,
[SalesQuantity] [int] NOT NULL,
[SalesAmount] [money] NOT NULL,
[ReturnQuantity] [int] NOT NULL,
[ReturnAmount] [money] NULL,
[DiscountQuantity] [int] NULL,
[DiscountAmount] [money] NULL,
[TotalCost] [money] NOT NULL,
[UnitCost] [money] NULL,
[UnitPrice] [money] NULL,
[ETLLoadID] [int] NULL,
[LoadDate] [datetime] NULL,
[UpdateDate] [datetime] NULL,
Constraint PK_FactOnlineSales_Hekaton PRIMARY KEY NONCLUSTERED ([OnlineSalesKey]),
INDEX NCCI_FactOnlineSales_Hekaton 
CLUSTERED COLUMNSTORE 
with ( COMPRESSION_DELAY = 0 Minutes ) 
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

But in SQL Server 2016 CTP 3.3 the error message persists, showing that this feature was not fully supported yet:

Msg 10794, Level 16, State 90, Line 48
The index option 'compression_delay' is not supported with indexes on memory optimized tables.

In SQL Server 2016 RC1 the above query executes without any errors.

I do my best to avoid writing about InMemory tables with Columnstore Indexes, because I expect many things to improve by the RTM timeframe. I also suggest not to rush to judgement on this brand-new feature.

Current Compression Delay

You might wonder whether there is a way to discover the current compression delay for each of the tables, especially since sys.tables does not contain any new columns, or whether there is yet another trace flag to find it out.

It’s much easier than that: the information about the compression delay setting is stored in sys.indexes, and here is the query for finding out the current compression delay for all Columnstore Indexes:

select name, object_name(object_id) as table_name, index_id, type, type_desc, compression_delay
from sys.indexes
where type in (5,6);

Compression Delay - Current settings

Here you can see exactly the settings I have chosen for my tables. The most interesting case is of course the dbo.FactOnlineSales table itself: it contains the value -1, which to my understanding represents the default value.

The Clustered Columnstore

When thinking about Clustered Columnstore Indexes, one might believe that the Compression Delay feature is quite useless: we do the ETL and we want all our Delta-Stores to be converted into compressed Row Groups.

Well, but what if our ETL has multiple steps, as in this 2-step exercise:

Step 1. We load the data.

Step 2. We run some updates over it. Having all data converted before this step makes it an expensive operation: the updates create even more new Delta-Stores, invalidate the data compressed in the previous step, and introduce serious fragmentation into our Clustered Columnstore Index.

Compression Delay here would prevent the closed Delta-Stores from becoming compressed, lowering the overall resource consumption and producing a better final result.
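A sketch of such a 2-step ETL could look like the script below. All object names (dbo.FactSales, CCI_FactSales, Staging.FactSales) and the correction logic are hypothetical, purely for illustration:

```sql
-- Before the batch: keep Delta-Stores uncompressed while the data is still hot
ALTER INDEX CCI_FactSales ON dbo.FactSales
    SET (COMPRESSION_DELAY = 60 MINUTES);

-- Step 1: load the data (it lands in Delta-Stores)
INSERT INTO dbo.FactSales
    SELECT * FROM Staging.FactSales;

-- Step 2: run the corrective updates cheaply, still against the Delta-Stores
UPDATE dbo.FactSales
    SET SalesAmount = SalesAmount * 1.05   -- hypothetical correction
    WHERE LoadDate = CAST(GETDATE() AS DATE);

-- Once the batch is final, allow immediate compression again and
-- let the Tuple Mover convert the closed Delta-Stores
ALTER INDEX CCI_FactSales ON dbo.FactSales
    SET (COMPRESSION_DELAY = 0 MINUTES);
ALTER INDEX CCI_FactSales ON dbo.FactSales REORGANIZE;
```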

Final Thoughts

We do not get too many knobs to control SQL Server, and I really hope that this one stays within the engine. The importance of table-level control over the conversion of Delta-Stores into compressed Row Groups is not to be underestimated. Especially with the appearance of InMemory Columnstore Indexes, precise control over the available resources and lowering the logical fragmentation (the deleted bitmap content) are the keys to getting the best possible performance out of the system.

In a high-performance OLTP system, telling users that the Reporting/Analytics engine is slowing it down is something that will not stick well, and this new Compression Delay feature will allow addressing some of those issues.

to be continued with Columnstore Indexes – part 77 (“SSIS 2016 & Columnstore”)
