Continuation from the previous 72 parts; the whole series can be found at http://www.nikoport.com/columnstore/.
Since the very first version of SQL Server to ship Columnstore Indexes, the maximum size of the Delta-Stores as well as of the compressed Row Groups has been limited to 1.048.576 rows – that is, 2 to the power of 20.
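A quick sanity check on that number, if you want to verify it for yourself:

select power(2, 20) as MaxRowGroupSize; -- returns 1048576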
In SQL Server 2016 CTP 3.0 and in the current version of Azure SQL Database, Microsoft has effectively changed this limit for updatable Nonclustered Columnstore Indexes under certain circumstances, and this blog post is dedicated to discovering and testing this feature.
Operational Analytics
In Data Warehousing we load data very infrequently, but when we do, we load massive amounts of rows.
In Operational Analytics a typical operation updates just a few rows, but we do that very frequently, allowing hundreds or thousands of update processes to take place at the same time.
Sometimes the amount of operational data being inserted into a table can reach huge numbers, and actively compressing the closed Delta-Stores would put a significant load on the CPU – potentially hitting it hard at exactly the moment when it hurts the most.
For that purpose, in SQL Server 2016 CTP 3.0 and in Azure SQL Database, the Delta-Stores of updatable Nonclustered Columnstore Indexes can grow to bigger sizes under pressure, thus delaying the compression of the Delta-Store.
Under intensive workloads, the Delta-Stores will keep doubling in size, starting from our good old 1.048.576 rows (let's call it 1M rows) to 2M rows, then 4M, 8M, 16M and finally 32M rows as the maximum.
Creating a bigger Row Group allows delaying the compression impact on the system until the maximum has been reached.
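To visualise the progression, here is a minimal sketch that computes the expected caps, assuming a pure doubling of the base Row Group size (as we shall see below, the actually observed values differ slightly from the exact multiples):

-- Expected Delta-Store caps under the doubling assumption
select n as GrowthStep,
       1048576 * power(2, n) as DeltaStoreCap
    from (values (0),(1),(2),(3),(4),(5)) as steps(n);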
Let’s take this functionality for a ride, using the Azure SQL Database (13.0.702) version, which is very similar to the SQL Server 2016 CTP 3 version (13.0.700) but includes some additional updates.
I took a copy of the free ContosoRetailDW database to the cloud; note that you will need to copy the tables to one of the Premium instances in order to make it work.
Alternatively, you can simply use SQL Server 2016 CTP 3 to execute all of the scripts in this blog post.
Let’s create a copy of the FactOnlineSales table, calling it FactOnlineSales_NCCI; besides a clustered Primary Key, let’s add an updatable Nonclustered Columnstore Index covering all columns of our table:
CREATE TABLE [dbo].[FactOnlineSales_NCCI](
    [OnlineSalesKey] [int] NOT NULL Constraint PK_FactOnlineSales_NCCI Primary Key Clustered,
    [DateKey] [smalldatetime] NOT NULL,
    [StoreKey] [int] NOT NULL,
    [ProductKey] [int] NOT NULL,
    [PromotionKey] [int] NOT NULL,
    [CurrencyKey] [int] NOT NULL,
    [CustomerKey] [int] NOT NULL,
    [SalesOrderNumber] [nvarchar](20) NOT NULL,
    [SalesOrderLineNumber] [int] NULL,
    [SalesQuantity] [int] NOT NULL,
    [SalesAmount] [money] NOT NULL,
    [ReturnQuantity] [int] NOT NULL,
    [ReturnAmount] [money] NULL,
    [DiscountQuantity] [int] NULL,
    [DiscountAmount] [money] NULL,
    [TotalCost] [money] NOT NULL,
    [UnitCost] [money] NULL,
    [UnitPrice] [money] NULL,
    [ETLLoadID] [int] NULL,
    [LoadDate] [smalldatetime] NULL,
    [UpdateDate] [smalldatetime] NULL,
    Index NCCI_FactOnlineSales_NCCI Nonclustered Columnstore
        (OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
         SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
         ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
         ETLLoadID, LoadDate, UpdateDate)
)
Now let’s imagine we are adding some 100.000 rows to our operational table:
insert into dbo.[FactOnlineSales_NCCI]
SELECT TOP 100000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales]
To see the details of the Row Groups with data, I will use the newest addition to the CISL library (note that you will need version 1.0.3 or better) – the stored procedure cstore_GetRowGroupsDetails, filtering on the name of the table:
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
As you can see on the image above, we have got a perfectly normal Delta-Store with 100.000 rows.
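As a side note, if you do not have CISL at hand, a similar picture can be obtained directly from the new sys.dm_db_column_store_row_group_physical_stats DMV, available in SQL Server 2016 and Azure SQL Database – a minimal sketch:

select row_group_id, state_desc, total_rows, deleted_rows, size_in_bytes
    from sys.dm_db_column_store_row_group_physical_stats
    where object_id = object_id('dbo.FactOnlineSales_NCCI')
    order by row_group_id;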
Now let’s try to load more than 102.400 rows, expecting that we might get a compressed Row Group right away, as described in the post about the Bulk Load API magic number:
truncate table dbo.[FactOnlineSales_NCCI]

insert into dbo.[FactOnlineSales_NCCI]
SELECT TOP 150000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales]
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
Looks pretty much the same, doesn’t it? The same open Delta-Store, only this time it has 150.000 rows. This means that using the Bulk Load API does not have the effect on the Nonclustered Columnstore Index that one would expect from looking at the Clustered Columnstore in SQL Server 2014.
I totally agree with this behaviour. I do not want a compressed Row Group just because someone is loading a big number of rows – this is an OLTP solution, and so I definitely prefer less impact at the moment of the load, while enjoying better (bigger) Row Groups for my analytical queries later.
OK, one last try – let’s use the TABLOCKX hint. In SQL Server 2016 this should enable parallel insertion, and maybe it will help us get a compressed Row Group at once:
truncate table dbo.[FactOnlineSales_NCCI]

insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT TOP 150000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales]
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
Not really.
We should consider bigger amounts of data in order to observe the behaviour we are interested in, and for that purpose let’s load 2 Million Rows:
truncate table dbo.[FactOnlineSales_NCCI]

insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT TOP 2000000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales]
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
This time we have 2 Delta-Stores: the first one is closed, because it has reached the maximum number of rows for the first Row Group – 1.048.576 – while the second one keeps on growing, because it has not yet reached its own limit.
Let’s add an extra 200.000 rows to our table, and for that I will use the following query, which ensures that the data I am adding does not already exist in the FactOnlineSales_NCCI table:
insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT TOP 200000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales]
where OnlineSalesKey not in (select OnlineSalesKey from dbo.[FactOnlineSales_NCCI])
Checking on the Row Groups of our table:
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
Look – we have an open Delta-Store with 1.151.424 rows! Before SQL Server 2016 this was simply impossible.
The second Delta-Store will stay open until it reaches its limit, which in this case will be 2.097.152 rows – double the number of rows allowed in the first Delta-Store.
What about bigger numbers of rows – will the Delta-Stores grow even further?
Let’s load 10 Million Rows to observe the effects:
truncate table dbo.[FactOnlineSales_NCCI]

insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT TOP 10000000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales]
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
This result was observed right after the loading script finished: we can clearly see 4 Delta-Stores for 10 Million Rows. 3 of the Delta-Stores are closed and 1 Delta-Store is open – an absolutely impossible combination for Clustered Columnstore Indexes, where one would expect to find 10 compressed Row Groups or 10 Delta-Stores (9 closed & 1 open).
If you take a more detailed look at the sizes of the closed Delta-Stores, you will see that they increase each time a new Delta-Store is put to use. The first one is capped at 1.048.576 rows, the second one at 2.097.152, and the last closed Delta-Store at 4.193.904 rows – meaning that the size is essentially doubled each time.
What about the last, open Delta-Store? I guess that it is being kept open in order to reach the next maximum of over 8 Million Rows.
Let’s add another 6 Million Rows in order to verify that it really works this way.
IMPORTANT: you have to be rather quick in adding those 6 Million Rows, because once the Tuple Mover kicks in and starts compressing the closed Delta-Stores, the behaviour changes:
insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT TOP 6000000
    OnlineSalesKey - 30000000, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales]
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
Look, we have reached more than 8 Million Rows in a Delta-Store, as expected.
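By the way, if you want to verify that the Tuple Mover has not yet interfered with the experiment, a quick sketch against the same physical stats DMV counts the closed Delta-Stores that are still waiting for compression:

select count(*) as ClosedDeltaStores
    from sys.dm_db_column_store_row_group_physical_stats
    where object_id = object_id('dbo.FactOnlineSales_NCCI')
        and state_desc = 'CLOSED';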
Can we take it any further? Let’s double the table size by re-inserting the existing rows with their OnlineSalesKey values multiplied by -1, thus moving them into a negative range:
insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT
    OnlineSalesKey * -1, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales_NCCI]
where OnlineSalesKey * (-1) not in (select OnlineSalesKey from dbo.[FactOnlineSales_NCCI]);
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
This is an incredible picture – we have 5 Delta-Stores containing over 30 Million Rows!
We have not reached the limit of the last Delta-Store yet, but we are quite clearly heading for the 16 Million Rows in it.
The next thing to happen to our Columnstore structure is quite remarkable: the Tuple Mover will kick in and start compressing the closed Delta-Stores, splitting them into compressed Row Groups and leaving Tombstones in place of the consumed closed Delta-Stores, which will be removed by a later clean-up process.
On the picture above you will notice that the Delta-Store with Row Group ID = 0 has become a Tombstone, and as a result we have a new compressed Row Group with ID = 5.
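To watch this transition without CISL, a small sketch over the physical stats DMV lists the Tombstones and the freshly compressed Row Groups as they appear:

select row_group_id, state_desc, total_rows
    from sys.dm_db_column_store_row_group_physical_stats
    where object_id = object_id('dbo.FactOnlineSales_NCCI')
        and state_desc in ('TOMBSTONE', 'COMPRESSED')
    order by row_group_id;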
The advantage of having the bigger Delta-Store sizes be multiples of 1.048.576 is that every big Delta-Store will turn into a certain number of compressed Row Groups, each of them completely full.
Note: it looks like the numbers of rows inside the bigger Delta-Stores are not exact multiples of this number; I will be investigating this further.
After some minutes everything is ready; the only Row Group that was not processed is the last one we loaded. It has ~14.6 Million Rows, and we expect it to grow to the next corresponding maximum of ca. 16 Million Rows, and then to ca. 32 Million Rows, at which value the Delta-Stores would finally be capped.
Pressure Reset
The scaling of the Delta-Stores follows a predictable progression, but it can be affected if the Tuple Mover has kicked in in the meantime.
Consider the following example, where I will load 5 Million Rows into our table with a Nonclustered Columnstore Index:
truncate table dbo.[FactOnlineSales_NCCI]

checkpoint

insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT TOP 5000000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales];
Let’s take a look into the internal structures:
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
Now let’s wait until the automated Tuple Mover kicks in (its execution cycle runs every 5 minutes), or, if you are as impatient as me, you can invoke the Tuple Mover manually:
alter index NCCI_FactOnlineSales_NCCI on dbo.FactOnlineSales_NCCI Reorganize;
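As a side note, SQL Server 2016 also accepts an extended form of REORGANIZE that forces even the open Delta-Stores into compressed Row Groups – just do not run it in the middle of this experiment, since it would defeat the purpose of the big Delta-Stores we are observing:

alter index NCCI_FactOnlineSales_NCCI on dbo.FactOnlineSales_NCCI
    Reorganize with (COMPRESS_ALL_ROW_GROUPS = ON);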
Executing the CISL stored procedure that lists the available Row Groups, you will be able to observe the compression process turning the Delta-Stores into Tombstones and creating compressed Row Groups:
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
If you are lucky, you can even catch the moment when a Row Group converts its status from HIDDEN to COMPRESSED.
According to the previously observed behaviour, should we load more data, the current open Delta-Store should grow to 4.193.904 rows. Let’s see if this is still true, given that the Tuple Mover has already alleviated some of the pressure.
In the following script I shall insert 5 Million Rows from the original table, ensuring that they were not inserted before:
insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT TOP 5000000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales]
where OnlineSalesKey not in (select OnlineSalesKey from dbo.[FactOnlineSales_NCCI])
Let’s take a look at the result:
exec dbo.cstore_GetRowGroupsDetails @tableName = 'FactOnlineSales_NCCI'
Well, surprise! Our Delta-Store was closed after reaching 2.097.152 rows – the closest multiple of 1.048.576 above its previous size (by the same logic, a Delta-Store holding 9 Million Rows would be closed at ca. 10 Million Rows, once its row count hits a multiple of the maximum Row Group size). Since the pressure was relaxed by allowing the previous Delta-Stores to become compressed, we get a kind of restart: the next Delta-Store will again be capped at 2.097.152 rows, then at 4.193.904 rows, and so on …
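Here is a minimal sketch of the closing rule as I understand it from these observations – the rounding logic is my assumption, not documented behaviour:

-- Hypothetical illustration: under relaxed pressure a Delta-Store appears to close
-- at the next multiple of 1048576 above its current row count.
declare @currentRows bigint = 1500000;  -- example value, for illustration only
select ((@currentRows / 1048576) + 1) * 1048576 as ExpectedClosingPoint;  -- returns 2097152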
This way, whenever there is an opportunity to release the pressure from the Columnstore Index, the closed Delta-Stores will be compressed and the mathematics of the growing Delta-Stores will be restarted, because most of the time there should be no real need for the oversized ones.
Unless, that is, you are building a Nonclustered Columnstore Index for a Business Intelligence solution, because some of your columns cannot be included into a Clustered Columnstore Index.
Delta-Store compression under Nonclustered Columnstore
I have shown in Columnstore Indexes – part 68 (“Data Loading, Delta-Stores & Vertipaq Compression Optimisation”) that for Clustered Columnstore Indexes with no secondary Rowstore indexes the Delta-Stores are no longer compressed with Page compression – but what about the updatable Nonclustered Columnstore Indexes?
Let’s restart the loading process and load 5 Million Rows into our test table again:
truncate table dbo.[FactOnlineSales_NCCI]

checkpoint

insert into dbo.[FactOnlineSales_NCCI] with (TABLOCKX)
SELECT TOP 5000000
    OnlineSalesKey, DateKey, StoreKey, ProductKey, PromotionKey, CurrencyKey, CustomerKey,
    SalesOrderNumber, SalesOrderLineNumber, SalesQuantity, SalesAmount, ReturnQuantity,
    ReturnAmount, DiscountQuantity, DiscountAmount, TotalCost, UnitCost, UnitPrice,
    ETLLoadID, LoadDate, UpdateDate
FROM [dbo].[FactOnlineSales];
Now, using the sys.internal_partitions DMV, let’s see what kind of compression is being used for the Delta-Stores:
select object_name(part.object_id) as TableName,
       ind.name as IndexName,
       part.internal_object_type,
       part.internal_object_type_desc,
       part.row_group_id,
       part.rows,
       part.data_compression,
       part.data_compression_desc
    from sys.internal_partitions part
        left outer join sys.indexes ind
            on part.object_id = ind.object_id and part.index_id = ind.index_id
    where part.object_id = object_id('dbo.FactOnlineSales_NCCI')
        and internal_object_type = 3 /* Delta-Stores */
As you can see, all of the Delta-Stores are using PAGE compression, which is rather strange behaviour at this point. I would definitely prefer to see them using no compression at all: fewer resources would be spent on the conversion to Columnstore (no need to decompress the data first), and insertion would be more effective, as it is in the case of Clustered Columnstore Indexes.
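For comparison, the very same check can be pointed at a table with a Clustered Columnstore Index – note that dbo.FactOnlineSales_CCI below is a hypothetical copy of our table, not one created in this post:

select object_name(part.object_id) as TableName,
       part.row_group_id,
       part.rows,
       part.data_compression_desc -- for a CCI without secondary Rowstore indexes, NONE is expected (see part 68)
    from sys.internal_partitions part
    where part.object_id = object_id('dbo.FactOnlineSales_CCI') /* hypothetical CCI table */
        and internal_object_type = 3 /* Delta-Stores */;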
Additionally, parity between the different types of Columnstore Indexes would make understanding and managing their behaviour much easier.
Final Thoughts
Now that we have Delta-Stores that can contain more than 1.048.576 rows, is this finally a step in the direction of bigger Row Groups?
I would definitely love to think so.
I do not see a big need for the same functionality in Clustered Columnstore Indexes, since the Bulk Load API already converts the loaded Row Groups into compressed ones immediately, and delaying compression there is not something that is commonly required.
I am also wondering about the choice of some of the numbers for the maximum Delta-Store sizes … they are not exact multiples of 1.048.576.
Stay tuned – this will be investigated.
to be continued with Columnstore Indexes – part 74 (“Row Group Merging & Cleanup, SQL Server 2016 edition”)