June 12, 2014 at 3:28 am
Hi all,
I've tested a simple INSERT statement using a natively compiled stored procedure. It was fast at first compared to a normal stored procedure, but when I continued to call the same natively compiled procedure a few more times, the performance was exactly the same as the normal procedure. Does anyone know why?
June 12, 2014 at 5:29 am
Without seeing the query plan, the structures, the query... nope, not really.
My tests show a pretty consistent faster behavior for the natively compiled procedures.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 12, 2014 at 9:58 pm
May I know what your test script looks like? Have you tried a WHILE loop vs. a normal INSERT statement for 10k rows?
June 13, 2014 at 3:50 am
A WHILE loop? No, I haven't done that sort of testing at all. It's not something I'd write within T-SQL or expect to perform well, natively compiled or not.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 13, 2014 at 3:58 am
I'm comparing natively compiled procedures against normal procedures. My script is below.
--- Normal SP
CREATE PROCEDURE xx_insert_Normal
AS
BEGIN
    -- One explicit transaction around all 100,000 singleton inserts
    BEGIN TRAN;

    DECLARE @i INT = 1;
    WHILE @i <= 100000
    BEGIN
        INSERT INTO [dbo].[memorytable] VALUES (@i, 'data');
        SET @i += 1;
    END;

    COMMIT;
END;
--- Natively compiled SP
CREATE PROCEDURE xx_insert_NC
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The ATOMIC block is itself the transaction; no explicit BEGIN TRAN/COMMIT
    DECLARE @i INT = 1;
    WHILE @i <= 100000
    BEGIN
        INSERT INTO [dbo].[memorytable] VALUES (@i, 'data');
        SET @i += 1;
    END;
END;
June 13, 2014 at 5:52 am
I've done all the looping outside the procedure.
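Roughly like this - a minimal sketch only, where the procedure name, parameter types and driver loop are all assumptions rather than the actual test harness:

-- Hypothetical singleton-insert procedure; all looping is left to the caller.
CREATE PROCEDURE dbo.xx_insert_single @i INT, @data VARCHAR(20)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.memorytable VALUES (@i, @data);
END;
GO

-- Driver loop outside the procedure (shown here in plain T-SQL; a real test
-- would use a client application, ideally across many concurrent connections).
DECLARE @i INT = 1;
WHILE @i <= 100000
BEGIN
    EXEC dbo.xx_insert_single @i, 'data';
    SET @i += 1;
END;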
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 13, 2014 at 7:57 am
What's the DDL for memorytable?
If you're making it durable, then you may be bottlenecking on I/O - either on transaction logging, or on the sequential writes out to the data files. Transaction logging is virtually the same whether you use durable memory-optimised tables or disk-based tables.
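For example, the table might look something like this - a sketch only, since the DDL hasn't been posted, so the column types and bucket count are guesses:

-- Guessed DDL: column types and BUCKET_COUNT are assumptions.
-- DURABILITY = SCHEMA_AND_DATA logs every insert; SCHEMA_ONLY would take
-- transaction logging out of the test entirely.
CREATE TABLE dbo.memorytable
(
    id   INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
    data VARCHAR(20) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);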
The WHILE loop itself carries quite a large overhead and is obviously serial - you may also bottleneck here, long before you reach your I/O or interpretation bottleneck.
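For comparison, a set-based version in interpreted T-SQL (the column list here is an assumption) inserts the same 100,000 rows in one statement, with no per-row loop overhead at all:

-- Set-based insert via interop T-SQL (not natively compiled): generates
-- 100,000 numbered rows in a single statement, removing the WHILE loop.
INSERT INTO dbo.memorytable (id, data)
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'data'
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;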
Hekaton is all about parallel concurrency. A more realistic test would be to drive an external parallel application to execute inserts and ramp them up over time, measuring the throughput differences between the two. If you're not bottlenecking on transaction logging, and you have spinning disks, in-memory + native compilation will undoubtedly win out on a pure singleton insert test as it only performs sequential writes, but maybe not by as much as you think. If you're on SSDs, the difference may be negligible as random write performance can be nearly the same as sequential.