July 29, 2014 at 8:11 pm
Hi, I am doing performance testing for the In-Memory OLTP option in SQL Server 2014. As part of this I want to insert 500 million rows into an in-memory enabled test table I have created.
I need a sample script to insert 500 million records into a table, please. URGENT! Any help is much appreciated.
July 29, 2014 at 8:21 pm
v4vaas 19815 (7/29/2014)
Hi, I am doing performance testing for the In-Memory OLTP option in SQL Server 2014. As part of this I want to insert 500 million rows into an in-memory enabled test table I have created. I need a sample script to insert 500 million records into a table, please. URGENT! Any help is much appreciated.
Here is a simple insert using a Tally CTE:
😎
USE tempdb;
GO
SET NOCOUNT ON;
DECLARE @SET_SIZE INT = 500000000; -- number of rows to generate
CREATE TABLE dbo.Test500Million (N INT PRIMARY KEY CLUSTERED NOT NULL);
-- T is a ten-row set; cross-joining nine copies of it yields up to 10^9 rows,
-- and TOP caps the output at @SET_SIZE.
;WITH T(N) AS (SELECT N FROM (VALUES (NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL)) AS X(N))
,NUMS(N) AS (SELECT TOP(@SET_SIZE) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM T T1, T T2, T T3, T T4, T T5, T T6, T T7, T T8, T T9)
INSERT INTO dbo.Test500Million(N)
SELECT N FROM NUMS;
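A single 500-million-row insert can be heavy on the log and on tempdb. If that becomes a problem, a batched variant along these lines loads the rows in chunks; this is only a sketch, and @Target and @BatchSize are illustrative values, not figures from this thread.
-- Sketch only: loads dbo.Test500Million in 10-million-row batches.
DECLARE @Target    BIGINT = 500000000; -- illustrative total
DECLARE @BatchSize BIGINT = 10000000;  -- illustrative chunk size
DECLARE @Start     BIGINT = 1;
WHILE @Start <= @Target
BEGIN
    -- eight cross joins of a ten-row set give up to 10^8 rows per pass
    ;WITH T(N) AS (SELECT N FROM (VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) AS X(N))
    ,NUMS(N) AS (SELECT TOP(@BatchSize) @Start - 1 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                 FROM T T1, T T2, T T3, T T4, T T5, T T6, T T7, T T8)
    INSERT INTO dbo.Test500Million (N)
    SELECT N FROM NUMS;
    SET @Start += @BatchSize;
END;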
July 30, 2014 at 8:56 pm
Thank you so much.
Please help with below scenario.
I am trying to copy the 500 million rows of Table A from Server A (SQL 2008 R2), in chunks of 100 million each, to Table B (an in-memory enabled table) on Server B (SQL 2014). I want to use the import/export wizard with the option "Write a query to specify the data to transfer". Can someone please provide a script for this scenario? URGENT! Any help is much appreciated.
July 30, 2014 at 9:02 pm
I believe you are going about this the wrong way.
How big is your table with 500 million records?
How many indexes do you plan to have on that table?
How much memory do you have allocated to SQL Server on the target server?
How much free space do you have on the disks of the target server?
Does the table that is desired to be in memory contain out of row data (LOBs)?
All of these things are critical and could impact your ability to put that data into memory on the target server.
Next concern.
Why wouldn't you move the data to a table on the target server first, then enable that new table to be in-memory?
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
July 30, 2014 at 9:46 pm
Hi Jason,
Thanks for your reply.
OK, the purpose of this test is to conduct a performance stress test by loading the data into an in-memory enabled table, so I created the table structure first on the target server with in-memory enabled.
Coming to your questions,
How much memory do you have allocated to SQL Server on the target server? - 220 GB of memory
How much free space do you have on the disks of the target server? - 5.3 TB of disk, which is more than sufficient for this data
Does the table that is desired to be in memory contain out of row data (LOBs)? - No
All of these things are critical and could impact your ability to put that data into memory on the target server.
Next concern.
Why wouldn't you move the data to a table on the target server first, then enable that new table to be in-memory?
--------------------------------
Please advise.
July 30, 2014 at 10:06 pm
What is the size of your table?
How many indexes?
And is that 220GB allocated explicitly to SQL Server or is that what is available to the server? If explicitly, how much have you left to the OS and other apps?
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
July 30, 2014 at 10:31 pm
Hi,
Size of the table: 195 GB
Number of Indexes: 90
220 GB of memory is allocated to SQL Server alone, out of 260 GB total server memory.
July 30, 2014 at 10:47 pm
Quick suggestion: create multiple views on the source server, e.g. vw_source_1_100000000, vw_source_100000001_200000000 etc., where you filter the rows into right-sized chunks.
Then use the import/export wizard to append them to the target table one at a time.
You could of course do this with a table-valued function, but I think the views are simpler; a sketch follows below.
😎
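A minimal sketch of the chunking views, assuming the source table is dbo.TableA with a sequential integer key Id (both placeholder names, not from this thread):
-- Sketch only: dbo.TableA and its key column Id are placeholders.
-- Each view exposes one 100-million-row slice of the source table.
CREATE VIEW dbo.vw_source_1_100000000 AS
SELECT * FROM dbo.TableA WHERE Id BETWEEN 1 AND 100000000;
GO
CREATE VIEW dbo.vw_source_100000001_200000000 AS
SELECT * FROM dbo.TableA WHERE Id BETWEEN 100000001 AND 200000000;
GO
-- ...and so on up to 500000000.
In the wizard, choose "Write a query to specify the data to transfer" and run SELECT * FROM dbo.vw_source_1_100000000; (and so on) for each chunk in turn.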
July 31, 2014 at 7:48 am
With 90 indexes you are over a constraint right there.
You are limited in the number of indexes (8, iirc) you can create on an in-memory table.
With the table being 195 GB, I would look at a piecemeal-restore type of scenario to get the data to the prod server. That is a ton of data to move across via import/export: doable, but it would take a long time and could cause some resource issues.
The next caution is that 195 GB is getting pretty close to the top end of what MS has documented for in-memory use within an instance. The max (currently) is ~250 GB, and you would be taking most of that for a single table.
Is this table for OLTP type of work or for warehouse type of work? If it is warehouse work, why not go with a columnstore index or two instead? Columnstore indexes were vastly improved in 2014.
If you are set on doing it with the in-memory table and via export data, clicking through the import/export GUI is pretty straightforward.
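For reference, the target table has to be declared as memory-optimized up front; a minimal sketch (table and column names are placeholders, not from this thread):
-- Sketch only: dbo.TestInMemory, Id and Payload are placeholder names.
-- SQL Server 2014 allows at most 8 indexes on a memory-optimized table,
-- and the primary key must be nonclustered (hash or range).
CREATE TABLE dbo.TestInMemory
(
    Id      INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload VARCHAR(200) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);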
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
July 31, 2014 at 7:56 am
OK, thanks for all your inputs. Appreciate it.
July 31, 2014 at 7:59 am
I'm also a bit concerned that the table is 195 GB with only 500 million records.
Could you run the script from here and paste the results for that table? It could be the 90 indexes causing bloat, it could be something hidden, or it could be that the table is extremely wide. In any case it seems a bit bloated.
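If that script isn't handy, a per-index size breakdown along these lines shows where the space is going (a sketch; dbo.TableA is a placeholder for the actual table name):
-- Sketch only: dbo.TableA is a placeholder for the 195 GB table.
-- Reports reserved/used space per index so the source of the bloat is visible.
SELECT  i.name        AS index_name,
        i.type_desc,
        SUM(a.total_pages) * 8 / 1024 AS reserved_mb,
        SUM(a.used_pages)  * 8 / 1024 AS used_mb,
        SUM(CASE WHEN a.type = 1 THEN p.rows ELSE 0 END) AS row_count
FROM    sys.indexes i
JOIN    sys.partitions p ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN    sys.allocation_units a ON a.container_id = p.partition_id
WHERE   i.object_id = OBJECT_ID('dbo.TableA')
GROUP BY i.name, i.type_desc
ORDER BY reserved_mb DESC;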
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
July 31, 2014 at 8:02 am
Please remember that the point of in-memory is to improve *massively concurrent* inserts, not single inserts of large numbers of rows. If you don't have massive numbers of concurrent inserts (lots and lots and lots and lots of small inserts), you may be looking at the wrong feature.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 31, 2014 at 8:16 am
GilaMonster (7/31/2014)
Please remember that the point of in-memory is to improve *massively concurrent* inserts, not single inserts of large numbers of rows. If you don't have massive numbers of concurrent inserts (lots and lots and lots and lots of small inserts), you may be looking at the wrong feature.
Very good. I'm glad you brought that one up.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events