September 23, 2014 at 7:46 am
I come from a web-based world where loading 1.5 million records into a temp table is suicide. I'm doing more data warehouse work now, and while looking into optimizing a buddy's proc I noticed he was loading 1.5 million records into a temp table. We had a discussion about it because, coming from the web world, I was dead set against it. He, on the other hand, didn't feel it was an issue since the proc gets called once, maybe twice, a day. Tempdb is set to autogrow and sits on a different drive from all the other databases on the box. It has one MDF and one LDF. He creates an index on the table after the load.
Does anyone have a valid reason why we shouldn't be loading 1.5 million records into a temp table?
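For reference, the pattern in question looks roughly like this (table and column names are invented for illustration, not the actual proc):

-- Hypothetical sketch of the staging step being debated
SELECT o.OrderID, o.CustomerID, o.OrderDate, o.TotalDue
INTO #Staging                        -- ~1.5 million rows land here
FROM dbo.Orders AS o
WHERE o.OrderDate >= '2014-01-01';

-- Index created after the load, as described above
CREATE CLUSTERED INDEX IX_Staging_OrderID ON #Staging (OrderID);

-- ... downstream queries against #Staging ...

DROP TABLE #Staging;                 -- cleaned up when the proc finishes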
September 23, 2014 at 7:59 am
As long as production users aren't being affected, I don't see a problem with it. I think the only real limit is your hardware.
September 23, 2014 at 7:59 am
It's hard to comment without seeing the whole process. It does seem like a large number to be inserting into a temp table, but if done correctly, it may be a valid thing to do.
For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help
September 23, 2014 at 8:03 am
I do this a lot on my end, but remember that the differences between tempdb and any other database are minimal, especially when it comes to limits. It's just another database, one that always runs in the simple recovery model. The other difference is that tempdb is recreated on startup, so objects such as tables don't survive a restart.
For me, as long as you are aware of the data you are loading and the requirements of doing so (disk space, tempdb sizing, etc.), it doesn't sound like an issue. I typically use temp tables and then drop them when I'm done, and I've used them on far more than 1.5 million records. As long as I have the resources, it doesn't impact anyone.
Quoting a decent article on some things to keep in mind - https://www.simple-talk.com/sql/t-sql-programming/temporary-tables-in-sql-server/
Because temporary tables are stored just like base tables, there are one or two things you need to be wary of. You must, for example, have CREATE TABLE permission in TempDB in order to create a normal table. To save you the trouble, this is assigned by default to the DBO (db owner) role, but you may need to do it explicitly for users who aren’t assigned the DBO role. All users have permissions to create local or global temporary tables in TempDB because this is assigned to them via the GUEST user security context.
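If you want to verify the recovery model point yourself, here's a quick check against the standard catalog view (nothing here is specific to this thread):

-- tempdb always reports SIMPLE recovery
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'tempdb';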
September 23, 2014 at 1:01 pm
Loading 1.5 million records into a temp table should not be the question to ask. The main question is: do you really need the whole set there? With a properly designed server and SQL Server instance, that volume should not be a problem.
Having said that, if tempdb is on a slow drive or LUN, or if you are putting more columns in that temp table than you really need, that can generate extra overhead on your SQL Server.
Another important thing is the indexes on the temp table. Check your logic and execution plan and make sure you really need an index there. I've fixed or flagged a lot of code at work where clustered indexes were applied for no reason; after removing them, queries that had been taking half an hour or several minutes went down to seconds. See the sketch below.
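To make that concrete, here's a minimal sketch (table and column names are invented) of trimming the load down to only what the proc actually uses, and only adding an index if the plan shows it pays off:

-- Hypothetical: pull only the columns and rows the downstream steps need
SELECT o.OrderID, o.CustomerID, o.OrderDate
INTO #RecentOrders
FROM dbo.Orders AS o
WHERE o.OrderDate >= DATEADD(DAY, -30, GETDATE());

-- Only create this if the execution plan shows it actually helps
-- CREATE CLUSTERED INDEX IX_RecentOrders ON #RecentOrders (OrderID);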
September 23, 2014 at 1:39 pm
Generally, there is no issue with loading 1.5 million records into a temp table.
But in your case, you haven't provided enough information to argue the question. "1.5 million records" doesn't say much by itself. It could be a table with one int column, so the total size is around 6 MB, which is no big deal at all, or it could have some BLOB columns, in which case the size jumps drastically. What happens with the table after it's loaded and the index is created? How does the procedure use it afterwards?
If you say don't use a temp table, then what is your alternative? Is it better? How?
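If you want to see how big the temp table actually ends up, you can check from the session that created it (the temp table name here is just an example):

-- Reports rows, reserved space, data size, and index size for the temp table
EXEC tempdb..sp_spaceused '#Staging';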