January 15, 2004 at 10:20 am
I have read the FAQ on the site, and it states:
1.
Answer:
If any statement that contains the name of the temporary table appears before the temporary table is created, the stored procedure will recompile every time it is run. For example:

1. Efficient way:

create proc au_info_sp
as
create table #au_temp (lname varchar(40), fname varchar(20), phone char(12))
insert #au_temp (lname, fname, phone)
select au_lname, au_fname, phone from authors
select * from #au_temp
drop table #au_temp

2. Avoidable way:

create proc au_info_sp
as
select au_lname, au_fname, phone into #au_temp from authors
select * from #au_temp
drop table #au_temp

The second procedure will recompile every time you run it.
Now, I have tested this using Profiler and I do NOT get these results. On a SQL 2000 box the plan went into the cache after the first run, and it was hit every time after that. Am I missing something? Help me out here, guys; I took this as gospel and quoted it, and now I can't prove it.
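For anyone who wants to repeat the test, a rough sketch of the kind of check involved (this assumes SQL 2000, the pubs database, and the au_info_sp procedure from the FAQ quote; the SP:Recompile event class in Profiler is the other thing to watch):

-- Run the procedure a few times (in the pubs database)
EXEC au_info_sp
EXEC au_info_sp
EXEC au_info_sp

-- Then look at the plan cache: a usecounts value that keeps climbing
-- on the same entry suggests the plan is being reused, not recompiled.
SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%au_info_sp%'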
Also, just for my own info, wouldn't a LONG-running SELECT ... INTO still cause a block on the sysobjects table for concurrent runs of the same procedure?
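To make the question concrete, here is a sketch of the sort of test I have in mind (bigtable stands in for any large source table; sp_lock is the SQL 2000 way to list the locks being held):

-- Session 1: a deliberately long-held SELECT ... INTO (bigtable is a placeholder)
BEGIN TRAN
SELECT * INTO #slow_temp FROM bigtable
-- leave the transaction open for a moment before rolling it back

-- Session 2: while session 1 still has the transaction open, list the locks
EXEC sp_lock
-- check which locks show up under dbid 2 (tempdb), and on which objects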
Thanks
January 15, 2004 at 3:24 pm
One of the main reasons to avoid 'select ... into' with temp tables is that it can cause blocking in Tempdb.
--------------------
Colt 45 - the original point and click interface
January 15, 2004 at 3:35 pm
I read that everywhere, and up until today you would have heard me making the same statement. How do we prove it? What I am seeing when running Profiler is that each run of the "Select Into" procedure creates a different instance of the table, so the lock on the system table in tempdb is no worse than it would be with a Create Table/Insert Into.
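Roughly, you can see this for yourself with something like the following while two sessions are each inside the procedure (a sketch, reusing the #au_temp name from the FAQ quote):

-- tempdb holds a separately named object for each session's copy of #au_temp
SELECT name, id, crdate
FROM tempdb.dbo.sysobjects
WHERE name LIKE '#au_temp%'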
Understand I am NOT trying to go against the grain; I just want to back up what we are saying.
January 15, 2004 at 3:41 pm
When SQL Server creates a temp table with SELECT INTO, it determines the column sizes and data types from the result set that comes back, not from the columns the data is coming from.
So this means that the entire result set has to be returned before the table can be created. I do not know whether SQL Server holds a lock on the sysobjects table for the whole duration of the query or just at the end, when it creates the table.
However, another reason I don't like to use SELECT INTO unless I know for certain what data the query will return is that, if a column contains only nulls or empty values, you will get an error stating that it cannot create a column of size 0.
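A sketch of the workaround I have in mind (the notes column and the authors_notes table here are made up, purely for illustration): cast the expressions explicitly so the temp table's column definitions don't depend on whatever data happens to come back.

-- notes is assumed to be a column that can be entirely NULL or empty;
-- the explicit CAST pins the temp table's column type and size.
SELECT au_id,
       CAST(notes AS varchar(200)) AS notes
INTO   #au_notes
FROM   authors_notes   -- hypothetical source table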
January 15, 2004 at 3:42 pm
The length of time the lock is held would depend on a number of variables: number of users, size of the table, quality of the hardware, etc.
I've had it happen to me. The developers tested it in their playground and management were happy with that, so we implemented it in production and released the update to the masses. The following day we restarted the server 4 times before the update was rolled back. The developers screamed for more hardware, but I just changed the 'select .. into' to a 'create table ...' and re-released the update. Problem solved.
--------------------
Colt 45 - the original point and click interface
January 15, 2004 at 3:47 pm
I have had a similar situation occur at another company, Phill; however, the question to the masses is, how do we prove it before we break that production box? Load testing would be an option, but for most things we test without putting a substantial load on the box. Hmmmm??
January 15, 2004 at 4:06 pm
We had a similar performance problem caused by excessive locking. Our performance improved 4-fold when we added the WITH (NOLOCK) table hint to all of the SELECT queries in our stored procedures.
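For anyone unfamiliar with the syntax, this is all the change amounts to; a sketch against the pubs authors table rather than our actual procedures:

-- Before: the read takes shared locks as it goes
SELECT au_lname, au_fname, phone
FROM   authors

-- After: the read does not request shared locks (read uncommitted)
SELECT au_lname, au_fname, phone
FROM   authors WITH (NOLOCK)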
This problem only became detectable because of the heavy load that our client sites get.
My point is, some things only show that they are bad when put under a very heavy load. Under mild conditions, the effects are negligible.
January 15, 2004 at 6:20 pm
FYI, "SELECT...INTO" need to be used when the table is created dynamically (not know the table exact structure before runtime), comparing to "CREATE TABLE" or "INSERT INTO". "SELECT...INTO" will perform lock to the tempdb and also the source table by issuing exclusive lock.
One of the method to avoid locking, try to use "SELECT...INTO...WHERE 1=0". By using 'always false' condition in the select statement will make the query run faster because SQL Server is "smart" enough to ignore 'pumping data' into the new create table but just create the blank table structure.
After the table created successfully, use "INSERT INTO" to 'pump' data into the table. By using this method, you can overcome many problems like locking at table level.
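A minimal sketch of the pattern, reusing the authors example from the FAQ quote:

-- Step 1: WHERE 1 = 0 returns no rows, so only the empty table structure is created
SELECT au_lname, au_fname, phone
INTO   #au_temp
FROM   authors
WHERE  1 = 0

-- Step 2: load the data as a separate, ordinary insert
INSERT INTO #au_temp (au_lname, au_fname, phone)
SELECT au_lname, au_fname, phone
FROM   authors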
ALWAYS use the table hint "WITH (NOLOCK)" when querying tables/views in stored procs or applications. With this table hint, SQL Server does not take exclusive or shared locks on the table/rows. The benefits are that less time is needed (SQL Server does not have to do the extra work of taking locks) and you do not hit deadlocks.
Hope the explanation above helps and answers all the questions.
Regards,
kokyan
January 15, 2004 at 6:37 pm
You should only use the "WITH (NOLOCK)" table hint when you are not concerned about the quality of the data.
When NOLOCK is used it is possible to read data from an uncommitted transaction, or from a transaction that is being rolled back. This is what's known as a dirty read.
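A quick two-connection sketch of how a dirty read happens, again using the pubs authors table (the au_id is one of the sample rows):

-- Connection 1: change a row but do not commit
BEGIN TRAN
UPDATE authors SET phone = '000 000-0000' WHERE au_id = '172-32-1176'
-- ... later: ROLLBACK TRAN

-- Connection 2: in the meantime, a NOLOCK read returns the uncommitted value
SELECT phone FROM authors WITH (NOLOCK) WHERE au_id = '172-32-1176'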
--------------------
Colt 45 - the original point and click interface
January 15, 2004 at 6:43 pm
phillcart is correct. I agree with him. Sorry for not including this important point in my answer.
Regards,
kokyan