March 18, 2009 at 9:29 am
Hi Masters,
I have a problem.
I have a table in my SQL Server.
The character columns of my table are NVARCHAR (Unicode), with the default collation of Latin1_General_CI_AS.
I have a text file with text that I need to import into this table through BULK INSERT.
My bulk insert statement is like this:
BULK INSERT bulk_tablename
FROM 'C:\......................\21.txt'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = '|',
ROWTERMINATOR = ''
)
When I finish importing the data, I go to my table and see that some characters are not right. Is this because of the collation? How can I resolve this situation, masters?
tks,
Pedro
March 18, 2009 at 6:22 pm
Make sure your file is saved in Unicode format before you do the import; we had similar issues. After making sure the file was saved as Unicode using Notepad++ or another editor, we didn't have any issues with later imports.
Another note for SQL Server 2000: have a read of this http://sqllearnings.blogspot.com/search/label/OSQL if you ever use OSQL.
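If converting the file by hand ever becomes impractical, BULK INSERT can also be told the file is Unicode directly via the DATAFILETYPE option (the file still has to actually be saved as UTF-16). A minimal sketch; the table name, file path, and the newline row terminator are placeholder assumptions:

```sql
-- Sketch: declaring the input file as Unicode (UTF-16) so NVARCHAR
-- columns are populated correctly. Names and path are placeholders.
BULK INSERT dbo.bulk_tablename
FROM 'C:\import\input.txt'
WITH
(
    FIRSTROW        = 2,            -- skip the header row
    FIELDTERMINATOR = '|',
    ROWTERMINATOR   = '\n',         -- assumed newline terminator
    DATAFILETYPE    = 'widechar'    -- treat the file as Unicode
);
```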
Thanks.
Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN
Microsoft FTE - SQL Server PFE
* Sometimes it's the search that counts, not the finding...
* I didn't think so, but if I was wrong, I was wrong. I'd rather do something and make a mistake than be frightened and do nothing.
March 19, 2009 at 2:30 am
Hi, thanks for the reply.
I cannot change the file to Unicode, because I receive the txt file from other people.
Isn't there any other workaround?
I really need to import this file and this work is urgent.
Can someone help, please?
tks,
Pedro
P.S. - This BULK INSERT command is called from a stored procedure that I have on my server.
I'm not calling this method through OSQL.
March 19, 2009 at 8:32 am
The situation was resolved by saving the txt file as Unicode.
Thanks for your help.
Another question:
What about TABLOCK? I have been reading about this option, but I don't understand it very well.
My bulk statement is now like this:
BULK INSERT bulk_Table
FROM 'C:\............\file.txt'
WITH
(
FIRSTROW =2,
FIELDTERMINATOR ='|',
ROWTERMINATOR ='',
DATAFILETYPE ='widechar'
)
go
By default, is the table into which I import this data (through the command above) locked, or do I have to put this option (TABLOCK) in the statement above so that performance is better?
My other question is about a clustered index. My table (to which I'm saving the data) has no indexes. If I create a clustered index and order the data (in the BULK command), will I gain performance over a normal table which has no indexes?
thanks,
Pedro
March 19, 2009 at 10:53 am
pedro.ribeiro (3/19/2009)
The situation was resolved by saving the txt file as Unicode. Thanks for your help.
Good stuff; I know it is a bit annoying, but that is the best I could do in SQL Server 2000.
Other question:
What about TABLOCK? I have been reading about this option, but I don't understand it very well.
My bulk statement is now like this:
A TABLOCK hint can be used to lock the entire table up front. Normally, SQL Server grabs locks on objects through an escalating mechanism: it takes fine-grained locks on rows -> pages, and escalates to a table-level lock only when the finer-grained locks become too numerous.
This can improve performance because SQL Server doesn't have to figure out and manage all of that fine-grained locking (and, with the right recovery model, TABLOCK allows a BULK INSERT to be minimally logged). The problem is that while this lock exists, no other session can do any work on the table that requires a conflicting lock. So if your bulk_table is a staging table, you should be able to use TABLOCK on it without too much of a problem.
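As an illustration, a TABLOCK bulk load might look like the sketch below; while it runs, a second session can observe the table-level lock through the sys.dm_tran_locks DMV (SQL Server 2005+; the table name and file path are placeholders):

```sql
-- Sketch: bulk load holding a table-level lock for the whole operation.
BULK INSERT dbo.bulk_tablename
FROM 'C:\import\input.txt'
WITH (FIELDTERMINATOR = '|', DATAFILETYPE = 'widechar', TABLOCK);

-- From a second session, while the load is running:
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_associated_entity_id = OBJECT_ID('dbo.bulk_tablename');
-- You should see a granted table-level (OBJECT) lock held by the load.
```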
My other question is about a clustered index. My table (to which I'm saving the data) has no indexes. If I create a clustered index and order the data (in the BULK command), will I gain performance over a normal table which has no indexes?
Short answer: you should have a clustered index on every table, so I would create one on an identity column.
Long answer: if you are just inserting new data, you might not see a major improvement, because the inserts happen in the free space available in the pages reserved for the table. If you create the clustered key on an identity column, every new row goes at the very end; but if you create it on a key where a new row can land in the middle of the clustered index (because of the sort order), then your clustered index will have to do extra work (page splits) when you insert information. That is not to say a clustered index is bad in this case, but rather that a questionable choice of key can be.
But a clustered index has other benefits for updating and selecting information. If you are updating rows in a table that has a clustered index, it is easier to locate each row; without one, every row may have to be examined, which can be costly if you have lots of rows. If you are selecting and you have no other indexes, then searching on the clustered key can save you time, because once you reach the leaf page of the clustered index you have the actual data for the entire row...
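A sketch of what such a table might look like, with the clustered index on an identity column; the column names here are made up for illustration:

```sql
-- Sketch: staging table clustered on an ever-increasing identity key,
-- so bulk-inserted rows are always appended at the end of the index
-- instead of causing page splits in the middle.
CREATE TABLE dbo.bulk_tablename
(
    row_id INT IDENTITY(1, 1) NOT NULL,
    col1   NVARCHAR(100) NULL,
    col2   NVARCHAR(100) NULL,
    CONSTRAINT PK_bulk_tablename PRIMARY KEY CLUSTERED (row_id)
);
```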
I hope that helps a bit. Thanks.
March 20, 2009 at 9:16 am
Yes, thanks very much.
I only have one doubt about this post.
If I do:
delete from BULK_table
BULK INSERT BULK_table
FROM 'C:\..........'
WITH
(
FIRSTROW =2,
FIELDTERMINATOR ='|',
ROWTERMINATOR ='',
DATAFILETYPE ='widechar',
TABLOCK
)
Will the TABLOCK hint in the above BULK statement lock the table (the pages of the table) before inserting the data? And after the bulk insert finishes, will the lock end automatically?
tks,
Pedro
March 20, 2009 at 9:27 am
The lock will be acquired before your insert operation starts and will be held for the duration of the insert. It will be released automatically after the operation completes.
About the DELETE statement before the bulk insert: if you are wiping all the data from the table, TRUNCATE is the better option.
Thanks.
Mohit.
March 20, 2009 at 9:33 am
Why is TRUNCATE better than DELETE?
Is it faster? Why?
This table is just for temporary use.
I delete the data first and then insert the data through the bulk insert. Then I put the data into other tables. At the end of the process I delete the data again, because this table is only temporary.
In this case, can I use TRUNCATE?
tks,
Pedro
March 20, 2009 at 10:35 am
pedro.ribeiro (3/20/2009)
Why is TRUNCATE better than DELETE? Is it faster? Why?
This table is just for temporary use.
I delete the data first and then insert the data through the bulk insert. Then I put the data into other tables. At the end of the process I delete the data again, because this table is only temporary.
In this case, can I use TRUNCATE?
tks,
Pedro
TRUNCATE is better than DELETE if you are wiping the table, because it is minimally logged. When you delete from a table, an entry is made in the transaction log for every row removed, and the indexes have to be maintained row by row. TRUNCATE just wipes the table clean: it deallocates all of the table's index and data pages so they can be reused by the database, and only those deallocations are logged. So in most cases it is much faster.
A few things to consider about TRUNCATE:
- It resets the seed value for any identity column.
- It can't be used on tables that are part of a dependency chain (i.e. the table being truncated is referenced by a foreign key from another table).
- Because individual rows are not logged, a committed TRUNCATE can't be undone from the transaction log (although it can be rolled back inside an explicit transaction), so be careful using it on production tables.
If you are clearing data from a staging table that temporarily holds data, TRUNCATE is the best option.
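For example (the table name is a placeholder):

```sql
-- Sketch: clearing a staging table before reloading it.
-- TRUNCATE deallocates the data and index pages and logs only the
-- deallocations, so it is usually much faster than DELETE; it also
-- resets any identity seed on the table.
TRUNCATE TABLE dbo.bulk_tablename;

-- Fully logged, row-by-row alternative:
-- DELETE FROM dbo.bulk_tablename;
```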
Thanks.
March 20, 2009 at 10:46 am
Thank you very much for all your help.
I understood everything, but I have doubts about the meaning of some words that you used.
What is a staging table?
What is a wipe?
tks,
Pedro
March 20, 2009 at 12:08 pm
What is a staging table?
I use the word "staging" instead of "temp", because "temp" to me implies that once you are done with the table it is dropped and no longer exists, like temp tables in SQL Server created with the "#" sign: after the session is over, those tables are destroyed, hence the term "temp". But when you have a table that exists to hold information temporarily so you can massage it, and then later import it into another table, I think of that as a staging table.
It is where you temporarily stage data that gets moved to another permanent location later.
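The whole pattern described above might be sketched like this (all object names and the file path are placeholders):

```sql
-- Sketch: the staging-table pattern.
TRUNCATE TABLE dbo.staging_import;                 -- start from a clean slate

BULK INSERT dbo.staging_import                     -- 1) land the raw file
FROM 'C:\import\input.txt'
WITH (FIELDTERMINATOR = '|', DATAFILETYPE = 'widechar', TABLOCK);

INSERT INTO dbo.permanent_table (col1, col2)       -- 2) move/massage the data
SELECT col1, col2
FROM dbo.staging_import;

TRUNCATE TABLE dbo.staging_import;                 -- 3) wipe for the next run
```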
What is a wipe?
That is clearing the table; since you are doing "DELETE FROM TableName" without a WHERE clause, you are clearing the entire table, aka a wipe. I don't think it is actual SQL terminology, but it explains the idea.