August 6, 2010 at 7:24 am
I have a database that was set up by the vendor with very large column lengths, like varchar(254), while the longest value stored is only 9 characters. I have created a new table with the columns defined as varchar(50).
When I try a restore it barks at me, as I expected it would. Is there a way to force a restore from a backup set taken from the previous database? I can import the data using an SSIS package, but the backup is so much easier because it was compressed.
Thanks in advance!
August 6, 2010 at 7:32 am
smitty-1088185 (8/6/2010)
I have a database that was set up by the vendor with very large column lengths, like varchar(254), while the longest value stored is only 9 characters. I have created a new table with the columns defined as varchar(50). When I try a restore it barks at me, as I expected it would. Is there a way to force a restore from a backup set taken from the previous database? I can import the data using an SSIS package, but the backup is so much easier because it was compressed.
Thanks in advance!
Restore will always restore the database exactly as it was at backup time. Data, structures, everything goes back to that point in time; no changes made after the backup, including your new table definition, will survive.
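If you just need the data out of the backup, one common workaround is to restore it side by side under a different database name and then copy the rows into your new, narrower table. A rough sketch, with made-up database, file, and table names (check the real logical file names with RESTORE FILELISTONLY):

-- Restore the backup under a new name, next to the existing database
RESTORE DATABASE VendorDB_Restored
FROM DISK = N'C:\Backups\VendorDB.bak'
WITH MOVE N'VendorDB_Data' TO N'C:\Data\VendorDB_Restored.mdf',
     MOVE N'VendorDB_Log'  TO N'C:\Data\VendorDB_Restored.ldf',
     RECOVERY;

-- Then copy the rows across into the table with the tighter columns
INSERT INTO NewDB.dbo.MyTable (Access_Instance, Process_ID /* , ... */)
SELECT Access_Instance, Process_ID /* , ... */
FROM VendorDB_Restored.dbo.MyTable;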
Having said that... did the vendor set those columns as varchar(254)? If so, why bother? A 9-character string will use exactly the same amount of space whether the column is defined as varchar(254) or varchar(50).
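You can verify that with a quick sketch (the values here are made up):

-- Same 9-character string in a wide and a narrow varchar
DECLARE @wide varchar(254) = 'ABCDEFGHI';
DECLARE @narrow varchar(50) = 'ABCDEFGHI';

-- Both return 9: only the actual data counts, not the declared maximum
SELECT DATALENGTH(@wide) AS WideBytes, DATALENGTH(@narrow) AS NarrowBytes;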
Hope this helps.
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are given. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
August 6, 2010 at 7:43 am
My college professor told me that the varchar data type doesn't take up disk space, but that it does affect memory allocation. Is that not true?
They also created a clustered index on two columns.
So it looks like this:
Access_Instance (PK, numeric(16,6), not null)
Process_ID (PK, varchar(254), not null)
~ more columns....
Row count: 1,667,635,777
Doesn't that make a huge index?
August 6, 2010 at 8:08 am
smitty-1088185 (8/6/2010)
My college professor told me that the varchar data type doesn't take up disk space but it does affect memory allocation. Is that not true?
No. Pages in memory are almost the same as pages on disk. Varchar columns take 2 bytes of overhead plus the size of the data actually in them.
Doesn't that make a hugh index?
It's a wide index, but nothing's going to change if you reduce the length of the varchar column. It doesn't affect what's stored.
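If you want to see what the records actually take, one way (a sketch; the database and table names are placeholders) is to check the average record size of the clustered index:

-- index_id = 1 is the clustered index; 'SAMPLED' keeps the scan
-- affordable on a table with 1.6 billion rows ('DETAILED' reads everything)
SELECT index_level, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
         DB_ID(N'VendorDB'), OBJECT_ID(N'dbo.MyTable'), 1, NULL, 'SAMPLED');

Shrink the varchar(254) to varchar(50) and that number will not move.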
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 6, 2010 at 8:21 am
If you declare a variable as varchar(1000), it will allocate more memory to the query than varchar(100) would. This can be a good thing. A column is different, as described above.
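One way to test that claim yourself, as a rough sketch (dbo.SomeBigTable is a placeholder; use any table large enough that the sort needs a visible memory grant, and compare the Memory Grant property in each actual execution plan):

DECLARE @v varchar(100) = 'test';
SELECT @v AS val, * FROM dbo.SomeBigTable
ORDER BY NEWID();   -- the sort is what requests the memory grant
GO
-- Same query with a wider declared length: the per-row size estimate
-- (roughly half the declared maximum) grows, and so should the grant
DECLARE @v varchar(1000) = 'test';
SELECT @v AS val, * FROM dbo.SomeBigTable
ORDER BY NEWID();
GO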
August 6, 2010 at 8:29 am
It makes sense now. Thank you for the replies!
August 6, 2010 at 9:42 am
Gingkoo (8/6/2010)
If you declare a variable as varchar(1000), it will allocate more memory to the query than varchar(100) would.
Do you have hard support for this statement?
As far as I remember, variable-length data types behave the same way whether you are defining a local variable or a column.
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are given. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
August 6, 2010 at 10:18 am