July 24, 2008 at 7:42 am
Sheer laziness or the inability to properly determine the growth rate of your DB. @=)
Actually, our unrestricted DBs tend to be in our DEV environments, where we're unsure how changes / new processes will affect the growth of the DB. In that case, it makes sense to just monitor the HD space while developing, because development usually takes more "space" than production. Especially since the DBAs and developers are working out the kinks in the processes and, more often than not, start off with bloated code that they work down to lean code.
Does this technobabble make sense?
July 24, 2008 at 7:44 am
Gail,
AHHH. We're not encountering latch errors, but I haven't split the TempDB for the same reason I haven't split out other DBs. It didn't make sense from an I/O perspective to split files on a DB where the same disk controllers would be working on all the files. It seems to me (at least in our case) that the performance enhancement of splitting files doesn't work unless we have the files on different drives.
July 24, 2008 at 7:48 am
Brandie Tarvin (7/24/2008)
Does this technobabble make sense?
Ok, so it sounds like you should usually restrict the growth (esp. in prod), unless there's a good reason to do otherwise.
Thanks!
July 24, 2008 at 7:48 am
Brandie Tarvin (7/24/2008)
Sheer laziness or the inability to properly determine the growth rate of your DB. @=)
Interesting opinion. 😉
My main system is more a datawarehouse than anything. It grows and it will continue to grow.
We've got unrestricted autogrow on all files with alerts to the DBA on 20% and 10% free within each file (to allow the DBA to do a manual, controlled grow) and alerts to the sysadmins on 25% and 10% free space on the drive.
We could (and probably should) disable autogrow completely. It's just there as a safety net.
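For what it's worth, a free-space alert like that could be driven by a check along these lines (a sketch only; the thresholds and the alerting mechanism themselves aren't shown):

```sql
-- Percent free within each file of the current database.
-- size and FILEPROPERTY(..., 'SpaceUsed') are both in 8 KB pages.
SELECT name,
       size * 8 / 1024 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb,
       100.0 * (size - FILEPROPERTY(name, 'SpaceUsed')) / size AS pct_free
FROM sys.database_files;
```

A job could run this periodically and raise an alert when pct_free drops below 20% and again below 10%.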
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 24, 2008 at 7:53 am
angie stein (7/24/2008)
Brandie Tarvin (7/24/2008)
Does this technobabble make sense?
Ok, so it sounds like you should usually restrict the growth (esp. in prod), unless there's a good reason to do otherwise.
In the ideal situation, you should disable autogrow completely and manually grow the files as and when needed.
Autogrow puts a small overhead on the system and allows the possibility of a file grow at an inconvenient time.
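A manual, controlled grow is just an ALTER DATABASE with a new, larger size. A minimal sketch (database and file names here are placeholders):

```sql
-- Grow a data file to a new total size at a time of your choosing,
-- instead of letting autogrow fire mid-workload.
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_Data, SIZE = 20480MB);
```

Note that SIZE is the new total size of the file, not an increment.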
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 24, 2008 at 7:56 am
GilaMonster (7/24/2008)
Brandie Tarvin (7/24/2008)
Sheer laziness or the inability to properly determine the growth rate of your DB. @=)
My main system is more a datawarehouse than anything. It grows and it will continue to grow.
....
We could (and probably should) disable autogrow completely. It's just there as a safety net.
Actually, I'm setting up our datawarehouse now and I plan to take the same approach. It's hard to come up with a size restriction for a database or DW whose whole intent is to keep growing and never get any smaller.
Although, IMHO, anyone who does unrestricted autogrowth without monitoring their drive space is just asking for trouble. @=)
July 24, 2008 at 8:01 am
GilaMonster (7/24/2008)
In the ideal situation, you should disable autogrow completely and manually grow the files as and when needed.
Autogrow puts a small overhead on the system and allows the possibility of a file grow at an inconvenient time.
Ok, so we're rarely in that "ideal situation", of course. So when setting up a DB, how do you decide which option to use? Mission-critical DBs at autogrow, non-mission-critical DBs at restricted?
Sorry for all the questions, I'm just trying to really understand when to set it and why. Thanks!
July 24, 2008 at 8:13 am
It Depends... (a DBA's favorite answer).
There is no right or wrong way in this situation. There is only your situation.
My advice is to set up a job that monitors database growth. Create a table that stores the current size of each database, then every day or week run a job that adds a row to that table. Then write a report that averages the growth of each of your databases and compare those numbers against your current drive sizes.
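The monitoring job above could be sketched something like this (table and column names are made up for illustration):

```sql
-- Hypothetical growth-tracking table
CREATE TABLE dbo.DatabaseGrowth (
    SampleDate   datetime NOT NULL DEFAULT GETDATE(),
    DatabaseName sysname  NOT NULL,
    SizeMB       int      NOT NULL
);
GO

-- Step run daily or weekly by a SQL Agent job:
-- record the current total size of every database's files.
INSERT INTO dbo.DatabaseGrowth (DatabaseName, SizeMB)
SELECT DB_NAME(database_id),
       SUM(size) * 8 / 1024   -- size is in 8 KB pages
FROM sys.master_files
GROUP BY database_id;
```

Averaging the deltas between samples per database then gives you a growth rate to compare against your drive capacity.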
Another factor which drives this is the cost of HD / SAN / NAS space. You need to find out how much your budget has available for space increases (if even anyone has thought about that) and how soon you'll need to be requesting space for your database files.
Somewhere in that factoring, you need to make sure you're regularly backing up and truncating your transaction logs or your numbers will be completely thrown off.
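In full recovery model, a log backup is what allows the log space to be reused (database name and backup path here are placeholders):

```sql
-- Regular log backups keep the log from growing unchecked
-- and skewing your data-file growth numbers.
BACKUP LOG MyDB
TO DISK = N'D:\Backups\MyDB_log.trn';
```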
And then, at the end of the day, you need to sit down and ask: "Is it worth it to autogrow?" "Is it worth it to restrict DB size?" These are 2 SEPARATE questions. You can autogrow on a restricted DB, or you can manually grow an unrestricted DB. As Gail said earlier, autogrow can catch you out at the worst times, like in the middle of a bulk load, dramatically increasing the job's run time and blocking other processes.
So, again, it depends. Which do you feel better about once you have all the above gathered information?
July 24, 2008 at 8:31 am
Great information! Thank you.
July 24, 2008 at 10:41 am
If your tempdb is full, try replacing temporary tables with table variables wherever you use them,
like DECLARE @temptb TABLE (id int, name char(10))
July 24, 2008 at 10:54 am
Try one of these things:
shrink tempdb,
or
detach the file and attach with a new one with 0 bytes,
or, if you still can't,
just restart the SQL services and everything will be refreshed.
July 24, 2008 at 10:55 am
onlyanji4u (7/24/2008)
if your tempdb is full, try to change wherever places you use temporary tables with table variable.
like DECLARE @temptb TABLE(id int,name char(10))
That's a common fallacy.
Table variables, like temp tables, are created in TempDB and allocated space within TempDB. Both are kept in memory as much as possible and only written to disk if they get too big.
See - http://sqlinthewild.co.za/index.php/2007/12/19/temp-tables-and-table-variables/
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 24, 2008 at 10:59 am
ALIF (7/24/2008)
shrink tempdb, or
detach the file and attach with a new one with 0 bytes
Shrinking a file that's full isn't going to achieve anything. It has no free space in it, so there's no space that shrink can release to the filesystem.
You can't detach TempDB.
Msg 7940, Level 16, State 1, Line 1
System databases master, model, msdb, and tempdb cannot be detached.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 24, 2008 at 11:00 am
And if your tempDB is full, no query you write on that SQL Instance is going to work. All databases utilize TempDB to some extent (even if you don't have variables or temp tables) to process the data.
So you still have to fix the problem of the full TempDB before you can accomplish anything.
July 24, 2008 at 11:02 am
As table variables can also end up in tempdb whenever there's insufficient RAM, that method could actually cause more problems than it solves, if the primary problem you're trying to fix is disk capacity related. I don't know if tempdb gets the structure for those right away or only if there's insufficient RAM. I suspect that such would be a bit of a wad of gum in a major crack in the Hoover Dam, as it were...
onlyanji4u (7/24/2008)
if your tempdb is full, try to change wherever places you use temporary tables with table variable.like DECLARE @temptb TABLE(id int,name char(10))
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)