May 17, 2007 at 5:46 am
Hi all,
I have a problem. I have a table with two indexes, one clustered and one non-clustered, and a heavy insert load of about 2 million rows a day. I also have some jobs which run at night: they reindex all the tables (with a fill factor of 0%), update statistics on all tables, shrink the database, and back it up. I do not know why "Mem Usage" and "VM Size" in Task Manager keep increasing until SQL Server reaches a limit and says "Insufficient memory". I think it is the reindexing, because I made a test and it caused a significant increase in memory. The SQL Server instance is set to dynamically configure memory between 0 and 1000 MB, and the computer has 2 GB of RAM.
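For reference, here is how I check the current settings (a rough sketch; sp_configure reports these values in MB, and 'show advanced options' has to be on to see them):

  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE
  -- current min/max memory settings, in MB
  EXEC sp_configure 'min server memory'
  EXEC sp_configure 'max server memory'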
May 18, 2007 at 2:57 am
Hi,
I've made another test: I changed the fill factor used when reindexing the tables from 0% to 90%, and I've noticed a good change. Memory is still increasing, but so far only very little. Anyway, I don't understand how to decrease this level of memory. I've also noticed that the only way I can decrease it is to restart SQL Server. Are there other methods to release the memory? Please help.
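For reference, the nightly job now does roughly this for each table (a sketch; dbo.MyTable stands in for the real table names):

  -- rebuild all indexes on the table with a 90% fill factor
  -- ('' means all indexes; dbo.MyTable is a placeholder)
  DBCC DBREINDEX ('dbo.MyTable', '', 90)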
May 18, 2007 at 3:05 am
I forgot to say: I have a database with 30 tables of about 2,500,000 records each. Tables are made daily: a new table is created dynamically when a record with a corresponding datetime arrives in the database, and after the 30th day the oldest table is deleted.
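The creation looks roughly like this (a simplified sketch; the real table and column names are different):

  -- build today's table name from the date, e.g. dbo.Data_20070518
  DECLARE @suffix varchar(8), @sql varchar(500)
  SET @suffix = CONVERT(varchar(8), GETDATE(), 112)
  SET @sql = 'CREATE TABLE dbo.Data_' + @suffix
           + ' (RecordTime datetime NOT NULL, Value float NOT NULL)'
  EXEC (@sql)
  -- a similar dynamic DROP TABLE removes the table that is 30 days old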
May 18, 2007 at 5:02 am
SQL Server 2000 does not give memory back to the OS once it has acquired it. The memory is released only when you restart the instance.
For the first scenario, the SQL engine has to move the pages that do not fit because of the 0% fill factor. If you insert new values into a table, it's good practice to keep the fill factor at something other than zero (around 70-80 for your situation, I guess).
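You can check how full the pages actually are with something like this (assuming dbo.MyTable is one of your tables):

  -- scan density and average page fullness for every index on the table
  DBCC SHOWCONTIG ('dbo.MyTable') WITH ALL_INDEXES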
Regards,
Zubeyir
May 18, 2007 at 5:44 am
Thanks for your reply!
So, if I set memory to be dynamically configured from min to max, with, let's say, max = 1.4 GB out of 2 GB of total RAM, I should expect SQL Server to consume resources up to that 1.4 GB maximum.
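Something like this, I suppose (a sketch; the value is in MB):

  EXEC sp_configure 'show advanced options', 1
  RECONFIGURE
  -- cap SQL Server at roughly 1.4 GB
  EXEC sp_configure 'max server memory', 1400
  RECONFIGURE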
May 18, 2007 at 5:47 am
Yeah. It'll keep eating until the limit is reached. But SQL Server 2005 can give the memory resources it allocated back to the OS.
Zubeyir
May 18, 2007 at 5:52 am
This is a good thing! Is there a setting in SQL 2005 to do that, or is it done automatically?
May 18, 2007 at 6:24 am
It's done automatically.
May 18, 2007 at 8:26 am
I will add one thing (in addition to the very important max memory setting already covered): there is almost NEVER a good reason to shrink a database. Make it as big as it needs to be to cover expected growth for 6-12 months (both data and log space), and set growth rates for both appropriately. Then shut down SQL Server and defragment the hard drive(s) that contain the files. Extra space in the database is necessary for effective index maintenance and for laying the pages down contiguously for maximal performance.
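For example, something along these lines (a sketch; MyDb and the logical file names are placeholders - check yours with sp_helpfile):

  -- pre-size the data and log files and set fixed growth increments
  ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, SIZE = 20GB, FILEGROWTH = 1GB)
  ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log, SIZE = 5GB, FILEGROWTH = 500MB)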
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
May 18, 2007 at 9:11 am
Thanks for the advice!
Anyway, would SQL Server have to be shut down? I ask because my application must run continuously, without pauses! It is almost a real-time application.
May 18, 2007 at 9:20 am
The shutdown was recommended prior to defragging your hard drive. It is not required, but I have seen that most of my clients never change the default database settings and wind up with hundreds of thousands of file fragments. One client got an immediate 18% throughput improvement simply by defragging the hard drive, so it can REALLY make a difference.
Defragging can be performed without shutting down SQL Server, but it is not recommended, especially if the system is under heavy load. Personally, I wouldn't take the chance of risking data corruption.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
May 19, 2007 at 4:40 am
So there is no solution for my problem; my system is under heavy load even at night. Maybe a warm swap of the hard disk with a new one: populating the new disk with the data would lay it down defragmented, and then I could swap again.
May 22, 2007 at 8:06 am
Actually, you should not bother to defrag a drive with live SQL databases on it. If the files are open they will be locked, and the defrag process will either end before completing or wait, eating resources, until it can proceed. You should only defrag the hard drive while SQL Server is stopped.
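If you prefer to stop the instance from a query window rather than the service manager, T-SQL has a SHUTDOWN statement (requires sysadmin or serveradmin; the service has to be restarted from the OS afterwards):

  -- checkpoint the databases and stop the instance cleanly
  SHUTDOWN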