Viewing 15 posts - 1,201 through 1,215 (of 1,319 total)
I've never seen Paul harsh. Always professional, always out there helping with very knowledgeable advice. When dealing with corruption or other severe issues, I want sound, direct advice on how...
April 10, 2008 at 10:10 am
That's probably what's going to happen. Question though: I've seen a lot about this while searching: max degree of parallelism. I've got to admit, as much as I've been reading...
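In case it helps while you're reading up on it, here's a minimal sketch of viewing and changing that setting with sp_configure (the value 4 is only an example; the right cap depends on your processors and workload):

-- make the advanced option visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- view the current value
EXEC sp_configure 'max degree of parallelism';
-- example only: cap parallel plans at 4 processors
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;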
April 9, 2008 at 11:46 am
I agree with both Gail's and Ed's postings. I run the disk defrag maybe once a year if I've got the application offline for an upgrade (meaning I'm in the...
April 9, 2008 at 6:55 am
I don't know that transactions per minute is going to indicate the frequency of log backups. Why not schedule them for every 30 minutes? It'll keep the log file size...
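If it helps, the job step for a scheduled log backup is just the plain BACKUP LOG statement; a rough sketch with placeholder names (in practice you'd build a unique file name per run or append to a backup device):

-- example only: database name and path are placeholders
BACKUP LOG MyDatabase
TO DISK = 'D:\Backups\MyDatabase_log.trn'
WITH INIT;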
April 8, 2008 at 11:30 am
If backups of the log have never been done (or have been done very infrequently), the log is going to continue to grow. Once you back up the log, it removes the unneeded transactions...
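If you want to watch it happen, DBCC SQLPERF(LOGSPACE) shows the log size and percent used for every database; run it before and after the log backup and you'll see the used percentage drop (the file itself stays the same size until you shrink it):

-- one row per database: log size (MB) and percent used
DBCC SQLPERF(LOGSPACE);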
April 8, 2008 at 9:02 am
Just a tip - When you have really bad fragmentation, with the databases offline (SQL Server services are stopped), you can run the system tool "Disk Defragmenter". It'll actually improve...
April 8, 2008 at 8:01 am
Running Profiler after the fact isn't going to show you anything. There is definitely something running, as was suggested. A maintenance plan rebuilding indexes or large data loads (you can...
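A rough way to check what was running at that time is the Agent job history in msdb; this is just a starting point, since run_date and run_time are stored as integers (yyyymmdd and hhmmss):

-- what jobs/steps ran and for how long
SELECT j.name, h.step_name, h.run_date, h.run_time, h.run_duration
FROM msdb.dbo.sysjobhistory h
JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
ORDER BY h.run_date DESC, h.run_time DESC;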
April 8, 2008 at 7:48 am
Not sure what WMware is but I am running VMware and ESX server here. We have some smaller production databases (less than 3 Gb each) running on it without issue...
April 8, 2008 at 7:41 am
Great, even easier than I thought. I tend to over-analyze and lose sight of what is truly necessary. Thanks Jack.
April 7, 2008 at 2:50 pm
1) shrink - no, not recommended, you're right
2) reorganize index - not sure what this is so, no
3) rebuild index - yes (quick example below)
4) Update statistics - no. Taken care of in...
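For item 3, a minimal sketch of the 2005 syntax (index and table names are placeholders; REORGANIZE is the lighter-weight alternative item 2 is asking about):

-- rebuild one index, or everything on the table
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;
ALTER INDEX ALL ON dbo.MyTable REBUILD;
-- reorganize only defragments the leaf level and is always online
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;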
April 7, 2008 at 2:49 pm
You can definitely schedule it through the GUI.
April 7, 2008 at 7:53 am
You can set up a job but have it run only when certain thresholds (alerts) are hit. Create a new job, have the step(s) be whatever action you are going to...
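A hedged sketch of wiring a performance-condition alert to an existing job in msdb (the alert name, counter threshold, and job name are all placeholders):

-- example: run the job when the log passes 80% used
EXEC msdb.dbo.sp_add_alert
    @name = N'MyDatabase log nearly full',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|MyDatabase|>|80',
    @job_name = N'Log Backup - MyDatabase';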
April 4, 2008 at 3:02 pm
Absolutely. Check out http://www.red-gate.com/products/index.htm
Go to the backup product and check out the features link. (No, I don't work for RedGate :D)
April 4, 2008 at 8:36 am
I agree 100% with Julian. I've also used both; LiteSpeed for a VLDB in manufacturing. The product was great for that application and the company was willing to pay for...
April 4, 2008 at 7:58 am
If it's a total refresh, restoring your backup of the source database to the target is probably the easiest way, especially if you're talking about large amounts of data/tables. If...
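A minimal restore sketch for that kind of refresh (the logical file names and paths below are placeholders; check the real ones with RESTORE FILELISTONLY first):

-- see the logical file names inside the backup
RESTORE FILELISTONLY FROM DISK = 'D:\Backups\SourceDB.bak';
-- overwrite the target with the source backup
RESTORE DATABASE TargetDB
FROM DISK = 'D:\Backups\SourceDB.bak'
WITH MOVE 'SourceDB_Data' TO 'E:\Data\TargetDB.mdf',
     MOVE 'SourceDB_Log' TO 'F:\Logs\TargetDB_log.ldf',
     REPLACE;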
April 4, 2008 at 7:19 am