May 31, 2015 at 5:31 am
Comments posted to this topic are about the item The MidnightDBAs release Minion Backup
May 31, 2015 at 11:14 pm
And it was going so well.
Features were being described that I could imagine myself using, and then I got to the bottom of the page.
Shrink logs.
Why? Why would you even include this in your product?
Shrinking log files should never be a matter of routine.
June 1, 2015 at 2:23 am
And why would you make a blanket statement like that?
Watch my free SQL Server Tutorials at:
http://MidnightDBA.com
Blog Author of:
DBA Rant – http://www.MidnightDBA.com/DBARant
June 1, 2015 at 8:30 am
OK guys, now that I've had three hours of sleep, one of us will be in here all day to answer questions about the product (or whatever else you like, I suppose), but the only answers I guarantee are about Minion Backup.
June 1, 2015 at 9:23 am
Hi,
First, your link to http://MidnightDBA.com gives this error:
403 - Forbidden: Access is denied.
You do not have permission to view this directory or page using the credentials that you supplied.
Second, does Minion Backup do compression?
June 1, 2015 at 9:30 am
Hey VictorDBA, thanks for letting me know about the link. http://www.MidnightDBA.com works but for some reason the null host header doesn't. I'll look into that.
And yes, MB does compression and so much more. I guess we didn't mention it in the article, but that's probably because we consider it standard functionality and we wanted to discuss the extraordinary stuff.
June 1, 2015 at 4:14 pm
Sean/Jen, does MinionWare support archiving/purging of backups to an Azure storage container? Do you have a compare/contrast with Ola's solution like you do for your Re-Index solution?
I appreciate your work in providing tools like this to the community.
Edit: just skimmed the doc file and saw "no, you can't manage those files" repeated throughout. The other question on compare/contrast still might be interesting. I'd also like to know if you have any plans to support purging those Azure files after some timeframe. I haven't seen a way to do it through SQL and those saved creds yet and want to avoid storing the key in a plain text file to use in a Powershell script.
June 1, 2015 at 4:17 pm
This looks interesting; I'll attend the Wednesday talk on this.
Does this use the built-in compression, or a custom algorithm? Any comparisons (size/time) to the third-party competitors?
I want to say thanks also for watching the discussions on article release...
Nice job.
June 1, 2015 at 4:28 pm
I'll answer both of you guys at once.
1. No, we don't have a comparison with Ola yet. If you recall, when we released MR it took us probably a month or two to get that prepared. You'd be surprised how much effort it takes to do that properly, and fairly. We want to represent both products accurately, without misstating anything.
2. Azure-- We can back up to Azure blobs, but we can't copy to them yet. Believe me, I wanted to put that in this release, but there were more widely used features I had to get in first, and I also wanted to see how people use this before building it into the solution. So if you'd like to talk about your needs in this area, ping me offline and we can talk about the ins and outs.
3. Compression-- We only use SQL's built-in compression. Those of you who know me know that I've been around the compression and backup game for about 15 years now. I worked with Imceda and helped them a lot with LiteSpeed, and I've worked with others as well. A lot of the time the extra compression algorithms don't do that much good; they're more of a selling point than anything. You can pick another level of compression and maybe gain a couple more percent in size, but it often takes so much longer to back up that it's just not worth it. So the compression in SQL is really good enough in most circumstances.
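(For context, the native compression being discussed here is just an option on the BACKUP statement; a minimal sketch, where the database name and disk path are placeholders:)

```sql
-- Native SQL Server backup compression: one option on the BACKUP
-- statement, no third-party tools required.
-- MyDatabase and the disk path below are placeholder names.
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
```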
4. Comparisons-- I think you're asking if we have any compression comparisons with other vendors. To that I'll say that since we're just doing native SQL compression, it's not necessary.
Come to the webinar Wednesday. I'm going to run the product by you so you can see exactly what we bring to the table. I really think you'll be pleased.
June 1, 2015 at 4:49 pm
And why would you make a blanket statement like that?
Because as I said in the previous comment, shrinking logs should never be a matter of routine.
There's good stuff in this product, but that's a feature I would leave out, or at least not publicise!
Feel free to read some of the bloggers on the futility of shrinking log files daily.
June 1, 2015 at 5:05 pm
No worries. I didn't know exactly when the Ola comparison was released last time. I remember it being there by the time I got around to looking for it, and that was good enough. The documentation alone likely took some time; it's pretty thorough.
Quick synopsis - we have some Azure VMs running SQL 2014. We're backing those up to Azure storage. We have that working to URL now and I'm debating about just using the built-in managed backups as "good enough" and letting that handle removing the files as well (which it should do). If I had something a bit more robust that could handle that as well, it would be wonderful. However, I completely understand the 80/20 rule when it comes to writing your stuff and if our situation is rare, then it doesn't make sense for you to write something like that into an early release of the software.
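(For reference, the backup-to-URL setup described above looks roughly like this in SQL Server 2014; the server credential holds the storage key so it stays out of the backup script itself. All names, the key, and the container URL below are placeholders:)

```sql
-- SQL Server 2014 backup to an Azure blob via a server credential.
-- The credential stores the storage account name and access key.
-- All names, the key, and the URL below are placeholders.
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = 'mystorageaccount',      -- storage account name
     SECRET = '<storage-access-key>';    -- storage account access key

BACKUP DATABASE MyDatabase
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION, STATS = 10;
```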
We can continue the discussion off-forum, though. Should I just ping you @ one of your midnight dbas e-mails or is there one more specific to this?
(And not related, but I could definitely see a case for shrinking the log file periodically in cases where that's not a chronic problem caused by code or a really, really busy system.) 🙂
June 1, 2015 at 5:13 pm
OK, so feel free to post links to these blog posts you're referring to. I guarantee you that they're just discussing general practice, and that's the problem with articles like that... sometimes they don't give good practical advice.
Here's a situation where this could come in very handy... and in fact, it's the very situation that the feature was designed for.
I have a client currently who has like 65 DBs on a server. They have all their data files on 1 drive, and all their logs on another drive.
A couple times a week a very large import comes through in the middle of the night and blows the log out to like 90GB. Then the log backup comes along and backs it up, and truncates the log, but the file is still 90+GB.
Sometimes, the other log files on that drive also need to expand but they can't because that file is too big. And it's mostly empty.
Now this file is tripping space alerts, other processes are failing, etc.
And the DBA has to get out of bed, diagnose the problem, and then shrink the log so that there's room for the other logs to expand.
Then other teams have to get involved because their processes got rolled back and they have to figure out what's there and what's not. This is a ridiculous situation.
He manually shrank the log to fix the issue. But before that, several things had to fail. Then the NOC had to be called, then the NT guys got woken up for the space issue, then he discovered it was a SQL issue, and the DBA was finally gotten out of bed. And all that because the log got blown out much bigger than it should be.
Now, is the perfect fix for this to add more space? Yes and no. Some companies don't have that much space to give so they have to run things a lot tighter. But there's a larger aspect to this. Your recovery.
You may have an RTO of like 30 minutes. But the log has to be zeroed out every time, so when you try to restore that DB somewhere, you'll have to zero out 90+GB of log file. It'll easily take your entire recovery window just building the log file. In fact, writing 90GB of log can very easily double that time... and that's before you even get to the data part of the restore.
So keeping your log really huge because it gets blown out that big sometimes isn't always practical.
But if you see how I built the feature, I give you options. I don't just shrink the log every time. You can say that it only gets shrunk if it grows over a certain size. And then you can tell it how much to shrink it down. So we're not always just shrinking it. You can say, hey, I only want my log to be shrunk if it's over 25GB and then just shrink it down to 10GB. And you can configure those thresholds on a per database basis.
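(The threshold logic described here can be sketched in plain T-SQL. This is an illustration of the idea only, not Minion Backup's actual code; the 25GB/10GB thresholds and the log file_id are assumptions:)

```sql
-- Illustrative sketch only, not Minion Backup's implementation:
-- shrink the log only when it has grown past a threshold.
DECLARE @SizeMB int,
        @ShrinkOverMB int = 25600,   -- only shrink if log > 25GB
        @TargetMB int = 10240;       -- shrink back down to 10GB

SELECT @SizeMB = size / 128          -- size is counted in 8KB pages
FROM sys.database_files
WHERE type_desc = 'LOG';

IF @SizeMB > @ShrinkOverMB
    DBCC SHRINKFILE (2, @TargetMB);  -- 2 is usually the log file_id
```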
As a DBA, not using this feature in a case like this is doing yourself and your company a disservice. There's no reason to get teams of people out of bed for a temporary issue. There's no need to stop several other DBs for a temporary issue. And there's no reason to not fix a temporary issue with an automatic process when you apply the same fix every time.
June 1, 2015 at 5:15 pm
Yeah, that's one of the situations I envisioned when thinking about the Azure copies.
We can certainly take it offline if you like. I can tell you what all I have planned around this feature.
Just ping me at my MidnightDBA address.
Thanks.