May 19, 2018 at 12:14 pm
Comments posted to this topic are about the item A Great Case for Powershell
May 20, 2018 at 2:35 pm
Heh... ironically, it's not a great case unless you use it to fix the underlying problem. The underlying problem is that people HAVE made direct calls to linked servers in their code, and that's about as smart as using 3-part naming conventions in your code and then finding out that your databases have moved or that the databases in a different environment (Dev, Test, UAT, Staging, Prod) are all named differently.
The best way to fix this kind of thing in the future is to ONLY use 2-part naming conventions for everything and to correctly use synonyms when you have to address items outside of the database. Then, you only need to go through your synonyms and make the changes there. Done correctly, you won't ever need to dig into code and change it just because the name or location of a database has changed.
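Rough illustration of what that looks like (LinkedSrv01, NewSalesSrv, SalesDB, and dbo.CustomerOrders are all made-up names):
-- Inside the database, code only ever uses the 2-part name dbo.CustomerOrders.
-- The synonym is the single place that knows where the object really lives.
CREATE SYNONYM dbo.CustomerOrders
    FOR LinkedSrv01.SalesDB.dbo.CustomerOrders;
-- When the database moves (or the environment changes), repoint the synonym.
-- There's no ALTER SYNONYM, so it's a drop and re-create:
DROP SYNONYM dbo.CustomerOrders;
CREATE SYNONYM dbo.CustomerOrders
    FOR NewSalesSrv.SalesDB.dbo.CustomerOrders;
No application code changes either way; the only thing that moves is the synonym definition.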
Once again, folks are trying to justify how to use PowerShell outside of SQL Server because they don't know how to do things correctly inside SQL Server. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
May 21, 2018 at 3:22 am
Fair point, Jeff, but who would pay us if everyone knew how to do everything optimally? Fortunately, it's not even close...
May 21, 2018 at 3:55 am
This will massively help me. I have a migration project for loads of databases across all different versions of SQL Server, and this will save me so much time. Thank you, Steve. 🙂
May 21, 2018 at 5:16 am
One nice thing about the Powershell code on Github is that the licensing and history of the code is clear.
May 21, 2018 at 6:02 am
call.copse - Monday, May 21, 2018 3:22 AM
Fair point, Jeff, but who would pay us if everyone knew how to do everything optimally? Fortunately, it's not even close...
Heh... it's not "optimally". It's just a little common sense about some fairly well known best practices. I will agree, though, there are a whole lot of people that don't think they need to know such things and then end up having to resort to this type of thing to get out of the woods. The thing I don't understand is that even after getting out of the woods, they turn right around and walk back in because they never put two and two together and say, "It's an easy fix to never have to worry about this again... let's fix it". The cool part is that it does provide a living for a lot of the denizens of SQLServerCentral. 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
May 21, 2018 at 6:24 am
Vote up/big thumbs up for the 'dbatools' project as well. Very well designed and super useful.
May 21, 2018 at 9:45 am
It's worth learning, IMO. I automatically restore and DBCC CHECKDB our production backups each night with a PoSh script I wrote (including email alerting) that's only about 200 lines of code. A T-SQL implementation would have been pure spaghetti code. Having a decent procedural language in your tool belt really helps with administrative tasks.
May 21, 2018 at 10:48 am
PoSh is my go-to solution for any sort of weird Spackle that I need. Example: importing and exporting JSON into/from a SQL 2014 database: PowerShell to the rescue :). Parsing a weird XML file that someone squished 3 XML files into: PoSh to the rescue once again.
May 21, 2018 at 11:01 am
HighPlainsDBA - Monday, May 21, 2018 9:45 AM
It's worth learning, IMO. I automatically restore and DBCC CHECKDB our production backups each night with a PoSh script I wrote (including email alerting) that's only about 200 lines of code. A T-SQL implementation would have been pure spaghetti code. Having a decent procedural language in your tool belt really helps with administrative tasks.
I wrote such a thing in about the same number of lines of T-SQL code. It certainly didn't turn out to be "pure spaghetti code". 😉
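Stripped to its bones (the database name, backup file, and paths below are made up, and the loop over the backup list plus all error handling are left out), the heart of that kind of nightly check is just a restore followed by CHECKDB:
-- Restore last night's FULL backup to a scratch copy, then let CHECKDB loose on it.
RESTORE DATABASE MainDB_Verify
    FROM DISK = N'\\BackupShare\MainDB\MainDB_20180521.bak'
    WITH MOVE N'MainDB'     TO N'D:\Restores\MainDB_Verify.mdf',
         MOVE N'MainDB_log' TO N'L:\Restores\MainDB_Verify.ldf',
         REPLACE, STATS = 10;
DBCC CHECKDB (MainDB_Verify) WITH NO_INFOMSGS, ALL_ERRORMSGS;
-- Any corruption surfaces as an error right here, which the job step or
-- sp_send_dbmail alerting can pick up.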
--Jeff Moden
Change is inevitable... Change for the better is not.
May 21, 2018 at 12:37 pm
I'm not a huge fan of PowerShell; it seems big and rushed. Take the backup command for diffs, for instance:
Backup-SqlDatabase -ServerInstance "Computer\Instance" -Database "MainDB" -Incremental
from:
https://docs.microsoft.com/en-us/powershell/module/sqlserver/backup-sqldatabase?view=sqlserver-ps
Maybe the "-differential" option was already taken?
However, as always, it's going to be useful for lots of folks because it does loads of stuff and is developing some great community tools. It's sort of like Perl for me: at a really fundamental level I think Perl is junky, has too many ways to do things, and I'm not a big regular-expression guy, yet Perl is where I go to knock out jobs that need to be coded with some velocity. On the other hand, I have successfully evicted Perl from my work's code base because the language itself is in somewhat of a decline, and PowerShell probably helps some here, as it seems to be of a similar mindset, albeit with the Microsoft touch.
Sort of like the way "worse is better" seems to win out in the marketplace.
https://www.jwz.org/doc/worse-is-better.html (this is a really dated article; when it was written, the "future" was apparently 1995)
May 21, 2018 at 1:00 pm
Jeff Moden - Monday, May 21, 2018 11:01 AM
HighPlainsDBA - Monday, May 21, 2018 9:45 AM
It's worth learning, IMO. I automatically restore and DBCC CHECKDB our production backups each night with a PoSh script I wrote (including email alerting) that's only about 200 lines of code. A T-SQL implementation would have been pure spaghetti code. Having a decent procedural language in your tool belt really helps with administrative tasks.
I wrote such a thing in about the same number of lines of T-SQL code. It certainly didn't turn out to be "pure spaghetti code". 😉
Interesting, did you have to do much with your backup files in terms of figuring out which one to use? I don't know how I could write this more concisely in T-SQL.
#Find the latest FULL and DIFF backup.
$FullBackup = Get-ChildItem Microsoft.PowerShell.Core\FileSystem::$PartialPath $fullExtension |
    Where-Object { $_.Name -match $Row[2] + "_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*" } |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1
$DiffBackup = Get-ChildItem Microsoft.PowerShell.Core\FileSystem::$PartialPath $diffExtension |
    Where-Object { $_.Name -match $Row[2] + "_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*" } |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1
May 22, 2018 at 5:32 am
HighPlainsDBA - Monday, May 21, 2018 1:00 PM
Jeff Moden - Monday, May 21, 2018 11:01 AM
HighPlainsDBA - Monday, May 21, 2018 9:45 AM
It's worth learning, IMO. I automatically restore and DBCC CHECKDB our production backups each night with a PoSh script I wrote (including email alerting) that's only about 200 lines of code. A T-SQL implementation would have been pure spaghetti code. Having a decent procedural language in your tool belt really helps with administrative tasks.
I wrote such a thing in about the same number of lines of T-SQL code. It certainly didn't turn out to be "pure spaghetti code". 😉
Interesting, did you have to do much with your backup files in terms of figuring out which one to use? I don't know how I could write this more concisely in T-SQL.
#Find the latest FULL and DIFF backup.
$FullBackup = Get-ChildItem Microsoft.PowerShell.Core\FileSystem::$PartialPath $fullExtension |
    Where-Object { $_.Name -match $Row[2] + "_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*" } |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1
$DiffBackup = Get-ChildItem Microsoft.PowerShell.Core\FileSystem::$PartialPath $diffExtension |
    Where-Object { $_.Name -match $Row[2] + "_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]*" } |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1
Yes... lots and lots of TRN files. We do nightly FULL backups, no DIFFs, and have TRN files every 15-30 minutes depending on the database and the instance.
I'm a bit unconventional because I've proven to myself that certain "best practices" aren't, and a lot of them are based on fear and old wives' tales from the past. For example, I know that having xp_CmdShell turned off does nothing to improve security. It only gives people who may not know better the nice warm fuzzies. Unless someone has sysadmin privs, they can't use it when it's enabled, and if it's disabled, only people with sysadmin privs can enable it. That means if an attacker can get in with sysadmin privs, they can enable it and use it with no problem. The key is to keep the bad guys from breaking in with sysadmin privs and to NOT give non-sysadmin users the privs to execute it directly. I have an hour-long presentation on the subject which ends with how DBAs and stored procedures can use it safely, along with some education on what DOS injection is.
There are some really cool things you can do with it. For example, the error reporting generated in error files by either BULK INSERT or BCP absolutely sucks. A little trick with a combination of xp_CmdShell and SQLCMD actually captures useful error reporting from a BULK INSERT where, normally, that information would only be available to a human actively watching the process.
Ironically, the use of xp_CmdShell has increased security quite a bit when it comes to ETL. Because BULK INSERT relies on the user's privs, and such things are frequently run manually by certain people or by other systems (WebMethods, for example), we were either having to give users/other systems extraordinary privs on the source file system or give them write privs at the OS level so they could copy files to the SQL Server box, and we all know the problems that brings with memory usage, not to mention using valuable disk space on the SQL Server for such junk. It really got crazy when they started doing exports to files. Now, they just execute a well-written stored procedure that has the privs to use xp_CmdShell to get the job done. The users/systems actually have no privs to do any of it directly. They can't even do a DIR to see the file names directly.
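To give a rough idea of the shape of that trick (the instance, table, and file path here are all made up, and the real thing lives inside a stored procedure), you shell out to SQLCMD and catch everything it prints:
-- Everything xp_CmdShell (and therefore SQLCMD and BULK INSERT) writes to the console
-- lands as rows here instead of being lost to a screen nobody is watching.
CREATE TABLE #CmdOutput (LineNumber INT IDENTITY(1,1), LineText NVARCHAR(4000));
DECLARE @Cmd VARCHAR(8000) =
    'SQLCMD -S MyServer\MyInstance -E -Q "BULK INSERT dbo.StagingTable '
  + 'FROM ''\\FileShare\Import\SomeFile.csv'' '
  + 'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', MAXERRORS = 0)"';
INSERT INTO #CmdOutput (LineText)
 EXEC master.sys.xp_cmdshell @Cmd;
-- Any "Bulk load data conversion error ... row X, column Y" messages are now
-- plain old rows that can be logged, emailed, or acted on.
SELECT LineNumber, LineText
FROM   #CmdOutput
WHERE  LineText IS NOT NULL;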
That being said, I capture the results of a DIR command in a table, where life is made easy, and I also tickle the registry (again, using what's available in T-SQL) to get the default paths for restores if the database has been dropped or is new to the server. I also auto-magically repoint synonyms in the code based on the environment (Dev, QA, UAT, Staging, etc.) and the name of the database, which changes depending on the environment it's in, and I reestablish the correct users and roles for databases that existed during the restore.
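For anyone who hasn't seen the registry piece, it's the well-known xp_instance_regread trick (whether or not it's word-for-word what I use), and on 2012+ SERVERPROPERTY gives the same answer without touching the registry:
-- Default data and log paths for the instance.
DECLARE @DefaultData NVARCHAR(512),
        @DefaultLog  NVARCHAR(512);
EXEC master.dbo.xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'DefaultData',
     @DefaultData OUTPUT;
EXEC master.dbo.xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'DefaultLog',
     @DefaultLog OUTPUT;
-- The registry values are NULL if the defaults were never changed, so fall back
-- to SERVERPROPERTY (SQL Server 2012 and later).
SELECT DefaultData = COALESCE(@DefaultData, CONVERT(NVARCHAR(512), SERVERPROPERTY('InstanceDefaultDataPath'))),
       DefaultLog  = COALESCE(@DefaultLog,  CONVERT(NVARCHAR(512), SERVERPROPERTY('InstanceDefaultLogPath')));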
There are actually 348 lines of code to do all of that but half of that is embedded documentation in the form of comments and blank lines for the sake of readability.
Shifting gears a bit, what do you consider a line of code to be in PowerShell? I'm looking at your great examples (thanks for posting those) and, while each is technically only a single line because of the CrLf locations, each "piped" filter/command on those lines would be a separate line in SQL Server if you use the formatting conventions that I use. I'm thinking that if I were to single-line many of the simple SELECTs that I use, get rid of the blank lines that I use for formatting, and drop all of the comments, I might actually have quite a bit less than 200 lines of code.
If someone absolutely loathes the use of xp_CmdShell, either a simple batch file or the use of xp_DirTree can help, and, most certainly, a proper query against msdb on the source server will do the trick for such restores. I'm still a bit confused as to why someone would allow the use of PowerShell with extraordinary privs but not xp_CmdShell with similar privs that are more tightly controlled, because general users can't actually use it directly, nor do they need to be provided with elevated privs to any box at the OS level.
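For example, finding the latest FULL and DIFF backup file for a database without touching the file system at all is just a query against msdb on the source server (the database name here is made up):
-- Latest FULL ('D') and DIFFERENTIAL ('I') backup for a database, with the file it lives in.
SELECT  bs.type,
        bs.backup_finish_date,
        bmf.physical_device_name
FROM    msdb.dbo.backupset         AS bs
JOIN    msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE   bs.database_name = N'MainDB'
  AND   bs.type IN ('D', 'I')
  AND   bs.backup_finish_date =
        (   SELECT MAX(bs2.backup_finish_date)
            FROM   msdb.dbo.backupset AS bs2
            WHERE  bs2.database_name = bs.database_name
              AND  bs2.type          = bs.type
        );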
I certainly can't and won't speak for anyone, never mind everyone, but I've simply not needed PowerShell to do the extraordinary things that can be done with PowerShell... and I still pass all the audits we have to go through every bloody year.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 22, 2018 at 6:18 am
I've never touched PowerShell either. I think if I had some very complex administrative tasks with Microsoft services that might change, but for the most part I haven't really needed to do any of those complex tasks with said services. Most of my work is more developer-focused, and so I have relied heavily on Python to help with that. For example, the most common use case for me is not setting up and administering Microsoft services; it's connecting to an API, maybe doing something like dynamically pivoting with ease, and then handing the result off to SQL Server.
May 22, 2018 at 7:04 am
xsevensinzx - Tuesday, May 22, 2018 6:18 AM
I've never touched PowerShell either. I think if I had some very complex administrative tasks with Microsoft services that might change, but for the most part I haven't really needed to do any of those complex tasks with said services. Most of my work is more developer-focused, and so I have relied heavily on Python to help with that. For example, the most common use case for me is not setting up and administering Microsoft services; it's connecting to an API, maybe doing something like dynamically pivoting with ease, and then handing the result off to SQL Server.
I love Python too. The problem comes in when you type Python into PowerShell, or vice versa... 😀