February 9, 2012 at 7:00 am
Jeff Moden (2/8/2012)
Elliott Whitlow (2/7/2012)
Jeff Moden (4/29/2011)
I know little about deploying CLRs so please bear with me. Does the creation of the asymmetric key make it so that a low-privileged user can execute a proc with calls to your CLRs but can't use the CLRs directly?

Jeff,
I'm sorry I missed this one and I know it is an ancient post..
But the answer to your question (if I understand it) about the asymmetric key: it has little to do with execute permissions. When an assembly is installed, the database must either be tagged as TRUSTWORTHY or the strong name key from the assembly must be granted the rights to create UNSAFE or EXTERNAL_ACCESS assemblies. I believe this access is required at run-time as well. However, the execution of the sproc can be done by any user granted the rights to exec it. Also keep in mind that there is really not much of a difference between UNSAFE and EXTERNAL_ACCESS, a lot less than the names imply. The only difference I am aware of is that P/Invoke is allowed in UNSAFE and not the others.
Not sure if this fully answered your question..
CEWII
Thanks for the feedback, Elliott. I'm more confused than ever though. Paul White built a CLR for me for the "Tally Oh!" article. It required neither a certificate, nor tagging the database as "trustworthy", nor allowing an UNSAFE CLR. From the sounds of it (and, again, I'm NOT a CLR Ninja by any means and could be flat out wrong), UNSAFE CLR's are generally something to avoid and, from personal experience, setting databases to "trustworthy" is a bit of a security headache.
Considering that Paul's CLR worked just fine for me, why would we ever need an UNSAFE CLR or a database to be set to "trustworthy" just to use one? It sounds almost (more actually) as risky as turning on xp_CmdShell and controlling it properly.
Hello Elliott and Jeff.
First, to Elliott's comment that "there is really not much of a difference between UNSAFE and EXTERNAL_ACCESS": that is not exactly correct. In fact, EXTERNAL_ACCESS is much closer to SAFE than it is to UNSAFE. EXTERNAL_ACCESS is basically SAFE plus the ability to see outside of SQL Server, so the code can access things like the file system, the internet, and other database connections (to the same server or anything else). UNSAFE, on the other hand, has NO restrictions. It allows unmanaged code, non-readonly static variables, access to system areas, and loading of unsupported managed assemblies (any assembly that is part of regular .Net but not available by default to SQLCLR).
So then to Jeff's point: we only need EXTERNAL_ACCESS or UNSAFE when doing things beyond what the basic set of .Net assemblies allows for, or when needing to look outside of the database. In the case being discussed here, just viewing the filesystem requires EXTERNAL_ACCESS. Very few things need UNSAFE. Sometimes one of these is necessary, but certainly not for a split function. One should not use a higher level of access than they truly need.
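To make the permission-set boundary concrete, here is a small illustrative sketch (all names invented). Outside SQL Server it is ordinary C#; loaded as a SQLCLR assembly, the writable static field below would be rejected under SAFE or EXTERNAL_ACCESS and would require PERMISSION_SET = UNSAFE:

```csharp
using System;

// Illustrative only: shows the kind of construct that forces UNSAFE.
public static class PermissionSetDemo
{
    // Allowed in SAFE: static fields must be readonly.
    public static readonly int SafeConstant = 42;

    // NOT allowed in SAFE/EXTERNAL_ACCESS: writable static state.
    public static int UnsafeCounter = 0;

    public static int Increment()
    {
        return ++UnsafeCounter;  // mutates shared static state
    }
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(PermissionSetDemo.Increment());
        Console.WriteLine(PermissionSetDemo.Increment());
    }
}
```

The restriction exists because static state is shared across all sessions in the SQL Server process, which is exactly the kind of thing the SAFE verifier is designed to keep out.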
Regarding SQLCLR vs xp_cmdshell, there is a large difference with regards to controlling access. xp_cmdshell is a wide-open hole for anything. You can restrict access by database role, etc, but if two groups need to run CMDs or BATs or EXEs then you can't control who gets to call which ones. But with xp_cmdshell turned off and using SQLCLR instead, you can create a View Directories proc and grant EXECUTE only to RoleA and then create an FTP proc and grant EXECUTE only to RoleB. You can create those functions in CMD scripts but then both RoleA and RoleB would be able to execute both View Directories AND FTP since both would have access to xp_cmdshell. You might be able to abstract xp_cmdshell by restricting direct access and creating procs for both roles that have the explicit calls in there, but I haven't tried that to see if it is a practical solution.
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
February 9, 2012 at 7:35 am
Solomon Rutzky (2/9/2012)
xp_cmdshell is a wide-open hole for anything.
First, thank you very much for the explanation. Your explanation is more along the lines of what I understood.
On the thing I quoted above... that's not exactly true. It's absolutely true that most people deploy xp_CmdShell in a haphazard and footloose fashion, so I certainly understand where you're coming from, but there are simple methods to make it so that someone with only PUBLIC privs can run only specific stored procedures which contain calls to xp_CmdShell; they can't even see what's in the stored procedure and they certainly can't run xp_CmdShell directly. To your point, airtight methods make it fairly easy to allow one group to execute not only certain "classes" of DOS-level files (.BAT, .EXE, .VBS, etc.), but to limit them to ONLY being able to execute what's actually in the proc.
The gross and improper use of xp_CmdShell has given a great tool one hell of a black eye that very few people are willing to look beyond.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 9, 2012 at 7:38 am
rmechaber (2/9/2012)
Jeff Moden (2/8/2012)
Thanks for the feedback, Elliott. I'm more confused than ever though. Paul White built a CLR for me for the "Tally Oh!" article. It required neither a certificate, nor tagging the database as "trustworthy", nor allowing an UNSAFE CLR. From the sounds of it (and, again, I'm NOT a CLR Ninja by any means and could be flat out wrong), UNSAFE CLR's are generally something to avoid and, from personal experience, setting databases to "trustworthy" is a bit of a security headache.

Considering that Paul's CLR worked just fine for me, why would we ever need an UNSAFE CLR or a database to be set to "trustworthy" just to use one? It sounds almost (more actually) as risky as turning on xp_CmdShell and controlling it properly.
Jeff, if you're only manipulating data passed to a CLR-based UDF or SP, then SAFE works and no key is needed. The explanation of what you can and cannot access with CLR at the different security levels is summarized here:
http://msdn.microsoft.com/en-us/library/ms345101.aspx
To quote:
"SAFE
Only internal computation and local data access are allowed. SAFE is the most restrictive permission set. Code executed by an assembly with SAFE permissions cannot access external system resources such as files, the network, environment variables, or the registry. "
So if you don't need access to those external system resources, you can create the assembly as SAFE.
As an example, if you are using a CLR-based UDF to validate a passed-in email address, you only need to create the assembly with SAFE permission set. If you want to create a CLR-based TVF that returns file information, then SAFE won't work.
Rich
Thanks for the link, Rich.
Understood on the file access thing. Like Solomon posted, my understanding was that there are actually very few times where you'd want to use UNSAFE. EXTERNAL_ACCESS... sure. But someone really needs strong justification to use UNSAFE.
Can you good folks give me an example of where UNSAFE is actually required? Again, I'm definitely a neophyte on this subject and just trying to learn more from folks that have personal experience with CLR.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 9, 2012 at 7:51 am
Solomon Rutzky (2/9/2012)
Hello Elliott and Jeff.

First, to Elliott's comment that "there is really not much of a difference between UNSAFE and EXTERNAL_ACCESS": that is not exactly correct. In fact, EXTERNAL_ACCESS is much closer to SAFE than it is to UNSAFE. EXTERNAL_ACCESS is basically SAFE plus the ability to see outside of SQL Server, so the code can access things like the file system, the internet, and other database connections (to the same server or anything else). UNSAFE, on the other hand, has NO restrictions. It allows unmanaged code, non-readonly static variables, access to system areas, and loading of unsupported managed assemblies (any assembly that is part of regular .Net but not available by default to SQLCLR).
So then to Jeff's point: we only need EXTERNAL_ACCESS or UNSAFE when doing things beyond what the basic set of .Net assemblies allows for, or when needing to look outside of the database. In the case being discussed here, just viewing the filesystem requires EXTERNAL_ACCESS. Very few things need UNSAFE. Sometimes one of these is necessary, but certainly not for a split function. One should not use a higher level of access than they truly need.
Regarding SQLCLR vs xp_cmdshell, there is a large difference with regards to controlling access. xp_cmdshell is a wide-open hole for anything. You can restrict access by database role, etc, but if two groups need to run CMDs or BATs or EXEs then you can't control who gets to call which ones. But with xp_cmdshell turned off and using SQLCLR instead, you can create a View Directories proc and grant EXECUTE only to RoleA and then create an FTP proc and grant EXECUTE only to RoleB. You can create those functions in CMD scripts but then both RoleA and RoleB would be able to execute both View Directories AND FTP since both would have access to xp_cmdshell. You might be able to abstract xp_cmdshell by restricting direct access and creating procs for both roles that have the explicit calls in there, but I haven't tried that to see if it is a practical solution.
Take care,
Solomon...
Solomon,
I have to disagree about the differences..
From Professional SQL Server 2005 CLR Programming by Derek Comingore and Douglas Hinson, Page 296, Unsafe Permission Level.
"We cannot say this any more plainly: Do not be folloed by these SQL CLR permission level names. It is a common misunderstanding to assume EXTERNAL access permissions level opens up some of the permissions and then UNSAFE opens up some more. The reality is that the EXTERNAL access permissions level gives away most of the operating system -level permissions. The only additional thing the UNAFE permission level does is allow access to unmanaged code. The reason for this is that the permission setsare named from the perspective of SQL Server. Assemblies are external because the access external resources to the SQL Server. Assemblies are UNSAFE because, like extended stored procedures, they can call code outside the CLR, can corrupt memory buffers in use by SQL Server (think blue screen of death), and can subvert security mechanisms."
And from the link posted above:
Security Note
SAFE is the recommended permission setting for assemblies that perform computation and data management tasks without accessing resources outside SQL Server. EXTERNAL_ACCESS is recommended for assemblies that access resources outside SQL Server. EXTERNAL_ACCESS assemblies by default execute as the SQL Server service account. It is possible for EXTERNAL_ACCESS code to explicitly impersonate the caller's Windows Authentication security context. Since the default is to execute as the SQL Server service account, permission to execute EXTERNAL_ACCESS should only be given to logins trusted to run as the service account. From a security perspective, EXTERNAL_ACCESS and UNSAFE assemblies are identical. However, EXTERNAL_ACCESS assemblies provide various reliability and robustness protections that are not in UNSAFE assemblies. Specifying UNSAFE allows the code in the assembly to perform illegal operations against the SQL Server process space, and hence can potentially compromise the robustness and scalability of SQL Server. For more information about creating CLR assemblies in SQL Server, see Managing CLR Integration Assemblies.
There are limited reasons to use UNSAFE and I try to avoid it. However, if you still need to call a command-line tool, you can't do it with EXTERNAL_ACCESS. In the past I have had to do this a couple of times. xp_cmdshell was not something I was willing to use, so I built a CLR that was capable of ONLY calling the tools specified. I can post the code for that.
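A sketch of that whitelist approach (tool names and paths are invented for illustration, and this is not Elliott's actual code): callers pick a key from a hard-coded table and can never pass a raw command line. In SQLCLR this would most likely need the UNSAFE permission set, since starting an external process is blocked at EXTERNAL_ACCESS.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

public static class ToolRunner
{
    // Hard-coded whitelist: the only binaries this proc can ever launch.
    // Paths are hypothetical examples.
    private static readonly Dictionary<string, string> AllowedTools =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "zip",  @"C:\Tools\7z.exe" },
            { "copy", @"C:\Tools\robocopy.exe" }
        };

    // Throws for anything not explicitly whitelisted.
    public static string ResolveTool(string toolKey)
    {
        string path;
        if (!AllowedTools.TryGetValue(toolKey, out path))
            throw new ArgumentException("Tool not in whitelist: " + toolKey);
        return path;
    }

    public static int RunTool(string toolKey, string arguments)
    {
        var psi = new ProcessStartInfo(ResolveTool(toolKey), arguments)
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
        {
            p.WaitForExit();
            return p.ExitCode;
        }
    }
}
```

Wrapping `RunTool` in a SQLCLR stored procedure and granting EXECUTE per role gives the fine-grained control discussed earlier: arbitrary command lines are simply impossible.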
The SAFE level is very restrictive and what I generally shoot for, however, it won't work for many things.
Once you need to use an EXTERNAL_ACCESS or UNSAFE assembly, you either need to set the database TRUSTWORTHY or handle the certs that the assemblies were signed with. Many (most?) people go with TRUSTWORTHY because it's the easiest, but the certs are the better and more secure method.
CEWII
February 9, 2012 at 11:01 am
Jeff Moden (2/9/2012)
Solomon Rutzky (2/9/2012)
xp_cmdshell is a wide-open hole for anything.

First, thank you very much for the explanation. Your explanation is more along the lines of what I understood.
On the thing I quoted above... that's not exactly true. It's absolutely true that most people deploy xp_CmdShell in such a haphazard and footloose fashion so I certainly understand where you're coming from on that but there are simple methods to make it where someone with only PUBLIC privs can run only specific stored procedures which contain calls to xp_CmdShell and they can't even see what's in the stored procedure and they certainly can't run xp_CmdShell directly. To your point, airtight methods are fairly easy to allow one group to execute not only certain "classes" of DOS level files (.BAT, .EXE, .VBS, etc, etc), but it limits them to ONLY being able to execute what's actually in the proc.
The gross and improper use of xp_CmdShell has given a great tool one hell of a black eye that very few people are willing to look beyond.
Hi Jeff. Yeah, I kinda realized as I was writing my reply that you could just put the xp_cmdshell calls into Procs and control access via Roles in the same way that I was saying would be done with the SQLCLR procs. Ok, so fair enough that there is not a huge difference there, but I did think of some other benefits:
1) If you need to pull the output of the DOS command into SQL Server for processing (for example, the article this forum is associated with does a DIR and pulls in that output), you get one row per line and have to do your own text parsing, which is error-prone. Where I currently work, a DBA needed to get a list of backup files and parse their create dates; on his machine the dates were written as DD/MM/YYYY as he was in Ireland, but our servers in the US were showing a date format of MM/DD/YYYY, so his process broke once he tried to roll it out. Getting a directory listing via SQLCLR gives you full control over that data in rich datatypes. Interacting with that output is even easier if you use User-Defined Functions, as you can simply SELECT from them rather than having to dump their output into a temp table (or even call via OPENQUERY).
2) You get more functionality than you would using standard DOS commands as DOS is not a very rich language. For example, if you wanted to allow deleting of files but only with a particular name pattern that was complex enough to not work with the basic ? and * wildcards in DOS, you could easily filter the directory list in C# or VB using Regular Expressions. Sure, you might be able to do this with a powershell script or a call to some other shell (some people might have PERL scripts to do more complex stuff or have even written C# or VB.Net EXE's to do this), but that is just another level removed from SQL Server (the issue here is explained in the next item).
3) You also get to contain and manage the dependency on the database functionality. Basically you make a truly external dependency into mostly an internal one. For example, people install 7zip.exe to compress files as it is easy to call from a command line and handles many formats and zip64 for files larger than 4 GB. If you are responsible for backing up and recreating this system, either for testing or disaster recovery, it is another step to be concerned with, as you have to have that EXE in your backup plan and make sure to install it either in the right place or make sure the PATH environment variable is set correctly. With SQLCLR assemblies, they are part of the database backup, and you are not going to restore a database with any procs / functions / views that reference assemblies and somehow not have that assembly. It simplifies / centralizes the code, making for easier management. And if you are creating external .Net programs to do things that cannot be done in T-SQL, you might as well just put that same code (in most cases) in the DB and remove a point of failure. Going back to the 7zip example, gzip compression is built into .Net, so it does not take too much effort to write an internal zip function. And if you need more than the basic gzip (maybe you need zip64 for 4+ GB files or RAR format) then you can find ways to get that as well (for example, the FULL / Paid For version of SQL# includes a gzip function that handles zip64 for 4+ GB files :-)).
4) A tentative additional benefit would be that security can be more fine-grained. I say "tentative" as I have not tried this and I doubt it is used very often, but it is still available. Using xp_cmdshell the default security context that the OS sees is the service account that the SQL Server (DB Engine) process is running as. There is the lesser used (I assume) option of having a proxy account, but it seems that the proxy account is used in place of the "run as" account for users who are not a member of the sysadmin fixed server role and the caller of xp_cmdshell doesn't get to specify which account to run under (either "run as" account or the proxy account, if configured). But with CLR code, it is possible (again, I haven't yet tried) to set up impersonation to allow the caller of the CLR proc/function (assuming a Windows login) to use their own credentials when interacting with external resources such as the filesystem and internet. Again, I have not tried this so I hope I am not misstating how the CLR impersonation works, but that is my understanding from what I read and I hope to soon prove this out as it does seem rather interesting.
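Points 1 and 2 above can be sketched in plain C# (the backup-name pattern is invented for illustration, and the SQLCLR plumbing, i.e. the `[SqlFunction]` attribute and FillRow method a real TVF would need, is omitted so this runs outside SQL Server):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

public static class DirectoryTools
{
    // Point 1: each entry comes back as typed data (string, long, DateTime),
    // not as a localized line of DIR output that needs text parsing.
    public static IEnumerable<Tuple<string, long, DateTime>> List(string path)
    {
        foreach (var file in new DirectoryInfo(path).GetFiles())
            yield return Tuple.Create(file.Name, file.Length, file.CreationTimeUtc);
    }

    // Point 2: filter with a regex that DOS ? and * wildcards cannot express,
    // e.g. backups named <db>_YYYYMMDD.bak where the date must fall in 2012.
    private static readonly Regex BackupPattern =
        new Regex(@"^[A-Za-z]+_2012[0-9]{4}\.bak$", RegexOptions.IgnoreCase);

    public static List<string> FilterBackups(IEnumerable<string> names)
    {
        var matches = new List<string>();
        foreach (var name in names)
            if (BackupPattern.IsMatch(name))
                matches.Add(name);
        return matches;
    }
}
```

Exposed as a TVF, the listing could be consumed with a plain `SELECT ... WHERE`, with the date filtering done on a real DATETIME column instead of parsed text.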
Hope this makes sense and helps. Take care, Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
February 9, 2012 at 12:27 pm
Elliott Whitlow (2/9/2012)
Solomon,

I have to disagree about the differences..
From Professional SQL Server 2005 CLR Programming by Derek Comingore and Douglas Hinson, Page 296, Unsafe Permission Level.
"We cannot say this any more plainly: Do not be fooled by these SQL CLR permission level names. It is a common misunderstanding to assume EXTERNAL access permissions level opens up some of the permissions and then UNSAFE opens up some more. The reality is that the EXTERNAL access permissions level gives away most of the operating system -level permissions. The only additional thing the UNAFE permission level does is allow access to unmanaged code. The reason for this is that the permission setsare named from the perspective of SQL Server. Assemblies are external because the access external resources to the SQL Server. Assemblies are UNSAFE because, like extended stored procedures, they can call code outside the CLR, can corrupt memory buffers in use by SQL Server (think blue screen of death), and can subvert security mechanisms."
And from the link posted above:
Security Note
SAFE is the recommended permission setting for assemblies that perform computation and data management tasks without accessing resources outside SQL Server. EXTERNAL_ACCESS is recommended for assemblies that access resources outside SQL Server. EXTERNAL_ACCESS assemblies by default execute as the SQL Server service account. It is possible for EXTERNAL_ACCESS code to explicitly impersonate the caller's Windows Authentication security context. Since the default is to execute as the SQL Server service account, permission to execute EXTERNAL_ACCESS should only be given to logins trusted to run as the service account. From a security perspective, EXTERNAL_ACCESS and UNSAFE assemblies are identical. However, EXTERNAL_ACCESS assemblies provide various reliability and robustness protections that are not in UNSAFE assemblies. Specifying UNSAFE allows the code in the assembly to perform illegal operations against the SQL Server process space, and hence can potentially compromise the robustness and scalability of SQL Server. For more information about creating CLR assemblies in SQL Server, see Managing CLR Integration Assemblies.
Hi Elliott. Thank you for sharing that info. I definitely agree with the security aspect of the second paragraph (the MSDN quote) and was not trying to state or imply something different with my earlier statement. But I find what they said in the first paragraph interesting and a little surprising, so I will do a little more investigation.
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
February 9, 2012 at 12:47 pm
Solomon Rutzky (2/9/2012)
3) You also get to contain and manage the dependency on the database functionality. Basically you make a truly external dependency into mostly an internal one. For example, people install 7zip.exe to compress files as it is easy to call from a command line and handles many formats and zip64 for files larger than 4 GB. If you are responsible for backing up and recreating this system, either for testing or disaster recovery, it is another step to be concerned with, as you have to have that EXE in your backup plan and make sure to install it either in the right place or make sure the PATH environment variable is set correctly. With SQLCLR assemblies, they are part of the database backup, and you are not going to restore a database with any procs / functions / views that reference assemblies and somehow not have that assembly. It simplifies / centralizes the code, making for easier management. And if you are creating external .Net programs to do things that cannot be done in T-SQL, you might as well just put that same code (in most cases) in the DB and remove a point of failure. Going back to the 7zip example, gzip compression is built into .Net, so it does not take too much effort to write an internal zip function. And if you need more than the basic gzip (maybe you need zip64 for 4+ GB files or RAR format) then you can find ways to get that as well (for example, the FULL / Paid For version of SQL# includes a gzip function that handles zip64 for 4+ GB files :-)).
You could use SharpZipLib instead. It's free and open source. It has support for most types of compression.
Jedak
February 9, 2012 at 10:38 pm
Jedak (2/9/2012)
You could use SharpZipLib instead. It's free and open source. It has support for most types of compression.

Jedak
True, SharpZipLib and a few other open source libraries are free alternatives. But each of those options still requires someone to make some minor updates (assuming EXTERNAL_ACCESS is preferred over UNSAFE as most of the libraries I have seen make use of constructs that are only valid with the UNSAFE permission level). And then you have a library but still need to do the work of exposing that library to SQL Server. For anyone with the time and skill to do this it is a very good option. But to be fair, "free" is a matter of perspective. For people lacking either the skill or the time to implement the code, these are not truly "free" options.
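To make the "gzip is built into .Net" point concrete, here is a minimal in-process sketch using System.IO.Compression. It is plain gzip only (no zip64, no RAR), so it shows the built-in baseline rather than what SharpZipLib or SQL# add on top:

```csharp
using System;
using System.IO;
using System.IO.Compression;

public static class GzipUtil
{
    // Compress a byte array in memory -- no external EXE to deploy.
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            using (var gz = new GZipStream(output, CompressionMode.Compress))
                gz.Write(data, 0, data.Length);
            return output.ToArray();
        }
    }

    // Reverse of Compress: inflate a gzip stream back to the original bytes.
    public static byte[] Decompress(byte[] data)
    {
        using (var input = new MemoryStream(data))
        using (var gz = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gz.CopyTo(output);
            return output.ToArray();
        }
    }
}
```

Since the code ships inside the assembly, it is included in the database backup along with everything else, which is exactly the dependency-management benefit from point 3 above.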
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
February 10, 2012 at 8:06 am
Getting that library accessible to SQL is a challenge; I had to do that in 2008. I'll have to see what I can do about releasing the code on CodePlex. I'm sure I committed a number of sins in writing it, but it worked pretty well. It was a bit of a memory hog when doing large files, but that could have been my fault..
I also believe that it was an UNSAFE assembly.
As a side note, I'm not sure I have seen this in MS docs, but I would suggest segregating UNSAFE objects together, SAFE objects together, and EXTERNAL_ACCESS objects together. I got this recommendation during a small group discussion at PDC05 specifically talking about SQLCLR best practices when they were new in SQL. YMMV..
CEWII
February 10, 2012 at 4:35 pm
Solomon Rutzky (2/9/2012)
Hi Jeff. Yeah, I kinda realized as I was writing my reply that you could just put the xp_cmdshell calls into Procs and control access via Roles in the same way that I was saying would be done with the SQLCLR procs. Ok, so fair enough that there is not a huge difference there, but I did think of some other benefits:
1) If you need to pull the output of the DOS command into SQL Server for processing (for example, the article this forum is associated with does a DIR and pulls in that output), you get one row per line and have to do your own text parsing, which is error-prone. Where I currently work, a DBA needed to get a list of backup files and parse their create dates; on his machine the dates were written as DD/MM/YYYY as he was in Ireland, but our servers in the US were showing a date format of MM/DD/YYYY, so his process broke once he tried to roll it out. Getting a directory listing via SQLCLR gives you full control over that data in rich datatypes. Interacting with that output is even easier if you use User-Defined Functions, as you can simply SELECT from them rather than having to dump their output into a temp table (or even call via OPENQUERY).
Since you brought up international time differences for doing a DIR in two different countries, let me ask... what did you do with your CLR to solve this problem? Heh... me? I just had two different versions of the code and sent the correct version to the correct place.
2) You get more functionality than you would using standard DOS commands as DOS is not a very rich language. For example, if you wanted to allow deleting of files but only with a particular name pattern that was complex enough to not work with the basic ? and * wildcards in DOS, you could easily filter the directory list in C# or VB using Regular Expressions. Sure, you might be able to do this with a powershell script or a call to some other shell (some people might have PERL scripts to do more complex stuff or have even written C# or VB.Net EXE's to do this), but that is just another level removed from SQL Server (the issue here is explained in the next item).
Heh... why does everyone think that xp_CmdShell is limited to calling DOS? I can call EXEs, VBS, Perl Scripts, and much, much more. I can even create new batch files from T-SQL based on "conditions" and execute them. I can even call Powershell Script files using xp_CmdShell.
3) You also get to contain and manage the dependency on the database functionality. Basically you make a truly external dependency into mostly an internal one. For example, people install 7zip.exe to compress files as it is easy to call from a command line and handles many formats and zip64 for files larger than 4 GB. If you are responsible for backing up and recreating this system, either for testing or disaster recovery, it is another step to be concerned with, as you have to have that EXE in your backup plan and make sure to install it either in the right place or make sure the PATH environment variable is set correctly. With SQLCLR assemblies, they are part of the database backup, and you are not going to restore a database with any procs / functions / views that reference assemblies and somehow not have that assembly. It simplifies / centralizes the code, making for easier management. And if you are creating external .Net programs to do things that cannot be done in T-SQL, you might as well just put that same code (in most cases) in the DB and remove a point of failure. Going back to the 7zip example, gzip compression is built into .Net, so it does not take too much effort to write an internal zip function. And if you need more than the basic gzip (maybe you need zip64 for 4+ GB files or RAR format) then you can find ways to get that as well (for example, the FULL / Paid For version of SQL# includes a gzip function that handles zip64 for 4+ GB files :-)).
While that's all true and good, it also means that I have to maintain externally managed code in a language that I prefer not to work with. :-) Just including EXEs and DLLs in a new instance of SQL Server isn't all that needs to be done. Of course, that would be true of BAT, VBS, PS scripts, etc. I do understand your point, though.
4) A tentative additional benefit would be that security can be more fine-grained. I say "tentative" as I have not tried this and I doubt it is used very often, but it is still available. Using xp_cmdshell the default security context that the OS sees is the service account that the SQL Server (DB Engine) process is running as. There is the lesser used (I assume) option of having a proxy account, but it seems that the proxy account is used in place of the "run as" account for users who are not a member of the sysadmin fixed server role and the caller of xp_cmdshell doesn't get to specify which account to run under (either "run as" account or the proxy account, if configured). But with CLR code, it is possible (again, I haven't yet tried) to set up impersonation to allow the caller of the CLR proc/function (assuming a Windows login) to use their own credentials when interacting with external resources such as the filesystem and internet. Again, I have not tried this so I hope I am not misstating how the CLR impersonation works, but that is my understanding from what I read and I hope to soon prove this out as it does seem rather interesting.
Understood but, again, the security is controlled by what the procs do, not who does them. If someone needs to do something specific, they do it by being given privs to run a particular stored procedure.
Hope this makes sense and helps. Take care, Solomon...
It actually does help and, just to be sure, I'm not trying to be argumentative or contrary. I'm just explaining the different ways I do things. :-)
--Jeff Moden
Change is inevitable... Change for the better is not.
February 10, 2012 at 4:54 pm
Jeff Moden (2/10/2012)
I can call EXEs, VBS, Perl Scripts, and much, much more. I can even create new batch files from T-SQL based on "conditions" and execute them. I can even call Powershell Script files using xp_CmdShell.
That's what makes it so vulnerable in the wrong hands.
I don't generally veto its use; just make your code switch it on when it needs it and then ensure you switch it off afterwards.
The problem with a vendor-supplied CLR is you don't have access to the source; it could be doing literally anything, so you place an awful lot of trust in them :w00t:
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" ๐
February 11, 2012 at 9:46 am
Perry Whittle (2/10/2012)
Jeff Moden (2/10/2012)
I can call EXEs, VBS, Perl Scripts, and much, much more. I can even create new batch files from T-SQL based on "conditions" and execute them. I can even call Powershell Script files using xp_CmdShell.

That's what makes it so vulnerable in the wrong hands.
I don't generally veto its use; just make your code switch it on when it needs it and then ensure you switch it off afterwards.
The problem with a vendor-supplied CLR is you don't have access to the source; it could be doing literally anything, so you place an awful lot of trust in them :w00t:
Very good point. I am not in favor of the turn on/off methodology; I don't believe you need it. My standard SQL audit policies check to make sure it is disabled. In the case of in-house developed SQLCLR, policy requires submission of the source code for review with enough lead time to actually review it. In cases where a call to the command line is required, it is handled by a SQLCLR proc that calls ONLY what is explicitly required. I don't want arbitrary calls to the command line. An upside is that I don't run SQL Server as a highly privileged user, so what an xp_cmdshell call CAN do is more limited.
CEWII
February 11, 2012 at 11:10 am
Jeff Moden (2/10/2012)
Since you brought up international time differences for doing a DIR in two different countries, let me ask... what did you do with your CLR to solve this problem? Heh... me? I just had two different versions of the code and sent the correct version to the correct place.
That is the beauty of dealing with information in native/rich datatypes: I didn't have to do anything. The issue was self-correcting because in both the C# code and what the C# passed back to the query, the date value was a real DATETIME. Using a directory listing method like Jonathon's example in the main article, you just add a regular WHERE condition and filter as you would normally in T-SQL. This is much more elegant and maintainable than having two different versions of the code like you had to do. Text-parsing localized strings--whether for dates, currencies, numbers, etc.--is highly error-prone, as any new localization can break your code. Meaning, if a third date format is introduced in your scenario, your code will break and you will need to add a third variation of the text parsing, which increases the number of places a bug can hide. It also adds complexity, which makes the code less maintainable, especially if someone other than yourself has to make updates and is not entirely sure why your process parses some values one way and other values another.
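The native-datatype point can be made concrete with a minimal directory-listing table-valued function. This is only a sketch in the spirit of the approach described above (names and the column list are illustrative; the assembly would need EXTERNAL_ACCESS to touch the file system):

```csharp
using System;
using System.Collections;
using System.Data.SqlTypes;
using System.IO;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    // Streams one row per file; "modified" is a real DateTime, not a
    // culture-formatted string, so T-SQL can filter on it directly.
    [SqlFunction(
        FillRowMethodName = "FillFileRow",
        TableDefinition = "name NVARCHAR(260), modified DATETIME")]
    public static IEnumerable ListFiles(SqlString path)
    {
        return new DirectoryInfo(path.Value).GetFiles();
    }

    public static void FillFileRow(object obj,
        out SqlString name, out SqlDateTime modified)
    {
        var fi = (FileInfo)obj;
        name = fi.Name;
        modified = fi.LastWriteTime;   // implicit DateTime -> SqlDateTime
    }
}
```

Because modified comes back as a genuine DATETIME, a query like SELECT name FROM dbo.ListFiles(N'C:\Logs') WHERE modified >= '20120101' behaves identically regardless of the server's regional settings, with no text parsing anywhere.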
Jeff
Solomon
2) You get more functionality than you would using standard DOS commands as DOS is not a very rich language. For example, if you wanted to allow deleting of files but only with a particular name pattern that was complex enough to not work with the basic ? and * wildcards in DOS, you could easily filter the directory list in C# or VB using Regular Expressions. Sure, you might be able to do this with a powershell script or a call to some other shell (some people might have PERL scripts to do more complex stuff or have even written C# or VB.Net EXE's to do this), but that is just another level removed from SQL Server (the issue here is explained in the next item).
Heh... why does everyone think that xp_CmdShell is limited to calling DOS? I can call EXEs, VBS, Perl Scripts, and much, much more. I can even create new batch files from T-SQL based on "conditions" and execute them. I can even call Powershell Script files using xp_CmdShell.
I am unsure if you are implying that I was only thinking in terms of calling DOS or if you are speaking of other people, given that I did mention calling EXEs, PERL, PowerShell, etc. Regardless, my point is still that if you need the power of those languages then you might as well do the same things without ever leaving SQL Server, especially if you are calling custom-written .Net code, as most likely that would work with little or no modification in SQLCLR.
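The regex filtering mentioned above might look something like this inside such a listing function (a sketch; the pattern and helper name are purely illustrative):

```csharp
using System;
using System.IO;
using System.Text.RegularExpressions;

static class FileFilters
{
    // Keep only files like "app-20120211-003.log" -- a constraint the
    // basic DOS ? and * wildcards cannot express.
    public static FileInfo[] MatchLogFiles(string path)
    {
        var rx = new Regex(@"^app-\d{8}-\d{3}\.log$",
                           RegexOptions.IgnoreCase);
        return Array.FindAll(
            new DirectoryInfo(path).GetFiles(),
            f => rx.IsMatch(f.Name));
    }
}
```

The same Regex class is available to any SQLCLR assembly, so the filtering happens in-process rather than by shelling out to PowerShell or PERL.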
And an added benefit is what I discussed above in terms of the interface between your query and the external code you are calling. If you have an XML document or any number of large strings, binary data, etc., you can pass those in natively as input parameters. Using xp_cmdshell you need to construct a DOS command line, which might require translations, and the entire command submitted to xp_cmdshell is limited to varchar(8000) or nvarchar(4000). There are no such limits with SQLCLR, and you can even pass custom CLR datatypes to CLR functions and procs. And getting data out, whether from a result set, output parameter, or scalar return value, can be done inline and without text parsing.
Jeff
Solomon
3) You also get to contain and manage the dependency on the database functionality. Basically you turn a truly external dependency into a mostly internal one. If you are responsible for backing up and recreating this system, either for testing or disaster recovery, it is another step to be concerned with, as you have to include that EXE in your backup plan and make sure to install it either in the right place or with the PATH environment variable set correctly. With SQLCLR assemblies, they are part of the database backup, and you are not going to restore a database with any procs / functions / views that reference assemblies and somehow not have those assemblies. It simplifies / centralizes the code, making for easier management. And if you are creating external .Net programs to do things that cannot be done in T-SQL, you might as well just put that same code (in most cases) in the DB and remove a point of failure.
While that's all true and good, it also means that I have to maintain externally managed code in a language that I prefer not to work with. :-) Just including EXEs and DLLs in a new instance of SQL Server isn't all that needs to be done. Of course, that would be true of BAT, VBS, PS Scripts, etc. I do understand your point, though.
I am not saying that there is nothing to manage in the case of SQLCLR assemblies, just that it is less (possibly MUCH less) to manage.
Jeff
Solomon
4) A tentative additional benefit would be that security can be more fine-grained. I say "tentative" as I have not tried this and I doubt it is used very often, but it is still available. Using xp_cmdshell the default security context that the OS sees is the service account that the SQL Server (DB Engine) process is running as. There is the lesser used (I assume) option of having a proxy account, but it seems that the proxy account is used in place of the "run as" account for users who are not a member of the sysadmin fixed server role and the caller of xp_cmdshell doesn't get to specify which account to run under (either "run as" account or the proxy account, if configured). But with CLR code, it is possible (again, I haven't yet tried) to set up impersonation to allow the caller of the CLR proc/function (assuming a Windows login) to use their own credentials when interacting with external resources such as the filesystem and internet. Again, I have not tried this so I hope I am not misstating how the CLR impersonation works, but that is my understanding from what I read and I hope to soon prove this out as it does seem rather interesting.
Understood but, again, the security is controlled by what the procs do, not who does them. If someone needs to do something specific, they do it by being given privs to run a particular stored procedure.
Not entirely understood ;-). What I am speaking of here is an option CLR code offers to interact with external resources based on the Windows account of the person running the proc / function / trigger. When calling xp_cmdshell, and by default with SQLCLR code, the execution context is the account that SQL Server is running under. Assuming that SQL Server is running as a restricted account that cannot do much outside of managing its own data files, this is an option to allow people to interact with the system without granting extra rights to the SQL Server account.
Jonathon has an article here that explains more.
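The impersonation option being described here (hedged, as noted above, since it is untested in this thread) centers on SqlContext.WindowsIdentity, which is non-null only for Windows logins. A rough sketch, with an illustrative proc name, assuming the assembly has at least EXTERNAL_ACCESS:

```csharp
using System;
using System.Data.SqlTypes;
using System.IO;
using System.Security.Principal;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void ReadFileAsCaller(SqlString filePath)
    {
        // Null for SQL-authenticated logins; only Windows logins
        // carry an identity that can be impersonated.
        WindowsIdentity caller = SqlContext.WindowsIdentity;
        if (caller == null)
            throw new InvalidOperationException("Windows login required.");

        // Inside this block the file is opened with the CALLER's
        // credentials, not the SQL Server service account's.
        using (WindowsImpersonationContext ctx = caller.Impersonate())
        {
            string contents = File.ReadAllText(filePath.Value);
            SqlContext.Pipe.Send(contents);
        }   // impersonation is undone when the context is disposed
    }
}
```

One caveat worth noting: any in-process data access (the context connection) has to happen outside the impersonated block, so external access and database work need to be kept separate.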
Jeff
Solomon
Hope this makes sense and helps. Take care, Solomon...
It actually does help and, just to be sure, I'm not trying to be argumentative or contrary. I'm just explaining the different ways I do things. :-)
Understood. I am just explaining some of the benefits of SQLCLR over xp_cmdshell as I see them. But I am not saying or implying that you or anyone needs to agree or ever use SQLCLR.
And I would be fine even if you were being argumentative and/or contrary as I know I am right about this ;-).
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
February 11, 2012 at 3:17 pm
Perry Whittle (2/10/2012)
The problem with a vendor supplied CLR is you don't have access to the source, it could be doing literally anything, you place an awful lot of trust in them :w00t:
Hi Perry. I have heard this concern voiced once or twice before and I always felt it was rather mind-boggling for several reasons:
1) Unless you are using mostly or all open source software (e.g. Linux/Apache/MySQL/PHP), you generally do not have any source code for the major software that you are using. Not having source code for Windows, SQL Server, .Net, Visual Studio, or even smaller tools like SQL Prompt, .Net Reflector, Fiddler, etc. doesn't seem to bother most people.
2) Why would there be an assumption that anything malicious is being done in the code? Sure, I suppose it is always possible, but how often has this really happened within the realm of business software? Maybe more often in free software, as that reaches a wider audience and is more likely to accomplish whatever goal the malicious code has, but for paid software? What incentive would there ever be for that? In fact, as a software vendor I have every incentive NOT to do anything malicious and to make sure that my code performs as well as possible.
3) If you want to claim that some business software vendors still can, or maybe even have in the past, put in malicious code, you still can't distinguish between large/established companies and smaller/newer companies on that basis. Microsoft Word embedded personally identifying information in documents, at least as of Office 2000 or 2003, which is how the author of one of the more famous viruses was tracked down. And Excel (I think also Office 2003) had a flight simulator Easter egg in it.
4) What would most people do even if they had the source code? Would they be able to read it? Depending on how much there was, would they have time to read through it? Sure, for a project that has its source code available for all to see online, then theoretically over time you would expect that eventually someone will find it and point it out. But that doesn't guarantee that anyone who previously downloaded it would be notified of that.
5) Being developed in-house in either SQLCLR or even T-SQL doesn't truly guarantee safety. Yes, it is more likely to be safe, as you at least have the option to review the code, but in larger shops with many developers it is highly unlikely that the DBAs ever see even the T-SQL code that is running until it shows up in a search for longest-running queries or is seen in Profiler as taking up too much CPU, memory, or reads. But we have well over 10,000 stored procs, functions, views, triggers, etc., and there is no real practical way to guarantee against bad code, whether malicious or accidental. And I have seen DBAs write absolutely horribly inefficient SQL code, and it didn't help that other DBAs had seen it, as none of them knew exactly why it was inefficient.
It seems that there are really only two reasons to want or even need the source code for externally produced software:
1) Performance. You want code that goes into Production to perform well. If you do not have the source code you most likely can't see what it is doing, BUT you can test the effects of the code on the system before it goes to Production. This is something that should be done for in-house code as well. Hence this is not a true reason to shy away from vendor-supplied software, as you would have fair warning of poorly performing code.
2) Maintenance. Some vendors do go out of business, and it is reasonable that people incorporating someone else's software as part of the software that generates their revenue would want some safety net in case a problem is found and needs to be fixed. In these cases many companies provide a source-code license for an additional cost that mitigates this particular concern while still protecting the software vendor.
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures)
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
February 11, 2012 at 11:43 pm
Thanks for the feedback, Solomon. Mighty kind of you to spend the time you did explaining what you thought.
Shifting gears a bit... On the subject of 3rd party CLR and having the source code. We all put up with the faults of a lot of 3rd party software (and a lot of it is "shrink wrapped"). Avoiding CLR just because you don't have the source code is a bit like saying you don't want to use PKUnZip because you don't have the source code. I understand the fear there, though. This is code that touches "our" database(s) and a lot of us would really like to know what it's going to do, how many resources it's going to use to do it, and how long it's going to take to run.
Of course, a lot of folks blindly installed SP3 for SQL Server 2000, so maybe there's not enough fear. :-)
--Jeff Moden
Change is inevitable... Change for the better is not.