November 23, 2013 at 7:40 am
Thierry,
Intellectual property comes in different forms. There are copyrights and patents, and then there are trade secrets. All of them are easily and frequently violated. The redress isn't technical but legal.
One of the appeals of the hosted model is that it is easier to protect intellectual property that way.
November 23, 2013 at 10:07 am
Jeff Moden (11/22/2013)
Sorry, Elliott. I absolutely agree in most cases and didn't mean to make it sound contrary to what you said. Rather, my intent was to support your position on the use of CLR, as I allowed it into my environment long before I allowed 3rd-party code on the servers.
No problem Jeff.
CEWII
November 23, 2013 at 11:49 am
There has been a bit of discussion following this post so I will start here, rather than assume people have been keeping up with the thread.
thierry.vandurme (11/22/2013)
CLR is all nice but I still think it's not secure when you have 3rd-party assemblies that require the use of UNSAFE.
Hi there. This is actually a rather vague statement, even if most people don't recognize it as such, and without context, is a bit unfair. What packages / products require UNSAFE Assemblies? Are the required UNSAFE Assemblies vendor-produced or are they .Net Framework DLLs that are required for functionality that the vendor is needing? The list of "tested and approved" system DLLs is actually quite small, so something like image manipulation, which requires System.Drawing, is not natively available. Getting that to work requires importing the System.Drawing DLL into SQL Server, but this is only allowed if it is marked as UNSAFE. However, marking it as UNSAFE doesn't mean that it is necessarily doing unsafe things. And it is no less secure when it is sitting inside SQL Server than it is when it is in its natural home within C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (path varies by OS version and 32 vs 64 bit) and available to all .Net-based applications.
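For illustration only, importing that DLL typically looks something like the following. This is a minimal sketch under a few assumptions: the database name is hypothetical, the framework path varies by machine as noted above, and the prerequisite for loading UNSAFE assemblies (TRUSTWORTHY ON or, preferably, certificate/key-based signing as discussed later in this thread) is already in place.

-- Minimal sketch: registering a framework DLL that is not on the
-- "tested and approved" list. SQL Server will only accept it as UNSAFE,
-- even though the DLL is the same one every .Net application uses.
USE MyDatabase;  -- hypothetical target database

CREATE ASSEMBLY [System.Drawing]
FROM 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Drawing.dll'
WITH PERMISSION_SET = UNSAFE;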
How can we tell there's no backdoor in the assembly?
I feel that this is an odd and unfounded fear that many people seem to have. Why would you even be thinking along these lines? How sure are you that any software you run doesn't have backdoors? Are you certain that Microsoft doesn't have a backdoor in both Windows and SQL Server, just in case they want to check up on your licensing or whatnot? Do you think that some vendors would never do such a thing? LG is a well-known and well-respected company, so nobody expected this. So sure, it is always theoretically possible, but are there instances of this actually happening?
And there are means of seeing what is happening, even if you don't have the code (network monitors such as Fiddler, for example).
As far as I can tell, the only real concern with 3rd party assemblies is poorly written code that might have performance issues. And of course, the simple fact that all (or nearly all) software has some number of bugs, even if very minor. But the issue of bugs is moot given that there are already bugs in Windows and SQL Server, we don't have that source code, and we still use those products.
.NET Reflector is ok but what if they obfuscate the assembly? I'm not a .NET developer but I assume there are ways to protect one's code?
As someone else mentioned, obfuscation just makes the code hard to read; it should not be confused with encryption. Phil mentioned that SmartAssembly by Red Gate can obfuscate. And to answer his question about anyone actually trying it: I did, although it was over a year ago and I don't recall what version they were on, maybe 5? So I haven't tried with version 6. But I do remember that parts of it worked and parts of it didn't. I think scalar UDFs worked but for some reason TVFs did not. I would love to test it again on version 6 but am so swamped with stuff to do that I just don't have the time :-(.
If you can regenerate (reverse-engineer) the code using the .NET Reflector how can you protect your software? Ok, if it's obfuscated then -if I'm not mistaken- variables, methods... have no meaning but one can still get to the logic (if it's really worth it). Right?
Correct, there is no true way to protect your code from people seeing it, outside of never giving it to them; if people can copy not just Apple products, but also their stores, then they can likely get your code. I completely agree with both points that Robert Sterbal made: 1) protection is more of a legal issue than a technical one, and 2) a benefit of the SaaS / hosted model is that the company has the only copy of the software. To me, the best protections one has are to keep innovating and to be very responsive to customers.
So I'm still not convinced CLR is more secure than xp_cmdshell when it comes to assemblies you don't own.
Well, the context of this article was using CLR yourself to replace xp_cmdshell, and at least in that context CLR is often, but not always, better. When it comes to 3rd party assemblies, clearly there are no guarantees, but I also don't see any real cause for concern.
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
November 23, 2013 at 4:16 pm
Solomon Rutzky (11/23/2013)
How can we tell there's no backdoor in the assembly?
I feel that this is an odd and unfounded fear that many people seem to have. Why would you even be thinking along these lines? How sure are you that any software you run doesn't have backdoors? Are you certain that Microsoft doesn't have a backdoor in both Windows and SQL Server, just in case they want to check up on your licensing or whatnot? Do you think that some vendors would never do such a thing? LG is a well-known and well-respected company, so nobody expected this. So sure, it is always theoretically possible, but are there instances of this actually happening?
Heh... I got major whiplash from that paragraph. You first claim that it's an "odd and unfounded fear" only to prove that it happens with the very link you provided.
"I feel" that concern on the potential for backdoors and other questionable code should be a common and well founded fear especially since you just proved that it DOES happen.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 23, 2013 at 11:20 pm
Jeff Moden (11/23/2013)
Solomon Rutzky (11/23/2013)
How can we tell there's no backdoor in the assembly?
I feel that this is an odd and unfounded fear that many people seem to have. Why would you even be thinking along these lines? How sure are you that any software you run doesn't have backdoors? Are you certain that Microsoft doesn't have a backdoor in both Windows and SQL Server, just in case they want to check up on your licensing or whatnot? Do you think that some vendors would never do such a thing? LG is a well-known and well-respected company, so nobody expected this. So sure, it is always theoretically possible, but are there instances of this actually happening?
Heh... I got major whiplash from that paragraph. You first claim that it's an "odd and unfounded fear" only to prove that it happens with the very link you provided.
"I feel" that concern on the potential for backdoors and other questionable code should be a common and well founded fear especially since you just proved that it DOES happen.
Yeah, I guess that wasn't the clearest. I had typed a response the night before, and when I clicked "post" it went to that lovely error page because I must have gone past the 30-minute session timeout. I only had so much energy to retype it, so I will try to explain again.
The stated concern is that 3rd party code might have a backdoor (or some other malicious code). Ok. So you don't want to run any 3rd party code without having the source code and compiling it yourself? Ok. Do you have source code for 3rd party apps? Do you use the native SQL backup or a 3rd party backup program? If this is a real concern, then shouldn't it be applied consistently? It sounds like some software is assumed to be safe while other software is assumed to be high risk. I am curious as to why there is that distinction. I referenced the article about LG to show that while yes, it does happen, it seems more likely to be done by one of those companies that is assumed to be trustworthy. And at the same time, I have never heard anyone say that they had used some compiled 3rd party code from a much smaller, and possibly unknown, company and that it had malicious code in it. I would like someone to prove to me that this actually happens.
The other point I was trying to make was: why leave it up to conjecture? There are tools to monitor some activity (e.g. Fiddler) and some best practices (such as NTFS permissions and the SQL Server service login) that can prevent some damage by any such malicious code. I am not saying don't be cautious or concerned; I am saying that one should a) be consistent, and b) make decisions based on what they observe more than on what they think might happen.
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
November 24, 2013 at 11:51 am
Solomon Rutzky (11/23/2013)
So you don't want to run any 3rd party code without having the source code and compiling it yourself?
Correct.
Ok. Do you have source code for 3rd party apps?
Almost never.
Do you use the native SQL backup or a 3rd party backup program?
Native only. There's no need for 3rd party backup programs. It's way too easy to write your own or crib a script that you can verify before implementing.
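To illustrate what "write your own" can amount to, a bare-bones native backup is little more than the following. This is only a minimal sketch: the database name, backup path, and options are placeholders, and a real script would add retention, logging, and the like.

-- Minimal sketch of a home-grown native backup.
-- Database name and backup path are hypothetical examples.
DECLARE @db sysname = N'MyDatabase';
DECLARE @file nvarchar(260) = N'D:\Backups\' + @db + N'_'
    + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak';

BACKUP DATABASE @db
TO DISK = @file
WITH CHECKSUM, INIT;

-- Optional sanity check of the backup just written.
RESTORE VERIFYONLY FROM DISK = @file WITH CHECKSUM;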
If this is a real concern, then shouldn't it be applied consistently?
I absolutely do apply it consistently. The only thing I can't apply it to is SQL Server itself because it's necessary to use it and I obviously can't get the source code for such a thing. Same goes for Windows.
It sounds like some software is assumed to be safe while other software is assumed to be high risk.
Not on my part. I assume that all software is basically unsafe. There's just certain software, such as SQL Server and Windows, that I can't do anything or very little about. As a bit of a safeguard (for example), I can tell you that none of my SQL Servers have the ability to get out to the internet on their own. Yes, we do have web servers that can use and write data to the servers but, again, there's not much that I can do about that except to review the code we've written to do that. Even then, there is the potential for risk because we don't have the source code for, say, 3rd party controls or the ORM software. It's not that I accept that risk... it's that I can't do anything about those things except to attempt to monitor.
I am curious as to why there is that distinction.
Others may make that distinction. I do not.
I referenced the article about LG to show that while yes, it does happen, it seems more likely to be done by one of those companies that is assumed to be trustworthy.
That's part of the reason why I make no such distinction.
And at the same time, I have never heard anyone say that they had used some compiled 3rd party code from a much smaller, and possibly unknown, company and that it had malicious code in it. I would like someone to prove to me that this actually happens.
No one has to prove to me that it happens because we've actually seen proof of it in the media for the larger companies, like the one that you provided a link to. Why would one assume that a smaller company wouldn't do the same?
As to "malicious code", it comes in many forms. I won't provide the URL for it and I won't reveal the name of the well known author that wrote what I read because I attribute it to a youthful indescretion, but that author publically admitted to intentionally writing slower code in his younger days so that he would "look good" when he was advised of low performance and was able to quickly fix it. My impression was that he was a consultant at the time and, while I'm certainly not implying that most consultants would resort to such ill-conceived tactics, it certainly does seem to indicate that even small businesses are capable of some form of malicious code.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 25, 2013 at 4:44 am
Hi everyone
The point I was trying to make is not to trust anyone or any software just like that. I'm not saying CLR should be avoided at all costs, just to be careful.
Jeff's replies mostly express what I think of it as well (get well soon with that whiplash :-))
Let me provide a bit more detail on our environment. Maybe this helps to understand the need for UNSAFE. Or maybe someone will come up with a better solution.
We host a clustered SQL instance for a customer. We're in charge of keeping things running smoothly: backups, security, etc.
The customer runs CLR code that accesses a mainframe (network connectivity => EXTERNAL_ACCESS). The CLR code uses static fields (static fields => UNSAFE).
I don't get to see the customer's code, partly because the data it accesses on the mainframe is payroll data. I suppose I'd even risk getting fired if they noticed me trying to get at the code.
On the other hand, I have to ensure reliability, security... for our ENTIRE environment, because even though this cluster is customer-dedicated, there are always shared resources such as our AD.
I believe in this case I had no other way than to trust the developer and allow his UNSAFE assembly.
"What about Windows?" someone stated. Well, it's in my nature. Call me paranoid, but when I installed Windows on my laptop, the first thing I did was turn off any unnecessary services (remote registry for example), TS, the option to get remote assistance...
I also regularly check what's going on on my PCs at home using Task Manager and netstat. Yep, definitely paranoid... :-D
Thanks for all the interesting thoughts
Safety first!
November 25, 2013 at 2:20 pm
Hi Jeff and Thierry. Thanks for the feedback. I appreciate that you both are being consistent in your approach. I know that many others are not.
And I completely understand being very diligent with regards to security, but I still feel it is being over-cautious. Yes, I did link an example of it happening, so it does happen, but at what rate? Ten occurrences across thousands, or tens of thousands, of devices and programs seems to indicate that it is unlikely to happen. And again, it is something that can be monitored. So why deprive yourself / your company of the benefit of some of these programs on the off-chance something bad might happen, especially when it can be detected?
Jeff, I see no real reason to correlate someone intentionally writing slow code with people writing malicious code (though when performance is key, intentionally slow could be seen as slightly malicious; but again, performance can be tested before putting code into production). A few bad apples do not make everyone else guilty (much like an extremely low incidence rate of voter fraud does not require voters to present IDs to prevent it; the consequences outweigh the benefits).
And regarding "Why would one assume that a smaller company wouldn't do the same?": as a small company / vendor (i.e. SQL#[/url]), I feel that I have less leeway to make such mistakes. Larger / more established companies can survive the fallout from these bad decisions, but I feel it would be far more damaging to my project to have negative blog posts if I were to pull such a stunt. I have no reason to do anything malicious and every reason to be extra careful to not even be perceived as having the possibility of such a risk in my code.
And to be clear, I am not being argumentative. I am just sharing thoughts from a different angle.
Thierry, I don't see that there is much risk for your organization. I assume the account you are running SQL Server as does not have write access to Active Directory, nor to anything on the network or anything outside of the clustered machines that SQL Server is running on. Because these instances are dedicated to the customer, it certainly seems like the risk is entirely theirs for any malicious code, but it's their code to begin with. Sure, maybe it would be possible to read some Active Directory info and attempt to transmit that to an external location, but I would hope that those machines are also confined to your internal network.
Regarding this specific use of UNSAFE, the only reasons that I know of to use static variables are to cache values for performance and/or to share values across sessions. There is no way for either of us to know if it is really necessary for your client to need such things, but it could be something you bring up to them if there is a chance they will be receptive to suggestions to at least get it down to EXTERNAL_ACCESS. For example, static fields marked as "readonly" do not require UNSAFE mode. But again, going by your description of the situation, it does not seem like your organization is incurring any risk as long as you have done those few things (logon account is read-only for AD, logon account can't access the network or even local directories that SQL Server doesn't need, and no internet access).
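If it helps, the hosting side can at least confirm what each assembly is registered as, and if the code is ever reworked (e.g. the static fields become readonly), lowering the permission set is a single statement. A minimal sketch; the assembly name is a hypothetical example:

-- What permission set is each user-defined assembly registered with?
SELECT a.name, a.permission_set_desc, a.create_date
FROM sys.assemblies AS a
WHERE a.is_user_defined = 1;

-- If the customer's code no longer needs UNSAFE, it could be
-- re-registered at a lower level (assembly name is hypothetical).
ALTER ASSEMBLY [CustomerMainframeAccess]
WITH PERMISSION_SET = EXTERNAL_ACCESS;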
Take care,
Solomon...
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
November 25, 2013 at 6:06 pm
All it has to do is happen once on my machines and I'm screwed. And monitoring for something means that it has already happened, and the monitor log then becomes nothing more than a fine testament that I couldn't do my job correctly.
It also seems that you trust people more than I do. Nothing wrong with that on your part, but it's not likely I'll ever trust people as much as you seem to indicate.
As to missing out on certain benefits of 3rd party software, yes, I miss out on some but not many. I've written my own backups and monitoring routines, for example. No... they don't have all the bling with a wad of charts and graphs but, if I need to graph something, Excel does a fine job on the aggregated data I auto-magically collect and perfmon/alerts does a fine job with a lot of the real time stuff. Of course, I'm fortunate... I don't have hundreds or even dozens of servers to worry about but, if I did, it wouldn't take that much to make something a little more far reaching.
And, yes, I agree... my way is not for everyone. It's only for those of us that are, as you say, a bit "over cautious".
--Jeff Moden
Change is inevitable... Change for the better is not.
November 26, 2013 at 2:09 am
Hi Solomon
You assumed correctly: the SQL account is a plain user and we try to make it as useless as possible.
Yes, I discussed the use of static variables with the developer and he said he'd be sure to take a look at it... two years ago. It's understandable; we don't get to dictate what they spend their time on.
But as you say, this is a pretty isolated, customer-dedicated cluster and they can only shoot themselves in the foot.
Thanks again for sharing your thoughts!
Regards
Thierry
November 26, 2013 at 8:09 am
Jeff Moden (11/25/2013)
All it has to do is happen once on my machines and I'm screwed. And monitoring for something means that it has already happened, and the monitor log then becomes nothing more than a fine testament that I couldn't do my job correctly.
Agreed. You can ask Cupid Media or Adobe about how well their monitoring for something helped them after they lost tens of millions of rows of information on users - 42 million for Cupid Media, and over 150 million for Adobe (including email addresses and passwords).
Security isn't done well by thinking that 1 in N odds of one specific breach aren't bad; it's done well by reducing the chances for problems at every level and layer, and by working to ensure that when a breach does happen, the damage is as small as possible. There are many, many ways to be breached, and each and every one has its own "odds".
Looking at some intrusion detection logs, the #1 attack I actually see being tried is on port 1433, the default SQL Server port!
As far as third party software, I also agree with Jeff - I write my own, or use scripts I can check (Ola Hallengren's and Adam Machanic's, primarily, with a dose of Jeff's for tally tables, and several from the SQLServerCentral community), none of which need CLR. I do use some other third party tools (Hashcat) for password audits, but those don't need any special access - just CPU or GPU time.
September 14, 2021 at 3:44 pm
The links in this article are dead. I especially want to see the articles linked here:
For further coverage of this topic please see Setting Database Trustworthy On or Signing Assemblies with Certificates. To learn how to sign an assembly with a key please read Signing an Assembly with a Certificate (Visual Studio)
Can you please forward me the correct links?
September 14, 2021 at 4:14 pm
Stephen: I don't know about those 2 links, but I do know that you neither need, nor want, to enable TRUSTWORTHY. Please see the following two resources (both mine) for more details on signing assemblies as well as stored procedures, triggers, scalar UDFs, and TVFs (multi-statement only, not inline):
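In the meantime, the general shape of the certificate / key-based alternative to TRUSTWORTHY is roughly the following. This is only a rough sketch of one common variant (a strong-named DLL and an asymmetric key); all names, paths, and the chosen permission are placeholders, and the full, correct procedure (including the certificate-based variant) is covered in those resources.

-- Rough sketch: granting assembly-level permission without TRUSTWORTHY.
-- All names and paths are placeholders.
USE master;

-- 1) Extract the key the DLL was signed with.
CREATE ASYMMETRIC KEY [ClrSigningKey]
FROM EXECUTABLE FILE = N'C:\Deploy\MyClrAssembly.dll';

-- 2) Create a login from that key and grant it the needed permission
--    (UNSAFE ASSEMBLY or EXTERNAL ACCESS ASSEMBLY, as appropriate).
CREATE LOGIN [ClrSigningLogin] FROM ASYMMETRIC KEY [ClrSigningKey];
GRANT EXTERNAL ACCESS ASSEMBLY TO [ClrSigningLogin];

-- 3) The assembly can now be created in the user database at the
--    required permission set without setting TRUSTWORTHY ON.
USE MyDatabase;
CREATE ASSEMBLY [MyClrAssembly]
FROM N'C:\Deploy\MyClrAssembly.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;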
Take care,
Solomon...
P.S. I will also be presenting on the topic of Module Signing at the upcoming PASS Data Community Summit 2021: Module Signing: Use Certificates to Grant Higher-level Permissions Without Compromising Security. Please keep in mind that the focus will not be on signing assemblies, but you can find info on that topic via the two links I noted above.
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
September 14, 2021 at 10:50 pm
P.S. I will also be presenting on the topic of Module Signing at the upcoming PASS Data Community Summit 2021: Module Signing: Use Certificates to Grant Higher-level Permissions Without Compromising Security. Please keep in mind that the focus will not be on signing assemblies, but you can find info on that topic via the two links I noted above.
Will that include anything on the use of WITH EXECUTE AS OWNER?
--Jeff Moden
Change is inevitable... Change for the better is not.
September 15, 2021 at 1:21 pm
Thanks for that. I ran through the examples shown here. First I tried the T-SQL code at the top. It would not work without one or two alterations. This worked:
-- Drop the temp table from any previous run
IF OBJECT_ID('tempdb..#MyTable') IS NOT NULL
    DROP TABLE #MyTable;

DECLARE @cmd varchar(100);
DECLARE @path varchar(100);

SET @path = 'c:\Windows\';
SET @cmd = 'dir ' + @path + ' /A /OS /-C';

CREATE TABLE #MyTable
(
    Results varchar(500)
);

-- Capture the raw "dir" output
INSERT INTO #MyTable
EXEC xp_cmdshell @cmd;

-- Intermediate check: the raw size / <DIR> column and its digits-only portion
SELECT Results,
       LTRIM(RTRIM(SUBSTRING(Results, 21, 18))) AS MyBit,
       LEFT(LTRIM(RTRIM(SUBSTRING(Results, 21, 18))),
            PATINDEX('%[^0-9]%', LTRIM(RTRIM(SUBSTRING(Results, 21, 18))) + '.') - 1) AS Numbers
FROM #MyTable;

-- Parsed result set
SELECT LEFT(Results, 20) AS [create_date],
       CASE LTRIM(RTRIM(SUBSTRING(Results, 21, 18))) WHEN '<DIR>' THEN 1 ELSE 0 END AS is_directory,
       -- Original expression, replaced by the version below:
       --CASE LTRIM(RTRIM(SUBSTRING(Results, 21, 18))) WHEN '<DIR>' THEN NULL ELSE CAST(LTRIM(RTRIM(SUBSTRING(Results, 21, 18))) AS BIGINT) END AS size_in_bytes,
       CASE LTRIM(RTRIM(SUBSTRING(Results, 21, 18)))
           WHEN '<DIR>' THEN NULL
           ELSE CAST(LTRIM(RTRIM(
                    LEFT(LTRIM(RTRIM(SUBSTRING(Results, 21, 18))),
                         PATINDEX('%[^0-9]%', LTRIM(RTRIM(SUBSTRING(Results, 21, 18))) + '.') - 1)
                )) AS bigint)
       END AS size_in_bytes,
       SUBSTRING(Results, 37, LEN(Results)) AS [name]
FROM #MyTable
WHERE Results IS NOT NULL
  AND ISNULL(PATINDEX('%__/__/____%', Results), 0) != 0;

--DROP TABLE #MyTable
SELECT * FROM #MyTable;
Then I had a go at the .NET code. I started to code it myself, then noticed you have provided the code. I have downloaded SQLCLRNet_DirectoryBrowser.zip. Having loaded it into Visual Studio 2019, how can it be built into a DLL? This doesn't seem complete. What do I have to do to get it running?