February 21, 2014 at 9:16 am
My experience will echo a lot of the comments already mentioned. I use T-SQL whenever possible because, in most cases, T-SQL performs faster, is easier to maintain, and our clients’ IT departments are not as versed in the CLR, so they cannot support it. With that said, there have been a few times where T-SQL fell short of our needs and CLR external objects were needed.
I needed a common interface for a collection of applications to easily extract data from a historian, which has a proprietary backend database, and merge that data with an MES system. The proprietary backend database is a glorified key/value system that is great for trending data over time, but it cannot be used for aggregating data or linking multiple keys together to make correlations in the data, which a relational database does naturally. The vendor has built components to overcome these limitations, but I shy away from them because of their lower performance, their additional cost, and the fact that they do not really fit into our current architecture. I could have created a .Net DLL for the applications, but by using the SQLCLR, other applications (database applications and .Net applications) could leverage the assemblies. Plus, it was much easier to write a suite of CLR objects to expose the data in the proprietary database and then use T-SQL stored procedures on the exposed data.
Another reason for using the CLR was for reporting. I have written a number of external objects to help format data for SSRS reports and to expose Analysis Services metadata via AMO (primarily role members and role permissions).
Many of our clients have multiple SQL Servers, and I try to install the CLRs on as few servers as possible, usually the servers used for reporting. Having fewer instances of the assemblies makes it easier to perform updates.
There's my 2 cents - Tom
February 21, 2014 at 9:28 am
In my shop CLR functions have a limited use, primarily for complex financial computations. One crucial best practice when coding CLR objects is to include mechanisms that avoid runaway CLR processes (e.g., calling a sleep function every x iterations). This is especially true for looping procedures. A non-yielding CLR process in SQL Server can have disastrous results when it happens to be on the same scheduler as the Log Writer process. Let's just say I have first-hand experience here :).
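The pattern looks something like this; a minimal C# sketch where the method name, the work being done, and the yield interval are all illustrative, not from any real system:

```csharp
using System;
using System.Threading;

public static class ClrBestPractices
{
    // Illustrative yield interval; tune for your workload.
    private const int YieldInterval = 10000;

    // Hypothetical long-running computation. Calling Thread.Sleep(0)
    // every YieldInterval iterations voluntarily yields, giving the
    // SQLOS scheduler a chance to run other tasks instead of letting
    // this loop become a non-yielding process.
    public static double SumOfSquareRoots(int n)
    {
        double total = 0.0;
        for (int i = 1; i <= n; i++)
        {
            total += Math.Sqrt(i);
            if (i % YieldInterval == 0)
            {
                Thread.Sleep(0); // yield without introducing a real delay
            }
        }
        return total;
    }
}
```

In a real SQLCLR deployment this static method would be decorated with an attribute such as [Microsoft.SqlServer.Server.SqlFunction]; the key point is only the periodic Sleep(0) inside the loop.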
February 21, 2014 at 9:32 am
jmoneysteph (2/21/2014)
In my shop CLR functions have a limited use, primarily for complex financial computations. One crucial best practice when coding CLR objects is to include mechanisms that avoid runaway CLR processes (e.g., calling a sleep function every x iterations). This is especially true for looping procedures. A non-yielding CLR process in SQL Server can have disastrous results when it happens to be on the same scheduler as the Log Writer process. Let's just say I have first-hand experience here :).
I, for one, would like to see bad-practice/good-practice examples of this. Any chance of either a quick posting or an article?
Gaz
-- Stop your grinnin' and drop your linen...they're everywhere!!!
February 21, 2014 at 9:47 am
From an application developer's perspective, I tend to think of using SQLCLR as a violation of the three-tier model. The data tier really should be abstracted from the rest of the architecture. Some advantages of this abstraction are performance, scalability, and vendor-neutrality (MySQL? Anyone? Anyone?).
Additionally, I'm not now, nor have I ever been, a proponent of using the .Net framework in *any* application tier. HTML, JavaScript, Ajax, and CSS have been my preferred technologies for the presentation and business logic tiers since .Net's initial release.
February 21, 2014 at 9:53 am
Gary Varga,
In my experience I use the following as a best practice:
• Validate all input parameters and throw the appropriate exception when a parameter is not correct.
• Use error handling and re-throw exceptions to the caller. Do not try to log exceptions in the CLR; have the T-SQL (or other caller) do that.
• Write multiple smaller, precise CLR methods versus one big one that does everything. This way you can take advantage of both worlds (T-SQL for set operations and CLR for everything else). I have also found that performance degrades on larger CLR objects. Using debuggers, it appears that SQL Server tries to determine how to perform a ROLLBACK on all CLR calls; the larger the CLR object, the longer it takes before the code is actually executed.
• When writing aggregations, build the result as the aggregation moves through the group versus storing all group members and then performing the action. This will improve performance and reduce the memory required on large groups.
I'm sure there are more, but those are my big ones.
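The first two points can be sketched like this; a minimal C# example where the method, its parameters, and the validation rule are my own illustration (in a real deployment the method would carry a [Microsoft.SqlServer.Server.SqlFunction] attribute):

```csharp
using System;

public static class ValidationExample
{
    // Hypothetical scalar function body: validate inputs up front and
    // throw a precise exception. No logging happens here; the exception
    // surfaces to the T-SQL caller, which decides how to record it.
    public static int MonthsBetween(DateTime start, DateTime end)
    {
        if (end < start)
            throw new ArgumentException("end must be on or after start", "end");

        return ((end.Year - start.Year) * 12) + end.Month - start.Month;
    }
}
```

From T-SQL, a TRY/CATCH around the function call can then log the error once, in one place, rather than scattering logging logic through the CLR code.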
Tom
February 21, 2014 at 10:00 am
Does the CLR handle the WHERE clause properly, or does it always return all sub-values?
February 21, 2014 at 10:08 am
I've been pleasantly surprised by what I've seen for the most part over the years. Even at my last employer (20K+ employees, IT centric shop) there wasn't much abuse of the CLR. Most engineers were using it sensibly for high volume string manipulation and other function oriented work where T-SQL isn't so strong.
February 21, 2014 at 10:55 am
I had to Bing search it, so no, I don't use it. 😛
February 21, 2014 at 11:19 am
Hi,
I too added two functions for regular expressions: one to test a pattern and return 0 or 1, and one to return the matching string. I tried to minimize the amount of work done in the CLR and only use the functions when I really need them.
http://www.sqllion.com/2010/12/pattern-matching-regex-in-t-sql/
and
http://blogs.msdn.com/b/sqlclr/archive/2005/06/29/regex.aspx
I have added these to all my servers in a separate database so that I can call the same function from each of the servers.
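For reference, the pair of functions described above can be sketched like this in C#; the names and null handling are my own illustration, and in SQLCLR these would be [SqlFunction]-decorated methods using SqlString/SqlBoolean rather than plain .NET types:

```csharp
using System.Text.RegularExpressions;

public static class RegexFunctions
{
    // Returns 1 when input matches the pattern, otherwise 0,
    // mirroring a BIT return value on the T-SQL side.
    public static int RegexIsMatch(string input, string pattern)
    {
        if (input == null || pattern == null) return 0;
        return Regex.IsMatch(input, pattern) ? 1 : 0;
    }

    // Returns the first matching substring, or null (NULL in T-SQL)
    // when there is no match.
    public static string RegexMatch(string input, string pattern)
    {
        if (input == null || pattern == null) return null;
        Match m = Regex.Match(input, pattern);
        return m.Success ? m.Value : null;
    }
}
```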
Thanks,
Peter
February 21, 2014 at 3:00 pm
I use the CLR extensively, and while it may be possible to kludge together some convoluted solutions to accomplish the same things, CLR code, when written correctly and used appropriately, makes code immensely simpler and allows solutions to be deployed more quickly. I have created a couple of very specialized CLR objects for specific encryption requirements (the requirements dictated the use of a specific encryption methodology, the key length, the seed requirements, the number of hash iterations, etc.).
But my biggest time saver has been using the SQL# libraries from www.sqlsharp.com . ** disclaimer: I am in no way affiliated with SQLSharp, and paid full price for my license **
I have been using the library for years now, and Solomon Rutzky (the creator) is immensely helpful when it comes to adding functions, procs, and features, and/or fixing issues. Here are some examples of what I use it for:
Development
* Regular Expressions (try validating (or even semi-validating) email formats without it)
* String splitting (yes, there are fast Tally solutions out there, but they're still slower than CLR (small and large strings))
* String concatenation (yes, there are other ways to accomplish this, but the CLR function is simple and fast)
* Validation of data type conversions (Mostly used before the SQL 2012 TRY_CONVERT function existed)
* Creating ranges of Dates/Times (very handy for some scheduling applications)
* Creating ranges of numbers (always useful, especially if you want the numbers to be stepped (e.g. 5, 10, 15, etc.))
* Date manipulation and formatting (you can do all of this manually, but...why?)
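To show why a CLR splitter can beat set-based approaches, here is a minimal sketch of the streaming technique (my own illustration, not SQL#'s actual code): a SQLCLR table-valued function returns an IEnumerable and streams rows out with yield return instead of materializing the whole result first.

```csharp
using System.Collections.Generic;

public static class SplitExample
{
    // Streams one element at a time via yield return; in SQLCLR this
    // would be exposed as a table-valued function (a [SqlFunction] with
    // a FillRowMethodName), so rows flow to the caller without the
    // whole split result ever being held in memory at once.
    public static IEnumerable<string> SplitString(string input, char delimiter)
    {
        if (string.IsNullOrEmpty(input)) yield break;

        int start = 0;
        for (int i = 0; i < input.Length; i++)
        {
            if (input[i] == delimiter)
            {
                yield return input.Substring(start, i - start);
                start = i + 1;
            }
        }
        yield return input.Substring(start); // final (or only) element
    }
}
```

The lazy enumeration is what keeps memory flat on large strings; the library's real implementation is of course more elaborate.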
DBA
* File/Directory manipulation (yes, you can enable xp_cmdshell, but the results are cleaner and easier to work with in CLR, and so are the error messages)
* Internet access (the ability for me to easily call a CLR proc that goes and retrieves OR posts!! something across the internet is to me worth the price right there!)
* XML/XSL transformations (I actually requested this functionality, and Solomon graciously added it to SQL#) (For me, this comes in handy when I want to send a formatted HTML email message that is created based upon data in the db. It's basically a way to do mail merges. I can retrieve the necessary data elements (some scalar, some set based) and then apply an XSLT to generate an HTML email subject. This is a PITA to do manually)
ETL
* FTP (yes, you can do all of this via xp_cmdshell, but it's easier through SQL#)
* Internet access (see above) (some examples are retrieving a file from a website over port 80 through the firewall)
* Dynamic bulk importing and exporting of data (SSIS, BULK INSERT, and bcp all have their pros and cons; there ARE ways to get the best of both worlds!)
The list goes on. Obviously, I'm a fan of SQL# and highly recommend it! I look at it this way: If Microsoft announced that there were 230+ new functions within SQL Server, the blogosphere would go crazy. To me, having this library is like having the complete tool box that everybody else either wishes they had, or doesn't even know they need because they've been hammering nails with a screwdriver so long, they just think that's *the way* to do it.
Now, I'm not advocating installing SQL# on every database and letting your developers go crazy with it, but it seems short-sighted not to at least examine it, test it out, profile it, and release only what is necessary in your environment. Once you do, you'll wonder why you didn't do it sooner.
I've done a few presentations at our local user groups and SQL Saturdays and I have to say that I see people's eyes light up with ideas on how they can use this library and the time that it could save them.
February 21, 2014 at 3:48 pm
At one time I used a CLR string aggregator (catenate with commas between) with code taken from the SQL Server developer samples, and for a while I used an open source CLR geospatial extension (can't remember what it did). I never developed SQL Server CLR code from scratch.
These uses were not terribly important, but when I tried moving a subset of the database to SQL Azure (using the limited usage that came with my MSDN Premium subscription), I couldn't do it because of those extra assemblies. All I was doing was experimenting to become more familiar with cloud services, but I ended up concluding that, at least at that time, CLR and cloud did not mix.
February 21, 2014 at 3:57 pm
eric.notheisen (2/21/2014)
I worked on a project for Wells Fargo a few years ago. We had a database developer who used CLR procedures for 60 percent of his stored procs. We found the performance hit to be minimal and [font="Arial Black"]the flexibility to be superior.[/font] In my current work I have used CLR procedures several times when I needed to recursively look at or process data.
Flexible how and superior to what?
--Jeff Moden
Change is inevitable... Change for the better is not.
February 21, 2014 at 4:19 pm
Never personally used it. Haven't even booted up the interface. Never had to.
One of my vendors who runs our SoR seems to use it pretty heavily. Haven't seen any particular performance concerns due to the CLR on that system. That's mostly because the bottlenecks on that one are elsewhere.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
February 21, 2014 at 5:40 pm
We installed one - a RegEx string handler to replace unacceptable characters. I want to put another in - a splitter, but there's no pressing need, so it's not on the radar yet.
Mark
Just a cog in the wheel.
February 21, 2014 at 6:55 pm
We've implemented a couple of CLR functions for string concatenation, string splitting, and parsing, and that is it. They are rarely used. We don't see a great need to use the CLR.
Viewing 15 posts - 31 through 45 (of 51 total)