March 15, 2010 at 7:23 am
Grinja (3/15/2010)
Hi all, I've been away on leave and have only recently had time to catch up.
Thanks for all the code samples; you were extremely thorough :w00t:
Thank you for the interesting question.
April 1, 2010 at 11:53 am
Paul,
when I run your example code, I get a separate resulting row for every character I'm trying to exclude. I don't see the same behavior in Jeff's code. Are you sure these are equivalent solutions, or am I doing something wrong?
It looks like each individual row returned by your procedure has removed one of the different excluded characters.
Thanks,
Carl
April 2, 2010 at 12:53 am
carl.anderson-1037280 (4/1/2010)
when I run your example code, I get a separate resulting row for every character I'm trying to exclude. I don't see the same behavior in Jeff's code. Are you sure these are equivalent solutions, or am I doing something wrong? It looks like each individual row returned by your procedure has removed one of the different excluded characters.
Hey Carl,
Thanks so much for posting here - it sent me an email notification! I have been trying to track this thread down for a few weeks, ever since I first realised that the in-line TVF is not equivalent after all - the multiple-variable assignment trick is what makes the difference.
I am busy with something else just at the moment, but will correct my posts and submit a revised version in the next few days.
Thanks again!
Paul
April 2, 2010 at 4:34 am
Ok, I have fixed the code in the original post to avoid anyone else using it by mistake.
The new version uses a nested REPLACE:
GO
CREATE FUNCTION dbo.IF_CleanWithReplace
(
    @SomeText VARCHAR(100)
)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
    SELECT cleaned =
        REPLACE(
            REPLACE(
                REPLACE(
                    @SomeText COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS,
                    'A', SPACE(0)),
                'E', SPACE(0)),
            '-', SPACE(0));
GO
Thanks again, Carl.
April 2, 2010 at 8:32 am
I see. So if you want to remove the full set of 37 unprintable characters, you would have to create 37 levels of nesting with REPLACE. I guess the computer scientist in me is concerned about 37 full string scans per value when Jeff's code only does one. However, Jeff also uses the PATINDEX function and double wildcards, so I guess it says something about the performance of PATINDEX if the nested REPLACE functions are still faster.
Seems like there should be a better way! CLR maybe?
Maybe a better question is, why is a deeply nested REPLACE so much faster than Jeff's solution?
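Stepping outside T-SQL for a moment: the scan-count concern can be sketched in a hypothetical Python example (my own illustration, not from this thread), contrasting one full pass per excluded character, as in the nested REPLACE, with a single-pass translation table:

```python
# Hypothetical illustration (Python, not T-SQL): removing a set of
# excluded characters either with one full pass per character (like
# nested REPLACE) or with a single pass over the string.
EXCLUDED = [chr(c) for c in list(range(30)) + [128]]  # 31 characters

def remove_multipass(text: str) -> str:
    # One full scan of the string per excluded character.
    for ch in EXCLUDED:
        text = text.replace(ch, '')
    return text

def remove_onepass(text: str) -> str:
    # A single scan, using a translation table that maps each
    # excluded character to None (i.e. delete it).
    table = {ord(ch): None for ch in EXCLUDED}
    return text.translate(table)

sample = 'abc' + chr(0) + 'def' + chr(13) + chr(10) + 'ghi'
assert remove_multipass(sample) == remove_onepass(sample) == 'abcdefghi'
```

Whether several cheap passes beat one more complex pass depends on the cost of each comparison, which is where the collation discussion in this thread comes in.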
April 2, 2010 at 9:05 am
carl.anderson-1037280 (4/2/2010)
I see. So if you want to remove the full set of 37 unprintable characters, you would have to create 37 levels of nesting with REPLACE. I guess the computer scientist in me is concerned about 37 full string scans per value when Jeff's code only does one.
REPLACE is pretty fast, especially when a binary or SQL collation is used.
However, Jeff also uses the PATINDEX function and double wildcards, so I guess it says something about the performance of PATINDEX if the nested REPLACE functions are still faster.
Seems like there should be a better way! CLR maybe?
Maybe a better question is, why is a deeply nested REPLACE so much faster than Jeff's solution?
The answer is that optimizing the replacement operation is much less important than optimizing the overall query plan. Let me explain that...
The nested REPLACE uses an in-line table-valued function, which is logically just a parameterized view. The definition of the 'function' is expanded in-line into the query plan of the calling statement before SQL Server optimization occurs. This is the reason that the whole operation can be performed with a single scan and a Compute Scalar.
Scalar T-SQL functions cannot be in-lined in this way, and are invoked once per row using a relatively slow interface. These overheads are large enough to completely dominate the overall cost.
A CLR scalar function uses a much faster invocation path, but the overhead of passing the values row by row to and from the hosted environment again dominates the overall cost.
Believe me (and Jeff will attest to this) when I say I am very keen on appropriate use of SQLCLR integration. Unfortunately, an in-line function will out-perform a CLR scalar function every time in this sort of task. The difference can be made small (and CLR is much faster than a T-SQL scalar function) but never faster than the fully in-lined solution.
Paul
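As a loose analogy (my own sketch in Python, not how SQL Server actually executes plans), the per-row invocation overhead described above can be imitated by timing a per-element function call against the same expression written in-line:

```python
# Rough analogy (assumption: illustrative only, this is not SQL Server):
# when the per-row work is tiny, the cost of invoking a separate function
# for every row can dominate, which is why the in-lined form wins.
import timeit

def clean_scalar(s: str) -> str:
    # Stands in for a T-SQL scalar UDF: one call per row.
    return s.replace('\x00', '').replace('\x01', '')

rows = ['abc\x00def\x01ghi'] * 50_000

# "Scalar function" style: an extra function call per row.
scalar_time = timeit.timeit(
    lambda: [clean_scalar(r) for r in rows], number=5)

# "In-lined" style: the same expression written directly in the loop.
inline_time = timeit.timeit(
    lambda: [r.replace('\x00', '').replace('\x01', '') for r in rows],
    number=5)

# Both produce identical output; only the invocation overhead differs.
assert [clean_scalar(r) for r in rows] == \
       [r.replace('\x00', '').replace('\x01', '') for r in rows]
print(f'scalar/inline time ratio: {scalar_time / inline_time:.2f}')
```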
April 2, 2010 at 6:34 pm
Paul White NZ (4/2/2010)
Believe me (and Jeff will attest to this) when I say I am very keen on appropriate use of SQLCLR integration. Unfortunately, an in-line function will out-perform a CLR scalar function every time in this sort of task. The difference can be made small (and CLR is much faster than a T-SQL scalar function) but never faster than the fully in-lined solution.Paul
Heh... attest and certify. The cool part is that you're not one of those folks that thinks SQLCLR is a panacea and you use it wisely.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 5, 2010 at 3:58 am
The REPLACE solution can be made to run an order of magnitude faster if we use an in-line table-valued function instead of an evil scalar function:
I am not getting the exact meaning of this statement.
What do you mean by order of magnitude?
You mean to say that even if we use the REPLACE function in an in-line table-valued function, the performance will be good. Am I correct?
karthik
April 5, 2010 at 4:32 am
karthikeyan-444867 (4/5/2010)
The REPLACE solution can be made to run an order of magnitude faster if we use an in-line table-valued function instead of an evil scalar function:
I am not getting the exact meaning of this statement. What do you mean by order of magnitude?
Order of magnitude = 10 times faster.
You mean to say that even if we use the REPLACE function in an in-line table-valued function, the performance will be good. Am I correct?
Yes. Test it, and examine the query plan to understand it fully.
April 6, 2010 at 2:11 pm
Hi Paul,
thanks for your replies and corrections. I actually put together a version of your example that nests 31 REPLACE functions and then tested it against Jeff's while loop. As it turns out, they are closer performance-wise than you think.
I'm testing on real-life medical data, so I can't share it. But I can tell you that it's varchar(1200) columns containing RTF with very few actual unprintable characters per row. I'm stripping unprintables so I can use a FOR XML query to concatenate RTF fragments into whole documents; unprintable characters are apparently not allowed.
I'm selecting the top 60,000 from a table of 13,000,000 rows and it takes about 30 seconds for the nested REPLACE function to complete, whereas it takes about 40 seconds for Jeff's WHILE loop function.
Interestingly enough, when I SELECT INTO a temp table, either solution completes in almost exactly the same amount of time: about 26 seconds each.
By selecting into a temp table I can clean the RTF for the full table in about an hour. Is this the best we can do??? Maybe I'm missing a query option or something???
Thanks again,
Carl
Here's my code for your reference:
CREATE FUNCTION dbo.IF_CleanWithReplace
(
@SomeText VARCHAR(MAX)
)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
SELECT
cleaned =
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
@SomeText COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS,
char(0) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(1) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(2) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(3) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(4) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(5) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(6) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(7) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(8) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(9) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(10) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(11) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(12) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(13) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(14) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(15) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(16) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(17) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(18) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(19) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(20) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(21) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(22) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(23) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(24) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(25) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(26) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(27) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(28) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(29) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0)),
char(128) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, SPACE(0));
GO
SELECT TOP (60000)
    clean.*
FROM dev.Y_RTF_NOTE_TEXT
CROSS APPLY dbo.IF_CleanWithReplace(NOTE_RICH_TEXT) AS clean;
April 6, 2010 at 2:29 pm
Carl,
Two suggestions:
1. Define @SomeText AS VARCHAR(8000) instead of VARCHAR(MAX)
2. Use the LATIN1_GENERAL_BIN collation - and only on @SomeText, not the replace chars.
VARCHAR(MAX) variables require a tempdb worktable, whereas VARCHAR(8000) does not.
A binary collation is even faster than a SQL collation, and safe to use with your replacement characters, since you are using code point values.
Paul
April 6, 2010 at 3:24 pm
Excellent suggestions Paul!
I changed the argument varchars from MAX to 1200 and that improved performance marginally (~5%). I switched the collation and selected the top 60,000 again, but found it still took between 30 and 35 seconds. However, when I selected INTO a temp table I saw that performance had jumped by a factor of 4: I went from 26 seconds to just 6!
This means I can clean the 13 million rows in about 25 minutes now. Ideally I would like to cut this in half, but it's still a big win. I guess I have to read up on collations now!
Carl
Here's the updated code for anyone interested:
CREATE FUNCTION dbo.IF_CleanWithReplace
(
@SomeText VARCHAR(1200)
)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
SELECT
cleaned =
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
@SomeText COLLATE LATIN1_GENERAL_BIN,
char(0),''), char(1),''), char(2),''), char(3),''), char(4),''),
char(5),''), char(6),''), char(7),''), char(8),''), char(9),''),
char(10),''), char(11),''), char(12),''), char(13),''), char(14),''),
char(15),''), char(16),''), char(17),''), char(18),''), char(19),''),
char(20),''), char(21),''), char(22),''), char(23),''), char(24),''),
char(25),''), char(26),''), char(27),''), char(28),''), char(29),''),
char(128),'');
GO
PS> I replaced the SPACE(0) with '' - it didn't seem to change performance. What's the idea behind SPACE(0)??
April 6, 2010 at 3:35 pm
Thank you, Carl.
As far as the actual requirement is concerned, concatenating RTF fragments (I'm not even going to ask why they are like that to start with) might be best done outside the database. RTF processing solutions I have seen before (admittedly converting from RTF to another format) used SSIS to bulk-read the data from SQL Server, and a C# or VB.NET component called from a Script Task to do the hard work. There is some kind of RTF support in the standard library IIRC, and plenty of code samples out there.
One notable advantage of this approach is that an SSIS package could take advantage of parallelism, and involve more than one machine in the effort.
Just some ideas.
Paul
April 6, 2010 at 3:41 pm
carl.anderson-1037280 (4/6/2010)
I guess I have to read up on collations now!
Worth doing, but the reason it is important here is that the sorting and comparison rules are very complex for Windows collations, much less so for SQL collations, and pretty simplistic for binary collations. See https://connect.microsoft.com/SQLServer/feedback/details/512459/replace-function-extremely-slow-with-non-binary-windows-collation
PS> I replaced the SPACE(0) with '' - it didn't seem to change performance. What's the idea behind SPACE(0)??
I just find SPACE(0) more explicit and easier to read than ''.
The difference between '' and ' ' is not always easy to spot in scripts, whereas SPACE(0) is clearly not the same as SPACE(1) 🙂
December 6, 2010 at 10:52 am
Here's another approach:
http://iso30-sql.blogspot.com/2010/10/remove-non-printable-unicode-characters.html