November 27, 2018 at 12:03 am
Comments posted to this topic are about the item Converting Binary Numbers To Hex
November 27, 2018 at 5:45 am
Yes, it is an interesting topic with some useful applications
I did a bit on this a while back in Bitwise Operations in TSQL
Best wishes,
Phil Factor
November 27, 2018 at 8:45 am
Phil Factor - Tuesday, November 27, 2018 5:45 AM
Yes, it is an interesting topic with some useful applications
I did a bit on this a while back in Bitwise Operations in TSQL
Great work, especially with the consideration for datatype and 2's complement. I'm thinking you missed the point of Steve's article when it comes to the use of loops, though. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
November 27, 2018 at 9:23 am
I'm curious, Steve. With the understanding that the largest positive value you can calculate for BIGINT only occupies 63 bits (and you are only taking positive values into consideration in your conversion and the 64th bit from the right for BIGINTs would be the sign bit), why have you assigned @Bin a datatype of VARCHAR(MAX)?
And, no... I wouldn't recommend to anyone to use Mark's function and not just because it has a WHILE loop in it.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 27, 2018 at 9:35 am
Jeff Moden - Tuesday, November 27, 2018 9:23 AM
I'm curious, Steve. With the understanding that the largest positive value you can calculate for BIGINT only occupies 63 bits (and you are only taking positive values into consideration in your conversion and the 64th bit from the right for BIGINTs would be the sign bit), why have you assigned @Bin a datatype of VARCHAR(MAX)?
And, no... I wouldn't recommend to anyone to use Mark's function, and not just because it has a WHILE loop in it.
I wasn't thinking of bit size, just of storing a long string. It could be a more limited datatype, and I don't know if it matters. If this is in memory as storage, is there a downside?
BTW, the person who needed this actually had a larger than 64 bit string, so my solution didn't work for them. Not sure what they're doing, or if they just have some mistake in there. I didn't want to do the string conversion, because that's a pain. I had to do that in university, multiply 64 bit numbers at a time when we had 16 bit architectures. Not fun.
November 27, 2018 at 9:38 am
Hi Steve. Interesting function (I like it more than the one you adapted it from). I found one bug: it does not handle negative numbers. In order to handle negative values, the algorithm has to account for bit # 64. Your function and the one that you linked to (from Mark) both get arithmetic overflows when including the 64th bit.
That said, I was able to make some improvements to increase performance:
And that leaves us with:
CREATE OR ALTER FUNCTION dbo.BinaryToHex
(@bin VARCHAR(64))
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
WITH myTally (n)
AS
-- SQL Prompt formatting off
(SELECT TOP (64) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0))
FROM (VALUES (1), (2), (3), (4), (5), (6), (7), (8)) a(n)
CROSS JOIN (VALUES (1), (2), (3), (4), (5), (6), (7), (8)) b(n)
),
-- SQL Prompt formatting on
binCTE (i)
AS (SELECT POWER(CAST(2 AS BIGINT), n - 1)
*
CAST(SUBSTRING(REVERSE(@bin), n, 1) as BIGINT)
FROM myTally
WHERE n <= LEN(@bin))
SELECT CONVERT(VARBINARY(8), SUM(i)) AS [HexVal]
FROM binCTE
;
GO
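For anyone following the arithmetic rather than the T-SQL, the tally-based CTE above is just summing digit × 2^(n-1) over the reversed string and then formatting the result as an 8-byte hex value. A rough Python sketch of the same calculation (illustrative only, not part of the function, and positive values only, matching the function's limitation):

```python
# Sketch of the arithmetic the tally CTE performs for a positive binary
# string: walk the reversed digits, weight each by 2^(n-1), sum, and
# format the result as a 64-bit hex value.
def binary_to_hex(bin_str: str) -> str:
    total = sum(
        int(digit) * 2 ** (n - 1)
        for n, digit in enumerate(reversed(bin_str), start=1)
    )
    return "0x" + format(total, "016X")

print(binary_to_hex("1" * 63))  # 0x7FFFFFFFFFFFFFFF
```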
Earlier this year I included functions to convert between a Base-2 string and BIGINT in the Free version of SQL#: Convert_Base2ToBase10 and Convert_Base10ToBase2. These two functions do properly handle the 64th bit and hence negative values:
DECLARE @TestVal VARCHAR(64);
SET @TestVal = '111111111111111111111111111111111111111111111111111111111111111'; -- 63 digits
SELECT [HexVal] FROM dbo.BinaryToHex(@TestVal); -- 0x7FFFFFFFFFFFFFFF
SELECT CONVERT(VARBINARY(8), SQL#.Convert_Base2ToBase10(@TestVal)); -- 0x7FFFFFFFFFFFFFFF
SET @TestVal = '1000000000000000000000000000000000000000000000000000000000000000'; -- 64 digits
SELECT [HexVal] FROM dbo.BinaryToHex(@TestVal);
-- Arithmetic overflow error converting expression to data type bigint.
SELECT CONVERT(VARBINARY(8), SQL#.Convert_Base2ToBase10(@TestVal)); -- 0x8000000000000000
SET @TestVal = '1111111111111111111111111111111111111111111111111111111111111111'; -- 64 digits
SELECT [HexVal] FROM dbo.BinaryToHex(@TestVal);
-- Arithmetic overflow error converting expression to data type bigint.
SELECT CONVERT(VARBINARY(8), SQL#.Convert_Base2ToBase10(@TestVal)); -- 0xFFFFFFFFFFFFFFFF
-- Verify:
SELECT CONVERT(BIGINT, 0xFFFFFFFFFFFFFFFF);
-- returns: -1
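To see why the all-ones 64-digit pattern comes back as -1, here is a minimal Python sketch of the two's-complement mapping a signed 64-bit integer uses (illustrative only, not part of the SQL# library): a 64-bit pattern with the top bit set represents its unsigned value minus 2^64.

```python
# Two's-complement interpretation of a 64-digit binary string:
# if the top (sign) bit is set, the value is (unsigned value) - 2^64.
def twos_complement_64(bin_str: str) -> int:
    value = int(bin_str, 2)          # unsigned interpretation
    if len(bin_str) == 64 and bin_str[0] == "1":
        value -= 1 << 64             # sign bit set: wrap into negative range
    return value

print(twos_complement_64("1" * 64))        # -1
print(twos_complement_64("1" + "0" * 63))  # -9223372036854775808
```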
Finally, in order to help folks better understand how the 1001100... syntax relates to the BIGINT value, I provided a visual representation of it here:
Binary / Base2 / BitMask Notes
Take care, Solomon..
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
November 27, 2018 at 9:49 am
Thanks and that's a good point. I'll update the article to note this is positives only.
November 27, 2018 at 11:02 am
Solomon Rutzky - Tuesday, November 27, 2018 9:38 AM
- and not sure how much this helps, but I changed the "NULL" to "0" in the "(ORDER BY (SELECT 0))" for the ROW_NUMBER function. I figure that a zero is unambiguous as to its datatype (parsed as an INT) whereas NULL could be any type.
Unless cast (either implicitly or explicitly) to something else, a literal NULL is an int. There is a lot of implicit casting that happens, and as far as I can tell from experimentation the compiler does a pretty good job of inferring an appropriate type for the NULL. That said, SELECT NULL AS A INTO #temp will create column A as an int.
November 27, 2018 at 2:54 pm
t.ovod-everett - Tuesday, November 27, 2018 11:02 AM
Solomon Rutzky - Tuesday, November 27, 2018 9:38 AM
- and not sure how much this helps, but I changed the "NULL" to "0" in the "(ORDER BY (SELECT 0))" for the ROW_NUMBER function. I figure that a zero is unambiguous as to its datatype (parsed as an INT) whereas NULL could be any type.
Unless cast (either implicitly or explicitly) to something else, a literal NULL is an int. There is a lot of implicit casting that happens, and as far as I can tell from experimentation the compiler does a pretty good job of inferring an appropriate type for the NULL. That said, SELECT NULL AS A INTO #temp will create column A as an int.
I was guessing it would be INT as that is the easiest, most sensible choice, but wasn't sure. Thanks for doing that test to prove it :). I will still use 0 as it seems more readable, even if only a little bit (to me, at least).
Take care, Solomon..
SQL# — https://SQLsharp.com/ ( SQLCLR library of over 340 Functions and Procedures )
Sql Quantum Lift — https://SqlQuantumLift.com/ ( company )
Sql Quantum Leap — https://SqlQuantumLeap.com/ ( blog )
Info sites — Collations • Module Signing • SQLCLR
November 27, 2018 at 3:05 pm
Solomon Rutzky - Tuesday, November 27, 2018 2:54 PM
I was guessing it would be INT as that is the easiest, most sensible choice, but wasn't sure. Thanks for doing that test to prove it :). I will still use 0 as it seems more readable, even if only a little bit (to me, at least).
For what it's worth, I took the query in https://stackoverflow.com/questions/10066819/what-does-order-by-select-null-mean and generated estimated execution plans for it with ORDER BY (SELECT NULL) and ORDER BY (SELECT 0) under SQL 2016. I saved the XML plans and diffed them and - voila - identical plans (well, except for insignificant variation in CompileTime, CompileCPU, and MaxCompileMemory).
With that in mind, the one advantage I can see for ORDER BY (SELECT NULL) is that there's a cool blog by that name: https://orderbyselectnull.com/about/ .
November 27, 2018 at 9:31 pm
Steve Jones - SSC Editor - Tuesday, November 27, 2018 9:35 AM
Jeff Moden - Tuesday, November 27, 2018 9:23 AM
I'm curious, Steve. With the understanding that the largest positive value you can calculate for BIGINT only occupies 63 bits (and you are only taking positive values into consideration in your conversion and the 64th bit from the right for BIGINTs would be the sign bit), why have you assigned @Bin a datatype of VARCHAR(MAX)?
And, no... I wouldn't recommend to anyone to use Mark's function, and not just because it has a WHILE loop in it.
I wasn't thinking of bit size, just of storing a long string. It could be a more limited datatype, and I don't know if it matters. If this is in memory as storage, is there a downside?
BTW, the person who needed this actually had a larger than 64 bit string, so my solution didn't work for them. Not sure what they're doing, or if they just have some mistake in there. I didn't want to do the string conversion, because that's a pain. I had to do that in university, multiply 64 bit numbers at a time when we had 16 bit architectures. Not fun.
I was just considering the reason why I didn't ever publish the MAX version of DelimitedSplit8K... LOBs don't like the JOIN between the Tally object and the LOB at the substring level and tend to be 2 to 3 times slower just by using MAX even if it contains a value less than 8K bytes. I've not proven it specifically with your function but I've also seen where not rightsizing inputs to functions does cause excessive memory allocation.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 9, 2021 at 1:07 pm
One small correction I see. In the section "Setting Up a Test" you have your initial version of the BinaryToHex function. I believe for this to work as written you'll want to change the calculation of the @return value from
SET @return = CONVERT(VARBINARY(100), 15)
to
SET @return = CONVERT(VARBINARY(100), @bin)