September 24, 2003 at 11:00 am
Question: what is the 'N' for in the where clause for this query?
SELECT CustomerID, CompanyName, ContactName
FROM Customers
WHERE CustomerID = N'CACTU'
September 24, 2003 at 12:22 pm
It designates Unicode.
-SQLBill
September 24, 2003 at 1:54 pm
As Bill stated, it designates Unicode. What it really means is that if your WHERE clause contains characters that can only be represented as 2-byte Unicode characters (i.e. Asian characters, or characters with accents on them) you have to put the N before the value, or SQL Server will convert the literal through the ANSI code page and replace those characters with garbage (usually question marks). I get bitten by this all the time, as I work with Japanese and Chinese characters a lot.
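A minimal sketch of what Gary describes, assuming a server whose default code page is a Western one (the variable and the reuse of the Customers table are just for illustration):

```sql
DECLARE @name nvarchar(10)

SET @name = '日本'   -- no N prefix: the literal is ANSI, so on a Western code
                     -- page the characters are mangled to '??' before they
                     -- ever reach the nvarchar variable
SET @name = N'日本'  -- N prefix: the literal is Unicode and survives intact

-- the same rule applies to literals in a WHERE clause
SELECT CustomerID FROM Customers WHERE ContactName = N'日本'
```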
Gary Johnson
Microsoft Natural Language Group
DBA, Sr. DB Engineer
This posting is provided "AS IS" with no warranties, and confers no rights. The opinions expressed in this post are my own and may not reflect that of my employer.
September 25, 2003 at 1:29 am
You can check SQL Server Books Online; there are some examples like this:
Use WHERE and LIKE syntax with Unicode data
This example uses the WHERE clause to retrieve the contact name, telephone number, and fax number for any company whose name ends with the string snabbköp.
USE Northwind
SELECT CompanyName, ContactName, Phone, Fax
FROM Customers
WHERE CompanyName LIKE N'%snabbköp'
ORDER BY CompanyName ASC, ContactName ASC
September 25, 2003 at 2:50 am
As far as possible you should avoid using Unicode.
It makes your database bigger and therefore slower.
Do not forget that, for example, an nvarchar field can only store 4000 characters instead of the 8000 a "normal" varchar field holds.
Also, if you have an index on an nchar or nvarchar field, it will take twice the size and therefore reduce performance.
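Gabor's size point can be checked directly with DATALENGTH, which returns bytes; the temp table here is hypothetical:

```sql
CREATE TABLE #sizes (
    ansi_name    varchar(40),   -- 1 byte per character, up to 8000 per column
    unicode_name nvarchar(40)   -- 2 bytes per character, up to 4000 per column
)
INSERT INTO #sizes VALUES ('Gabor', N'Gabor')

SELECT DATALENGTH(ansi_name)    AS ansi_bytes,     -- 5
       DATALENGTH(unicode_name) AS unicode_bytes   -- 10
FROM #sizes

DROP TABLE #sizes
```

The same doubling applies to any index built on the nvarchar column, which is where the performance cost he mentions comes from.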
We are in Hungary, we have some special characters and we DO NOT use UNICODE!
Bye
Gabor
September 25, 2003 at 3:52 am
hi!
nyulg is absolutely right! i'm from austria (hi neighbor!), we have at least some "special" characters as well, and we don't use unicode either. in my experience, as long as you stay with all variations of "roman" characters, you won't have to use unicode at all.
the standard 8-bit character sets SQL Server offers will be sufficient for that. don't forget that SQL Server's unicode is strictly bound to using 2 bytes for each character, which means you'll have double the storage cost for every character.
and putting the 'N' in all the time doesn't exactly add to the readability of your SQL statements either. the only exception we once had was an application that had to handle both "roman" characters and some languages from south-east/east asia (e.g. korean/japanese). in that case you have no other choice.
best regards,
chris.
September 25, 2003 at 6:28 am
If you do use Unicode, be aware that all Unicode is not the same. The Win32 API, Java, ODBC/OLE DB, etc. all use UCS-2/UTF-16 as their representation of Unicode (UCS-2 encodes every character in 16 bits, instead of the variable 1-4 bytes used by UTF-8).
But Oracle, among others, uses UTF-8, which is a multi-byte representation of Unicode. So in UTF-8, typical ASCII characters are a single byte, and multi-byte sequences are only used when a non-ASCII Unicode character appears.
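You can see the UCS-2 side of this trade-off from within SQL Server itself: even plain ASCII costs 2 bytes per character in an N literal, where UTF-8 would charge only 1. A quick check, reusing the CustomerID value from the original question:

```sql
SELECT DATALENGTH('CACTU')  AS ascii_bytes,  -- 5  (one byte per character)
       DATALENGTH(N'CACTU') AS ucs2_bytes    -- 10 (two bytes per character)
```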
What's the business problem you're trying to solve?
September 25, 2003 at 7:43 am
Wow! What a wealth of information I have gained! Thank you all; I raise a toast to you. Next time you're in Kansas City I'll buy you a beer!
There was no business problem; I just couldn't find the answer in BOL.