List of fields in a table

  • How can we list the Unicode columns (nvarchar, nchar) in a table where the Unicode capacity is actually in use?

    select col from tablename where col <> cast(col as varchar(max)) gives you all the rows from a particular column where Unicode is in use, but I want to be able to find all the columns in that table that actually take advantage of being Unicode types.

    (P.S. "Unicode is in use" means the column holds characters from other languages.) Thanks.

  • Do you mean something like this?

    CREATE TABLE [dbo].[Table_3](
        [Col1] [nvarchar](50) NULL,
        [Col2] [nvarchar](50) NULL,
        [Col3] [nvarchar](50) NULL
    ) ON [PRIMARY]

    INSERT INTO dbo.Table_3
    SELECT NULL,NULL,'d' UNION ALL
    SELECT 'X',NULL,NULL UNION ALL
    SELECT NULL,NULL,NULL UNION ALL
    SELECT NULL,'X','C' UNION ALL
    SELECT NULL,NULL,NULL --UNION ALL

    SELECT COUNT(*) AS 'All Rows', COUNT(Col1) AS 'Col1', COUNT(Col2) AS 'Col2', COUNT(Col3) AS 'Col3'
    FROM dbo.Table_3

    SELECT Col1,col2,Col3 FROM dbo.Table_3

    Results:

    All Rows  Col1  Col2  Col3
    5         1     1     2

    ----------

    Col1  col2  Col3
    NULL  NULL  d
    X     NULL  NULL
    NULL  NULL  NULL
    NULL  X     C
    NULL  NULL  NULL

    If you have a very large number of rows you may want to use COUNT_BIG instead of COUNT.

    Ron

  • bitbucket-25253 (4/9/2012)


    Do you mean something like this? [...]

    If you have a very large number of rows you may want to use COUNT_BIG instead of COUNT.

    Not quite.

    Let me elaborate.

    create table #Nepal ( cola nvarchar(10), colb nvarchar(10), colc nchar(10), cold nvarchar(10))

    insert into #Nepal
    select N'????',N'AAA',N'ZZZ',N'PPP' union all
    select N'???????',N'AAA',N'?????',N'PPP' union all
    select N'TTT',N'BBB',N'DDD',N'????????'

    So now I need a query that displays cola, colc, and cold, because those are the columns actually taking advantage of the Unicode property of the field.
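
    As a minimal sketch of one way to get that list (assuming the round-trip cast from the opening post is an acceptable test for "Unicode is in use"), each column of #Nepal can be probed with a short-circuiting EXISTS:

    -- Lists a column only if at least one of its values changes when forced
    -- through varchar, i.e. it contains characters outside the code page.
    SELECT N'cola' AS UnicodeColumn WHERE EXISTS (SELECT 1 FROM #Nepal WHERE cola <> CAST(cola AS varchar(max)))
    UNION ALL
    SELECT N'colb' WHERE EXISTS (SELECT 1 FROM #Nepal WHERE colb <> CAST(colb AS varchar(max)))
    UNION ALL
    SELECT N'colc' WHERE EXISTS (SELECT 1 FROM #Nepal WHERE colc <> CAST(colc AS varchar(max)))
    UNION ALL
    SELECT N'cold' WHERE EXISTS (SELECT 1 FROM #Nepal WHERE cold <> CAST(cold AS varchar(max)))

    With the sample rows above this should return cola, colc and cold, provided those values really hold characters outside the column collation's code page.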

    If I understand your question, you must first determine what collation and code page you are using. In my case the collation is:

    SELECT SERVERPROPERTY('collation')  -- Result: SQL_Latin1_General_CP1_CI_AS

    I then executed the following to obtain a better understanding of what I was working with (greatly simplified, for my own understanding):

    SELECT UNICODE(cola) AS 'Unicode', NCHAR(UNICODE(cola)) AS 'NChar cola' FROM #Nepal

    Results:

    Unicode  NChar cola
    3616     ?
    2711     ?
    84       T

    I then executed:

    SELECT COUNT_BIG(Cola) AS 'Count Cola' FROM #Nepal WHERE UNICODE(cola) > 127

    Result:

    Count Cola
    2

    Admittedly NOT a complete answer to your question, but I hope enough to get you started on a solution.

    Ron

  • To make it clear: you want a list of the columns that contain a Unicode character anywhere in their data, right?


  • Seems that the bigger question is what you are hoping to do with this information. I have a feeling that you want to see whether the data, as it stands right now, contains anything from the extended character set and, if not, change the data type to varchar. That is not a big deal for the existing data, but be careful: you may be introducing issues into the system, because other applications that insert or update values in these columns will probably still allow extended characters.
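
    A quick, self-contained sketch of that risk (the table and values here are hypothetical, not taken from this thread): on a Latin1 code page, anything outside the page is silently replaced with '?' the moment it is stored as varchar.

    -- Hypothetical demo of silent data loss when nvarchar data is forced into varchar.
    DECLARE @Demo TABLE (Original nvarchar(20), AsVarchar varchar(20))

    INSERT INTO @Demo (Original, AsVarchar)
    VALUES (N'Müller', N'Müller'),   -- survives on a Latin1 code page
           (N'日本語', N'日本語')     -- silently becomes '???' in the varchar column

    SELECT Original, AsVarchar FROM @Demo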


  • Sean Lange (4/10/2012)


    Seems that the bigger question is what you are hoping to do with this information? [...]

    Sean, you are right. I am trying to isolate columns that might as well just be varchar if no Unicode data is coming into them, and leave alone the columns that do hold Unicode values. But the table I am working on is massive, so I need an algorithm with some kind of looping mechanism: once the first Unicode character is located in a column, report that column and move on to the next one.
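
    For a massive table, one sketch (the table name below is a placeholder, and the test used is the round-trip cast rather than a character-by-character scan) is to generate one EXISTS probe per nvarchar/nchar column from sys.columns. EXISTS stops reading a column as soon as the first qualifying value is found, which gives the "find it and move to the next column" behaviour without a hand-written loop:

    -- Sketch: build and run one short-circuiting EXISTS test per Unicode-typed column.
    DECLARE @table sysname       = N'dbo.Table_3',   -- placeholder: put your table here
            @sql   nvarchar(max) = N''

    SELECT @sql = @sql
        + CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END
        + N'SELECT ''' + c.name + N''' AS UnicodeColumn WHERE EXISTS (SELECT 1 FROM '
        + @table + N' WHERE ' + QUOTENAME(c.name)
        + N' <> CAST(' + QUOTENAME(c.name) + N' AS varchar(max)))'
    FROM sys.columns AS c
    JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
    WHERE c.object_id = OBJECT_ID(@table)
      AND t.name IN (N'nvarchar', N'nchar')

    IF @sql <> N'' EXEC sys.sp_executesql @sql

    Each generated SELECT returns its column name only if some value in that column fails the round trip, so the final result set is the list of columns still making use of Unicode.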

  • Murphy'sLaw (4/10/2012)


    Sean Lange (4/10/2012)


    Seems that the bigger question is what you are hoping to do with this information? [...]

    Sean, you are right. I am trying to isolate columns that might as well just be varchar if no Unicode data is coming into them [...]

    Unless you are 100% certain that every location that can insert into or update these tables will never allow extended characters, you are creating a time bomb. Not saying anything catastrophic will happen, but one day data will be silently mangled when somebody enters a character that a varchar column can't handle. Most front ends are not going to care what characters are typed into a textbox or other entry point.


  • Building off the previously posted data.

    create table #Nepal ( cola nvarchar(10), colb nvarchar(10), colc nchar(10), cold nvarchar(10))

    insert into #Nepal
    select N'????',N'AAA',N'ZZZ',N'PPP' union all
    select N'???????',N'AAA',N'?????',N'PPP' union all
    select N'TTT',N'BBB',N'DDD',N'????????'

    select *
    from #Nepal

    I think something like this should work...

    First, create a table-valued function to split a value into one row per character, like this (it relies on a Tally (numbers) table, dbo.Tally, with a column named N):

    CREATE FUNCTION dbo.ParseCharactersToTable
    (
        @Input nvarchar(max)
    )
    RETURNS @table TABLE
    (
        CharVal nchar(1)
    )
    AS
    BEGIN
        DECLARE @Temp TABLE (CharVal nvarchar(max))

        INSERT @Temp SELECT @Input

        -- one output row per character position; relies on a numbers table
        -- dbo.Tally with a column N (1, 2, 3, ...)
        INSERT @table
        SELECT SUBSTRING(@Input, t.N, 1)
        FROM @Temp
        JOIN dbo.Tally t ON t.N <= LEN(@Input)

        RETURN
    END

    So far all we have done is split an nvarchar value into a table of single characters. The next step is to figure out how this can be applied to determine whether a column contains characters from the extended set.

    The primary goal here is to look at every character in a number of columns. We could do this with loops, but if we went that path we could go out for dinner while we waited. Instead we can use the function from above like this:

    select max(unicode(ColA.CharVal)) as MaxColA,
           max(unicode(ColB.CharVal)) as MaxColB,
           max(unicode(ColC.CharVal)) as MaxColC,
           max(unicode(ColD.CharVal)) as MaxColD
    from #Nepal
    cross apply dbo.ParseCharactersToTable(cola) ColA
    cross apply dbo.ParseCharactersToTable(colb) ColB
    cross apply dbo.ParseCharactersToTable(colc) ColC
    cross apply dbo.ParseCharactersToTable(cold) ColD

    Now, any column whose maximum value is > 127 contains characters from the extended set. See if that might help you.
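
    One caveat, as a sketch: chaining all four CROSS APPLYs in a single query multiplies the per-character rows of every column together for each source row, which can get expensive on long strings. Taking each column's maximum in its own scalar subquery keeps the expansion linear per column (same ParseCharactersToTable function assumed):

    -- Same idea, but each column is expanded and aggregated independently.
    SELECT
        (SELECT MAX(UNICODE(x.CharVal)) FROM #Nepal CROSS APPLY dbo.ParseCharactersToTable(cola) x) AS MaxColA,
        (SELECT MAX(UNICODE(x.CharVal)) FROM #Nepal CROSS APPLY dbo.ParseCharactersToTable(colb) x) AS MaxColB,
        (SELECT MAX(UNICODE(x.CharVal)) FROM #Nepal CROSS APPLY dbo.ParseCharactersToTable(colc) x) AS MaxColC,
        (SELECT MAX(UNICODE(x.CharVal)) FROM #Nepal CROSS APPLY dbo.ParseCharactersToTable(cold) x) AS MaxColD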


  • Here is a method, probably not the most efficient, but it works for me:

    CREATE TABLE #C (Num INT, Nam VARCHAR(5))

    INSERT INTO #C
    SELECT COUNT(Cola),'Cola' FROM #Nepal WHERE UNICODE(Cola) > 127 UNION ALL
    SELECT COUNT(Colb),'Colb' FROM #Nepal WHERE UNICODE(Colb) > 127 UNION ALL
    SELECT COUNT(Colc),'Colc' FROM #Nepal WHERE UNICODE(Colc) > 127 UNION ALL
    SELECT COUNT(Cold),'Cold' FROM #Nepal WHERE UNICODE(Cold) > 127

    SELECT num, nam FROM #C WHERE num > 0

    Results:

    num  nam
    2    Cola
    1    Colc
    1    Cold

    Ron

  • bitbucket-25253 (4/10/2012)


    Here is a method, probably not the most efficient, but it works for me:

    SELECT COUNT(Cola),'Cola' FROM #Nepal WHERE UNICODE(Cola) > 127 UNION ALL [...]

    Ron, the problem with this approach is that the UNICODE function only looks at the first character of the string.

    create table #Nepal ( cola nvarchar(10), colb nvarchar(10), colc nchar(10), cold nvarchar(10))

    insert into #Nepal
    select N'TTT',N'BBB',N'DDD',N'P????????'

    SELECT COUNT(Cold),'Cold' FROM #Nepal WHERE UNICODE(Cold) > 127

    This says there are 0 Unicode rows in cold, but only because UNICODE examined the first character, 'P', rather than the whole string.
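
    A whole-string alternative, sketched against the same sample row: compare the column with itself after a round trip through varchar (essentially the cast test from the opening post). The row is flagged if any character fails to convert, no matter where it sits in the string.

    -- Counts rows in cold that contain at least one character outside the code page,
    -- regardless of its position in the string.
    SELECT COUNT(Cold) AS 'Count Cold'
    FROM #Nepal
    WHERE Cold <> CAST(Cold AS varchar(max))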

