Simplest way to (automatically) output query results to csv

  • With so many deadlines looming, I'd prefer not to have to wrestle with SSIS and all the issues I always seem to run into.

    With that said: I have a query that extracts data from AD. I'd like to run this as an automated task and have the results written to a CSV file, saved (with column headers) to a predetermined folder location (on the same server as SQL), overwriting the copy created the day before. Is this something that can be done as an Agent job or a Windows scheduled task running a SQL script, or something else? What is the best approach, and given what I'm trying to achieve, what would be the syntax?

    Script: ADextract.sql

    File output: host-list.csv

    Destination: C:\temp\ (local server)

    Overwrite existing: Yes

  • I don't think you can get column headers natively in T-SQL.  You'd need to use sqlcmd to do the export and then PowerShell or your favourite scripting language to add the header row.

    John
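    A note on the sqlcmd route: sqlcmd does print the column names by default, but it follows them with an underline row of dashes that most CSV consumers will choke on. A small post-processing step can drop that row; below is a hedged Python sketch (the sqlcmd command in the comment, the file names and the sample data are illustrative, not from this thread):

```python
# Sketch: clean up a sqlcmd-produced delimited file by dropping the dashed
# separator row that sqlcmd prints under the column headers.
# The export itself might look like (illustrative flags, adjust as needed):
#   sqlcmd -S yourServer -d yourDb -i ADextract.sql -s"," -W -o C:\temp\host-list.csv

def strip_separator_row(lines):
    """Return lines minus the second row when it is sqlcmd's ---- underline."""
    if len(lines) >= 2 and lines[1].strip() and set(lines[1].strip()) <= set("-,"):
        return lines[:1] + lines[2:]
    return lines

# Illustrative sqlcmd-style output: header, dashed underline, then data.
raw = [
    "Email,FirstName,LastName",
    "-----,---------,--------",
    "a@example.com,Ann,Archer",
]
print(strip_separator_row(raw))
```

    Since sqlcmd's -o switch overwrites the output file on each run, scheduling this nightly (Agent CmdExec step or Task Scheduler) would also satisfy the overwrite requirement.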

  • Maybe this 'short' script can function as a starting point:

    ---------------------------------------------------------------------------------------------------------
    -- 20190411 ben brugman
    --
    -- A short CSV generation :
    --
    DECLARE @T CHAR(99) = 'A', @L VARCHAR(MAX) = '', @S CHAR(999) = '+''"''+REPLACE(CONVERT(VARCHAR(MAX),[Q],121),''"'',''""'')+''";'''
    SELECT @L = @L + REPLACE(@S,'Q',Column_name) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @T
    EXEC('SELECT'+@L+' FROM '+@T)
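    For reference, the expression this generates applies the standard CSV quoting rule to every column: double any embedded double quotes, wrap the value in the text qualifier, and append the ';' delimiter. The same rule in a small Python sketch (one assumption differs: this sketch maps NULL/None to an empty string, whereas the T-SQL above would need a COALESCE to stop a NULL from blanking the whole concatenated row):

```python
# Sketch of the per-column quoting rule the generated SELECT applies:
# convert to text, double embedded double quotes, wrap in double quotes,
# and append the ';' field delimiter.

def quote_field(value):
    text = "" if value is None else str(value)   # the T-SQL needs COALESCE for this
    return '"' + text.replace('"', '""') + '";'

print(quote_field("plain"))       # "plain";
print(quote_field('say "hi"'))    # "say ""hi""";
```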

    Ben

     

  • And a longer solution.

    (Extra headers can be removed; other adaptations are possible.)

    ---------------------------------------------------------------------------------------------------------
    ---------------------------------------------------------------------------------------------------------
    -- Generate and execute CSV code. Header and Data type lines include (optional). --
    ---------------------------------------------------------------------------------------------------------
    DECLARE @table_name varchar(300)= 'A'
    DECLARE @TextQ varchar(30) = '"';
    DECLARE @Delimiter varchar(30) = ';'
    DECLARE @EOLDelimiter varchar(30) = char(13)+char(10)
    DECLARE @To_Quote varchar(30) = '%['+@Delimiter+@TextQ+@EOLDelimiter+']%'
    DECLARE @F VARCHAR(30) = '+' + char(13)+char(10) -- Use a comma ',' to get distinct fields for inspection,
                                                     -- or a '+' to get concatenated strings for CSV files.
                                                     -- The + char(13)+char(10) makes the generated script more readable.
    DECLARE @Header VARCHAR(max) = 'LineNr'
    DECLARE @WORK VARCHAR(4000)
    DECLARE @Lijstje VARCHAR(max) = ''
    --
    -- Be careful: the maximum length of the header, used as a single column_name, is 128 characters.
    -- Beyond that, print the header as a distinct line and use an empty or dummy header instead.
    --
    -- select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'csv_table'

    ---------------------------------------------------------------------------------------------------------
    -- Header and datatypes lines. This part is optional.
    ---------------------------------------------------------------------------------------------------------
    --
    DECLARE @Datatypes VARCHAR(max) = 'Int'
    select @Header = @Header+@delimiter+Column_name FROM INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = @table_name
    select @Datatypes = @Datatypes+@delimiter+DATA_TYPE+COALESCE('('+REPLACE(CONVERT(VARCHAR(6),CHARACTER_MAXIMUM_LENGTH),'-1','MAX')+')','') FROM INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = @table_name

    print @header -- First Line
    print @datatypes -- Second line
    -- Header as column_name -- Supplied by executed SELECT
    -- CommentLine -- Supplied by executed SELECT
    -- DataLines -- Supplied by executed SELECT

    IF DATALENGTH(@Header) >= 128 BEGIN -- If the header exceeds 128 characters, it is not accepted as a column_name
    DECLARE @NR_COLUMNS INT
    SELECT @NR_COLUMNS = COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = @table_name
    SET @Header = SUBSTRING('LineNr;A;B;C;D;E;F;G;H;I;J;K;L;M;N;O;P;Q;R;S;T;U;V;W;X;Y;Z',1,@NR_COLUMNS*2+6)
    SET @Header = REPLACE(@Header,';',@Delimiter)
    IF @NR_COLUMNS > 26 SET @Header = '-- Dummy Header' -- Maybe the number of columns as A,B,C,D,.......
    END

    ---------------------------------------------------------------------------------------------------------
    -- END Header and datatypes lines. Optional part (can be removed).
    ---------------------------------------------------------------------------------------------------------

    ---------------------------------------------------------------------------------------------------------
    --
    -- @SS Quoted only when a special character (delimiter, text qualifier, line break) is present.
    -- @DD Date format 121 for DATETIME, DATETIME2, DATE, TIME; this yields
    --     YYYY-MM-DD hh:mi:ss.mmm (datetime) / YYYY-MM-DD hh:mi:ss.mmmmmmm (datetime2),
    --     e.g. 2017-03-10 17:13:07.397 / 2017-03-10 17:13:07.3970000
    -- @QS, @NS are used to build @SS.
    --
    -- @pp Standard convert, used for the remaining data types (no style number,
    --     because e.g. style 121 is not valid when converting float to character).
    -- @QQ Standard convert, always using double quotes. (NOT USED)
    --

    DECLARE @QS varchar(8000) = ''''+@Delimiter+@TextQ+'''+REPLACE( CONVERT(VARCHAR(MAX),[<COLUMN_NAME>]),'''+@TextQ+''','''+@TextQ+@TextQ+''')+'''+@TextQ+''''
    DECLARE @NS varchar(8000) = ''''+@Delimiter+'''+COALESCE(CONVERT(VARCHAR(MAX),[<COLUMN_NAME>]),'''')'
    DECLARE @SS varchar(4000) = @F+'CASE WHEN [<COLUMN_NAME>] LIKE '''+@To_Quote+''' THEN ' +@QS+' ELSE ' +@NS+ ' END'
    DECLARE @pp varchar(8000) = @F+''''+@Delimiter+'''+COALESCE(CONVERT(VARCHAR(MAX),[<COLUMN_NAME>]),'''')'
    DECLARE @DD varchar(4000) = @F+''''+@Delimiter+'''+COALESCE(CONVERT(VARCHAR(MAX),[<COLUMN_NAME>],121),'''')'
    DECLARE @QQ varchar(8000) = @F+''''+@Delimiter+@TextQ+'''+REPLACE( CONVERT(VARCHAR(MAX),[<COLUMN_NAME>]),'''+@TextQ+''','''+@TextQ+@TextQ+''')+'''+@TextQ+''''

    -- select * FROM INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'csv_table'
    SELECT @Lijstje = @Lijstje +
        CASE
            WHEN DATA_TYPE IN ('CHAR','VARCHAR','NCHAR','NVARCHAR') THEN REPLACE(@SS, '<COLUMN_NAME>', Column_name)
            WHEN DATA_TYPE IN ('DATETIME','DATETIME2','DATE','TIME') THEN REPLACE(@DD, '<COLUMN_NAME>', Column_name)
            ELSE REPLACE(@pp, '<COLUMN_NAME>', Column_name)
        END
    FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @table_name

    SET @WORK = 'SELECT RIGHT(''0000000''+convert(varchar(9), ROW_NUMBER() OVER(PARTITION BY 1 ORDER BY (SELECT NULL))),8) '+@Lijstje+' AS ['+@Header+'] FROM '+@table_name
    --PRINT @WORK
    EXEC (@WORK)

    --
    ---------------------------------------------------------------------------------------------------------
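    The heart of the longer script is the @SS template: a field is wrapped in text qualifiers only when it actually contains the delimiter, the qualifier itself, or a line break (the @To_Quote pattern); otherwise it is emitted bare. The same decision in a small Python sketch, mirroring the script's ';' delimiter and '"' qualifier (an illustration of the technique, not a replacement for the T-SQL):

```python
# Sketch of the long script's conditional quoting (@SS / @To_Quote):
# quote a field only when it contains the delimiter, the text qualifier,
# or a line break; otherwise emit it as-is.

DELIM = ";"        # @Delimiter
QUALIFIER = '"'    # @TextQ
SPECIALS = DELIM + QUALIFIER + "\r\n"   # the @To_Quote character class

def csv_field(value):
    text = "" if value is None else str(value)   # COALESCE(..., '')
    if any(ch in SPECIALS for ch in text):
        return QUALIFIER + text.replace(QUALIFIER, QUALIFIER * 2) + QUALIFIER
    return text

row = ["plain", "has;delim", 'has "quote"', "two\nlines", None]
print(DELIM.join(csv_field(v) for v in row))
```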

     

    Ben

     

    When presenting code here, there is a risk that the forum formatter 'interprets' parts of the code, and those parts are then not represented correctly.

    I'll try to point these out in the next message.

     

  • A note on the short solution: the symbol Q in the @S template must be bracketed as [Q], so that column names containing special characters are processed correctly when the REPLACE substitutes them in. The full declaration should read:

    ,@S CHAR(999)='+''"''+REPLACE(CONVERT(VARCHAR(MAX),[Q],121),''"'',''""'')+''";'''

     

    Ben

     

  • A test set containing line breaks, delimiters within fields, text qualifiers within fields, and a long list of special characters within fields.

    Greetings,

    Ben

    ---------------------------------------------------------------------------------
    -- Minimalistic CSV_Table --
    ---------------------------------------------------------------------------------
    IF OBJECT_ID('CSV_TABLE') IS NOT NULL DROP TABLE CSV_TABLE
    CREATE TABLE CSV_TABLE(
    comment varchar(300),
    longtxt varchar(max),
    fl float,
    lint bigint,
    dt datetime,
    dt2 datetime2,
    d date,
    t time
    )


    -- SELECT N FROM (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0))XXX(N)
    insert into CSV_TABLE SELECT * FROM (VALUES
    ('A ''first'' line.','"TEKST"',1.123,12345678,getdate(),getdate(),getdate(),getdate())

    , ('A ''first'' line.','xyz0',1.123,12345678,getdate(),getdate(),getdate(),getdate())
    , ('Single quote' ,'This is a text with a single quote before '' this word.',1.123,12345678,getdate(),getdate(),getdate(),getdate())
    , ('Single double quote' ,'This is a text with a single double quote before " this word.',1.123,12345678,getdate(),getdate(),getdate(),getdate())
    , ('Single comma' ,'This is a text with a single comma before , this word.',1.123,12345678,getdate(),getdate(),getdate(),getdate())
    , ('Single semicolon' ,'This is a text with a semicolon before ; this word.',1.123,12345678,getdate(),getdate(),getdate(),getdate())
    , ('Comma semicolon' ,'This is a text, with a semicolon before ; this word.',1.123,12345678,getdate(),getdate(),getdate(),getdate())
    , ('time','xyz1',1.123,12345678,'1999-09-09 19:19:19.789','1999-09-09 19:19:19.789','1999-09-09 19:19:19.789','1999-09-09 19:19:19.789')
    -- , ('Has NULL''s in the "dt2" column','veldleeg',1.123,12345678,getdate(),NULL,getdate(),getdate())
    -- , ('Has a NULL in a field','Leegveld',1.123,12345678,getdate(),getdate(),NULL,getdate())
    , ('Long text with everything','xyz2',1.123,12345678,getdate(),getdate(),getdate(),getdate())
    , ('Long text with everything','xyz3',1.123,12345678,getdate(),getdate(),getdate(),getdate())
    ) xxx (A,B,C,D,E,F,G,H)

    UPDATE CSV_TABLE SET
    longtxt =
    'Hello dear Reader,
    This is a "text" containing a number of special
    characters.
    For example; a comma, a semicolon.
    Also six quotes in a row '''''''''''' here
    Also six double quotes in a row """""" here'
    WHERE longtxt = 'xyz1'

    UPDATE CSV_TABLE SET
    longtxt =
    '


    Few empty lines
    Hello dear Reader,
    This is a "text" containing a number of special
    characters.
    For example; a comma, a semicolon.
    Also five quotes in a row '''''''''' here
    Also five double quotes in a row """"" here'
    WHERE longtxt = 'xyz2'


    UPDATE CSV_TABLE SET
    longtxt =
    '


    Few empty lines
    Hello dear Reader,
    This is a "text" containing a number of special
    characters.
    For example; a comma, a semicolon.
    Also five quotes in a row '''''''''' here
    Also five double quotes in a row """"" here'
    WHERE longtxt = 'xyz3'

    ;
    WITH
    L0 AS(SELECT 0 AS c UNION ALL SELECT 0 UNION ALL SELECT 0 UNION ALL SELECT 0), -- 4 rows
    L1 AS(select 0 as x from L0 A, L0 B, L0 C, L0 D), -- 4 ^ 4 = 256 rows
    L9 AS(Select *, row_number() OVER(PARTITION BY x order by x )-1 as nr from L1), -- add row numbers
    I AS(select 'Special Char' TEKST1,
    'This is char nr :'+RIGHT('000'+convert(varchar(6),NR),3)+'>>>'+char(nr)+'<<< ' TEKST,1 x ,1y,GETDATE() a,GETDATE()b,GETDATE()c,GETDATE()d FROM L9)
    insert into CSV_TABLE select * from I

    SELECT * FROM CSV_TABLE

     

  • Thanks. I might be missing something here, but we seem to be creating a table and populating it - I don't see how we end up with a physical file. The solutions above all seem to give me what I already have: a result set with headers in SSMS.

    As this has to happen nightly, maybe I should just bite the bullet and use SSIS.

  • The complexity in SSIS comes from having to create distinct files or archiving - or other file operations. If all you need to do is create a file in a specific location with a pre-defined name that never changes - overwriting the destination file - it is quite simple.

    In the control flow - you will place a data flow.  In the data flow you will have an OLEDB Source that selects the data and a flat file destination.  The flat file destination uses a connection manager that defines the file layout - and a checkbox on the destination will allow for the file to be overwritten.

    If you are dead set against using SSIS - then look at Powershell.  In Powershell - you can do something like this:

    PS> $results = Invoke-Sqlcmd -ServerInstance yourServer -Database yourDatabase -Query "your extract query"

    PS> $results | Export-Csv -NoTypeInformation \\sharedfolder\filename.csv

    In fact, you could query ADSI directly from powershell without having to access SQL Server.  But that can be a bit more difficult than using a linked server since it won't be the same as issuing a straight SELECT statement.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Thanks - the previous headaches I've had with SSIS have admittedly been with SQL-to-SQL transfers, and with the conversion step needed to get the destination to accept the source string formats.

    That said I just had a play with the import/export wizard and this has now created a package that successfully gets my result set, deletes the existing rows in an existing csv file and then populates with the fresh data.

    Excellent. However, the file is in a format that the ultimate destination cannot accept (I upload the file automatically to a supplier using cURL). When I manually paste the results grid from SSMS into Excel and save as CSV, the file is accepted; the generated comma-separated file is not.

    When going through the flat file set-up in the export wizard, is there a way I can specify that each field lives in its own cell rather than being comma delimited? I see many delimiting options but am not sure which I need.

  • You want the text qualifier to be set to double quotes.  That is set in the connection manager for the file.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Unfortunately I only see CR/LF, CR, LF, semicolon, colon, comma, tab, vertical bar

     

     

  • I see that I can just type in there so I did.

    Before A1 (spreadsheet) contained Email, First Name, Last Name....

    Now it contains "Email" "First Name" "Last Name"....

    What I need is: A1 = Email B1 = First Name C1 = Last Name.

    Each time it hits a comma I want the next string to go in the adjacent cell, just like when I paste my results into the spreadsheet directly from SSMS.

  • Bit late maybe.     😉

     

    PowerShell gives you the possibility to execute code and capture the result in a file. Executing the supplied code with PowerShell, you can create a file from it (both headers and data can be output as text to produce a CSV file).

     

    For within SQLServer stored procedures, you could look into :

    SP_WriteStringToFile

    SP_WriteBinaryToFile

     

    SP_WriteStringToFile can be found online. I think it has to be adjusted for length. I believe I derived the 'binary' variant from the string variant, to have more control over the output format: with the string version the output was Unicode, while with the binary variant you can force UTF-8. Not sure though.

     

    Ben

     

     

  • From SQL you can try to use the ACE driver. Something like this:

    INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Text;Database=C:\Temp\;HDR=YES;FMT=CSVDelimited',
    'SELECT * FROM [FileName.csv]')
    SELECT Field1, Field2, Field3
    FROM MyTable

    Please note that the target file must already exist and have a header matching the columns in your SELECT.
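    If a small script outside SQL Server is acceptable, a standard CSV library applies these text-qualifier rules for you. A hedged Python sketch (the rows would come from your extract query; the literals below stand in for it, and QUOTE_ALL reproduces the "every field wrapped in double quotes" output discussed earlier in the thread):

```python
import csv
import io

# Sketch: write rows with every field wrapped in double quotes, matching the
# flat-file destination's text-qualifier setting. In practice the rows would
# come from the database query; literals stand in for them here.
rows = [
    ("Email", "First Name", "Last Name"),
    ("a@example.com", "Ann", "Archer"),
]

buf = io.StringIO()   # swap for open(r"C:\temp\host-list.csv", "w", newline="") to write the file
writer = csv.writer(buf, quoting=csv.QUOTE_ALL, lineterminator="\n")
writer.writerows(rows)
print(buf.getvalue())
```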

    --Vadim R.
