Resultset Compression

  • Hello!

    Does anyone know whether it is possible to compress the 'over the wire' return data (in an easy way that does not break normal connectivity)? I.e. say you connect with a client app (it should also work when connecting with Query Analyzer) and execute a query that returns, for example, 1000 rows - is there a way of getting 'automatic' compression of this data while it is 'traveling' to your client computer?

    Regards,

    Hanslindgren

  • Why? Faster, wider pipes, or better queries that don't return 1000 rows?

    Note that data is returned in packets, and the packet size setting can help: even if you return only a small amount of data, it still usually needs a 4 KB packet and an 8 KB read on the server. It's unlikely to be a bandwidth issue.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
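
To illustrate the packet-size point above: the network packet size is negotiable per connection. In .NET's SqlClient, for example, it can be set via the `Packet Size` connection-string keyword (the server and database names below are placeholders); other client libraries expose equivalent settings.

```
Server=myServer;Database=myDb;Integrated Security=SSPI;Packet Size=8192
```

A larger packet size can reduce round trips for large result sets, but it does not compress anything - the same bytes still cross the wire.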

    Okay, say more than 1000 rows then. Say an OLAP user wants to extract data from the data warehouse with ad hoc statements. Say in this case you need 2,000,000 rows traveling over the wire, with an average row length of 400 bytes. This is a bandwidth issue that I can see could be solved by compression. So far, when it has been ad hoc enough, I have extracted the data on the server, compressed the file, and sent it to the user (or shared it via a folder; we are on the same intranet). There are not many of these requests per month, but I posed the question out of curiosity (I am planning SSL encryption of our intranet SQL traffic, and if there were a similarly easy solution for ad hoc compression I might implement it at the same time).

    (Think of over-the-wire compression of HTML for your Web front ends.)
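
SQL Server 2000 has no built-in wire compression, so any compression has to happen in a layer you control (a VPN tunnel, or an extract-compress-transfer step like the one described above). A rough sketch of the potential saving, using Python's gzip on synthetic row data (the row format and sizes here are made up purely for illustration; real-world ratios depend on how repetitive the data is):

```python
import gzip

# Synthetic stand-in for one 400-byte result row (repetitive text
# compresses well; real data will give different ratios).
row = (b"2006-01-15;STORE_042;SKU-0001234;qty=12;price=19.99;" * 8)[:400]
payload = row * 10_000          # ~4 MB of "resultset" data

compressed = gzip.compress(payload)
ratio = len(payload) / len(compressed)
print(f"raw: {len(payload)} bytes, gzipped: {len(compressed)} bytes, "
      f"ratio: {ratio:.1f}x")
```

Typical warehouse extracts (dates, codes, repeated dimension values) compress heavily, which is why the manual zip-and-ship approach already works well.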

  • If it's only a few times a month then I'd say the users may have to wait a bit for you to extract and send the file. If the user wants to do it themselves, you are talking 2 million rows of about 400 bytes, or very roughly 800 MB of data. On a full-duplex 100BaseT connection that's around a minute under optimal conditions; most client connections are 10BaseT and half duplex, where you're looking at ten minutes or more.

    Regards
    Rudy Komacsar
    Senior Database Administrator
    "Ave Caesar! - Morituri te salutamus."
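
The back-of-envelope arithmetic behind these transfer times (ideal line rates only, ignoring protocol overhead; the 100 kB/s figure anticipates the slower WAN links mentioned later in the thread):

```python
# ~800 MB payload: 2,000,000 rows x 400 bytes each.
rows, row_bytes = 2_000_000, 400
payload_mbit = rows * row_bytes * 8 / 1_000_000   # size in megabits

for name, mbit_per_s in [("100BaseT (ideal)", 100),
                         ("10BaseT (ideal)", 10),
                         ("100 kB/s WAN link", 0.8)]:
    seconds = payload_mbit / mbit_per_s
    print(f"{name}: {seconds / 60:.1f} minutes")
```

That works out to roughly a minute on ideal 100BaseT, about ten minutes on ideal 10BaseT, and over two hours at 100 kB/s.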

  • http://openvpn.net/ is an open source VPN solution that includes compression.  It might do what you want.

    hth jg
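
A minimal client-side OpenVPN configuration fragment with compression enabled (the remote address and certificate file names are placeholders; in the OpenVPN releases current at the time, the relevant directive is `comp-lzo`, and it must be set on both client and server):

```
# Hypothetical client config - addresses and file names are placeholders.
client
remote vpn.example.com 1194
proto udp
dev tun
ca ca.crt
cert client.crt
key client.key
comp-lzo        # enable LZO compression on the tunnel
```

This compresses all traffic through the tunnel, so the SQL resultsets get compressed transparently, with no changes to the client app or server.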

  • It's a matter of scale and design. What is a user logically going to do with 2 million rows / 800 MB of data from a query - read them all? You also mentioned "ad hoc" - allow users this facility unchecked and eventually they'll bring the server down. I could mention a well-known BI tool that I've seen bring a true 8-proc hefty server to its knees with one user query!

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Thanx Jeff!

    I will look into it now!

    Regards, Hanslindgren

    Thanx. Yes, you are theoretically right... but that is under no user load, and unfortunately my client is quite under-dimensioned with regard to network bandwidth... The users are distributed around the country, and if we get around 100 kB/s, that suddenly means 2+ hours for the transfer...

    Tragic, I know. But even though I have tried, it seems infeasible to get them to upgrade their core network throughout the country.

    Regards,

    Hanslindgren

  • Okay. You are right. One could ask oneself what someone would want to do with this data. Unfortunately, this is also out of my control. Maybe one thing people have heard about Italy is that it is disorganized (even though you can bring organization to Italy on a small scale), and my client is no different. Our departments operate almost like separate companies (as if they were in different countries), and even though they have the same boss it is hard to get them to cooperate (and in this case the only boss these departments have in common is quite high up and cannot be 'bothered' about these things).

  • Your other option is not to send the data at all, but to publish it at a central location and run the queries through a browser; that way you never have to move data, only publish results - like an Amazon or Google lookup. ProClarity has this ability, and/or you could write your own.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/
