Need community help on one final CData ODBC driver product eval question

  • Hi. For whatever reason, CData is conflicted about answering one remaining disclosure-type question before we conclude our eval of their NetSuite ODBC product. Their product is installed on our SQL Server; together with its related objects (DSNs etc., as described in paragraph 5 below) that are also installed on our SQL Server, its purpose is to pull info from multiple instances of Oracle NetSuite.

    Can the community help from its own experience with this or similar CData products?

    We learned during our testing that performance of their product is somewhere around 48 times worse than we expected (that's our opinion) for our most important warehouse extract query. We asked, and they suggested it's because their driver needs to shred queries that would exceed NetSuite API limits (I've heard a 4,000-record limit, but I've also heard there's an upper byte limit too) into manageable bite sizes.

    OK, better them than us (I thought) writing custom code to mitigate Oracle's restrictions. We can probably afford less-than-ideal performance during our once-nightly extract.
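    In case it helps anyone reproduce the 48x, below is the shape of the two ways a query like ours can travel through the driver. Everything here is illustrative, not our real objects: NETSUITE1 is a made-up linked server name and the columns are invented. My understanding (an assumption, not something CData confirmed) is that OPENQUERY hands the whole statement to the driver, while four-part names can make SQL Server pull far more rows through the driver and filter locally, which would multiply the shredding.

```sql
-- Pass-through: the entire query text goes to the driver, so its
-- "shredding" only has to page through the filtered result set.
SELECT *
FROM OPENQUERY(NETSUITE1,
    'SELECT id, tranid, trandate
     FROM transaction
     WHERE trandate >= ''2024-01-01''');

-- Four-part name: SQL Server may fetch the whole remote table through the
-- driver and apply the WHERE clause locally -- many more pages to shred.
SELECT id, tranid, trandate
FROM NETSUITE1...[transaction]
WHERE trandate >= '2024-01-01';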

    I muddled through setting up a second "DSN" under their ODBC product on our SQL Server. DSNs are conduits/bridges to specific NetSuite instances. CData helped (a lot) on the first DSN. Best I can tell, when you set up a second, third, etc. DSN for their product in the ODBC administrator app, you then go to another app (seems like their portal) to add a second, third, etc. CData service instance, which gets plumbed to the new DSN you just created on your SQL Server. Then you address the necessary linked server plumbing on your SQL Server.

    Needing to stand up a new instance of their service for every DSN surprised me a little. I suspect it's those services that are shredding the queries, but there is no one to ask. Does the community know what we should expect of the footprint of these services, both at rest and while they are shredding? And what about what I think (not sure) is a requirement for a new port assignment for every "DSN"? Has anyone in the community experienced or seen examples of there being too many of these on one SQL Server, i.e., thresholds we should watch out for?

     

    • This topic was modified 1 day, 22 hours ago by  stan.
  • I'm confused.  You're talking about Oracle but you're posting in an SQL Server 2019 forum.

    Since I'm not privy to your previous attempts and have never used "CDATA", what is it that you're trying to do?  Read data from Oracle tables into SQL Server by Linked Server?

    Another thing to consider: if you can't easily find support for a given product, consider failing the eval and looking elsewhere. That, and a "4,000 record" limit? That's totally and woefully under-powered. I forget the name of the built-in capability of Oracle (I think it's just called EXPORT but not sure... haven't used it since ~2008), but I found it to be quite fast to export Oracle to a file as a CSV or TSV and then import that into SQL Server. It's not "real time" but it IS "nasty fast" on both ends. Using "Hot Swappable" table pairs, you can have one copy of (say) the Customers table online while you're loading a fresh copy, and then just flop some synonyms to bring the refreshed tables online. The old copy is also a "safety" for if (when) something goes haywire with the newly refreshed data.
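    For anyone new to the "hot swappable" pair idea, a minimal sketch (the table, synonym, and staging names are all illustrative):

```sql
-- Two physical copies plus a synonym that points at whichever copy is live.
-- Assume the synonym dbo.Customers currently points at dbo.Customers_A.

-- 1) Refresh the offline copy from wherever the exported CSV was bulk-loaded.
TRUNCATE TABLE dbo.Customers_B;
INSERT INTO dbo.Customers_B (CustomerID, CustomerName)
SELECT CustomerID, CustomerName
FROM dbo.Customers_Staging;

-- 2) "Flop" the synonym so readers instantly see the fresh copy.
--    dbo.Customers_A stays around as the safety copy.
BEGIN TRAN;
DROP SYNONYM dbo.Customers;
CREATE SYNONYM dbo.Customers FOR dbo.Customers_B;
COMMIT;
```

The swap itself is a metadata change, so the refreshed table comes online near-instantly regardless of how long the load took.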

     

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thx Jeff. Their product gets installed on our SQL Server; various related objects are then defined on our SQL Server under that product. Through these objects, yes, we send queries to NetSuite, "pulling" ERP data for loading into SQL Server warehouse tables.

    This seems like a great place to post the question. I'm aware that we may not get an answer here, but I doubt we'd get one anywhere else. Either way, we get "forest for the trees" common-sense observations like the ones you've already shared.

    The 4,000-record limit is a NetSuite limit. CData's product is attractive because it circumvents that limit, but apparently at a cost.
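    To make that concrete for other readers, here is roughly what we'd be writing by hand without their driver. This is a sketch under assumptions: NETSUITE1 is a placeholder linked server, the staging table is invented, and I'm guessing at LIMIT/OFFSET as the paging syntax. The dynamic SQL is needed because OPENQUERY itself can't take variables.

```sql
-- Hand-rolled "shredding": pull 4,000-row pages until a short page arrives.
DECLARE @offset INT = 0, @page INT = 4000, @rows INT;
SET @rows = @page;

WHILE @rows = @page
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'INSERT INTO dbo.NetSuite_Staging (id, tranid, trandate)
          SELECT id, tranid, trandate
          FROM OPENQUERY(NETSUITE1,
              ''SELECT id, tranid, trandate FROM transaction
                ORDER BY id
                LIMIT '  + CAST(@page   AS NVARCHAR(10)) +
              ' OFFSET ' + CAST(@offset AS NVARCHAR(10)) + ''')';
    EXEC (@sql);
    SET @rows = @@ROWCOUNT;   -- row count of the INSERT inside the EXEC
    SET @offset += @page;
END;
```

I assume their driver is doing something like this under the covers, which is exactly the code I'd rather not own.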

    We extract from other Oracle sources lightning fast all the time, but this is slightly different. In this case we have to go through the NetSuite ERP layer's APIs, unless we use NetSuite's File Cabinet capability, and I don't want to go there. The only other solution we know of is Oracle's "SuiteQL Connect" product, which requires a license for every NetSuite instance. We currently have 5 NetSuite instances (and growing) on 2 NetSuite boxes. The savings we could see going with CData are currently over $110K over 4 years.

    Before kicking CData to the curb, I think it makes sense to do some due diligence through this post.
