Practical Hekaton

  • I believe you should update statistics on all tables after migrating a database via backup and restore. That's not done automatically.
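
    As a minimal sketch, that post-restore step could be as simple as this (the table name in the second statement is just a placeholder):

        -- Quick option: refresh out-of-date statistics across the restored database.
        EXEC sys.sp_updatestats;

        -- More thorough option for a specific table: rebuild its statistics with a full scan.
        UPDATE STATISTICS dbo.SomeLargeTable WITH FULLSCAN;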

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • I think it's great that Microsoft releases features early (not always perfect) for us early adopters. We have SQL 2014 in production utilizing columnstore indexes, FileTables, and In-Memory OLTP. In all cases, even with the limitations, it has been a game changer for the end use of our systems.

    KEEP up the GREAT work MS.

    Dan McClain

  • DanMcClain (6/12/2015)


    I think it's great that Microsoft releases features early (not always perfect) for us early adopters. We have SQL 2014 in production utilizing columnstore indexes, FileTables, and In-Memory OLTP. In all cases, even with the limitations, it has been a game changer for the end use of our systems.

    KEEP up the GREAT work MS.

    Dan McClain

    Dan McClain, I believe you have just volunteered your services! :hehe:

    The absence of evidence is not evidence of absence.
    Martin Rees

    You can lead a horse to water, but a pencil must be lead.
    Stan Laurel

  • DanMcClain (6/12/2015)


    I think it's great that Microsoft releases features early (not always perfect) for us early adopters. We have SQL 2014 in production utilizing columnstore indexes, FileTables, and In-Memory OLTP. In all cases, even with the limitations, it has been a game changer for the end use of our systems.

    KEEP up the GREAT work MS.

    Dan McClain

    I'll second that emotion. Bill Gates and company have been very good to me.

    Regarding one large data warehouse in particular that I'm involved with, I wish Microsoft would end support for v2008R2 so executive management would be compelled to upgrade to v2014. Based on some prototyping I've done on an old server with the v2014 Developer edition, porting the tables to clustered columnstore would speed up queries by a factor of 3x. Unfortunately, the decision makers have stated they won't spend money on upgrading until support for the current version ends.
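
    For reference, the prototype conversion is essentially just the statement below (dbo.FactSales is a hypothetical fact table; on 2014 any existing indexes have to be dropped first, since the clustered columnstore index replaces the table's storage):

        -- Convert a rowstore fact table to clustered columnstore (SQL Server 2014 syntax).
        CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
            ON dbo.FactSales;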

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • DanMcClain (6/12/2015)


    I think it's great that Microsoft releases features early (not always perfect) for us early adopters. We have SQL 2014 in production utilizing columnstore indexes, FileTables, and In-Memory OLTP. In all cases, even with the limitations, it has been a game changer for the end use of our systems.

    KEEP up the GREAT work MS.

    Dan McClain

    Want to share how you're using In-Memory OLTP?

  • SURE!

    We're using it for the ASPState database. Keep in mind the web devs need to make a few changes to utilize ASPState in memory; here is a link to the SQL team blog: http://blogs.technet.com/b/dataplatforminsider/archive/2014/07/10/new-asp-net-session-state-provider-for-sql-server-in-memory-oltp.aspx?wc.mt_id=Social_SQL_announce_DI&WT.mc_id=Social_TW_OutgoingPromotion_20140710_70473692_SQLServer&linkId=8785334

    We are also using it for tables that hold more configuration-type data, which basically see fewer data changes.
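
    As a rough sketch, such a table looks like the one below in 2014 (the name and columns are made up; the BIN2 collation is required for string index keys on memory-optimized tables, and SCHEMA_AND_DATA keeps the data durable across restarts):

        -- Hypothetical configuration table as a durable memory-optimized table (SQL Server 2014).
        CREATE TABLE dbo.AppConfig
        (
            ConfigKey   VARCHAR(100) COLLATE Latin1_General_100_BIN2 NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
            ConfigValue NVARCHAR(2000) NOT NULL,
            ModifiedUtc DATETIME2(3) NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);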

    We're using this in conjunction with FileTables to replace hundreds of SSIS packages for importing data.
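
    The FileTable side of that is roughly the following (a sketch assuming FILESTREAM and a database-level FileTable directory are already configured; the table name is a placeholder):

        -- Files dropped into the FileTable's Windows share show up as rows in this table.
        CREATE TABLE dbo.ImportDrop AS FILETABLE
        WITH (FILETABLE_DIRECTORY = 'ImportDrop', FILETABLE_COLLATE_FILENAME = database_default);

        -- Import jobs can then read the incoming files directly from T-SQL.
        SELECT name, file_type, cached_file_size, file_stream
        FROM dbo.ImportDrop
        WHERE is_directory = 0;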

    We also have a setup that utilizes In-Memory OLTP as a data-set storage area, like a poor man's caching appliance, but with SQL Server. You make a T-SQL request and the results are returned, but they are also retained in the in-memory table(s), so the next request for the same data comes back sub-second without disk I/O.
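
    The pattern is roughly the cache-aside sketch below (all object names are hypothetical; dbo.CustomerSummaryCache would be a memory-optimized table and dbo.Orders an ordinary disk-based table):

        CREATE PROCEDURE dbo.GetCustomerSummary
            @CustomerId INT
        AS
        BEGIN
            -- Serve from the in-memory cache table when the row is already there (no disk I/O).
            IF EXISTS (SELECT 1 FROM dbo.CustomerSummaryCache WHERE CustomerId = @CustomerId)
            BEGIN
                SELECT CustomerId, OrderCount, TotalSpend
                FROM dbo.CustomerSummaryCache
                WHERE CustomerId = @CustomerId;
                RETURN;
            END;

            -- Otherwise compute the result from the disk-based tables, cache it, and return it.
            INSERT INTO dbo.CustomerSummaryCache (CustomerId, OrderCount, TotalSpend)
            SELECT o.CustomerId, COUNT(*), SUM(o.OrderTotal)
            FROM dbo.Orders AS o
            WHERE o.CustomerId = @CustomerId
            GROUP BY o.CustomerId;

            SELECT CustomerId, OrderCount, TotalSpend
            FROM dbo.CustomerSummaryCache
            WHERE CustomerId = @CustomerId;
        END;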

  • Hi Steve,

    I've rewritten a follow-up article on Hekaton which I'll submit soon; you can see a preview here if interested: https://paulbrewer.wordpress.com/2015/06/27/hekaton-the-good-the-bad-and-the-ugly/

    Hekaton is very, very fast when processing lots of small batch requests; it's not so quick relative to native SQL Server tables and stored procedures as the logical reads start to increase (batch size) and the number of calls (number of batches) starts to decrease. The next article is an OLTP / OLAP spectrum analysis identifying where Hekaton works well, and where it does not.

    Regards

    Paul

    We are also using it for tables that hold more configuration-type data, which basically see fewer data changes.

    We're using this in conjunction with FileTables to replace hundreds of SSIS packages for importing data.

    I realise that this is slightly tangential to the thread (apologies), but I'd be interested to hear a bit more detail about what you've done here.

    The absence of evidence is not evidence of absence.
    Martin Rees

    You can lead a horse to water, but a pencil must be lead.
    Stan Laurel

  • This is a little old, but I wanted to give an in-production use case covering where we use it and where it fell short of our initial expectations.

    We use memory-optimized objects in two places: one staging table to load data warehouse fact tables, and various memory-optimized table-valued parameters as part of warehouse loading (streaming certain dimension rows from .NET code into our warehouse).
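
    For context, a memory-optimized TVP is just a table type declared as memory-optimized; a minimal sketch (the type and column names are invented):

        -- Hypothetical memory-optimized table type for streaming dimension rows from .NET.
        CREATE TYPE dbo.CustomerDimRows AS TABLE
        (
            CustomerKey  INT            NOT NULL,
            CustomerName NVARCHAR(200)  NOT NULL,
            City         NVARCHAR(100)  NULL,
            INDEX ix_CustomerKey NONCLUSTERED HASH (CustomerKey) WITH (BUCKET_COUNT = 100000)
        )
        WITH (MEMORY_OPTIMIZED = ON);

    The .NET side passes rows as a structured parameter of this type (for example, an IEnumerable of SqlDataRecord), just as with an ordinary TVP.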

    My original plan was to use memory-optimized queue tables in our OLTP system to feed memory-optimized TVPs into memory-optimized staging tables, and possibly convert a couple of our more heavily-read OLTP tables to become memory-optimized in the process. That fell apart for a few reasons.

    The best (i.e., most nefarious) problem that we've had with In-Memory OLTP is XTP_CHECKPOINT issues. This happened even before we had a single memory-optimized object; we just had the filegroup. Aside from that, the hard limitation on ~8000 bytes per row and no LOB meant that some of the places where I might have wanted to use a memory-optimized TVP (like TVPs for some of the larger dimensions) wouldn't work out.

    We ended up not pushing the memory-optimized filegroup out to production on our OLTP instances because you cannot take a database snapshot if the database has a memory-optimized filegroup, and we use database snapshots for some system processes.

    On the OLAP side, we did implement memory-optimized objects, and saw significant performance improvements. The TVPs were significantly faster than their on-disk counterparts and our staging table--which previously had been one of the top sources of deadlocks--zooms. Unfortunately, even with these performance benefits, the limitations have been too strict for adoption across our platform.

    Even if I got the go-ahead to move memory-optimized objects to our production OLTP system (and not just our OLAP system), there would have been another problem preventing general adoption: the inability, within an explicit transaction, to use memory-optimized tables as well as tables in external databases on the same instance. We use replication to push static data around in our environment, and all of that static data goes into one database on each instance. Many of the queries whose performance I wanted to improve join to this static data database, and I did not want to put all of the in-memory work in a separate transaction from the static data work.

    I'm still hopeful that 2016 will solve at least some of these problems and make memory-optimized objects practicable, but to be honest, even coming in knowing most of the code-writing limitations of memory-optimized objects and natively compiled stored procedures (of which we have zero), I think In-Memory OLTP really burned me, and even if 2016's version is a lot better, I'd have to think long and hard before building a project around it.
