August 3, 2014 at 2:23 am
Comments posted to this topic are about the item Natively Compiled Stored Procedures: What they are all about!
August 4, 2014 at 1:51 am
Thank you for the article and especially the attention to the restrictions.
Which sadly brings me to stating my opinion that Microsoft spends a lot of time adding near-unusable and sometimes just inaccessible features. It seems they have lost track of their product development and have no clue what is important and what is not. There are so many things that could make SQL Server a lot more usable to a lot more people, and most definitely, pleasing the anti-SQL/anti-RDBMS crowd isn't one of them.
August 4, 2014 at 4:11 am
No reflection on the article, which is thorough & well written, but I went from thinking that natively compiled sprocs sounded great to wondering in what circumstances I'd ever be able to use them. The list of restrictions is just incredible. You can work around most of them, but mostly in ways that sound like a retrograde step (my particular favourite is using a WHILE loop in place of an outer join :crazy: ). Hope the feature becomes more usable in later releases.
August 4, 2014 at 7:43 am
icocks (8/4/2014)
No reflection on the article, which is thorough & well written, but I went from thinking that natively compiled sprocs sounded great to wondering in what circumstances I'd ever be able to use them. The list of restrictions is just incredible. You can work around most of them, but mostly in ways that sound like a retrograde step (my particular favourite is using a WHILE loop in place of an outer join :crazy: ). Hope the feature becomes more usable in later releases.
Makes me wonder about the reasons behind the restrictions. Surely a first iteration of a new product feature is bound to come with a list of gotchas, so it will be interesting to see how this new functionality progresses, which restrictions are eliminated, and which stick around and for what reasons (if they're ever discussed, that is).
August 4, 2014 at 7:50 am
peter-757102 (8/4/2014)
Thank you for the article and especially the attention to the restrictions. Which sadly brings me to stating my opinion that Microsoft spends a lot of time adding near-unusable and sometimes just inaccessible features. It seems they have lost track of their product development and have no clue what is important and what is not. There are so many things that could make SQL Server a lot more usable to a lot more people, and most definitely, pleasing the anti-SQL/anti-RDBMS crowd isn't one of them.
Actually, pleasing the NoSQL crowd could be driving some development efforts on Microsoft's part, and not just with the feature set we're discussing; look at Hadoop on Azure, for instance. Whether this is strictly "anti-SQL/anti-RDBMS", well, that's probably a conversation that will bring out the "troll label" again, so no thanks on that one!
August 4, 2014 at 7:59 am
Perhaps I missed it, but can a Natively Compiled Stored Procedure (NCSP) be called from within a Non-NCSP?
I am sure some kind of 'wrapper' will be developed in the future, to make the implementation of NCSPs a bit easier.
August 4, 2014 at 8:17 am
As a retired DBA who has done extensive development of SQL code, I see the prohibition of foreign keys as a show-stopper. Responsible designers and developers need to keep data integrity and reliability as perhaps the uppermost priority, even at the expense of performance and other considerations. Invalid information is worthless at any speed, and dangerous to companies making decisions based on it. Sometimes progress can be in the wrong direction. Things that are fun and interesting to technical folks are not always the most beneficial to enterprises. We all know that code is almost never perfect and must constantly be re-proven by any means possible.
Rick
Disaster Recovery = Backup ( Backup ( Your Backup ) )
August 4, 2014 at 8:20 am
andrewj29 (8/4/2014)
Perhaps I missed it, but can a Natively Compiled Stored Procedure (NCSP) be called from within a Non-NCSP? I am sure some kind of 'wrapper' will be developed in the future, to make the implementation of NCSPs a bit easier.
You definitely can call an NCSP from a non-NCSP if you want. The NCSP is an atomic block of T-SQL and will execute on its own, in-memory, without interference from any other T-SQL in the calling proc.
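A minimal sketch of what that looks like, assuming a memory-optimized table named dbo.OrdersInMemory already exists (all object names here are illustrative, not from the article):

-- Natively compiled proc: a single atomic block that touches memory-optimized
-- tables only. In SQL Server 2014 it must also specify EXECUTE AS OWNER (or a
-- specific user) because EXECUTE AS CALLER is not supported.
CREATE PROCEDURE dbo.usp_InsertOrderNative
    @OrderId INT,
    @Amount  MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.OrdersInMemory (OrderId, Amount)
    VALUES (@OrderId, @Amount);
END;
GO

-- Ordinary interpreted proc: free to use temp tables, disk-based tables and so
-- on, and it simply EXECs the native proc like any other procedure.
CREATE PROCEDURE dbo.usp_InsertOrder
    @OrderId INT,
    @Amount  MONEY
AS
BEGIN
    EXEC dbo.usp_InsertOrderNative @OrderId = @OrderId, @Amount = @Amount;
END;
GO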
August 4, 2014 at 8:27 am
The gotcha with in-memory OLTP is that it needs to be completely contained in-memory. A natively compiled stored proc can't go out to disk for other data, or even for TempDB usage (temp tables, hash joins, etc...). Therefore, any in-memory schema will be somewhat isolated from the rest of your database.
In general, I think the key to in-memory OLTP is to only use it in areas where it will provide the greatest gain. Places where contention and I/O are very heavy can benefit greatly from its use, provided all of the restrictions aren't show-stoppers.
I've taken the time to convert some real-world production schema into in-memory OLTP with a few natively compiled procs, and while it took a while, the results were impressive.
Can everyone live without left joins or foreign keys? Maybe not, and the workarounds Microsoft provides for each of these cases are a bit hacky at times, but there's potential here that I expect will become more useful in future versions of SQL Server. Columnstore indexes got quite a bit of extra love in 2014, and I am sure in-memory OLTP will see similar attention, given the common theme of DBA responses to the new feature.
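For anyone who hasn't set this up yet, a rough sketch of the database-level plumbing involved (database, filegroup, path, and table names are invented for illustration). The rows of a memory-optimized table live entirely in memory; the special filegroup only holds checkpoint files used for durability and recovery:

-- One-time setup: a MEMORY_OPTIMIZED_DATA filegroup plus a container for it.
ALTER DATABASE SalesDemo
    ADD FILEGROUP SalesDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDemo
    ADD FILE (NAME = 'SalesDemo_mod_c1', FILENAME = 'C:\Data\SalesDemo_mod_c1')
    TO FILEGROUP SalesDemo_mod;
GO

-- A memory-optimized table. In SQL Server 2014 all indexes must be declared
-- inline, and a hash index's BUCKET_COUNT should roughly match the expected
-- number of distinct key values.
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    SessionId   INT       NOT NULL,
    CreatedDate DATETIME2  NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);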
August 4, 2014 at 8:32 am
Great, great article Ed!
-- Itzik Ben-Gan 2001
August 4, 2014 at 1:42 pm
Great article, thanks!
I, as others have commented, was hoping that this new feature might be useful.
Unless I am missing something, if the server has sufficient memory, then the data tends to stay 'in-memory' anyway. And if the server has too little memory, then it is great that it can cache.
Personally, I like the idea that if the DB exceeds the memory, the system will not crash (unlike the 'in-memory' databases).
Going back to my early Computer Science days, all processes happen in memory, and this has not changed. So, while oversimplifying the matter a bit, more memory is good.
OK, so it sounds like the SQL Server dev team would have been better off finding ways to optimize existing stored procedures, finding ways to fully compile the standard proc, and tweaking (optimizing) the existing data structures that are already in memory (if that is really needed).
The more you are prepared, the less you need it.
August 5, 2014 at 3:45 pm
The article is well written and the author must be commended for it.
I tried implementing in-memory tables and compiled stored procedures a couple of months ago, when SQL Server 2014 was first released. The restrictions were so great that I decided to ignore them for now and go with a mix of standard on-disk tables and interpreted procedures, with C++ for very heavy-duty, CPU-intensive data manipulation. It's unfortunate, but that's what I found in my brief experience.
One thing, however, that I did not try, did not see in the article, and would appreciate a comment from the author on: is it possible to create in-memory table types and pass them as parameters to the stored procedure? That could improve usability by having an interpreted wrapper procedure call one or several compiled procedures.
August 6, 2014 at 1:49 am
Great article, thanks Ed.
One question: you said the DLLs are machine code. So just to clarify, you get a machine-code DLL and not a CLR assembly?
August 6, 2014 at 5:26 am
gary.strange-sqlconsumer (8/6/2014)
Great article, thanks Ed. One question: you said the DLLs are machine code. So just to clarify, you get a machine-code DLL and not a CLR assembly?
That's correct, and it's the reason why these things (if you can make good use of them) are so darn fast!
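If you're curious, one quick way (not from the article) to see those DLLs is to look at the modules loaded into the SQL Server process; the natively compiled tables and procs show up as ordinary native DLLs:

-- Lists the native DLLs generated for memory-optimized tables and natively
-- compiled procedures that are currently loaded into the process.
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL';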
August 6, 2014 at 7:12 am
N_Muller (8/5/2014)
The article is well written and the author must be commended for it. I tried implementing in-memory tables and compiled stored procedures a couple of months ago, when SQL Server 2014 was first released. The restrictions were so great that I decided to ignore them for now and go with a mix of standard on-disk tables and interpreted procedures, with C++ for very heavy-duty, CPU-intensive data manipulation. It's unfortunate, but that's what I found in my brief experience.
One thing, however, that I did not try, did not see in the article, and would appreciate a comment from the author on: is it possible to create in-memory table types and pass them as parameters to the stored procedure? That could improve usability by having an interpreted wrapper procedure call one or several compiled procedures.
Yes! You can definitely create a memory-optimized table type, declare a table variable of that type, and pass it to a natively compiled stored proc. The only restriction is that the table type must be passed into the stored proc as READONLY. This is probably OK for whatever applications you have in mind, but the compiler will throw an error if you leave the keyword off the parameter.
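A quick sketch of the pattern (object names invented for illustration; dbo.OrderLines is assumed to be an existing memory-optimized table):

-- Memory-optimized table type; it must have at least one index.
CREATE TYPE dbo.OrderLineType AS TABLE
(
    OrderId INT NOT NULL,
    Qty     INT NOT NULL,
    INDEX ix_OrderLine NONCLUSTERED HASH (OrderId) WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- The table-valued parameter must be READONLY, or the proc will not compile.
CREATE PROCEDURE dbo.usp_LoadOrderLines
    @Lines dbo.OrderLineType READONLY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.OrderLines (OrderId, Qty)
    SELECT OrderId, Qty
    FROM @Lines;
END;
GO

-- From an interpreted wrapper (or from the client), declare a variable of the
-- type, fill it, and pass it in.
DECLARE @Lines dbo.OrderLineType;
INSERT INTO @Lines (OrderId, Qty) VALUES (1, 5), (2, 3);
EXEC dbo.usp_LoadOrderLines @Lines;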