December 5, 2013 at 9:31 pm
Thanks for the Hekaton question, Steve. I'm looking forward to trying this out eventually!
December 5, 2013 at 10:25 pm
This is one feature I am eagerly waiting for; I want to try my hand at this.
😎
December 6, 2013 at 1:26 am
Thanks for the QOTD Steve.
December 6, 2013 at 5:31 am
While I'm anxious to try out in-memory OLTP, it leaves me with two questions.
1. How are the transaction logs stored? If they aren't written to disk, we'll lose durability, which would be a deal-breaker. While I'm sure they took this into account, I really hope it's solid and would like to learn more.
2. How much memory are we going to need to be able to run a 1TB database? Not all shops can afford to have 80 cores and 4TB of memory. This will be the most interesting part to me.
Thanks for the question, Steve. Maybe I'll start searching when I have some time to do research.
December 6, 2013 at 5:48 am
Mr. Kapsicum (12/5/2013)
This is one feature I am eagerly waiting for; I want to try my hand at this. 😎
+1
December 6, 2013 at 6:09 am
Carlo Romagnano (12/6/2013)
Mr. Kapsicum (12/5/2013)
This is one feature I am eagerly waiting for; I want to try my hand at this. 😎
+1
Steve, thank you. I have been waiting for this for some time, as opposed to writing my own custom in-memory "logical" devices. Once it goes live, MSSQL can finally put to bed (at least through Gartner) one of the negative points they keep referencing (via customers).
+1 😀
December 6, 2013 at 6:18 am
When I first saw the question (without seeing possible answers), I thought this was about the source of the term "Hekaton". (Megaton, Gigaton, ... Hekaton???) Glad it was easier than that. Thanks for the question Steve.
[font="Verdana"]Please don't go. The drones need you. They look up to you.[/font]
December 6, 2013 at 6:52 am
I did not see the answer I was looking for. Something along the lines of "Godzilla's new enemy in Godzilla vs. Hekaton".
Tom
December 6, 2013 at 9:39 am
You made this one really simple, Steve. Thanks for the question and for the point. 🙂
December 6, 2013 at 9:52 am
OCTom (12/6/2013)
I did not see the answer I was looking for. Something along the lines of "Godzilla's new enemy in Godzilla vs. Hekaton".
Tom
How about "Teradata vs Hekaton"? This could probably spawn a fun Friday conversation on monster movies or sci-fi flicks with technology names.
December 6, 2013 at 10:25 am
KWymore (12/6/2013)
OCTom (12/6/2013)
I did not see the answer I was looking for. Something along the lines of "Godzilla's new enemy in Godzilla vs. Hekaton".
Tom
How about "Teradata vs Hekaton"? This could probably spawn a fun Friday conversation on monster movies or sci-fi flicks with technology names.
Hekaton and the Battle of Hadoop
Not all gray hairs are Dinosaurs!
December 6, 2013 at 2:26 pm
Ed Wagner (12/6/2013)
While I'm anxious to try out in-memory OLTP, it leaves me with two questions.
1. How are the transaction logs stored? If they aren't written to disk, we'll lose durability, which would be a deal-breaker. While I'm sure they took this into account, I really hope it's solid and would like to learn more.
2. How much memory are we going to need to be able to run a 1TB database? Not all shops can afford to have 80 cores and 4TB of memory. This will be the most interesting part to me.
Thanks for the question, Steve. Maybe I'll start searching when I have some time to do research.
The answer to question 1 is that the transaction log is still used to ensure durability, but the log is used more efficiently than before. For starters, only changes to the base data are logged. (For standard tables, all changes to all pages are logged, so an update to a heavily indexed table produces log records for the data and for all indexes; with Hekaton, only the base data changes are logged, because the indexes are not persisted anyway; they are reconstructed in memory when the database loads.) Also, the format of the log records for Hekaton tables is designed to be as efficient as possible. (Details can be found at http://research.microsoft.com/pubs/193594/Hekaton%20-%20Sigmod2013%20final.pdf, paragraph 7.1.)
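To make the durability trade-off concrete, here's a minimal T-SQL sketch assuming the SQL Server 2014 CTP syntax; the table name and bucket count are hypothetical. The per-table DURABILITY option is what decides whether changes are logged at all:

```sql
-- Minimal sketch, assuming SQL Server 2014 CTP syntax; table and bucket count
-- are hypothetical. The database must already contain a MEMORY_OPTIMIZED_DATA
-- filegroup before memory-optimized tables can be created.
CREATE TABLE dbo.HotOrders
(
    OrderId    INT   NOT NULL PRIMARY KEY NONCLUSTERED HASH
                     WITH (BUCKET_COUNT = 1048576),
    CustomerId INT   NOT NULL,
    OrderTotal MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_AND_DATA); -- changes are logged; SCHEMA_ONLY would
                                     -- skip logging entirely, at the cost of
                                     -- losing the data on restart
```

With SCHEMA_AND_DATA, commits still hit the transaction log (just more compactly, as described above); SCHEMA_ONLY turns logging off and is only suitable for data you can afford to lose, such as staging tables.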
For question 2, the answer is "as much as you can afford". There is no need to store the whole database in memory. Hekaton is designed to work just fine in an environment where the "hottest" tables are stored as memory-optimized tables and the rest of the data is stored in normal, disk-based tables. According to a demo of Hekaton I saw at the 2012 PASS Summit, Microsoft will even supply procedures to help you find which tables would benefit most from moving to Hekaton.
But obviously, more memory lets you put more tables in Hekaton, so more memory is always better. Nothing new here - that's always been the case with SQL Server.
Oh, and to answer your actual question: if you want to have the whole 1TB database in memory, then (assuming that 1TB is actual data size, not data + nonclustered indexes) you will need way more than 1TB of memory. When data changes, Hekaton creates new versions of the rows and does not immediately remove the old versions, so depending on how high your update rate is, the actual memory footprint may be much higher than the pure data size.
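A rough back-of-envelope for the 1TB question, assuming the commonly cited rule of thumb of provisioning roughly twice the data size to leave headroom for row versions (the exact factor depends on your update rate, and these numbers are purely illustrative):

```sql
-- Back-of-envelope sizing sketch; the 2x headroom factor is a rule of thumb,
-- not an official formula, and the numbers here are hypothetical.
DECLARE @DataSizeGB      INT   = 1024; -- 1 TB of base data in memory-optimized tables
DECLARE @VersionHeadroom FLOAT = 2.0;  -- headroom for old row versions under heavy updates
SELECT @DataSizeGB * @VersionHeadroom AS EstimatedMemoryGB; -- returns 2048
```

In other words, a fully memory-resident 1TB database would call for something on the order of 2TB of memory dedicated to Hekaton, before you even count the buffer pool for any remaining disk-based tables.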
December 6, 2013 at 3:04 pm
KWymore (12/5/2013)
Thanks for the Hekaton question, Steve. I'm looking forward to trying this out eventually!
+1