September 27, 2018 at 12:27 pm
Steve Jones - SSC Editor - Wednesday, September 26, 2018 2:29 PM

christopher.ivens - Wednesday, September 26, 2018 10:17 AM

I'm an old guy, so I am not as gullible as I sound (when the infrastructure team asked me how much memory I wanted on the new data warehouse prod server, I said, straight-faced, 128 GB... after a long silence he countered with the 36 GB we ultimately got... why even ask?). Our infrastructure guys and one of our SQL developers are cooking this up. I am skeptical, but I am willing to run the experiment. One of the drivers is this video from Robert Martin: https://www.youtube.com/watch?v=Nsjsiz2A9mg (don't watch the whole thing; just the last few minutes will get you the point).

Start at about 42:00, but that appears both naive and moronic to me.
He's got an interesting take, at least starting at 42:00, and I think he's got a point, but I also think his presentation would be easy to misapply, as Chris's infrastructure guys seem to be doing. Thanks for the link in any case!
September 27, 2018 at 12:29 pm
ZZartin - Wednesday, September 26, 2018 4:38 PM

He seems to have confused "SSDs are faster than spinning drives" with "SSDs are infinitely fast, so it doesn't matter how we use or access them." In other words, he's an idiot.
Heh, I love how folks are calling Bob Martin an idiot; it's sort of like how Jeff can fix the entire eBay site with stored procedures. LOLOL
July 6, 2023 at 3:06 pm
One of my connections on LinkedIn is fond of pointing out that most data warehouses are below 1 TB in size.
1 TB of RAM used to be science fiction; now it can be had for a relatively trivial sum.
July 6, 2023 at 5:46 pm
Do we really need to keep the entire database in memory? It depends on what problem you're trying to solve.
We already have an in-memory option for tables (In-Memory OLTP), which is good for application workloads with high-volume inserts and selects. SQL Server is also smart about how it keeps frequently accessed pages in the buffer pool, and well-designed indexes can prevent full scans. For TB-sized data warehouses, SSD storage is maybe 10x slower than RAM, but it's closing the gap.
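For reference, that table option looks roughly like the sketch below. This is a minimal example only, assuming SQL Server 2016+ and a database that already has a MEMORY_OPTIMIZED_DATA filegroup; the table, columns, and bucket count are made up for illustration.

-- Hypothetical durable memory-optimized table for a high-volume
-- insert/select workload; rows live entirely in memory, but changes
-- are still logged, so the data survives a restart.
CREATE TABLE dbo.OrderEvent
(
    EventId   BIGINT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 4000000),
    OrderId   BIGINT    NOT NULL,
    EventType TINYINT   NOT NULL,
    LoggedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_AND_DATA);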
You can spin up a VM in Azure with 1024 GB of RAM and 64 vCPUs for about $250/day, if you're curious. Actually, for a [spot] VM, it would be about $25/day, if all you're wanting to do is non-production load testing.
https://learn.microsoft.com/en-us/azure/virtual-machines/spot-vms
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
July 7, 2023 at 12:05 pm
Eric, my experience has been that spinning rust and 256 GB of RAM remain plenty for most applications and customers.
There's need, and then there's want. No one needs a Porsche, but a lot of people would love to own one.
July 7, 2023 at 6:15 pm
Eric, my experience has been that spinning rust and 256 GB of RAM remain plenty for most applications and customers.
There's need, and then there's want. No one needs a Porsche, but a lot of people would love to own one.
Yeah, some folks have a rarely used Porsche (a DW appliance) rusting in their driveway (the on-prem data center), which makes a year-to-year lease on a TB-scale Azure VM or Azure Synapse database an attractive option.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
July 10, 2023 at 8:02 pm
I don't think most apps need everything in RAM, but some might need certain hot sections. That's one reason In-Memory OLTP tables came about, though those have too many restrictions to be really useful.
More often I find people using Redis for transient stuff that needs very high concurrency and low latency, and then they write the final data to SQL Server (or some other RDBMS).
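For what it's worth, a non-durable memory-optimized table can cover some of that Redis-style transient use inside SQL Server itself. A minimal sketch, again assuming SQL Server 2016+ and a MEMORY_OPTIMIZED_DATA filegroup; the table and column names are hypothetical:

-- Hypothetical transient-state table. SCHEMA_ONLY means the contents are
-- lost on restart, like a cache, in exchange for generating no log or
-- checkpoint I/O at all.
CREATE TABLE dbo.CacheEntry
(
    CacheKey   NVARCHAR(256)  NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CacheValue VARBINARY(MAX) NULL,
    ExpiresAt  DATETIME2      NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_ONLY);

It doesn't remove the other In-Memory OLTP restrictions, of course, which is why the Redis-plus-RDBMS split stays so common.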