January 24, 2014 at 6:24 am
Greg Edwards-268690 (1/24/2014)
Koen Verbeeck (1/24/2014)
Anyone with a better grasp of SQL Server security and auditing can take this one? http://www.sqlservercentral.com/Forums/Topic1534372-2799-1.aspx?Update=1
Trust Me, but audit all the other Admins!
Something missing in that picture.
Sounds like pretty standard security setups. Very first major system I developed, lo these many years ago, I went to the five business groups that were going to be customers to get specs. First group says, "I need to see all documents, but I don't want the other groups to see mine." Second group says, "I need to see all documents, but I don't want the other groups to see mine." Repeat through all five groups. Solved it by not putting in security.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 24, 2014 at 6:29 am
Grant Fritchey (1/24/2014)
Greg Edwards-268690 (1/24/2014)
Koen Verbeeck (1/24/2014)
Anyone with a better grasp of SQL Server security and auditing can take this one? http://www.sqlservercentral.com/Forums/Topic1534372-2799-1.aspx?Update=1
Trust Me, but audit all the other Admins!
Something missing in that picture.
Sounds like pretty standard security setups. Very first major system I developed, lo these many years ago, I went to the five business groups that were going to be customers to get specs. First group says, "I need to see all documents, but I don't want the other groups to see mine." Second group says, "I need to see all documents, but I don't want the other groups to see mine." Repeat through all five groups. Solved it by not putting in security.
That scenario would sound funny if it didn't sound so very familiar. 😛
January 24, 2014 at 9:14 am
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?
January 24, 2014 at 9:32 am
Lynn Pettis (1/24/2014)
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?
I have not. In fact, I don't know how that would be possible. Some databases get pretty huge.
I recently attended a seminar on 2014 that focused on in-memory tables. The presenter recommended having memory available that's twice the expected size of the table. It isn't a cure-all, though. There are restrictions that are focused on high performance, and it's really only for your hottest tables where being written to disk isn't critical. They seem cool, but their use is pretty limited.
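For anyone who hasn't seen the 2014 syntax yet, here's a minimal sketch of what a memory-optimized table looks like. The table and column names are made up for illustration, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before any of this will run:
-- Assumes a SQL Server 2014 database that already has a memory-optimized filegroup, e.g.:
--   ALTER DATABASE MyDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
--   ALTER DATABASE MyDb ADD FILE (NAME = 'imoltp_file', FILENAME = 'C:\Data\imoltp_dir') TO FILEGROUP imoltp_fg;
CREATE TABLE dbo.HotSessions                  -- hypothetical "hot" table
(
    SessionID   INT           NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserName    NVARCHAR(50)  NOT NULL,
    LastTouched DATETIME2     NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_AND_DATA);          -- SCHEMA_ONLY skips durable data writes entirely
The DURABILITY option is where the "being written to disk isn't critical" trade-off the presenter mentioned shows up.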
January 24, 2014 at 9:42 am
Lynn Pettis (1/24/2014)
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?
No, but I've heard plenty of people try to justify that you can't possibly use more RAM than the sum of the database size.
January 24, 2014 at 9:47 am
Lynn Pettis (1/24/2014)
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?
That seems to fall into the "hell yeah, if we can get it" category.
There's a faction in my company that insists on prototyping no-SQL database solutions that they claim are much faster than relational DBs for our purposes. So far, they've never been faster than the T-SQL code we currently run and have always relied on an in-memory data structure holding about 1/1000 of the entire database. My response has been, "*If* you get your process running faster than mine, give me enough memory to hold even half of the database in the buffer cache and see if you can still beat me."
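If anyone wants to see how much of each database is actually sitting in the buffer cache before making that kind of bet, here's a rough sketch. sys.dm_os_buffer_descriptors returns one row per 8 KB page in the buffer pool, so expect it to take a moment on a big server:
-- Approximate amount of each database currently cached in the buffer pool
SELECT
    DB_NAME(database_id) AS database_name,
    COUNT(*) * 8 / 1024  AS cached_mb        -- 8 KB pages converted to MB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;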
Jason Wolfkill
January 24, 2014 at 10:00 am
Ed Wagner (1/24/2014)
Lynn Pettis (1/24/2014)
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?
I have not. In fact, I don't know how that would be possible. Some databases get pretty huge.
I recently attended a seminar on 2014 that focused on in-memory tables. The presenter recommended having memory available that's twice the expected size of the table. It isn't a cure-all, though. There are restrictions that are focused on high performance, and it's really only for your hottest tables where being written to disk isn't critical. They seem cool, but their use is pretty limited.
Some of us may be old enough to remember loading the whole game (Doom comes to mind) into RAM Disk. 😀
Yes, with in-memory technology you need more RAM.
And it may differ between what you need for SQL vs. OLAP.
That would be a pretty severe limitation on size for most if you needed that much RAM.
January 24, 2014 at 11:51 am
Greg Edwards-268690 (1/24/2014)
Ed Wagner (1/24/2014)
Lynn Pettis (1/24/2014)
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?
I have not. In fact, I don't know how that would be possible. Some databases get pretty huge.
I recently attended a seminar on 2014 that focused on in-memory tables. The presenter recommended having memory available that's twice the expected size of the table. It isn't a cure-all, though. There are restrictions that are focused on high performance, and it's really only for your hottest tables where being written to disk isn't critical. They seem cool, but their use is pretty limited.
Some of us may be old enough to remember loading the whole game (Doom comes to mind) into RAM Disk. 😀
Yes, with in-memory technology you need more RAM.
And it may differ between what you need for SQL vs. OLAP.
That would be a pretty severe limitation on size for most if you needed that much RAM.
Doom was revolutionary at the time. I know what you mean about loading everything into RAM. When you ran out, you just created a RAM disk and you were off to the races. I think those days are gone except for the page file. With databases, I don't know of any company that could afford as much RAM as the size of its databases. I know we can't.
January 24, 2014 at 1:53 pm
Lynn Pettis (1/24/2014)
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?
Yeah, it's called SQL Server 2014.
But seriously, a best practice? No. Not at all. Wouldn't it be nice? Yes. But I've dealt with 600GB of data on Windows XP. Tweren't no 600GB of memory on that machine, I assure you.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 24, 2014 at 2:17 pm
Thanks to all who answered my question so far. I was just curious as I overheard one of our SAs make the comment that it was a best practice to have sufficient memory to hold the entire database in memory. I just wonder where they come up with these ideas.
January 24, 2014 at 2:20 pm
Lynn Pettis (1/24/2014)
Thanks to all who answered my question so far. I was just curious as I overheard one of our SAs make the comment that it was a best practice to have sufficient memory to hold the entire database in memory. I just wonder where they come up with these ideas.
Probably after reading a SQL CAT paper which had something along these lines:
"We made sure that the database was small enough to entirely fit in memory with plenty of room for the expected cache for our plans, memory usage, and tempdb usage, to attempt to reduce the impact of I/O in determining the impact of these modifications..."
which ends up as:
"Put everything in memory and we don't care about disks!!1! B3ST PR4CT!C3!!1!"
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
January 24, 2014 at 2:26 pm
Actually, I thought I faintly remembered reading something similar at some point; after doing some digging, it turns out it was (incorrectly) attributed to Brent Ozar's site in my head:
There's a snippet in there saying that, if you can fit the database into memory, the I/O need overall should be reduced (paraphrasing). Somehow or another, that got twisted into the "it's best to fit the database into RAM" idea in my head. Not too big of a logical leap to make; perhaps others have made similarly misconstrued assumptions.
- 😀
January 24, 2014 at 6:03 pm
HowardW (1/24/2014)
Lynn Pettis (1/24/2014)
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?
No, but I've heard plenty of people try to justify that you can't possibly use more RAM than the sum of the database size.
You can never have too much memory.
Wait... what was I saying?
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
January 25, 2014 at 1:34 pm
Evil Kraig F (1/24/2014)
Lynn Pettis (1/24/2014)
Thanks to all who answered my question so far. I was just curious as I overheard one of our SAs make the comment that it was a best practice to have sufficient memory to hold the entire database in memory. I just wonder where they come up with these ideas.
Probably after reading a SQL CAT paper which had something along these lines:
"We made sure that the database was small enough to entirely fit in memory with plenty of room for the expected cache for our plans, memory usage, and tempdb usage, to attempt to reduce the impact of I/O in determining the impact of these modifications..."
which ends up as:
"Put everything in memory and we don't care about disks!!1! B3ST PR4CT!C3!!1!"
Or maybe something that dates from a lot earlier. Between 1986 and 1995 I was involved in three projects aiming at in-store (that is, in-memory) databases, all parallel systems using interesting networks to handle the RAM. There was a lot of interest in that sort of thing as part of the general push into High Performance Computing and Networking in Europe during that decade. A lot of people were looking into it, some systems were even sold (not very many, I think), national governments had been subsidising HPCN research from 1984 or thereabouts in response to the perceived economic threat from the Japanese computer industry, the CEC ran with it from a bit later, and various things were published by various people. I don't think anyone ever claimed that it was always best practise to have the whole DB in RAM, though - but for databases up to about a Terabyte (including log files as well as data files) it was eminently feasible for suitable applications using a network of, say, 256 pretty pedestrian cheap computers (given the right network, of course, and a lot of conditions that probably applied for OLTP but didn't cover complex queries involving more than a small percentage of the data). Picking up some of the results from that work from published reports might well have given some people some pretty crazy ideas, perhaps ending up with an "it's best practise for all databases" claim.
Tom
January 25, 2014 at 4:57 pm
Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.
The bread-and-butter database that we have at work fits entirely in memory, and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8-hour period. And it wasn't just one piece of code, either. Some of the code was high-hit front-end code and some of it was batch code. I have to tell you that even memory has a problem with multiple pieces of code that consume 10 to 16 trillion reads each, even when the load is spread out over an 8-hour period.
Adding more memory isn't going to fix that kind of code for performance or throughput.
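For anyone hunting similar offenders, a quick place to start is the plan cache. This is only a sketch (it misses anything not currently cached, and the numbers reset when plans are evicted), but it ranks cached statements by logical reads:
-- Top cached statements by total logical reads since each plan was cached
SELECT TOP (20)
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;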
--Jeff Moden
Change is inevitable... Change for the better is not.