Hi, we run 2019 Standard. I'm torn between asking my dept head for 3 test SQL Servers for use by everybody, so that we can test 3 projects at once, versus enough horsepower on my local machine to run 3 instances. I'm leaning toward one shared server plus whatever it takes to run 3 instances on my local. Once shared resources are requested, it's my understanding that they are tough to give back. But maybe entire servers are easy to give back, just not their memory if you keep them around.
One of the biggest concerns I have with 3 shared servers is that when a project gets delayed, we still face running daily jobs there and tying up space until the project picks up again. I suppose the same would hold true even if such an instance were on my local machine, but I suspect it would be far less expensive.
My biggest concern with putting 3 instances on a high-horsepower local machine is that SQL Server grabs resources willy-nilly, and I hope there is a way to keep each instance from bleeding over into the next.
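From what I understand, the bleed-over can be limited with a per-instance memory cap and per-file size caps; storage isn't capped per instance, only per database file. A rough sketch, with made-up instance, database, and file names and illustrative sizes:

```sql
-- Cap RAM per instance so three side-by-side instances don't starve each other.
-- Run against each instance; 4096 MB is just an illustrative value.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;

-- Storage can only be capped per file: set MAXSIZE and a modest growth
-- increment on each database so autogrowth can't run away.
ALTER DATABASE SomeProjectDb  -- hypothetical database name
    MODIFY FILE (NAME = SomeProjectDb_data, MAXSIZE = 50GB, FILEGROWTH = 512MB);
```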
I also think about the ramifications of having 3 instances locally. One concern I had was with Power BI (PBI) reports and dashboards on a local laptop I'd like to take home with me. But since I can store data within the PBI report files themselves and publish that way, I'm not overly concerned. I'd want SSAS on the local instances also, but I'm not super worried there. I might want SSRS too, which as I recall is painful to set up locally because of that one piece of software that needs to see my PC as a web server; but I suppose with proper naming, I could use the shared dev server's SSRS capabilities to point during the day at the appropriate instances on my PC.
Does anybody have any thoughts? What do you do at your company to run multiple projects simultaneously?
April 2, 2024 at 1:37 pm
Have you considered using multiple local VMs, one per project?
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
April 2, 2024 at 4:37 pm
Thanks Phil, do you mean all on the same single PC? That's the kind of feedback I was hoping for. That way no one instance of SQL Server could interfere with whatever else is going on. Do you know of a link that describes the high-level prereqs and how to go about setting one or more of these up?
Yes, all on the same PC. I haven't operated in this way for a few years, so my detailed knowledge is out of date, but VMware is the product I used. It does, of course, add quite a maintenance overhead – software updates have to be performed separately on each VM, for example. Understanding software licensing is also important.
Apart from the total separation you mention, you would be able to start and stop them individually, lessening the need for your machine to be super high spec.
April 3, 2024 at 5:21 pm
Just want to add that you MAY need or want to discuss with your IT dept as well. Hosting stuff locally on a "take home" machine (work laptop for example) COULD violate some data protection policies.
Now, to counter Phil's approach of spinning up 3 VMs on your local box to host the SQL instances: this adds overhead beyond licensing, as each VM needs its own resources (disk, memory, CPU). So whenever a VM is running, your machine takes a performance hit and loses disk space.
The server-vs.-local-hosting decision for my company came down to disk space. We have some SQL instances that were (at the time) over 1 TB. For us to host things locally, each box would have needed at least 3 TB to hold all of our SQL instances. Spinning up 1 server with enough power to host the multiple instances, plus enough disk to hold them, was the cheaper option. Our devs know that the TEST/DEV instances are less powerful than live, so when they are benchmarking things, we give them rough estimates to judge whether their query is going to be too slow on prod.
There are other things to think about too, like how bad would it be if the laptop got lost or stolen? If the DB is hosted on-site in a VM, then it's not a huge loss. If the DBs are hosted on the laptop, that could be catastrophic or it could be not a big deal (depending on the data masking that happens during refreshes). Or what if the laptop disk dies: is that going to be catastrophic to a project, or is it quick to get back up and running again? The last concern I can think of is how often devs need data refreshes. If it is frequent, then having a set of shared DBs for the developers may be unreasonable, as frequent refreshes will have a lot of impact on them. If refreshes are rare (once a month or less frequent), the impact may be small. I know my team shares the test/dev DBs, as we don't often work on the exact same thing, so there is little risk of us clobbering changes.
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
April 4, 2024 at 12:30 pm
Thx Brian, that makes a lot of sense, and we already nixed the idea of a laptop for other reasons. But your reasons are better than ours. We started talking about a desktop that we would log into instead. But now the lines are less clear.
April 4, 2024 at 2:05 pm
Just wanted to point out some downsides to server hosted too. First there are licensing costs as you need to license a new OS (similar to the VM approach BUT if you have a Visual Studio subscription, then the OS costs may go away). Plus the administration overhead. Someone needs to maintain that server and install patches and such. Hosting on your own laptop means that patches are applied by the end user and if a reboot is required, the impact is to 1 developer. Server hosting, getting downtime for a reboot can be tricky as it impacts a lot more users.
Now on the other hand, getting UAT signoff on changes is a LOT easier with a shared server than with hosting it on your own laptop, but I also don't really like doing UAT on a development instance because someone else MAY be making changes to the same application, but fixing different bugs. If your team is small, then that risk is low.
Another pro to hosting on your own laptop is that consumer disk is cheap. Picking up a 4+ TB USB drive is MUCH cheaper than picking up extra storage for a SAN.
One concern I see with using a shared desktop to host it is that desktop OSes only allow one interactive session at a time, whether RDP or physical login. So if your developers need to do things on the actual machine (outside of SSMS, as you won't hit limits there), you are going to hit snags. What I mean is, if 2 or more devs need to install a new instance of SQL Server, or install patches to their instances, or anything at the same time, they need to wait their turn.
I don't think there is a "one solution works for everyone"... like most things SQL - it depends :).
April 16, 2024 at 8:18 am
Hi,
I hope the data you are bringing home isn't classified Confidential/Secret or higher.
Quick question: what's the problem with using one server for all databases, where you set MAXDOP, for example, to 2? That way a query will use parallelism across at most 2 CPU cores, so other statements can run as well.
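The instance-wide setting described here, sketched out (the value 2 is just the example from above):

```sql
-- Instance-wide MAXDOP cap: any single query uses at most 2 parallel
-- workers, leaving the remaining cores free for other statements.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 2;
RECONFIGURE;
```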
We have around 1,500 databases across 50 SQL Servers. The production servers run like this:
One VMware instance with 1 SQL Server in it, and multiple databases in the server unless there is a specific requirement for just 1 DB.
The testing servers are:
One VMware instance with 2 to 4 SQL Servers running on it, where each SQL Server gets equal RAM and almost the maximum number of CPUs. The benefit of this shared hardware between a few servers is that the computing power can be really high; and to be honest, how often are a few people running heavy statements at the same time?
Based on your post, I understood that you are worried about storage? Well, we have a few NAS servers, and we map storage to the server with the following logic:
Each database has its own logical drive of some recommended size. Each DB also has its own logical drive for transaction logs.
This way, if something goes out of control (when I say out of control, I mean it unexpectedly doubles in size, or something like that), the only database affected is the one misbehaving, because it uses its own logical disk. Once that disk is full, the rest of the databases keep working fine and are not affected. The only thing we still face as a problem from time to time, without a stable solution, is tempdb, which gets filled up by bad SQL.
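That layout, sketched with made-up database names, drive letters, and sizes, would look something like:

```sql
-- Hypothetical layout: each database gets its own data drive and its own
-- log drive, so one runaway database can only fill its own disks.
CREATE DATABASE ProjectA
ON PRIMARY
    (NAME = ProjectA_data, FILENAME = 'E:\ProjectA\ProjectA.mdf',     SIZE = 100GB)
LOG ON
    (NAME = ProjectA_log,  FILENAME = 'F:\ProjectA_log\ProjectA.ldf', SIZE = 20GB);
```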
Also, when we have a project that is going to be postponed, we take a backup (which is about 10 times smaller) and delete the database; once we need to continue, we restore the database back onto the server.
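Parking a project that way (names and paths are made up; the roughly 10x size reduction presumably comes from backup compression) might look like:

```sql
-- Park a postponed project: compressed, verified backup, then drop it.
BACKUP DATABASE ProjectA
    TO DISK = 'G:\Backups\ProjectA_parked.bak'
    WITH COMPRESSION, CHECKSUM;
DROP DATABASE ProjectA;

-- When the project resumes, restore it from the parked backup.
RESTORE DATABASE ProjectA
    FROM DISK = 'G:\Backups\ProjectA_parked.bak'
    WITH RECOVERY;
```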
And another thing: we work with TBs of data. My server currently has 26 TB of databases in total, and I've just started working on compression for big tables that are not frequently used, or not used at all. Even for tables that are used, compression can be effective when there is good indexing and not high variability in the statements.
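For a table like that, a minimal sketch (table name is hypothetical) would be to estimate the savings first, then rebuild with compression:

```sql
-- Estimate how much PAGE compression would save before committing to it.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'BigArchiveTable',  -- hypothetical table
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- If the estimate looks good, rebuild the table with PAGE compression.
ALTER TABLE dbo.BigArchiveTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```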
To be honest, what we have here seems pretty normal to me, nothing difficult, and I feel like this is kind of the bare minimum for how these situations are supposed to be handled at any firm.
Best Regards,
Martin
April 17, 2024 at 5:57 pm
Thx Martin
Quick question: what's the problem with using one server for all databases, where you set MAXDOP, for example, to 2? That way a query will use parallelism across at most 2 CPU cores, so other statements can run as well.
This post addresses needs in development, where multiple projects are occurring at the same time. And we already nixed bringing home PCs with confidential data. We are leaning more toward desktops, with the DW developer's desktop having only his stuff on it and the ERP developer's having only his.
The reason I may not be able to just back up a postponed project is that I might still want to see whether certain feeds, interfaces, etc. are going to break the part of the project we were able to fully or partially complete.
I have to ask why tempdb is singled out in your post as a problem. Wouldn't each server's instance manage its own tempdb within the confines of that instance's allocated resources?
April 17, 2024 at 8:02 pm
Each INSTANCE of SQL Server would have its own tempdb. I think Martin's suggestion was to have a single INSTANCE with multiple DATABASES. This would end up sharing the same tempdb using that approach.
One thing about MAXDOP though: it isn't the maximum number of parallel threads per database or per instance, it is the maximum per query.
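MAXDOP can also be applied at query level with a hint, which overrides the instance setting for that one statement (table and column names below are made up):

```sql
-- The OPTION (MAXDOP n) hint limits parallelism for this query only,
-- regardless of the instance-wide 'max degree of parallelism' setting.
SELECT CustomerId, SUM(Amount) AS Total
FROM dbo.Sales          -- hypothetical table
GROUP BY CustomerId
OPTION (MAXDOP 2);      -- at most 2 parallel workers for this statement
```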
One problem too that can come up with single server, shared between devs is dev A changes stored procedure B on their instance/database. Dev Z changes stored procedure B on their instance/database. Dev Z pushes to prod before dev A and now Dev A's work is "out of date" since they were both working on the same stored procedure. Same thing happens with the devs having the test systems on their local boxes.
I personally don't have a good solution to that problem. My team's approach is to double-check that the rollback script matches production prior to pushing to live, but it's a manual process, and manual processes are error-prone processes. Plus my team VERY VERY RARELY has 2 devs fixing 2 different bugs in 1 stored procedure.
April 18, 2024 at 5:02 am
thx Martin
I have to ask why tempdb is singled out in your post as a problem. Wouldn't each server's instance manage its own tempdb within the confines of that instance's allocated resources?
Yes, each server has its own tempdb, but when you have multiple databases on one server you can easily run out of resources (tempdb, CPU, incorrectly configured storage where multiple databases share the same disk and/or transaction logs...).
We managed to run out of tempdb with a single script: it started at 200 GB and over time grew to 1,600 GB, which still didn't help, so we changed the whole logic and now write into a table instead of tempdb.
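The kind of change described there, moving intermediate results from tempdb-backed temp tables into a permanent work table in a user database (all names below are hypothetical), might be sketched as:

```sql
-- Before: intermediate results land in a #temp table and grow tempdb.
-- SELECT KeyCol, Amount INTO #Staging FROM dbo.BigSource WHERE ...;

-- After: a permanent work table in the user database, emptied each run,
-- so the space pressure hits that database's own files instead of tempdb.
TRUNCATE TABLE dbo.Staging_WorkArea;

INSERT INTO dbo.Staging_WorkArea (KeyCol, Amount)
SELECT KeyCol, Amount
FROM dbo.BigSource
WHERE LoadDate = CAST(GETDATE() AS date);
```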
Each INSTANCE of SQL Server would have its own tempdb. I think Martin's suggestion was to have a single INSTANCE with multiple DATABASES. This would end up sharing the same tempdb using that approach.
Yes that's exactly what I meant. Thank you for the clarification.
One thing about MAXDOP though: it isn't the maximum number of parallel threads per database or per instance, it is the maximum per query.
Yes, my bad, but on the other hand, when one person works on one DB they're not going to run multiple queries. Usually you're focused on one, and it doesn't often happen that more than one query is running, so my assumption was that each DB would produce a single query. 🙂
April 19, 2024 at 2:34 pm
Yes, my bad, but on the other hand, when one person works on one DB they're not going to run multiple queries. Usually you're focused on one, and it doesn't often happen that more than one query is running, so my assumption was that each DB would produce a single query. 🙂
That is usually the case, but I've seen scenarios where a developer (myself included) is working on multiple things at once and if the query is a long running query (ETL type work), I may have multiple queries running at once. OR if I am working on an SSIS package, I may have things running in parallel. My scenario MAY not be that common, but is something to watch out for.
May 21, 2024 at 2:05 pm
I'm going to post a general question about using Azure instead. Some hesitation, but it seems like a way to address 2 challenges at the same time; one of them is paying for servers.
May 21, 2024 at 11:18 pm
Virtual desktop infrastructure is a common solution for exactly this: it keeps the organisation's data where it should be while still being easily accessible to those who have dev and test tasks to complete.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉