Is there a reason the DBA should install SQL Server?

  • Hello:

    I got a question from an upper manager: he wanted to know why the DBA should install SQL Server.

    Should the Windows Server team install SQL Server instead?

    I tried to explain and clarify it for him.

    Would you please give me a good reason why the DBA should install SQL Server?

     

    Thank you in advance.

    Edwin

  • My opinion - either can do the install, but the DBA needs to do the configuration.  The Server team should be able to handle an install as long as you can provide them with all the data they need (features to include, who the sysadmins should be, where the data files and log files go, number of tempdb files, etc.).  But post-install, the DBA will need to go in and finish the configuration.

    It is that last bit which is why the DBA at my workplace does the installs as well as the updates.  No point having two people do the work when one can do it just as well.  Plus, if you are using a Visual Studio license for the instance (for example), the person doing the install NEEDS to have a Visual Studio license.  That is, if you are licensed for Visual Studio but the server team is not, the server team is not licensed to do the install.

    Alternatively, it may make more sense for the server team to do the install.  If you have only one DBA, you may want to offload application installation to the server team so the DBA has more free cycles for DBA work.  Or, if you have large teams, the server team may not want to give the DBA team admin rights to the servers, and without those rights the DBA cannot install the software.

    The above is all just my opinion on what you should do. 
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Well, unless it's the most basic of installs, there's going to be a lot more setup than just clicking Next, Next, Next in the installer and then forgetting about it.  The DBA would be making sure the various drives for data and logs are set up right, setting up any security that needs to be set in SQL Server depending on how it'll be used, likely creating whatever user databases are going to be used, setting up backups, and so on.
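    For illustration, here's a minimal sketch of that kind of post-install work: creating a user database with its files placed on dedicated data and log drives instead of the instance defaults.  The database name, paths, and sizes are all made-up placeholders.

    -- Hypothetical example: put the data and log files on dedicated
    -- drives rather than wherever the installer's defaults point.
    CREATE DATABASE SalesDB
    ON PRIMARY
    (   NAME = N'SalesDB_Data',
        FILENAME = N'M:\SQLData\SalesDB.mdf',   -- data drive
        SIZE = 10GB, FILEGROWTH = 1GB )
    LOG ON
    (   NAME = N'SalesDB_Log',
        FILENAME = N'L:\SQLLogs\SalesDB.ldf',   -- log drive
        SIZE = 2GB, FILEGROWTH = 512MB );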

  • Either one can do the install.  The DBA will need to provide some settings for the Windows team, but that team should be able to do an install as well.

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

  • If the installation process is well documented - including which components are required - then anyone can perform the install.  I have found that most system administrators don't know what components need to be installed, so you end up with everything.

    There is no reason to install Integration Services, data services, replication, or other components when they are not going to be used.

    The other reason you want to be careful about who performs the installations is who is given sysadmin access during the installation.  During that portion there are two items that need to be determined: first, whether or not you enable SQL logins (and set an SA password), and second, which users/groups should be added as sysadmins.  On a system that contains any type of PII/PHI or other confidential data, you need to make sure permissions are set appropriately.
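    As a quick post-install audit, something like the following (a sketch; run it against the new instance) shows exactly who ended up with sysadmin and whether SQL logins were enabled:

    -- List all members of the sysadmin fixed server role.
    SELECT p.name      AS login_name,
           p.type_desc AS login_type
    FROM   sys.server_role_members AS rm
    JOIN   sys.server_principals   AS r ON r.principal_id = rm.role_principal_id
    JOIN   sys.server_principals   AS p ON p.principal_id = rm.member_principal_id
    WHERE  r.name = N'sysadmin';

    -- 1 = Windows-only authentication; 0 = mixed mode (SQL logins enabled).
    SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_auth_only;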

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • The Windows teams have a nasty habit of installing SQL Server on the C: drive.  They have the same nasty habit with the page file, which really needs its own drive letter.

    My recommendation is to take advantage of the skills and manpower of the Windows Server Team but in a collaborative way.

    C: Drive - Only for operating system stuff, and large enough to handle all the junk that Windows Updates adds to the drive over 7 years.  With that, I recommend at least 200GB (300 would be better) for this drive.

    D: Drive - Windows page file drive.  Presize it for the max and preallocate all the space to just the page file.

    E: Drive - Extra programs - anything other than SQL Server or the operating system.

    K: Drive - This drive is for DBA usage only and for whatever they want or need it for.  On my Prod box, this is a 2TB drive for me to use during special index rebuilds (long story there), temporary single-table copies as pre-deployment backups, etc.  This drive has saved our keesters more than once and makes it a whole lot easier to do a litany of other "DBA" things, including making a copy of a whole database for emergency uses/testing.

    L: Drive - All SQL Server database log files.  Yeah, I know there's no huge advantage to having them on a separate drive.  It's just a great logical separator.  This may or may not include log files from the system databases.

    M: Drive - All SQL Server database data files (MDF/NDF). This may or may not include database files from the system databases.

    N: Drive (Optional) - Extension of the M: Drive

    O: Drive (Optional) - Extension of the M: Drive

    P: Drive (Optional) - Extension of the M: Drive

    R: Drive - No drive space on the box.  Use it as a pointer to where the backups are stored on another box if you want to easily standardize backup code no matter where the backups end up being (see the backup sketch after this list).

    S: Drive - Scratch or Sandbox drive.  Use for "real but not permanent" work files.

    T: Drive - TempDB... I do things a little differently than most... I put the MDF/NDF and LDF files on this single drive because it usually needs to be faster than the rest.

    X:, Y:, Z:  Special purpose drives depending on the server.
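    To make the R: pointer idea concrete, here's a hedged sketch (the database name and path are placeholders).  The BACKUP statement stays identical on every server; only what R: points at differs per box:

    -- Standardized backup code: R: resolves to wherever this box's backups live.
    BACKUP DATABASE SalesDB
    TO DISK = N'R:\Backups\SalesDB_Full.bak'
    WITH COMPRESSION, CHECKSUM, INIT;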

     

    For the SQL Server programs themselves, I put those right on the M: Drive.  You can do what you want but there's no way I'd put them on the C: drive.

    Same goes for the system databases.  I treat them like any user database, but you might want to put them on their own drive.  For tempdb on the T: drive, a relocation sketch follows.
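    Since tempdb gets its own T: drive above, here's the relocation sketch.  It assumes the default logical file names (tempdev/templog) - check sys.master_files if yours differ - and the new paths only take effect after the instance restarts:

    -- Point tempdb's data and log files at the dedicated T: drive.
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');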

     

    After that, do a write-up of what you want for standard installations.  I wouldn't have them go through configurations like MAXDOP, cost threshold for parallelism, trace flags, startup settings, or any of that type of stuff.  You should have a script that'll do most of that for you.
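    Something along these lines, for example.  This is only a sketch; the values shown are placeholders, not recommendations, so set them per server:

    -- Post-install configuration sketch (placeholder values).
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;

    EXEC sys.sp_configure N'max server memory (MB)', 57344;        -- leave headroom for the OS
    EXEC sys.sp_configure N'max degree of parallelism', 8;         -- match core layout/workload
    EXEC sys.sp_configure N'cost threshold for parallelism', 50;   -- up from the default of 5
    RECONFIGURE;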

    And then, ask them nicely if they'll apply SQL Server CUs when they do their monthly Windows Updates.  You and the Windows Server Team can really make life a pleasure for each other.  It can and should be one of the most symbiotic relationships in the entire company.

    The other such relationship needs to be between the DBAs and the Developers.

     

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • The installation process has become more complex in newer versions, but it has kept the ease of use Microsoft has always offered.  What I mean is that you can still install SQL Server by pressing Next on each screen, something that is almost impossible with Oracle.

     

  • If you have a standard server build (things like the same drive letters on each server), you could have a standard config file that is used by the installer to keep things consistent across servers.  That way there are no settings that need to be entered or changed other than passwords for the accounts.

    You can install using a config file via the GUI installer (Advanced - Install based on configuration file) or from the setup.exe command line (/CONFIGURATIONFILE).
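    For example, here's a heavily trimmed sketch of such a file.  A real one generated by the installer has many more entries, and every path and account below is a placeholder:

    ; ConfigurationFile.ini (trimmed sketch - not a complete file)
    [OPTIONS]
    ACTION="Install"
    FEATURES=SQLENGINE
    INSTANCENAME="MSSQLSERVER"
    SQLSYSADMINACCOUNTS="DOMAIN\DBA-Team"
    SQLUSERDBDIR="M:\SQLData"
    SQLUSERDBLOGDIR="L:\SQLLogs"
    SQLTEMPDBDIR="T:\TempDB"
    SQLTEMPDBFILECOUNT="4"
    IACCEPTSQLSERVERLICENSETERMS="True"
    QUIETSIMPLE="True"

    Then the unattended install is just:

    setup.exe /ConfigurationFile="C:\Install\ConfigurationFile.ini"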

    Automate as much as possible!


  • My two cents, depending on the kind of organization.  I don't mean "mature" or "less mature" in any judgmental way; I just mean the level of automation the organization can support in deploying SQL Server.

    If the organization is mature enough that the DBA can provide the Windows team with a scripted installation and configuration setup, then either the DBA or the Windows team can run the script.  In that scenario, I bet management already understands the tradeoffs and would have backed a setup where the DBA provides the agreed-upon SQL settings so the Windows team doesn't have to worry about them.  That could also help with setting up disaster recovery.

    Otherwise, in a less mature organization, I say the Windows team should build the server up to the point of installing SQL Server and then hand it off to the DBA.  Either (1) let the DBA install SQL Server, or (2) have the DBA provide the basic install details as others have said (getting the drives correct), and then still hand the server to the DBA to configure RAM, security, and to load the databases, jobs, and so on.

    I think management should basically understand that, like other complex applications, SQL Server *could* be installed with defaults, but in practice SQL Server needs pre-planning.  Otherwise, the instance can be messed up (like with the system, tempdb, and user databases all on C: along with the application) before the DBA even gets to configure it.
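    As one concrete (and hedged) way to catch that early, this query lists every database file whose physical path landed on the C: drive, system databases included:

    -- List any database files sitting on the C: drive.
    SELECT DB_NAME(database_id) AS database_name,
           name                 AS logical_name,
           physical_name
    FROM   sys.master_files
    WHERE  physical_name LIKE N'C:\%'
    ORDER  BY database_name;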

     

    -- webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • SQL Server will install some things on the C: / main Windows drive regardless of what you do.

    Jeff Moden wrote:

    The Windows teams have a nasty habit of installing SQL Server on the C: drive.  [...]  The other such relationship needs to be between the DBAs and the Developers.

    I guess you never have multiple named instances on the same server, since those would each need their own drive letters?!  If you have a lot of instances -- as we do -- then multiple instances on the same box can save serious money on licensing -- as it does for us -- since once you've licensed the cores, you can have as many instances as you want on them.  Yeah, you do have to buy lots of RAM up front, but that's a one-time cost, and thus well worth it.  Besides, extra RAM is always good for performance.  Since the CPUs are shared, you can also get better overall use of the CPUs, i.e., less idle CPU time on less busy instances and relatively more CPU available for busier instances.

    SQL DBA, SQL Server MVP (07, 08, 09)

  • Correct.  We never do multiple named instances on a server.  Heh... they still haven't found the body of the last person that tried to do that on our servers. 😀

    Instead of using multiple instances to save money, we would use multiple databases or even a single database with the right stuff in it to do so.

    We don't have such a need even for that.  We've built a nice box and we keep almost all of our stuff on that one big box.

    YMWV

    --Jeff Moden



  • Jeff Moden wrote:

    Correct.  We never do multiple named instances on a server.  Heh... they still haven't found the body of the last person that tried to do that on our servers. 😀

    Instead of using multiple instances to save money, we would use multiple databases or even a single database with the right stuff in it to do so.

    We don't have such a need even for that.  We've built a nice box and we keep almost all of our stuff on that one big box.

    YMWV

    One big instance?

    We separate the OLTP, data mart, and SSRS/reporting into separate instances, for the standard reasons.  And there are hundreds of dbs on more than one of those instances.  I'd be very leery of trying to put everything into a single gigantic instance with nearly 2,000 dbs on one box.

    SQL DBA, SQL Server MVP (07, 08, 09)

  • Our situation appears to be quite a bit different than yours (and no... neither is wrong, IMHO... they're just different because of different business requirements).  My best friend, who is also a DBA, has a situation much like yours in that he has more instances than you can shake a stick at, and quite a few of them have more than 100 databases each.  To date (after more than 2 decades), I've not had to work in such an environment.  I'm thinking that makes me a pretty lucky DBA, work-wise, but it also appears that it means I might be missing some desirable skills, industry-wise.  I'm not jealous and I don't suffer FOMO there, though.

    The really cool part about that is when my best friend or I have a cool new proc, technique, or method we've made, we can run it past the other to see if it works correctly in both environments.  This has been really important for both of us when it comes to index maintenance, backups, and a bunch of other "System Utility" class code, as well as some of the programmatic functionality that is common to many environments.  It's been a great opportunity for us both and we continue to take advantage of our good fortune.

    Back to my systems...

    1. We don't have the need for even 10's of DBs for the stuff we do.  In fact, when I first reported for work at the company a decade ago, they had just completed a fair number of consolidations to create the basic environment we still have today.
    2. Our customers are almost exclusively in North America and so we have no need to separate OLTP from our large nightly batch runs of millions of rows for each of many customers.  From a blocking standpoint between OLTP and the large batch jobs, there is usually none but the system does run a bit slower when we're doing the very large nighttime loads.
    3. We have no need for a "data mart".  Despite the sizes of our databases, reporting is within expected performance SLAs and usually neither hampers nor is hampered by the OLTP or batch systems, some of which (smaller runs) run throughout the day.
    4. We got rid of SSIS and SSRS a long time ago.  They were just too much work for what they did, generally couldn't handle some of the import requirements we had, were difficult to customize by client, and, to be honest, were pretty bad when it came to performance.  Most of the stuff we import is CSV or TSV.  The only other stuff we import is document images that need to be OCR'd and have parts extracted.  We use separate software for that, which happens to write the extractions and counts to a database.
    5. We have about a dozen main databases that vary in size from a couple of tens of GB on up; one (mostly OLTP) weighs in at 1.2TB, and our main batch-processing database weighs in at 2.2TB.  On top of that, we have another 50 or so databases that serve other minor functionality like system/SQL monitoring, a scratch database, and several support databases (some are kind of like a data mart in that they contain materialized data from other sources but no de-normalization or pre-aggregation), Atlassian (Jira), Mantis, and a couple of other tools.

    That OCR stuff is a pain in that it requires a lot of "extractor" boxes, but that has nothing to do with SQL.  Each of those boxes writes to a single set of tables in our main box in just a couple of small databases (not my doing... it was a vendor requirement), and there are a fair number of Windows boxes that are dedicated to individual applications, like a dedicated set of downloaders whose only job is to get and save the CSV/TSV files and keep track of things by writing info to our main system.

    Our phone system (a critical part of the business) adds two SQL boxes of 4 cores each (one for the phone system itself, one for a bit of a denormalized reporting system).

    Not including any of the DEV or UAT boxes, we're basically running our enterprise on 3 SQL Servers (2 Standard at 4 cores each and a main at 16-core Enterprise), a couple of large file servers to store downloaded files on, and a pot wad of specialized Windows application systems, and doing very well.

    All of that makes Windows Server monthly updates and SQL Server CUs really easy to do and saves on licensing costs pretty much across the board.  It doesn't help much when it comes to application servers; we're likely very much like everyone else there.  We do have some pretty insane document storage requirements but, with the help of compression technology and some thoughtful "back it up one final time and purge it" actions, it's been fairly easy to manage and not so expensive as many would think.  The company is making noises about starting to move processed files (archiving) to the cloud, since we have no need to read the files after even just a week (unless there's an audit).

    --Jeff Moden



  • Jeff Moden wrote:

    Our situation appears to be quite a bit different than yours (and no... neither is wrong, IMHO... they're just different because of different business requirements).  [...]  Not including any of the DEV or UAT boxes, we're basically running our enterprise on 3 SQL Servers (2 Standard at 4 cores each and a main at 16-core Enterprise), a couple of large file servers to store downloaded files on, and a pot wad of specialized Windows application systems, and doing very well.  [...]

    I would think you could get rid of the two Standard boxes and just put it all on the Enterprise box.  Two Standard licenses aren't massive, but they aren't chump change either.  But it may not be worth the trouble to you.

    Named instances aren't all that difficult to manage.  CNAME entries (as I understand it, although that is not my area of expertise) can handle keeping the old server names working while pointing them at the new names.

    SQL DBA, SQL Server MVP (07, 08, 09)
