Setup Servers - Cost Effective

  • From this post I am a bit confused.

    Right now we have several different servers with only one instance on each. In our environment we don't have any front-end applications; everyone connects directly to the servers and does their database work there. All of them are database programmers; that is how our business runs.

    1. Now, when we talk about two instances on a single server, what would that setup be called? Is it clustering?

    2. If we can have two instances on the same SQL Server box, what additional resources do we need to maintain that?

    3. Can we have both instances on the same server? Say it is a Dev server: I want to name the instances Dev1 and Dev2, and I want users to connect only to the latest updated instance while the second is in the process of being updated and not accessible to users.

    4. I want the whole server to be named Dev, under which we will have two instances, of which only one is current, where users can work with the latest data. Users should feel it is a single server and should not notice switching between instances. Is that possible?

  • GSquared

    Another alternative is to have two instances of SQL on the server. Load up the staging copy, restore it to the second instance, then change the connection in your front end to that instance. The following month, load the staging copy into the first instance, and change your connection back to that. That will work if you have a single connection definition for the databases.

    This sounds interesting to me. Can you please elaborate on how I can achieve this, and what additional resources I need to do it on the existing SQL Server where I have a default instance?

  • Clustering is not multiple instances on one server. Clustering is sharing an instance across multiple nodes, or servers. Actually the instance is on one server, but if something happens, it moves to the other server.

    Changing the connection means changing the connection string so that it points to the new instance. You don't need to do this on one server; it's not any easier or harder than having a second server ready.
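    For illustration only, here is a quick way to check which named instance a session has landed on. The DEV\Dev1 and DEV\Dev2 names are just assumptions borrowed from the earlier questions, not an existing setup:

        -- Hedged sketch: confirm which named instance this session is connected to.
        -- Instance names such as DEV\Dev1 and DEV\Dev2 are hypothetical.
        SELECT
            SERVERPROPERTY('MachineName')  AS machine_name,   -- e.g. DEV
            SERVERPROPERTY('InstanceName') AS instance_name,  -- e.g. Dev1 or Dev2 (NULL for a default instance)
            @@SERVERNAME                   AS full_name;      -- e.g. DEV\Dev1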

    Multiple instances get problematic if the server is under load, as you split the memory between them.
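    One common way to manage that split (not something mentioned above, just a hedged sketch; the 12 GB figure is a placeholder) is to cap each instance's memory so the two don't starve each other:

        -- Run on each instance separately; the value below is a placeholder assumption.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 12288;  -- e.g. 12 GB for this instance
        RECONFIGURE;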

    I'm curious, why load into staging for the data? Why not load into different tables on the production server and copy across?

    We had a large load for a finance DB years ago that happened three times a day. We couldn't have downtime for loading, even though we needed to load a lot of data and not have it used until we were done.

    Our design ended up loading the data into the same tables, but we added a join to every query (using a view). The join was to a LiveData table with a date in it. As we loaded data, there was no new row in the LiveData table, so queries wouldn't see it. Once we were done, data verified, etc., we added a row and all of a sudden the data was "loaded."

    Would that work?
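    For what it's worth, here is a minimal sketch of that pattern; the table and column names (FactSales, LiveData, LoadBatchID) are hypothetical, not from the original system:

        -- Rows loaded into the base table stay invisible to the view until their
        -- batch is "published" by inserting a row into LiveData.
        CREATE TABLE dbo.FactSales
        (
            SaleID      int   NOT NULL,
            Amount      money NOT NULL,
            LoadBatchID int   NOT NULL
        );
        GO
        CREATE TABLE dbo.LiveData
        (
            LoadBatchID int      NOT NULL PRIMARY KEY,
            LoadedDate  datetime NOT NULL
        );
        GO
        -- All queries go through the view, which joins to LiveData.
        CREATE VIEW dbo.vFactSales
        AS
        SELECT f.SaleID, f.Amount, f.LoadBatchID
        FROM dbo.FactSales AS f
        JOIN dbo.LiveData  AS l ON l.LoadBatchID = f.LoadBatchID;
        GO
        -- Load with a new batch id; the view does not see these rows yet.
        INSERT INTO dbo.FactSales (SaleID, Amount, LoadBatchID) VALUES (1, 100.00, 42);
        -- Once the load is verified, publish the batch and the data "appears".
        INSERT INTO dbo.LiveData (LoadBatchID, LoadedDate) VALUES (42, GETDATE());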

  • Mike - Thanks for the clarification. What GSquared was recommending is, as you stated, having two instances on the same box. You could do as he described: put the database up on the second instance one month and then on the other instance the next month.

    Since you are not using applications, just standard connections via the SQL Server tools, you would have to use an alias at the server level in order to have the developers point to the correct instance. Check out aliases in BOL and post back with any questions that you may have.

    Thanks again.

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • Multiple instances may not work in my environment, as I am trying to get rid of performance issues when doing a heavy data load, and multiple instances would share all the resources in common, which will not boost performance.

    Anyway, here is the setup I am thinking of; let me know if it is a good idea. Maybe I am again thinking of clustering.

    I have 6 servers and only 1 instance on each. Each server may have nearly 10 big SQL 2005 databases, each of about 700 GB on average.

    Dev1    Dev2    Dev3    Dev4    Dev5    StageDev
    db1     db11    db21    db31    db41    db5
    db2     db12    db22    db32    db42    db8
    ...     ...     ...     ...     ...     db21
    ...     ...     ...     ...     ...     db27
    ...     ...     ...     ...     ...     db32
    db10    db20    db30    db40    db50    db45

    All these servers will be under one name called "DEV", like a cluster with 6 nodes. To achieve my purpose, I want to update/load data in StageDev\db8, and once the update is done for StageDev\db8, fail over (take offline) just the one database Dev1\db8 and bring StageDev\db8 online in its place. In this process the end user should not notice that he was switched from Dev1 to StageDev. Of course, 30 minutes of downtime may be accepted in this scenario.

    I am not sure if this setup will work out. If it can be done, what hardware/software is needed to make it run smoothly?

  • Can someone let me know if I can do this kind of setup?

  • I was just wondering if I can use database mirroring in this scenario.

  • Mike,

    Database mirroring really wouldn't be an option if I understand things correctly. I also don't believe that the layout you described three posts back is accurate, and it would not work as you have defined it. It still seems like some of the earlier options would work best.

    Another option would be to consider partitioning. Check out the link here - http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/loading_bulk_data_partitioned_table.mspx

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • I think you might want to look into "Scalable shared databases", an obscure feature in SQL Server 2005/2008.

    You might start with looking at this link:

    http://support.microsoft.com/kb/910378

    It looks a bit hard to implement, and it requires SAN storage, but it seems to be made for exactly your requirements.

    Don't bother asking me how to do it; I've never implemented it and I have never heard of anyone who has. I had never heard of it until it came up in an SSC Question of the Day.

    BTW: Good luck on being the first one ever to adopt this technology, and maybe write an article about it when you are done to let us know how it worked out. :cool:

  • Another option would be to consider partitioning. Check out the link here - http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/loading_bulk_data_partitioned_table.mspx

    David

    David,

    Your option looks interesting to me. Here is my understanding of that article.

    Steps involved

    1. Create a staging table with partitions identical to the production (partitioned) table, and bulk load the staging table from a text file.

    2. After loading the staging table, create indexes on it just as on the production table.

    3. Then execute an ALTER TABLE ... SWITCH, e.g. ALTER TABLE PartitionTable_Stage SWITCH PARTITION 2 TO PartitionTable PARTITION 2; (partitions are referenced by number rather than by a name like "Jan").
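    For illustration, here is a hedged sketch of those three steps; the partition function, column names, and file path are hypothetical, not from the article:

        -- Monthly partition function/scheme shared by production and staging.
        CREATE PARTITION FUNCTION pfMonthly (datetime)
            AS RANGE RIGHT FOR VALUES ('2009-01-01', '2009-02-01', '2009-03-01');
        CREATE PARTITION SCHEME psMonthly
            AS PARTITION pfMonthly ALL TO ([PRIMARY]);
        GO
        -- Production table and an identically partitioned staging table.
        CREATE TABLE dbo.PartitionTable       (LoadDate datetime NOT NULL, Amount money NOT NULL)
            ON psMonthly (LoadDate);
        CREATE CLUSTERED INDEX cix_PartitionTable
            ON dbo.PartitionTable (LoadDate) ON psMonthly (LoadDate);
        CREATE TABLE dbo.PartitionTable_Stage (LoadDate datetime NOT NULL, Amount money NOT NULL)
            ON psMonthly (LoadDate);
        GO
        -- 1. Bulk load the staging table (the file path is a placeholder).
        -- BULK INSERT dbo.PartitionTable_Stage FROM 'D:\loads\jan.txt' WITH (FIELDTERMINATOR = ',');
        GO
        -- 2. Build the matching indexes on staging only after the load finishes.
        CREATE CLUSTERED INDEX cix_PartitionTable_Stage
            ON dbo.PartitionTable_Stage (LoadDate) ON psMonthly (LoadDate);
        GO
        -- 3. Switch the loaded partition into production. Partitions are addressed by
        --    number: $PARTITION.pfMonthly('2009-01-15') returns 2 here (the January
        --    partition), and the target partition in production must be empty.
        ALTER TABLE dbo.PartitionTable_Stage SWITCH PARTITION 2 TO dbo.PartitionTable PARTITION 2;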

    I need more clarification on the following

    1. Can the staging table and the production table be on different servers/databases, and how does that affect performance with different servers?

    2. I need more clarification on how SWITCH works. Is it going to move a partition of one table to another partitioned table as a new partition?

  • Mike Levan (2/4/2009)


    I need more clarification on the following

    1. Can the staging table and the production table be on different servers/databases, and how does that affect performance with different servers?

    2. I need more clarification on how SWITCH works. Is it going to move a partition of one table to another partitioned table as a new partition?

    Mike - First, I haven't used this in a few years, and that was only in a conceptual design. I have plans to use something similar for something we are currently looking to do, but we are not at the point of design yet. So, what I am trying to say is that I am not really an authority on this.

    1. No, they have to be on the same server and in the same database. Additionally, I believe they have to be in the same filegroup. I don't believe they have to be in the same file, though, so you could do some creative layout with files on disks to ensure that you are not competing as much for IO resources as you would be if you were loading into the base table. Honestly, in most cases that, coupled with the fact that you are not incurring the cost of loading while indexes are present, will typically afford all the performance that you require.

    2. The switch is really just a metadata change: it updates the pointer that tells the database engine where to find the data that you just loaded. So, in this case, it would be switching that pointer from the physical staging table object to the partitioned production table object.
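    As a hedged illustration of that metadata-only behaviour (dbo.PartitionTable and dbo.PartitionTable_Stage are the same hypothetical names used in the sketch above), the per-partition row counts jump between the two objects the instant the switch runs:

        -- Row counts per partition before and after ALTER TABLE ... SWITCH;
        -- the rows "move" instantly because no data pages are copied.
        SELECT OBJECT_NAME(p.object_id) AS table_name,
               p.partition_number,
               p.rows
        FROM sys.partitions AS p
        WHERE p.object_id IN (OBJECT_ID('dbo.PartitionTable'), OBJECT_ID('dbo.PartitionTable_Stage'))
          AND p.index_id IN (0, 1)   -- heap or clustered index only
        ORDER BY table_name, p.partition_number;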

    I will try to do some more reading a bit later to brush up on this and reply when I can. Sorry that I can't throw a bunch of time at this right now, but I am a bit busy with some other stuff.

    Thanks!

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot

  • Just to give an idea of my setup:

    I have 20 huge databases on 2 servers; each DB ranges from 500 GB to 900 GB. Each table is partitioned monthly, and we load data into each database only once a month.

    As we are facing performance problems when loading and updating the data, we are looking for an alternate staging place to do all the updates and then swap them into production without much downtime.

    I considered many options before, like:

    1. attach/detach

    2. backup/restore

    3. clustering

    4. and now switching partitions, as you mentioned.

    I really couldn't decide on the best option, though I am ready to spend a few bucks.

  • Yeah, the only way to truly offload all the loading performance impact would be to do it on another server and copy the newly loaded database to the primary production server (options 1 and 2: either detach/copy/attach or backup/restore). In this configuration you would have to have storage to support both large databases simultaneously, which could be somewhat expensive, especially if you are going to have to do this for all instances.
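    A rough sketch of the backup/restore variant; db8 is taken from the earlier post, but the share, logical file names, and paths are all placeholders (RESTORE FILELISTONLY will show the real logical names):

        -- On the staging/load server, once the monthly load is complete:
        BACKUP DATABASE db8
            TO DISK = N'\\fileshare\staging\db8_full.bak'
            WITH INIT;
        GO
        -- On the production server, replace the current copy with the freshly loaded one:
        RESTORE DATABASE db8
            FROM DISK = N'\\fileshare\staging\db8_full.bak'
            WITH REPLACE,
                 MOVE 'db8_data' TO N'E:\SQLData\db8.mdf',   -- logical names are placeholders;
                 MOVE 'db8_log'  TO N'F:\SQLLogs\db8.ldf';   -- check them with RESTORE FILELISTONLY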

    I still don't see how you could use clustering to do anything here but that is another topic.

    The partitioning would still have some performance impact on the server while you do the loading process but that would be primarily in CPU and memory use, and would NOT result in blocking of current activity against production data. So, if there is headroom on the box in the realm of CPU and memory then this would probably be the cheapest and most efficient way to go and could even be automated pretty easily.

    Regardless, you are going to have to weigh all of these to see which is the best fit for your requirements, based on your comfort level and your ability to deliver a consistent, reliable solution.

    Sounds like a great project!

    David

    @SQLTentmaker

    “He is no fool who gives what he cannot keep to gain that which he cannot lose” - Jim Elliot
