April 4, 2013 at 11:51 am
rajarshi_ghosh_05 (4/4/2013)
{snip}No. We don't care about the temp table data if anything happens like a network failure, server restart, or user system restart! We don't have to preserve that data. The only requirement is that, if nothing goes wrong, we need to keep the data for 120 minutes, because the user can download it to Excel at any time within that window.
That doesn't make sense, based on your earlier posts; you were saying that if the global temp table gets dropped, it messes your business process up.
Now you are saying that if something other than the table dropping out of scope happens (the server restarting, for example), it has no effect or doesn't matter to you.
Lowell
April 4, 2013 at 12:04 pm
Lowell (4/4/2013)
rajarshi_ghosh_05 (4/4/2013)
{snip}No. We don't care about the temp table data if anything happens like a network failure, server restart, or user system restart! We don't have to preserve that data. The only requirement is that, if nothing goes wrong, we need to keep the data for 120 minutes, because the user can download it to Excel at any time within that window.
That doesn't make sense, based on your earlier posts; you were saying that if the global temp table gets dropped, it messes your business process up.
Now you are saying that if something other than the table dropping out of scope happens (the server restarting, for example), it has no effect or doesn't matter to you.
I have to agree that this is contradictory. What is the difference between a server restart and the table simply getting dropped?
April 4, 2013 at 12:18 pm
Lowell (4/4/2013)
rajarshi_ghosh_05 (4/4/2013)
{snip}No. We don't care about the temp table data if anything happens like a network failure, server restart, or user system restart! We don't have to preserve that data. The only requirement is that, if nothing goes wrong, we need to keep the data for 120 minutes, because the user can download it to Excel at any time within that window.
That doesn't make sense, based on your earlier posts; you were saying that if the global temp table gets dropped, it messes your business process up.
Now you are saying that if something other than the table dropping out of scope happens (the server restarting, for example), it has no effect or doesn't matter to you.
Let me clarify. We have a process to bulk-load data from a CSV into a database table. The user logs in to a web application and uploads the data. The web application just passes the entire CSV file to a stored procedure through a table variable. All of the validation, business logic, and internal table updates are done in the SP, and it is a huge SP, separated into different modules. One module is the validation part.
In the validation part we keep all of the error details in a temp table.
After execution finishes, we show the error details for all rows to the user on that web page. The user can also download the entire CSV contents, along with those error details, as an Excel file. Previously the time limit to download the Excel file was set to 20 minutes; now it has been increased to 120 minutes, and that's where the problem starts!
We don't need to keep the error details data, because the user can upload multiple CSVs the same day. Every time he navigates from the error details screen to the upload screen, the data is lost, and every time the user uploads a CSV, new error details (if any) are generated.
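In rough T-SQL, the shape of it is something like this. This is a minimal sketch only; every object name and the sample validation rule are hypothetical, not our real code:

-- Hypothetical table type used to pass the CSV rows from the web app to the SP.
CREATE TYPE dbo.CsvRowType AS TABLE
(
    RowNum  int           NOT NULL,
    RawLine nvarchar(max) NOT NULL
);
GO

CREATE PROCEDURE dbo.usp_UploadCsv
    @CsvRows dbo.CsvRowType READONLY
AS
BEGIN
    SET NOCOUNT ON;

    -- Validation module: error details go into a global temp table so the
    -- web app can read them later; this is the table at issue in this thread.
    -- A fixed name is shown only for illustration; concurrent uploads would
    -- need a per-upload name.
    CREATE TABLE ##ErrorDetails
    (
        RowNum       int            NOT NULL,
        ErrorMessage nvarchar(4000) NOT NULL
    );

    INSERT INTO ##ErrorDetails (RowNum, ErrorMessage)
    SELECT RowNum, N'Row is empty'
    FROM @CsvRows
    WHERE LEN(RawLine) = 0;   -- placeholder for the real validation rules

    -- ... business logic and internal table updates happen here ...

    -- Error details shown to the user on the error details screen.
    SELECT RowNum, ErrorMessage
    FROM ##ErrorDetails
    ORDER BY RowNum;
END;
GO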
April 4, 2013 at 12:20 pm
You're vastly more likely to lose a single connection than have the entire instance go down, which would kill a temp table from that connection as well.
I mean, seriously, if you have entire instances going down enough for it to be a major factor in app design, you've got vastly bigger problems than any one app.
As with any app of this type, you may need a logging/control table that can "follow up" to make sure needed data is available, and clean up data when it is no longer needed. But that is true whether the table is in a user db or tempdb.
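For example, a minimal sketch of such a control table (all table and column names are hypothetical):

-- Hypothetical control/log tables in a permanent database: one row per upload,
-- plus the error rows, so the data survives a lost connection and can be
-- expired on a schedule.
CREATE TABLE dbo.UploadControl
(
    UploadId   int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    UserName   sysname  NOT NULL,
    UploadedAt datetime NOT NULL DEFAULT GETDATE()
);

CREATE TABLE dbo.UploadErrorDetail
(
    UploadId     int            NOT NULL REFERENCES dbo.UploadControl (UploadId),
    RowNum       int            NOT NULL,
    ErrorMessage nvarchar(4000) NOT NULL
);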
SQL DBA, SQL Server MVP (07, 08, 09). "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
April 4, 2013 at 12:23 pm
It sounded like one of those types of processes where you basically just need a temporary "cache" of results/report. Typically those are easy enough for a user to re-generate on their own if something goes wrong.
I still like to have some type of cleanup process. For some of ours, it runs only nightly, to clean up anything left over from that day's use.
SQL DBA, SQL Server MVP (07, 08, 09)
April 4, 2013 at 12:57 pm
ScottPletcher (4/4/2013)
It sounded like one of those types of processes where you basically just need a temporary "cache" of results/report. Typically those are easy enough for a user to re-generate on their own if something goes wrong. I still like to have some type of cleanup process. For some of ours, it runs only nightly, to clean up anything left over from that day's use.
I like the idea of a permanent table in tempdb, but for whatever reason the DBAs refuse to give me permission. We could also hold the entire data set in the web application's cache, but that would slow down application performance. We have spent a lot of time improving performance, and the entire process performs really well: any number of users can upload any number of CSV files. No limit! It was quite challenging to optimize the performance of the entire process, and all works well except for this situation! Anyway, if a permanent table is the only way out, then we need to start the process of creating one.
April 4, 2013 at 1:06 pm
rajarshi_ghosh_05 (4/4/2013)
ScottPletcher (4/4/2013)
It sounded like one of those types of processes where you basically just need a temporary "cache" of results/report. Typically those are easy enough for a user to re-generate on their own if something goes wrong. I still like to have some type of cleanup process. For some of ours, it runs only nightly, to clean up anything left over from that day's use.
I like the idea of a permanent table in tempdb, but for whatever reason the DBAs refuse to give me permission. We could also hold the entire data set in the web application's cache, but that would slow down application performance. We have spent a lot of time improving performance, and the entire process performs really well: any number of users can upload any number of CSV files. No limit! It was quite challenging to optimize the performance of the entire process, and all works well except for this situation! Anyway, if a permanent table is the only way out, then we need to start the process of creating one.
So I guess more training for the "DBAs" is out of the question? 🙂
SQL DBA, SQL Server MVP (07, 08, 09)
April 4, 2013 at 2:45 pm
ScottPletcher (4/4/2013)
You're vastly more likely to lose a single connection than have the entire instance go down, which would kill a temp table from that connection as well. I mean, seriously, if you have entire instances going down enough for it to be a major factor in app design, you've got vastly bigger problems than any one app.
As with any app of this type, you may need a logging/control table that can "follow up" to make sure needed data is available, and clean up data when it is no longer needed. But that is true whether the table is in a user db or tempdb.
Actually, most environments I have worked in have been extremely stable; however, I also know that if you don't plan, develop, and design for those eventualities, one day it is going to bite you in the ***. I'd rather take those possibilities into account up front than have to answer to management about why it wasn't considered when the event occurs and causes a problem.
April 4, 2013 at 3:01 pm
Lynn Pettis (4/4/2013)
ScottPletcher (4/4/2013)
You're vastly more likely to lose a single connection than have the entire instance go down, which would kill a temp table from that connection as well. I mean, seriously, if you have entire instances going down enough for it to be a major factor in app design, you've got vastly bigger problems than any one app.
As with any app of this type, you may need a logging/control table that can "follow up" to make sure needed data is available, and clean up data when it is no longer needed. But that is true whether the table is in a user db or tempdb.
Actually, most environments I have worked in have been extremely stable; however, I also know that if you don't plan, develop, and design for those eventualities, one day it is going to bite you in the ***. I'd rather take those possibilities into account up front than have to answer to management about why it wasn't considered when the event occurs and causes a problem.
If the instance is going to come crashing down, the permanent tables could be damaged as well.
Planning is a good thing, but there's no reason to over-engineer something that's in reality quite simple.
NASA has wasted billions and billions of dollars, and failed, on things that SpaceX has succeeded at while spending vastly less.
SQL DBA, SQL Server MVP (07, 08, 09)
April 4, 2013 at 3:10 pm
ScottPletcher (4/4/2013)
Lynn Pettis (4/4/2013)
ScottPletcher (4/4/2013)
You're vastly more likely to lose a single connection than have the entire instance go down, which would kill a temp table from that connection as well. I mean, seriously, if you have entire instances going down enough for it to be a major factor in app design, you've got vastly bigger problems than any one app.
As with any app of this type, you may need a logging/control table that can "follow up" to make sure needed data is available, and clean up data when it is no longer needed. But that is true whether the table is in a user db or tempdb.
Actually, most environments I have worked in have been extremely stable; however, I also know that if you don't plan, develop, and design for those eventualities, one day it is going to bite you in the ***. I'd rather take those possibilities into account up front than have to answer to management about why it wasn't considered when the event occurs and causes a problem.
If the instance is going to come crashing down, the permanent tables could be damaged as well.
Planning is a good thing, but there's no reason to over-engineer something that's in reality quite simple.
NASA has wasted billions and billions of dollars, and failed, on things that SpaceX has succeeded at while spending vastly less.
I'm sorry if you think using a permanent table in the application's database to support an application process is over-engineering. Trying to figure out how to keep a global temporary table up and available for a 120-minute period is, to me, the real wasted effort.
Obviously you are right and everyone else is wrong if they don't agree with you and your personal philosophy.
April 4, 2013 at 3:18 pm
Lynn Pettis (4/4/2013)
ScottPletcher (4/4/2013)
Lynn Pettis (4/4/2013)
ScottPletcher (4/4/2013)
You're vastly more likely to lose a single connection than have the entire instance go down, which would kill a temp table from that connection as well. I mean, seriously, if you have entire instances going down enough for it to be a major factor in app design, you've got vastly bigger problems than any one app.
As with any app of this type, you may need a logging/control table that can "follow up" to make sure needed data is available, and clean up data when it is no longer needed. But that is true whether the table is in a user db or tempdb.
Actually, most environments I have worked in have been extremely stable; however, I also know that if you don't plan, develop, and design for those eventualities, one day it is going to bite you in the ***. I'd rather take those possibilities into account up front than have to answer to management about why it wasn't considered when the event occurs and causes a problem.
If the instance is going to come crashing down, the permanent tables could be damaged as well.
Planning is a good thing, but there's no reason to over-engineer something that's in reality quite simple.
NASA has wasted billions and billions of dollars, and failed, on things that SpaceX has succeeded at while spending vastly less.
I'm sorry if you think using a permanent table in the application's database to support an application process is over-engineering. Trying to figure out how to keep a global temporary table up and available for a 120-minute period is, to me, the real wasted effort.
Obviously you are right and everyone else is wrong if they don't agree with you and your personal philosophy.
And ditto for you.
It depends on the process. I think for this specific process, it is; and the OP has repeatedly stated that it would be. I'm willing to accept his/her judgment on it.
SQL DBA, SQL Server MVP (07, 08, 09)
April 4, 2013 at 4:11 pm
Not saying I am right or wrong. I wanted a straight answer to a straight question: is it a problem to lose the data if there is a restart of the instance (hardware failure, cluster/mirror failover, planned restart, etc.)? The answer: no, but it is a problem if the table is simply dropped by SQL Server because no processes are accessing it. That looks like a contradiction to me; you can lose the data in one case but not in the other. If a customer/client/user/whatever is expecting to be able to retrieve the error messages and original file during a 120-minute time frame and can't, they aren't going to care why they can't; it is simply going to be a problem that they will want to see corrected.
I was not being pedantic when I was trying to take into account the possibility of a failure of the system.
It would be easier to work with a permanent table in the same database and implement a cleanup process to delete data more than 120 minutes old than to try to develop a means of preserving a global temporary table over the same period of time.
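For instance, something along these lines could run from a SQL Agent job every few minutes (using the hypothetical control tables sketched earlier in the thread):

-- Hypothetical cleanup: delete anything older than the 120-minute download window.
DELETE d
FROM dbo.UploadErrorDetail AS d
INNER JOIN dbo.UploadControl AS c
    ON c.UploadId = d.UploadId
WHERE c.UploadedAt < DATEADD(MINUTE, -120, GETDATE());

DELETE FROM dbo.UploadControl
WHERE UploadedAt < DATEADD(MINUTE, -120, GETDATE());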
And I'm sorry if I take exception to some phraseology that may be taken more literally by others less familiar with the English language and the various colloquialisms used by speakers from different locales.
I try to say what I mean and mean what I say. Software doesn't "know" anything, it simply does what you tell it to do. It may follow algorithms to determine the best way to accomplish a task, but it still doesn't "know" things.
April 4, 2013 at 4:19 pm
Lynn Pettis (4/4/2013)
Not saying I am right or wrong. I wanted a straight answer to a straight question: is it a problem to lose the data if there is a restart of the instance (hardware failure, cluster/mirror failover, planned restart, etc.)? The answer: no, but it is a problem if the table is simply dropped by SQL Server because no processes are accessing it. That looks like a contradiction to me; you can lose the data in one case but not in the other. If a customer/client/user/whatever is expecting to be able to retrieve the error messages and original file during a 120-minute time frame and can't, they aren't going to care why they can't; it is simply going to be a problem that they will want to see corrected. I was not being pedantic when I was trying to take into account the possibility of a failure of the system.
It would be easier to work with a permanent table in the same database and implement a cleanup process to delete data more than 120 minutes old than to try to develop a means of preserving a global temporary table over the same period of time.
And I'm sorry if I take exception to some phraseology that may be taken more literally by others less familiar with the English language and the various colloquialisms used by speakers from different locales.
I try to say what I mean and mean what I say. Software doesn't "know" anything, it simply does what you tell it to do. It may follow algorithms to determine the best way to accomplish a task, but it still doesn't "know" things.
Sorry.
Of course software doesn't sentiently "know" anything -- we're all computer people, I assumed we all knew that even w/o explicitly stating it.
My point was that some tasks really don't need to be engineered to the point that they're fail-proof in the event of an extraordinarily unlikely event. Rather than spending dozens or hundreds of developer hours on that, sometimes the best answer is "We had a major, unexpected event. You may need to re-run the last thing you were doing."
SQL DBA, SQL Server MVP (07, 08, 09)
April 4, 2013 at 4:30 pm
ScottPletcher (4/4/2013)
Lynn Pettis (4/4/2013)
Not saying I am right or wrong. I wanted a straight answer to a straight question: is it a problem to lose the data if there is a restart of the instance (hardware failure, cluster/mirror failover, planned restart, etc.)? The answer: no, but it is a problem if the table is simply dropped by SQL Server because no processes are accessing it. That looks like a contradiction to me; you can lose the data in one case but not in the other. If a customer/client/user/whatever is expecting to be able to retrieve the error messages and original file during a 120-minute time frame and can't, they aren't going to care why they can't; it is simply going to be a problem that they will want to see corrected. I was not being pedantic when I was trying to take into account the possibility of a failure of the system.
It would be easier to work with a permanent table in the same database and implement a cleanup process to delete data more than 120 minutes old than to try to develop a means of preserving a global temporary table over the same period of time.
And I'm sorry if I take exception to some phraseology that may be taken more literally by others less familiar with the English language and the various colloquialisms used by speakers from different locales.
I try to say what I mean and mean what I say. Software doesn't "know" anything, it simply does what you tell it to do. It may follow algorithms to determine the best way to accomplish a task, but it still doesn't "know" things.
Sorry.
Of course software doesn't sentiently "know" anything -- we're all computer people, I assumed we all knew that even w/o explicitly stating it.
My point was that some tasks really don't need to be engineered to the point that they're fail-proof in the event of an extraordinarily unlikely event. Rather than spending dozens or hundreds of developer hours on that, sometimes the best answer is "We had a major, unexpected event. You may need to re-run the last thing you were doing."
And your last point is a design decision. But it also means that it was one of several (hopefully) alternatives presented during the design phase as part of risk mitigation, and that the users and/or stakeholders had a say in the decision. The same answer could also apply here: we can't develop a consistent process that will allow us to preserve a global temporary table over the required time span. Is this really a necessary capability? Let the stakeholders/users make that decision, and then present other alternatives if it is.
April 4, 2013 at 4:36 pm
Lynn Pettis (4/4/2013)
ScottPletcher (4/4/2013)
Lynn Pettis (4/4/2013)
Not saying I am right or wrong. I wanted a straight answer to a straight question: is it a problem to lose the data if there is a restart of the instance (hardware failure, cluster/mirror failover, planned restart, etc.)? The answer: no, but it is a problem if the table is simply dropped by SQL Server because no processes are accessing it. That looks like a contradiction to me; you can lose the data in one case but not in the other. If a customer/client/user/whatever is expecting to be able to retrieve the error messages and original file during a 120-minute time frame and can't, they aren't going to care why they can't; it is simply going to be a problem that they will want to see corrected. I was not being pedantic when I was trying to take into account the possibility of a failure of the system.
It would be easier to work with a permanent table in the same database and implement a cleanup process to delete data more than 120 minutes old than to try to develop a means of preserving a global temporary table over the same period of time.
And I'm sorry if I take exception to some phraseology that may be taken more literally by others less familiar with the English language and the various colloquialisms used by speakers from different locales.
I try to say what I mean and mean what I say. Software doesn't "know" anything, it simply does what you tell it to do. It may follow algorithms to determine the best way to accomplish a task, but it still doesn't "know" things.
Sorry.
Of course software doesn't sentiently "know" anything -- we're all computer people, I assumed we all knew that even w/o explicitly stating it.
My point was that some tasks really don't need to be engineered to the point that they're fail-proof in the event of an extraordinarily unlikely event. Rather than spending dozens or hundreds of developer hours on that, sometimes the best answer is "We had a major, unexpected event. You may need to re-run the last thing you were doing."
And your last point is a design decision. But it also means that it was one of several (hopefully) alternatives presented during the design phase as part of risk mitigation, and that the users and/or stakeholders had a say in the decision. The same answer could also apply here: we can't develop a consistent process that will allow us to preserve a global temporary table over the required time span. Is this really a necessary capability? Let the stakeholders/users make that decision, and then present other alternatives if it is.
Sounded to me like the OP had already decided that.
It might not be worth the overhead cost of the meeting(s); cumulatively, just those could cost dozens of hours of time. Given the type of task it is, and the extraordinarily rare chance that the event even occurs, I don't believe it's worth investing that much time in it. The opportunity cost is just way too high for me.
SQL DBA, SQL Server MVP (07, 08, 09)