May 27, 2005 at 10:17 am
Hi.
I've got a main stored procedure that downloads files via FTP, decompresses them, drops the indexes on a table, and then performs the insert. I start a transaction right after I drop the indexes and right before the insert. If there are any indexes on the table, they get dropped via a subcall to another sproc that uses dynamic SQL and its own transaction; the drops succeed, but then the main stored procedure just stops before it ever reaches the main transaction containing the insert. There's no explanation in the SQL Job history other than, "OLE DB provider 'MSDASQL' reported an error. [SQLSTATE 42000] (Error 7399) The step failed."
This doesn't make any sense to me at all. The only thing I could think of is to put in a WAITFOR DELAY '00:00:01' before the main transaction thinking SQL Server was still "thinking".
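Simplified, the flow looks roughly like this (just a sketch; all of the object and variable names here are placeholders, not the real ones):

-- rough sketch of the main procedure's flow (names are placeholders)
DECLARE @err int
SET @err = 0

-- drop all the table's indexes via the helper sproc, which passes back any @@ERROR it hits
EXEC dbo.usp_DropTableIndexes @TableName = 'MyStagingTable', @Error = @err OUTPUT
IF @err <> 0 GOTO ErrorHandler

-- the one-second pause before the main transaction (the WAITFOR DELAY mentioned above)
WAITFOR DELAY '00:00:01'

BEGIN TRANSACTION
    -- the big INSERT ... SELECT * FROM OPENROWSET(...) goes here
    SET @err = @@ERROR
    IF @err <> 0 GOTO ErrorHandler
COMMIT TRANSACTION
RETURN

ErrorHandler:
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
    -- notification email goes out from here
    RETURN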
Any ideas would be greatly appreciated.
May 27, 2005 at 10:40 am
Can you break up your steps and return @@ERROR?
BOL references Access databases for this error. It does mention you need to be the Administrator. I am assuming you already have that role.
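Something along these lines, checking @@ERROR after every statement so you can see exactly which step fails (just a sketch with made-up names):

-- sketch: capture @@ERROR after each step so the job history shows which one failed
DECLARE @err int

DROP INDEX MyStagingTable.IX_MyStagingTable_Col1   -- placeholder index name
SET @err = @@ERROR
IF @err <> 0
BEGIN
    RAISERROR('DROP INDEX failed with error %d', 16, 1, @err)
    RETURN
END

-- repeat the same @@ERROR check after the OPENROWSET insert, and so on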
I wasn't born stupid - I had to study.
May 27, 2005 at 10:45 am
Yes, I'm administrator and so is the account that's executing the job.
For every executable statement, dynamic or not, I'm capturing @@ERROR into a variable. The drop index sproc sends this back to the main sproc if a statement there fails, but they're succeeding: the indexes are all dropped.
The problem comes in when the main procedure starts the insert transaction. Then the SQL Job just stops. I've added some debug insert-into-a-log-table statements in and around the main transaction to try to pinpoint exactly where the failure is occurring, but I'm not sure what that's going to tell me.
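The debug statements are nothing fancy, just inserts into a scratch table, roughly like this (the table and message text are placeholders):

-- placeholder debug log table and the kind of breadcrumbs dropped around the transaction
CREATE TABLE dbo.DebugLog (
    LogID    int IDENTITY(1,1) PRIMARY KEY,
    LoggedAt datetime NOT NULL DEFAULT (GETDATE()),
    Message  varchar(255) NOT NULL
)
GO

INSERT INTO dbo.DebugLog (Message) VALUES ('after index drop, before WAITFOR DELAY')
INSERT INTO dbo.DebugLog (Message) VALUES ('after WAITFOR DELAY, before BEGIN TRANSACTION')
INSERT INTO dbo.DebugLog (Message) VALUES ('inside transaction, before INSERT')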
Also, whatever's making this stop isn't throwing an error because the code never goes to my ErrorHandler which sends an email out.
This is very strange and more than a little disconcerting; the code looks fine and runs fine most of the time....
May 27, 2005 at 11:08 am
Argh. It stopped again right after the one second delay but before beginning the insert transaction. This doesn't make any sense!
May 27, 2005 at 11:16 am
could it be the file system operations of the FTP and UNZIP steps are not truly completed when it goes to the next step of the T-SQL? The OS may be returning a completion code, but the write-behind operations of the file system might not actually be finished... so I'm thinking that when you try to BCP or otherwise open the file, the failure occurs there? Is the file large in size?
just an idea;
Lowell
May 27, 2005 at 11:20 am
That's not a bad idea, but it doesn't feel right. These files are indeed quite large, but after they're downloaded and decompressed, SQL Server drops the indexes on the table (which takes several minutes) and then inserts the data via:
"INSERT INTO... SELECT * FROM OPENROWSET(''MSDASQL'', ''Driver={Microsoft Text Driver (*.txt; *.csv)};DefaultDir=' + @gFTPDestinationPath + ';'', ''select * from ' + @vchFilename + ''') z".
This is quite intermittent, but maybe you're on to something. Is there a way to check, before attempting the insert, that the file is ready and that SQL Server can actually open it? And why isn't SQL Server throwing an error?
May 27, 2005 at 11:39 am
have a look at this article:
http://www.databasejournal.com/features/mssql/article.php/3492046
Hopefully it should help
May 27, 2005 at 12:01 pm
Thank you. Although I am checking for file existence after both download and decompression, I'm not doing it the way they do in the article's function.
I'm still skeptical that this is the issue -- that the OS still has ahold of the file -- but if I see this again, I'll try implementing that function.
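If I do end up adding a stronger check, something like this ought to do it from T-SQL (a sketch using the undocumented xp_fileexist extended procedure; the path value is a placeholder):

-- sketch: verify the decompressed file really exists before running the OPENROWSET insert
DECLARE @vchFilePath varchar(255)
DECLARE @iExists int

SET @vchFilePath = 'D:\FTPFiles\SomeDownloadedFile.txt'   -- placeholder

EXEC master.dbo.xp_fileexist @vchFilePath, @iExists OUTPUT

IF @iExists <> 1
BEGIN
    RAISERROR('File %s was not found', 16, 1, @vchFilePath)
    RETURN
END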
Thanks again.