July 9, 2007 at 10:14 am
I have been having BCP fail on existing jobs that are scheduled via Maestro.
"SQLState = S1000, NativeError = 0
Error = [Microsoft][ODBC SQL Server Driver]Unable to open BCP host data-file"
is the error message. The FTP log shows a normal entry and exit for the sender of the incoming file. No other applications use the file besides the batch update.
System backups run hours before or after the failure. In short, no activity other than the send and the call to BCP is taking place. These jobs run normally 90% of the time and then fail, with no pattern I can see. For example, one job runs 4 times a day:
2:00 am, 11:30 am, 3:30 pm, and 5:30 pm. Any one of them can fail, and the rest are fine.
If I request a re-run after the failure...it works!
Anyone seen this before?
July 9, 2007 at 4:36 pm
just verify that FTP is *done* by the time BCP starts!
* Noel
July 10, 2007 at 4:19 am
I'd guess the same as Noel, that the ftp wasn't finished. You might be able to test for an open file, but I might just schedule the job to retry on failure.
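One hedged way to "test for an open file" without relying on exclusive locks is to wait until the file's size stops changing before handing it to BCP. This is only a sketch; the function name, intervals, and timeout below are illustrative, not part of anyone's actual job:

```python
import os
import time

def wait_for_stable_file(path, checks=3, interval=5, timeout=300):
    """Return True once the file's size has stayed the same for
    `checks` consecutive polls -- a rough sign the FTP transfer is
    finished. Returns False if the timeout expires first."""
    deadline = time.time() + timeout
    last_size = -1
    stable = 0
    while time.time() < deadline:
        size = os.path.getsize(path)
        if size == last_size:
            stable += 1
            if stable >= checks:
                return True
        else:
            stable = 0
            last_size = size
        time.sleep(interval)
    return False
```

A scheduler step could call this before the BCP step and fail (or retry) if it returns False.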
July 10, 2007 at 4:47 am
Other guesses might be that someone opened the file to see if the FTP was done or a system defrag was running.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 10, 2007 at 9:21 am
The schedule has two steps:
1) Check that a trigger file was sent. This is nothing more than a trigger.txt containing the comment "this is a trigger file". It is sent after the data text file is sent.
2) BCP begins once the trigger file has been found.
I've verified from the sender's log that the file transmission logged in and logged out correctly. My server log shows the same result: the user comes in, drops off the files, and logs out.
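The two-step schedule described above could be sketched roughly as follows. All paths, the database, and the table name are placeholders, not the poster's actual setup, and the BCP switches shown are just one common combination (trusted connection, character mode):

```python
import os
import subprocess

TRIGGER = r"C:\inbound\trigger.txt"   # hypothetical inbound paths
DATAFILE = r"C:\inbound\data.txt"

def run_load():
    # Step 1: only proceed once the trigger file has arrived.
    if not os.path.exists(TRIGGER):
        return False
    # Step 2: hand the data file to BCP
    # (database, table, and switches are placeholders).
    result = subprocess.run(
        ["bcp", "MyDb.dbo.MyTable", "in", DATAFILE, "-T", "-c"],
        capture_output=True, text=True,
    )
    return result.returncode == 0
```

Note that step 1 only proves the trigger exists; it says nothing about whether the data file is complete, which is exactly the race being discussed in this thread.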
July 10, 2007 at 9:29 am
My server admin has verified that no system maintenance jobs are running during this time frame. He has also checked that no one is logged on the box other than the scheduler agent, "Maestro".
I tried a workaround by copying the data file to a different directory first. This at least gives me the cushion that if the copy fails, I haven't truncated a table that I then cannot re-load.
So far this approach has failed twice with incomplete copies of the file.
July 10, 2007 at 8:34 pm
Could it be that the sender of the files is sending the trigger file before the data file is received in total?
--Jeff Moden
Change is inevitable... Change for the better is not.
July 11, 2007 at 8:46 am
I did verify that with the sender and confirmed via the FTP log that the trigger arrives first and then the data file.
The log shows: the user comes in, creates the trigger, logs off, logs back in, creates the data file, and logs off.
July 11, 2007 at 5:34 pm
Can you get them to reverse the process? Seems to me that the data file should be there completely before the trigger file that says it is.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 12, 2007 at 8:29 am
You definitely want that trigger file to be sent after the data file has finished transmission. Its job isn't to announce that the data file exists; rather, it should indicate that the whole file has been transmitted and is now ready to process.
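On the sender's side, the correct ordering can be sketched as below: write the data file in full (via a temp name and an atomic rename, so a reader never sees a partial file), and only then create the trigger. This is a local-filesystem illustration of the principle, not the poster's FTP setup; the function and file names are made up:

```python
import os

def publish(data_bytes, data_path, trigger_path):
    """Write the data file completely, then create the trigger file.
    Because the trigger is written last, its presence guarantees
    the data file is whole."""
    tmp = data_path + ".part"
    with open(tmp, "wb") as f:
        f.write(data_bytes)
    os.replace(tmp, data_path)  # atomic rename: no partial file is ever visible
    with open(trigger_path, "w") as f:
        f.write("this is a trigger file")
```

The consumer then treats the trigger file purely as a "transmission complete" signal.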
July 12, 2007 at 8:41 am
Oops, I wrote that backwards. The data file arrives first, then the trigger.