August 3, 2010 at 1:30 pm
Hi,
My MSSQL 2008 server is on another machine and I have a batch script that needs to do stuff on both machines (back up the db, update/alter tables, migrate data, copy some files around...). At some point, it does this:
"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqlcmd.exe" -i D:\path\updateDB.sql -S <db server ip> -d <database> -b -X -U script_deploy -P <password>
When I run that line locally on the DB server, it finishes in about 1 minute. When I run it from the remote server, it takes about 14-15 minutes! Any ideas on what could make it run so slowly?
The 'script_deploy' user is 'db_owner' (I know it's probably too permissive, but I'm trying to get it working here. 😉 )
Last thing that may or may not be useful: I noticed that the output from the script when run through sqlcmd is much shorter than the output I get when running it from Management Studio. It's missing many, many "(1 rows affected)" lines, but I can confirm that the script ran OK.
Any help is appreciated,
Thanks,
Dominic.
August 3, 2010 at 5:38 pm
Hi Dominic,
It's possible that the (x rows affected) message is the problem. According to Books Online for sqlcmd:
When multiple results are returned, sqlcmd prints a blank line between each result set in a batch. In addition, the "<x> rows affected" message does not appear when it does not apply to the statement executed.
sqlcmd may still be receiving these messages from the server, it's just not displaying them, and each one is extra network traffic. It's feasible that all of that chatter could make the remote run 15 times slower than a local one. I recommend adding SET NOCOUNT ON at the top of your script and seeing if that helps.
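For example, here's a minimal sketch of what the top of updateDB.sql could look like (the UPDATE statement and the dbo.SomeTable/SomeColumn names are just made-up placeholders, not anything from your actual script):

-- Suppress the "(x rows affected)" messages for the rest of this session
SET NOCOUNT ON;
GO

-- ...the rest of updateDB.sql follows; a typical step might look like this
UPDATE dbo.SomeTable            -- placeholder table name
SET SomeColumn = 'new value'    -- placeholder column name
WHERE SomeColumn = 'old value';
GO

SET NOCOUNT ON is a session-level setting, so one statement at the top covers every batch that follows on the same connection.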
August 3, 2010 at 7:08 pm
Whoa! Indeed, that was it! Thanks.
I never would have thought that a 100 Mbps network link wouldn't be fast enough to display those "(1 row affected)" results, even if there were about 1000 of them! Not by a factor of 15x anyway! 😉
Dominic.