When peers maintain a job while RDP'd in, they corrupt the job

  • Hi, one of my peers prefers to maintain one specific SQL Agent job after RDP'ing into that server.  I think it's probably because he usually wants to restart the job knowing it will continue running even if he has to shut down his own PC.  I'm not sure, but I think that if you restart from the client on your own PC, the thread is broken and the job stops.

    Unfortunately, whenever he does that, the server that particular step is to run on (in most cases) turns to localhost.   This is a problem for the rest of us, who tend to change and rerun the job from the SSMS client on our own PCs.  When we edit that step, localhost of course isn't known, and all of our parameters and configurations reset to what appears to be some default, perhaps from the days when the job was tested on a different server.   I end up resetting them manually from screenshots I keep, or in some cases from the SSIS dashboard that displays them from the last time that step ran, or from backup definitions of the job.

    I would hate to argue with him about this.  Does anybody know of a solution to this problem?

  • I have absolutely no idea what you are talking about. Running jobs in SQL Server Agent does not require an RDP session. Just SSMS and the corresponding permissions. That's it.

  • stan wrote:

    I would hate to argue with him about this.  Does anybody know of a solution to this problem?

    SQL Agent runs on the server on which it is hosted, no matter where it is started from. There must be something else going on here.

    Script the job out fully, when it is in its 'correct' state – then you can fix it up at any time at the click of a mouse.
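    If you want something a bit more targeted than a full scripted job, a query along these lines (a rough sketch - 'MyEtlJob' is a placeholder name) will snapshot the current step definitions from msdb, including the command text that carries the SSIS server setting, so you have something to compare against or restore from later:

        -- Sketch: snapshot the step definitions of a job so they can be
        -- compared with, or restored over, a later (broken) version.
        SELECT  j.name          AS job_name,
                s.step_id,
                s.step_name,
                s.subsystem,
                s.command,       -- for SSIS steps this holds the /SERVER value
                s.on_success_action,
                s.on_fail_action
        FROM    msdb.dbo.sysjobs     AS j
        JOIN    msdb.dbo.sysjobsteps AS s
                ON s.job_id = j.job_id
        WHERE   j.name = N'MyEtlJob'          -- placeholder job name
        ORDER BY s.step_id;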

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

  • Thanks, folks, but has anybody seen this unwanted byproduct of changing a step (e.g., a different step to go to next, etc.) while right on the server?  I believe what is happening is that "server to run on" is automatically changing to localhost.   Unfortunately, when the job is left in that state and someone uses SSMS the usual way on their PC, just hitting edit on that step corrupts the parameters and configurations.

    Phil, I already have the approach you gave as a backup, but I'm wondering if anyone out there knows of a way to stop localhost from replacing the name of the server to run on.  The reason I ask is that, as you know, backups get stale.
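    In the meantime, as a stopgap, I'm thinking something along these lines (untested sketch - the LIKE pattern would need adjusting to our command text) could at least flag any SSIS steps whose stored command has flipped to localhost before anyone opens them in SSMS:

        -- Sketch: flag SSIS job steps whose command text now targets localhost.
        SELECT  j.name      AS job_name,
                s.step_id,
                s.step_name,
                s.command
        FROM    msdb.dbo.sysjobs     AS j
        JOIN    msdb.dbo.sysjobsteps AS s
                ON s.job_id = j.job_id
        WHERE   s.subsystem = N'SSIS'
          AND   s.command LIKE N'%/SERVER%localhost%';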

  • Are you referring to agent jobs running SSIS packages - and only those types of agent jobs?

    If so - why are you manually modifying and running agent jobs in the first place?  That is not the purpose of agent jobs and is the root cause of your issues.

    If you are not deploying your SSIS packages to the catalog - that would be your next issue.  Deploying to the catalog would allow for some 'limited' execution directly from the catalog without having to create agent jobs.  Note: if the packages use Windows authentication to access other systems you may run into a double-hop (Kerberos) issue - that is why it would be limited.
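    For what it's worth, running from the catalog does not even need the SSMS right-click dialog - roughly like this (folder, project, package, and parameter names below are placeholders; the SSISDB procedures are the standard ones):

        -- Sketch: start a catalog-deployed package directly, bypassing the agent job.
        DECLARE @execution_id BIGINT;

        EXEC SSISDB.catalog.create_execution
                @folder_name  = N'MyFolder',           -- placeholder
                @project_name = N'MyProject',          -- placeholder
                @package_name = N'Orchestrator.dtsx',  -- placeholder
                @execution_id = @execution_id OUTPUT;

        -- Override a package-level parameter for this run only (object_type 30 = package).
        EXEC SSISDB.catalog.set_execution_parameter_value
                @execution_id    = @execution_id,
                @object_type     = 30,
                @parameter_name  = N'LoadType',        -- placeholder parameter
                @parameter_value = N'Incremental';

        EXEC SSISDB.catalog.start_execution @execution_id = @execution_id;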

    Of course, if these are all SSIS packages - then you always have the option of running them directly in VS without having to deploy them anywhere.  However - SSIS packages are supposed to be set up and configured as batch processes, not interactive processes, so manually running them should not be the norm.  They should be set up in a scheduler with appropriate parameters and scheduled to run on an interval - and not modified for each 'run'.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • They aren't only SSIS, Jeffrey, but your points are still well taken.   All of our packages are versioned, in the catalog, and at the moment set with some very static parameters and configurations, depending on the agent step, by the SQL Agent job.   If your point is that they could be set somewhere else or differently, like in abstract environments, sure - but Agent would still have to say something about the environment, and whatever it is saying will get corrupted by this behavior.

    Basically, the same package is run 13 times against 13 ERPs, and the parameters and configurations vary by business unit, ERP connection info, full vs. incremental load, etc.

    Generally, when this person is in there, he is skipping a step in the advanced settings for that step, or switching a flag between incremental and full, because a data problem occurred in the ERP or even in the ETL.

    The package is pretty sophisticated in that it can even be driven by parameters that tell it which sub-packages can run concurrently and/or in what order.  Even if we could abstract this a bit further into environments, we aren't ready, and we have to deal with a problem that, in my opinion, will occur often enough to disrupt things.

    So I have to ask again: is anybody familiar with the behavior where SQL corrupts existing parameters and configurations because of that localhost setting that overlays the "server to run on" name after someone edited a step on the server itself?  We are aware of the different ways a package can be run.  What I hadn't thought of before is whether the "server to run on" name itself can be parameterized.  I'll look at that later.
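    Pending that, the crude fix I'm considering (just a sketch - job, step, and server names below are made up, and I'd inspect the command text first) is to patch the stored command back with sp_update_jobstep instead of re-opening the step in the designer:

        -- Sketch: put the real server name back into an SSIS step command
        -- without opening the step in the SSMS designer.
        DECLARE @cmd NVARCHAR(MAX);

        SELECT  @cmd = s.command
        FROM    msdb.dbo.sysjobs     AS j
        JOIN    msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
        WHERE   j.name = N'MyEtlJob'      -- placeholder job name
          AND   s.step_id = 3;            -- placeholder step

        SET @cmd = REPLACE(@cmd, N'localhost', N'PRODSQL01');   -- placeholder server name

        EXEC msdb.dbo.sp_update_jobstep
                @job_name = N'MyEtlJob',
                @step_id  = 3,
                @command  = @cmd;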

     

  • It's not that it corrupts - but if, when running a package, the user unsets a required parameter, it may default to whatever is "hardcoded" in the package - so while you seem to think it's corrupted, it may just be user error.

  • Maybe a screenshot would make things clearer.

  • Without actually seeing a specific workflow or a more specific example - I would have to say that making any changes to the agent job is the problem.  If a step fails then you fix the issue and restart the job from that step with no changes.  If the step failed and the job is configured to bypass that error and run the other steps - then you fix the problem and either manually run the package or create a temporary agent job just to run that one package.
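    The restart itself can be done from any SSMS client without touching the step definitions - roughly like this (job and step names are placeholders):

        -- Sketch: restart an existing job at a specific step, with no edits to the job.
        EXEC msdb.dbo.sp_start_job
                @job_name  = N'MyEtlJob',      -- placeholder
                @step_name = N'Load ERP 07';   -- placeholder: the step to resume from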

    Modifying the agent job and settings should not be the standard way of addressing processing issues with those agent jobs.

    With that said - the part of this that actually bothers me is the single-package approach.  The real power of the catalog and project deployments is the fact that we can now build individual packages to perform a task.  We can then create a master package to run each sub-package - or create an agent job to run each individual package.

    For projects that need to run against different servers - I set up the connection as a project-level connection.  Then, in the agent job, I can override the 'default' server name with the server name it should run against.  Instead of a single package with some kind of loop over a list of servers - I then have individual agent jobs per server.

    Then - if the values of the parameters change per server, I can either pull that information from a configuration database or hard-code the values in environment variables.
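    As a rough illustration of the environment route (folder, project, and variable names are placeholders; the procedures are the standard SSISDB catalog ones):

        -- Sketch: hold per-ERP values in a catalog environment instead of editing job steps.
        EXEC SSISDB.catalog.create_environment
                @folder_name      = N'MyFolder',     -- placeholder
                @environment_name = N'ERP07';

        EXEC SSISDB.catalog.create_environment_variable
                @folder_name      = N'MyFolder',
                @environment_name = N'ERP07',
                @variable_name    = N'LoadType',
                @data_type        = N'String',
                @sensitive        = 0,
                @value            = N'Incremental';

        -- Tie the project to the environment so agent job steps can pick it by reference.
        DECLARE @reference_id BIGINT;
        EXEC SSISDB.catalog.create_environment_reference
                @folder_name      = N'MyFolder',
                @environment_name = N'ERP07',
                @project_name     = N'MyProject',
                @reference_type   = 'R',             -- relative (same folder) reference
                @reference_id     = @reference_id OUTPUT;

        -- Map a project parameter to the environment variable ('R' = referenced value).
        EXEC SSISDB.catalog.set_object_parameter_value
                @object_type     = 20,               -- 20 = project level
                @folder_name     = N'MyFolder',
                @project_name    = N'MyProject',
                @parameter_name  = N'LoadType',
                @parameter_value = N'LoadType',      -- name of the environment variable
                @value_type      = 'R';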

    Since I don't know how you have things set up - I come back to this idea of making changes to the agent jobs.  That is the root of the problem and should only be done when it has been identified that a wrong parameter was set up in the first place.

    Jeffrey Williams

  • Thanks frederico, deubel, Jeffrey.

    1.  Frederico, I've changed full vs. incremental and sometimes the go-to step when we need to skip.  No problem when using the client.   I believe that when my peer changes things on the server itself, and later I go to change something in the client, localhost shows up and corrupts everything because Agent and the catalog can't do a handshake.  It's been my experience with Agent in general that when you change the server to run on, other things get corrupted or their values simply disappear, as if SQL is trying to do you a favor.
    2.  deubel, I'll provide a screenshot when I can.  In the meantime I've asked my peer to stop doing that.
    3.  Jeff, your input is appreciated.  I'll show screenshots when I can.   The job is long, and dealing with multiple ERPs is a challenge.   Unfortunately, fixing some problems often rests on the shoulders of the job - specifically, which steps are or aren't to run and/or what parameter values (especially full vs. incremental) are to be passed from the job.   We are aware of and use packages and sub-packages, so I don't understand your point there.  One of the greatest strengths of SSIS is reusability based on different configuration values.  We have one orchestrator package that calls about 10 sub-packages whose responsibilities (e.g., recording customers, sales, etc.) are very specific.   That orchestrator runs once, in different steps, for each ERP.  We don't have different servers that need to run these packages, so I'm unclear about your point there too.  But we are aware of the flexibility in Agent and SSIS of enlisting different servers for different reasons.   For instance, the source ERP server for location A may be, and probably is, different (server, technology like Oracle vs. SQL Server, etc.) from the source for location B.  The root problem may be what you say, but this post and our immediate problem is someone changing something while RDP'd in and impacting the folks who use the client.  We didn't write this thing, but we are saddled with making the most of it.
