Execute As not reading system view

  • FWIW - Filestream allows for direct access to the files through a file share...so not sure why that wouldn't work.  But - it would be a significant amount of work.

    You stated that compiling the procedure normally - works fine.  But later - it seems it doesn't work?  So I am confused...not sure what isn't working in that scenario that now requires trying to run as a different user using EXECUTE AS.

    When you enabled xp_cmdshell - you utilized a proxy account for that certificate.  When xp_cmdshell is executed - it will be executed in the context of that proxy account.  If that proxy account does not have access to the directories at the OS level then it is going to fail or return nothing.

    So you seem to have 2 problems...if you run without using EXECUTE AS - something doesn't work (I am thinking it is access to the directories), but you are able to query sys.master_files to get the file locations.  When you run using EXECUTE AS - you cannot query sys.master_files - but you can see the directories.

    If that is the case...then a simple solution is to put the xp_cmdshell portion into a separate procedure and use EXECUTE AS on that procedure.
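That split might look something like this minimal sketch (procedure and parameter names are placeholders, not from the thread, and it assumes the usual EXECUTE AS prerequisites such as a TRUSTWORTHY database or module signing are in place):

```sql
-- Hypothetical wrapper: only the xp_cmdshell call runs in a different context.
-- No input validation shown - a real version should sanitize @path.
CREATE PROCEDURE dbo.usp_ListAttachmentFiles
    @path nvarchar(260)
WITH EXECUTE AS OWNER   -- or a specific user that has the needed rights
AS
BEGIN
    DECLARE @cmd nvarchar(4000) = N'dir /b "' + @path + N'"';
    EXEC master..xp_cmdshell @cmd;
END;
GO
```

The outer procedure keeps running as the caller, so it can still query sys.master_files for the file locations, and only the OS-level work is delegated to the wrapper.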

    Note: I would not store image files on the same drive as my SQL data/log/tempdb files.  That can easily lead to performance issues because someone is trying to copy hundreds of files across the network.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeffrey Williams wrote:

    So you seem to have 2 problems...

    Not really - I have  ONE problem: I don't know what I'm doing.

     

    You stated that compiling the procedure normally - works fine.  But later - it seems it doesn't work?  So I am confused...not sure what isn't working in that scenario that now requires trying to run as a different user using EXECUTE AS.

    It USED TO work, in my old system, but I don't really know which of all the things I tried were actually functional and necessary to making it work. It works in my new system ONLY without Execute As. As soon as I put that in, no matter who the AS is (owner, dbo, any named login), it gets the message "Get-ChildItem : Cannot find drive. A drive with the name 'D' does not exist." It works when -I- run it, from my admin-privileged account. It does not work when run from any other account, even after I have split the procedures into two, as you suggested earlier - isolating the one that executes the file lookup. That one works for me, when run without Execute As. When I add Execute As, or call it from another account than mine, it does not.

     

    When you enabled xp_cmdshell - you utilized a proxy account for that certificate.

    How did I do that? All I see is that I must execute:

    EXECUTE sp_configure 'xp_cmdshell', 1;

    I see nothing about a proxy account there. I do have a login named 'xp_cmdshell_cert_login' in my old system, but I don't know if it's doing anything. I also have a credential named 'xp_cmdshell_cert_login', but same problem. These are all things I tried when I was first trying to get this to work, years ago, but I have no idea what (if anything) actually does what (if anything).
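For reference, that sp_configure line by itself usually fails, because xp_cmdshell is an advanced option. The typical enable sequence (which, as noted, does nothing about a proxy account) is:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
```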

     

    When xp_cmdshell is executed - it will be executed in the context of that proxy account.  If that proxy account does not have access to the directories at the OS level then it is going to fail or return nothing.

    What does proxy account mean in this context? A SQL Server login? A domain account, configured on the institution's domain controller? A local account created on the server? Either? Something else? Part of my problem is that I often don't even understand what some of these terms mean. I look things up, I read endless tutorials, I scan forums for similar-sounding problems, and sometimes I strike gold, other times I strike out. This case is the latter. I've read more on this subject than I can even remember, but I keep coming up short on actually getting my skull wrapped around it.

     

    FWIW - Filestream allows for direct access to the files through a file share...so not sure why that wouldn't work.  But - it would be a significant amount of work.

    All my reading stated that the only way to get to files stored other than through SQL Server was to create some sort of API. Since I had no idea what that would entail, I gave up on the idea.

    Note: I would not store image files on the same drive as my SQL data/log/tempdb files.  That can easily lead to performance issues because someone is trying to copy hundreds of files across the network.

    I only have one data drive on this server, but it's not a problem. The traffic is so light that this just isn't an issue.

     

  • pdanes wrote:

    Jeffrey Williams wrote:

    So you seem to have 2 problems...

    Not really - I have  ONE problem: I don't know what I'm doing.

     

    You stated that compiling the procedure normally - works fine.  But later - it seems it doesn't work?  So I am confused...not sure what isn't working in that scenario that now requires trying to run as a different user using EXECUTE AS.

    It USED TO work, in my old system, but I don't really know which of all the things I tried were actually functional and necessary to making it work. It works in my new system ONLY without Execute As. As soon as I put that in, no matter who the AS is (owner, dbo, any named login), it gets the message "Get-ChildItem : Cannot find drive. A drive with the name 'D' does not exist." It works when -I- run it, from my admin-privileged account. It does not work when run from any other account, even after I have split the procedures into two, as you suggested earlier - isolating the one that executes the file lookup. That one works for me, when run without Execute As. When I add Execute As, or call it from another account than mine, it does not.

    When you enabled xp_cmdshell - you utilized a proxy account for that certificate.

    How did I do that? All I see is that I must execute:

    EXECUTE sp_configure 'xp_cmdshell', 1;

    I see nothing about a proxy account there. I do have a login named 'xp_cmdshell_cert_login' in my old system, but I don't know if it's doing anything. I also have a credential named 'xp_cmdshell_cert_login', but same problem. These are all things I tried when I was first trying to get this to work, years ago, but I have no idea what (if anything) actually does what (if anything).

    When xp_cmdshell is executed - it will be executed in the context of that proxy account.  If that proxy account does not have access to the directories at the OS level then it is going to fail or return nothing.

    What does proxy account mean in this context? A SQL Server login? A domain account, configured on the institution's domain controller? A local account created on the server? Either? Something else? Part of my problem is that I often don't even understand what some of these terms mean. I look things up, I read endless tutorials, I scan forums for similar-sounding problems, and sometimes I strike gold, other times I strike out. This case is the latter. I've read more on this subject than I can even remember, but I keep coming up short on actually getting my skull wrapped around it.

    FWIW - Filestream allows for direct access to the files through a file share...so not sure why that wouldn't work.  But - it would be a significant amount of work.

    All my reading stated that the only way to get to files stored other than through SQL Server was to create some sort of API. I had looked at writing APIs for web interfaces earlier, and it looked like a huge amount of work. Since I had no idea what that would entail, I gave up on the idea.

    Note: I would not store image files on the same drive as my SQL data/log/tempdb files.  That can easily lead to performance issues because someone is trying to copy hundreds of files across the network.

    I only have one data drive on this server, but it's not a problem. The traffic is so light that this just isn't an issue.

  • I guess I am not following - you still haven't explained why you need to EXECUTE AS.  What is it that requires the procedure to be executed in a different context?

    If it works without using EXECUTE AS - then why not just leave it that way?


  • Two things.

    1. Filetable (2012+) solves some of the code hassles of using FileStream. Filestream uses a File I/O access, but not a SMB share.
    2. Rather than help a normal user run this process, make it a job. Even a job that runs once a minute. Make the first step in the job one that checks if someone wants to run this. You can use a simple table to do this, 1 row, a few columns. If a user sets a flag, the job runs with admin privileges. You have that working. If a new user wants to set it, it's already set. If the job is running, don't let a user set it, or implement some delay. I might have 2 columns (running status and last run date).
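A minimal sketch of that control table and the job's first step (all object names are illustrative, and the end-the-job-early trick assumes the step is configured to quit the job on failure):

```sql
-- One-row control table the application writes to when a refresh is wanted.
CREATE TABLE dbo.RefreshRequest (
    RunStatus   varchar(20) NOT NULL,   -- 'idle' or 'need to run'
    LastRunDate datetime2   NULL
);
INSERT dbo.RefreshRequest (RunStatus) VALUES ('idle');
GO

-- Job step 1: bail out quietly unless someone has requested a run.
IF NOT EXISTS (SELECT 1 FROM dbo.RefreshRequest WHERE RunStatus = 'need to run')
    THROW 50000, 'Nothing to do', 1;   -- step fails; job quits without running step 2
GO

-- Job step 2 would do the directory scan under the Agent account, then reset:
UPDATE dbo.RefreshRequest
SET RunStatus = 'idle', LastRunDate = SYSUTCDATETIME();
```

Because the scan runs under the SQL Agent service account rather than the caller, none of the EXECUTE AS or proxy-account machinery is needed for the end user.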
  • Jeffrey Williams wrote:

    I guess I am not following - you still haven't explained why you need to EXECUTE AS.  What is it that requires the procedure to be executed in a different context?

    If it works without using EXECUTE AS - then why not just leave it that way?

    It DOESN'T work - that's the problem. It works ONLY FOR ME, in my admin account. When other people call this procedure from the application via ODBC links from their own accounts, it does NOT work. If it did, I wouldn't be on here. I've been trying EXECUTE AS to get it to work, so far unsuccessfully.

  • Steve Jones - SSC Editor wrote:

    Two things.

    1. Filetable (2012+) solves some of the code hassles of using FileStream. Filestream uses a File I/O access, but not a SMB share.
    2. Rather than help a normal user run this process, make it a job. Even a job that runs once a minute. Make the first step in the job one that checks if someone wants to run this. You can use a simple table to do this, 1 row, a few columns. If a user sets a flag, the job runs with admin privileges. You have that working. If a new user wants to set it, it's already set. If the job is running, don't let a user set it, or implement some delay. I might have 2 columns (running status and last run date).

    Didn't know about Filetable, but I'll look it up, thanks.

    Wouldn't a job be needlessly labor-intensive? Running thousands of times per day for an activity that is usually needed only a few times per day, sometimes not at all for days at a time, seems like swatting a gnat with a sledgehammer. And I used to have this working, so obviously there IS a way to do it. I just can't figure out how I did it back then. I had the ability to have the user execute the DIR command at will, but the way I had the whole app set up, it became too slow as the database grew. I now have a configuration that is lightning-quick, but I'm hung up on the ability to execute that blasted DIR command.

    1. Filetable is the evolution of filestream that includes a share that is accessible like any other share. https://docs.microsoft.com/en-us/sql/relational-databases/blob/filetables-sql-server?view=sql-server-ver15   FWIW, this doesn't work in many cloud scenarios, so you are stuck with VMs if you move this way.
    2. No. The job would have a step to query a table for a value (e.g. SELECT RunStatus FROM MyTable WHERE RunStatus = 'need to run'). If this is not set, the job ends. Running a job with a small query 1440 times a day is minimal overhead. Having it actually execute a few times is the price of doing business.

    You could set up people with sp_execute_job rights, but that creates complexity. The delegation features here in SQL Server are poor.

  • pdanes wrote:

    Jeffrey Williams wrote:

    I guess I am not following - you still haven't explained why you need to EXECUTE AS.  What is it that requires the procedure to be executed in a different context?

    If it works without using EXECUTE AS - then why not just leave it that way?

    It DOESN'T work - that's the problem. It works ONLY FOR ME, in my admin account. When other people call this procedure from the application via ODBC links from their own accounts, it does NOT work. If it did, I wouldn't be on here. I've been trying EXECUTE AS to get it to work, so far unsuccessfully.

    So back to what I said before - you need to set up a proxy account for xp_cmdshell.  See here: https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-xp-cmdshell-proxy-account-transact-sql?view=sql-server-ver15 and here: https://www.mssqltips.com/sqlservertip/1020/enabling-xpcmdshell-in-sql-server/

     

     


  • Jeffrey Williams wrote:

    pdanes wrote:

    Jeffrey Williams wrote:

    I guess I am not following - you still haven't explained why you need to EXECUTE AS.  What is it that requires the procedure to be executed in a different context?

    If it works without using EXECUTE AS - then why not just leave it that way?

    It DOESN'T work - that's the problem. It works ONLY FOR ME, in my admin account. When other people call this procedure from the application via ODBC links from their own accounts, it does NOT work. If it did, I wouldn't be on here. I've been trying EXECUTE AS to get it to work, so far unsuccessfully.

    So back to what I said before - you need to set up a proxy account for xp_cmdshell.  See here: https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-xp-cmdshell-proxy-account-transact-sql?view=sql-server-ver15 and here: https://www.mssqltips.com/sqlservertip/1020/enabling-xpcmdshell-in-sql-server/

    I read that again, and have gone through all the steps. I created a local (non-domain, non-admin) account on the server and gave that account read permissions to the folder containing all the attachment files.

    I have the proxy account enabled:

    use master

    EXEC sp_xp_cmdshell_proxy_account 'Paleo-Server\SQLServerZaloha','BigHairyPassword'

    RECONFIGURE

    Still nothing. The SP works for me only, when run normally. It does not work for anyone else; it does not even work for me when I add: With Execute As 'Paleo-Server\SQLServerZaloha' - the DIR command returns "Get-ChildItem : Cannot find drive. A drive with the name 'D' does not exist."
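One step the proxy-account docs tend to assume, and which is easy to miss in this situation: non-sysadmin callers also need an explicit EXECUTE grant on xp_cmdshell itself before the proxy account ever comes into play. A sketch, with hypothetical login/user names (only `Paleo-Server\SQLServerZaloha` is from the post, and it is the OS proxy account, not the grant target):

```sql
USE master;
GO
-- Map the calling login into master and let it run xp_cmdshell.
-- The OS command then executes as the configured proxy account
-- (here, Paleo-Server\SQLServerZaloha), not as the caller.
CREATE USER [SomeAppUser] FOR LOGIN [SomeAppLogin];
GRANT EXECUTE ON xp_cmdshell TO [SomeAppUser];
```

With that grant in place, the end users could call the procedure without any EXECUTE AS clause at all, which is one possible explanation for how the old system worked.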

  • Steve Jones - SSC Editor wrote:

    1. Filetable is the evolution of filestream that includes a share that is accessible like any other share. https://docs.microsoft.com/en-us/sql/relational-databases/blob/filetables-sql-server?view=sql-server-ver15   FWIW, this doesn't work in many cloud scenarios, so you are stuck with VMs if you move this way.
    2. No. The job would have a step to query a table for a value (e.g. SELECT RunStatus FROM MyTable WHERE RunStatus = 'need to run'). If this is not set, the job ends. Running a job with a small query 1440 times a day is minimal overhead. Having it actually execute a few times is the price of doing business.

    You could set up people with sp_execute_job rights, but that creates complexity. The delegation features here in SQL Server are poor.

    Thanks for the link. It looks interesting, but I'm not sure I fully understand it just from reading. I will have to build a small test case and see how it works. If it is suitable, it may be worth the effort to convert everything.

    As for the job, even once per minute would not give me the response I want. Application users select a file from their own machine, often from a photocard, and attach that to the collection item currently open. Using the job method, the new file would still not be visible for possibly up to a minute. The app would have to repeatedly requery the database, waiting until the new attachment finally appeared in the file attachment table. The user would either wonder why it had not appeared when they clearly attached it, and most likely would try again, leading to duplicates; or they would be forced to sit and wait until the attachment showed up; or they would have to leave the opened item without having seen the attachment appear at all, and trust that it happened despite not being able to see it. None of those is how I want this to operate - I want them to navigate to a file via a normal Windows file dialog and select it for attachment. The app copies it to the server, updates the file table and requeries the window showing all attachments for that item, at which point the new file should appear, immediately. And it will, *-IF-* I can once again figure out how to let other users execute the DIR command.

  • If the app loads this, why do you need the dir command? Don't you know where attachments are stored? If you use Filetable, you'll have a known share (\\server\share\path\file). If you establish a place on the instance, store that base path in a table once and reference sub folders from there.

    It seems your architecture might not be correct here, if you have a user upload a file and you can't derive a path in the application.

  • Steve Jones - SSC Editor wrote:

    If the app loads this, why do you need the dir command? Don't you know where attachments are stored? If you use Filetable, you'll have a known share (\\server\share\path\file). If you establish a place on the instance, store that base path in a table once and reference sub folders from there.

    It seems your architecture might not be correct here, if you have a user upload a file and you can't derive a path in the application.

    The app may load an individual file or files, as an attachment to a particular record. Naturally, the app would know about that. But it does not know how many OTHER files attached to the record may already exist on the server. Back in Post 3840258, I described the structure to Jeffrey Williams, when he wanted to know exactly what I was up to. Both the app and the users have direct access to this mass of attachment files. The app can add and delete; users can additionally rename and edit directly, via normal access through a mapped network share.

     

    The files have a naming schema, but it is not one that I can precisely specify to a DIR command, or to a PowerShell command, which is what I am using here. (I thought PowerShell might be faster than the DOS DIR command, so I converted the procedure to that. It's not faster, but it's also not slower, so I left it in PowerShell.) I submit a DIR command, which gives me at least what I want, but also a lot more, so I then have to sort out what I actually want from all the stuff the DIR command returns. There are tens of thousands of these attachment files, and even with appropriately named subfolders, scanning the file/folder structure each time was just too slow. I now have these names cached in an internal table, making lookups instantaneous, but I also need to update the table when new stuff is added, and that is where the user needs to run the stored procedure with the xp_cmdshell command in it. They may also want to run it if the photographer informs them that she has just uploaded some batch of photo work that the user has been waiting for. I also have a job that runs nightly, to capture anything that has gotten loaded onto the server manually, rather than via the app, but I don't want to be running that constantly – that really would slow the machine down, and probably quickly wear out my drives.
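The PowerShell-via-xp_cmdshell pattern described above looks roughly like this (the path, and the idea of capturing into a table variable, are illustrative assumptions, not the poster's actual code):

```sql
-- Run Get-ChildItem recursively and capture the file list.
DECLARE @cmd nvarchar(4000) =
    N'powershell.exe -NoProfile -Command '
  + N'"Get-ChildItem -Path ''D:\Attachments'' -Recurse -File '
  + N'| Select-Object -ExpandProperty FullName"';

DECLARE @files TABLE (FileName nvarchar(4000));
INSERT INTO @files (FileName)
EXEC master..xp_cmdshell @cmd;

-- xp_cmdshell appends a NULL row; filter it out before merging
-- the names into the cached file table.
SELECT FileName FROM @files WHERE FileName IS NOT NULL;
```

Note that when this runs under EXECUTE AS or a proxy account, mapped drive letters belonging to the interactive user are not visible to that context, which is consistent with the "Cannot find drive" error reported above for anything but a UNC path.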

     

    I have all this exactly the way I want, and it works perfectly, except for one detail: I have forgotten how I managed to allow end users to run a procedure that executes this DIR command. THAT is the one thing I am trying to figure out, and I can't believe it is such a problem, especially since I already got it to work once, at a time when I knew even less about SQL Server than I do now.

  • If the app gets changes, then an upload is immediately correct. You said that a job doesn't work because it could take a minute to show changes, but those would only be the "out of app" changes, which aren't caught until someone uploads anyway.

     

  • Steve Jones - SSC Editor wrote:

    If the app gets changes, then an upload is immediately correct. You said that a job doesn't work because it could take a minute to show changes, but those would only be the "out of app" changes, which aren't caught until someone uploads anyway.

    True, I could update the cached file table with the info from the upload, for that specific file. But again, I also need to be able to scan for the 'out of app' changes. That requires a disk scan. I could do that the way you suggest, with a job triggered by notation in some small table, but it's still an unnecessary delay, up to a minute before the task even starts, and an unnecessary complication. I would have to deal with the possibility of two users writing to the table at once, since two users would almost certainly have different requirements of what they want updated - almost every user has his own catalog. Yes, I could make the requester add records to the table, and the scan routine could pick them off, one at a time, as it processes each request, but again, all this is unnecessarily complicated. Why build something like that when I have everything complete and functional, except for this one issue of proper permissions?

    I KNOW it can be done - I did it once, way back when. And there are endless tutorials on the net on how to do this. It's just that I haven't been able to make the necessary connections to get any of the suggestions functional.
