October 15, 2015 at 4:51 am
peter-757102 (10/15/2015)
The App writes a new file using a filename that contains a timestamp and UUID. Then another asynchronous process pulls those files in, batches them and takes care of the rest. Then even if your app crashes, nothing is lost and results show up in the database with minimal delay!
The only file writing along these lines that I am familiar with buffers the output in memory - presumably writing a "page" when it has enough. I doubt that it outputs on whole-record boundaries (but it might? - e.g. splitting at a line break), so with that method it seems that I would lose any in-memory buffered data in a crash, and quite possibly the last record written to the file would be incomplete (i.e. the whole-page write would contain a final, partial record).
I get that logging to a table may not be scalable, but at least I know that SQL will take care of all the COMMIT/ROLLBACK side of it and I don't have to worry about that 🙂 But it seems to me that with a file-write approach I've got a lot of careful edge-case testing to do to make sure that it will fail safe. I'm not very familiar with the mechanics; it's been a long time (20 years at least 😛) since we changed from ISAM to SQL and I stopped having to write to files directly, so it might be that I am worrying unnecessarily?
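The buffering worry above can be sidestepped by flushing each record through to disk as it is written. A minimal Python sketch (the filename and record format are made up for illustration):

```python
import os

def append_record(path, record):
    # Append each log record on its own line, then flush and fsync so a
    # crash loses at most the record currently in flight. A torn final
    # line is still possible, so a reader should discard any trailing
    # line that lacks a newline terminator.
    with open(path, "a", encoding="utf-8") as f:
        f.write(record.replace("\n", " ") + "\n")
        f.flush()                 # push Python's buffer to the OS
        os.fsync(f.fileno())      # push the OS cache to disk

append_record("app.log", "2015-10-15T04:51:00Z INFO something happened")
```

The fsync per record is what makes it fail-safe; it also caps throughput, which is exactly the trade-off being discussed in this thread.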
October 15, 2015 at 6:01 am
Kristen-173977 (10/15/2015)
peter-757102 (10/15/2015)
The App writes a new file using a filename that contains a timestamp and UUID. Then another asynchronous process pulls those files in, batches them and takes care of the rest...
...seems to me with a file-write approach I've got a lot of careful edge-case testing to do to make sure that it will fail safe... so it might be that I am worrying unnecessarily?
The basic idea is to get rid of lock contention and increase log throughput while decreasing the latency of log actions. What is best is situation-dependent, but I would not start doing complicated things unless there is a real problem on the horizon.
Also, I did mean writing a file per log record, nothing more complicated. This may or may not have the throughput needed, but it is certainly low latency and local. If there are multiple (web) servers, then each has its own local folder with files to be picked up by another process.
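The file-per-record scheme described above might be sketched like this in Python (the spool directory name and the temp-then-rename convention are illustrative assumptions, not anything from the thread):

```python
import glob
import os
import time
import uuid

SPOOL_DIR = "log_spool"   # hypothetical local folder, one per server
os.makedirs(SPOOL_DIR, exist_ok=True)

def write_log_record(payload):
    # One small file per record: the timestamp prefix makes the names sort
    # by time, and the UUID keeps concurrent writers from colliding.
    # Writing to a .tmp name and renaming means the pickup process never
    # sees a half-written file.
    name = f"{time.time_ns()}_{uuid.uuid4().hex}.log"
    tmp = os.path.join(SPOOL_DIR, name + ".tmp")
    with open(tmp, "w", encoding="utf-8") as f:
        f.write(payload)
    os.rename(tmp, os.path.join(SPOOL_DIR, name))

def pick_up_batch():
    # The asynchronous importer: collect every completed file, batch the
    # contents, and delete the files once they are safely handed off.
    batch = []
    for path in sorted(glob.glob(os.path.join(SPOOL_DIR, "*.log"))):
        with open(path, encoding="utf-8") as f:
            batch.append(f.read())
        os.remove(path)
    return batch
```

Because each record is its own file, a crash of the app never corrupts an earlier record, and the importer can batch the pickups into one database insert.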
October 15, 2015 at 8:46 am
Kristen-173977 (10/15/2015)
How does the app write to a text file, for later bulk import, in a multi-user-friendly way? Isn't it going to have to open the file, lock it, read the last page, append the new record, write the last page, and close the file? Or is there some smarter way?
Or is the suggestion that each user/session has a separate file, buffered in memory, which is periodically flushed and closed (with a new file opened when necessary) to allow bulk import? (That would cause me some worry about being able to get at the errors in the event of a crash, as that might be the very time when I need the log data.)
Maybe this data could be "queued" to some other process? For example I haven't (previously!) considered Service Broker for this type of task - perhaps I should?
My understanding is that a .NET or Windows app can write to a file in simple append mode (i.e. WriteLine) or in block mode. For event logging, all you need is a simple append: write X bytes followed by a new end-of-file marker. It would be a very short lock. You could also batch X records in memory prior to each log insert.
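The batch-X-records-in-memory idea can be sketched as a small logger class; this is Python rather than .NET, and the class name and sizes are invented for illustration:

```python
class BatchedLogger:
    """Buffer up to batch_size records in memory, then append them to the
    file in one short write (mirroring "batch X records in memory prior
    to each log insert"). A crash loses at most the buffered records."""

    def __init__(self, path, batch_size=100):
        self.path = path
        self.batch_size = batch_size
        self.buf = []

    def log(self, line):
        self.buf.append(line)
        if len(self.buf) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buf:
            # Opening in append mode holds the file for only one short
            # write per batch, not per record.
            with open(self.path, "a", encoding="utf-8") as f:
                f.write("\n".join(self.buf) + "\n")
            self.buf.clear()

logger = BatchedLogger("events.log", batch_size=2)
logger.log("record 1")
logger.log("record 2")   # buffer full: both records appended in one write
logger.flush()           # write any remainder (none here)
```

The trade-off is the one raised earlier in the thread: a bigger batch means fewer, shorter locks but more records at risk in memory if the app dies.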
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
November 13, 2015 at 2:22 pm
Eric M Russell (10/15/2015)
...My understanding is that a .NET or Windows app can write to a file in simple append mode (i.e. WriteLine) or in block mode. For event logging, all you need is a simple append: write X bytes followed by a new end-of-file marker...
Correct.
Gaz
-- Stop your grinnin' and drop your linen...they're everywhere!!!
November 5, 2019 at 11:18 am
We use a structure that contains all the SQL Server error variable values, and we add several columns related to the who, what, when and where of the execution. The logging code and tables are standard.
To the code that does the logging we added logic to determine whether logging should take place, based on criteria such as object name, date and time range, client, and mode (debugging, warning, error, info). Nothing is logged until we insert a row into a log specification table, which enables logging by any or all of those criteria. This is a very fast, short-circuit boolean evaluation to determine whether we need to log.
As we always start from generated CRUD stored procedures, built from templates that contain the logging code for exceptions, it is fairly easy to extend its usage in the SP to anything else. We also generate parameter and parameter-value logging code in the CRUD SPs, so they are easily extended too; this is off by default and may be turned on the same way.
We do not index the logging or parameter tables beyond the default bigint identity, so there is as little performance hit as possible when we do need to log. The logs and parameters are swept daily into other tables where we can mince them up as needed.
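The short-circuit check described above could look something like this sketch. The real implementation lives in T-SQL inside generated stored procedures; the dictionary here is a hypothetical stand-in for the log specification table, and all names are invented:

```python
# Hypothetical mirror of the log specification table: until a row exists
# for an object/mode combination, nothing is logged, so the common case
# is a single cheap lookup.
LOG_SPEC = {
    ("dbo.Customer_Update", "error"): True,   # log errors for this SP
    ("dbo.Customer_Update", "debug"): False,  # debug explicitly off
}

def should_log(object_name, mode):
    # Fast short-circuit boolean evaluation: no spec row means "do not log".
    return LOG_SPEC.get((object_name, mode), False)

def log_event(object_name, mode, message, sink):
    if not should_log(object_name, mode):
        return                                 # short-circuit: logging is off
    sink.append((object_name, mode, message))  # stand-in for the table insert
```

The point of the design is that the disabled path costs almost nothing, which is what keeps always-generated logging code cheap in the CRUD procedures.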