To know how many times our client login

  • Hello

    What query should we run to find out how many times a user has logged in to our databases?

  • You could try something like this:

    --Create Server Audit:

    USE master;

    GO

    CREATE SERVER AUDIT Login_Audit

    TO FILE (FILEPATH = 'd:\sql_temp\Login_Audit',

    MAXSIZE = 2GB,

    MAX_ROLLOVER_FILES = 10)

    WITH (QUEUE_DELAY = 0, ON_FAILURE = SHUTDOWN);

    CREATE SERVER AUDIT SPECIFICATION Login_Logout_Audit

    FOR SERVER AUDIT Login_Audit

    ADD (SUCCESSFUL_LOGIN_GROUP),

    ADD (FAILED_LOGIN_GROUP)

    WITH (STATE = OFF);

    --Now, turn it on:

    ALTER SERVER AUDIT Login_Audit

    WITH (STATE = ON);

    ALTER SERVER AUDIT SPECIFICATION Login_Logout_Audit

    FOR SERVER AUDIT Login_Audit

    WITH (STATE = ON);

    --Now try to do some logins, both successful and unsuccessful

    --Query the audit file

    SELECT event_time AS EventTime,

    sequence_number AS Seq,

    action_id AS ID,

    succeeded AS Success,

    session_id AS SessionID,

    server_principal_name AS SPrincipal

    FROM sys.fn_get_audit_file('d:\sql_temp\Login_Audit\*.sqlaudit',DEFAULT,DEFAULT);

    --Remove audit from server

    ALTER SERVER AUDIT Login_Audit

    WITH (STATE = OFF);

    DROP SERVER AUDIT Login_Audit;

    GO

    ALTER SERVER AUDIT SPECIFICATION Login_Logout_Audit

    WITH (STATE = OFF);

    DROP SERVER AUDIT SPECIFICATION Login_Logout_Audit;
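
    To get the count the original question asks for, the rows collected by the audit above can be aggregated per principal before the audit is dropped. A sketch, assuming the same file path as above; 'LGIS' is the action_id written for a successful login and 'LGIF' for a failed one:

    ```sql
    -- Count successful logins per principal from the audit files
    SELECT server_principal_name,
           COUNT(*) AS login_count
    FROM sys.fn_get_audit_file('d:\sql_temp\Login_Audit\*.sqlaudit', DEFAULT, DEFAULT)
    WHERE action_id = 'LGIS'  -- use 'LGIF' for failed logins
    GROUP BY server_principal_name
    ORDER BY login_count DESC;
    ```

    Swap the WHERE clause for `succeeded = 0` if you prefer to filter on the outcome column instead of the action_id.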

    ----------------------------------------------------------------------------------------------
    Microsoft Certified Solution Master: Data Platform, Microsoft Certified Trainer
    Email: Louis.Li@rrlminc.com | Blog | LinkedIn

  • Without some sort of auditing process it is not possible. SQL Audit is the best way to do it if you need to meet regulatory compliance requirements. Otherwise, you also have the option to log successful and unsuccessful logins through the GUI (Server Properties -> Security -> Login Auditing -> your choice from there)

    OR through TSQL

    USE [master]

    GO

    EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'AuditLevel', REG_DWORD, 3 --FOR BOTH FAILED AND SUCCESSFUL LOGINS

    GO

    Then you can read successful/failed login entries from the SQL Server error log files with the help of the procedure xp_readerrorlog

    An example of reading successful login entry from the current error log could be

    EXEC master.dbo.xp_readerrorlog 0, 1, 'Login succeed', NULL, '20120809', '20120810', N'asc'

    But since the error log files are archived/deleted at regular intervals, you may have to set up a job to keep the history in a table.
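
    One possible shape for such a job step (the table name dbo.LoginHistory is made up for illustration): capture the three columns xp_readerrorlog returns into a permanent table each time the job runs.

    ```sql
    -- One-time setup: table matching xp_readerrorlog's output columns
    CREATE TABLE dbo.LoginHistory
    (
        LogDate     DATETIME,
        ProcessInfo NVARCHAR(50),
        LogText     NVARCHAR(MAX)
    );
    GO

    -- Job step: append successful-login entries from the current error log (log 0)
    INSERT INTO dbo.LoginHistory (LogDate, ProcessInfo, LogText)
    EXEC master.dbo.xp_readerrorlog 0, 1, N'Login succeeded';
    ```

    As written this will re-insert rows already captured on a previous run; in practice you would either filter on LogDate greater than the last captured value, or de-duplicate after the insert.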

    Another option could be to use SQL trace.

  • If you don't have audit capability, here's my take on how to do it.

    -- this sets up the table

    CREATE TABLE loginAudit ( login_name VARCHAR(MAX), last_login_date DATETIME )

    INSERT INTO loginAudit (login_name, last_login_date)

    SELECT DISTINCT sl.name, NULL FROM sys.syslogins sl

    -- put this into a job and run it every minute. It polls for active sessions.

    SELECT MAX(login_time) [login_time], login_name

    INTO #loginTempTable

    FROM sys.dm_exec_sessions

    GROUP BY login_name

    UPDATE loginAudit

    SET last_login_date = ltt.login_time

    FROM #loginTempTable ltt

    WHERE loginAudit.login_name = ltt.login_name

    Output is a list of last login time. If you want a count too, modify the code to include a BIGINT column and modify the UPDATE to increase the count for the relevant column by 1.
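
    The count modification described above might look like this (the column name LoginCount is my own choice, not from the original post):

    ```sql
    -- Add a running count column to the audit table
    ALTER TABLE loginAudit ADD LoginCount BIGINT NOT NULL DEFAULT 0;

    -- In the scheduled job, bump the count only when a newer login time is seen,
    -- so repeated polls of the same session do not inflate the count
    UPDATE la
    SET last_login_date = ltt.login_time,
        LoginCount = la.LoginCount + 1
    FROM loginAudit la
    JOIN #loginTempTable ltt
        ON la.login_name = ltt.login_name
    WHERE la.last_login_date IS NULL
       OR ltt.login_time > la.last_login_date;
    ```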

    ---

    Note to developers:
    CAST(SUBSTRING(CAST(FLOOR(NULLIF(ISNULL(COALESCE(1,NULL),NULL),NULL)) AS CHAR(1)),1,1) AS INT) == 1
    So why complicate your code AND MAKE MY JOB HARDER??!:crazy:

    Want to get the best help? Click here https://www.sqlservercentral.com/articles/forum-etiquette-how-to-post-datacode-on-a-forum-to-get-the-best-help (Jeff Moden)
    My blog: http://uksqldba.blogspot.com
    Visit http://www.DerekColley.co.uk to find out more about me.

  • derek.colley (8/10/2012)


    If you don't have audit capability, here's my take on how to do it.

    -- this sets up the table

    CREATE TABLE loginAudit ( login_name VARCHAR(MAX), last_login_date DATETIME )

    INSERT INTO loginAudit (login_name, last_login_date)

    SELECT DISTINCT sl.name, NULL FROM sys.syslogins sl

    -- put this into a job and run it every minute. It polls for active sessions.

    SELECT MAX(login_time) [login_time], login_name

    INTO #loginTempTable

    FROM sys.dm_exec_sessions

    GROUP BY login_name

    UPDATE loginAudit

    SET last_login_date = ltt.login_time

    FROM #loginTempTable ltt

    WHERE loginAudit.login_name = ltt.login_name

    Output is a list of last login time. If you want a count too, modify the code to include a BIGINT column and modify the UPDATE to increase the count for the relevant column by 1.

    Nice thought. But one of the things I would be concerned about is incrementing the count for old logins already logged in the audit table. So the better way would be to omit the already-logged sessions. Something like the following (just a kind of pseudo-code)

    UPDATE loginAudit

    SET last_login_date = ltt.login_time , LoginCount = LoginCount + 1

    FROM #loginTempTable ltt

    WHERE loginAudit.login_name = ltt.login_name

    AND NOT EXISTS (SELECT 1 from loginAudit LA

    WHERE ltt.login_name = LA.login_name

    AND ltt.login_time = LA.last_login_date )

    Another concern would be what if the OP wants to see the count for a date range. But that again can be handled by using INSERT instead of UPDATE.

    But the biggest concern would be: what if the session is taken over by another user before the job runs? We could miss some counts. Also, what if the job somehow does not run? (Getting too pessimistic ;))

    Just wanted to convey that if it is a compliance requirement then some audit process is necessary, preferably SQL Audit (which is not affordable for most users). Otherwise, if some count anomalies can be ignored, then other solutions can be taken into account.
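
    The INSERT-based variant mentioned above could be sketched like this (the table loginHistory is hypothetical); keeping one row per login makes a date-range count a simple GROUP BY:

    ```sql
    -- Keep one row per (login, login_time) instead of updating in place
    INSERT INTO loginHistory (login_name, login_time)
    SELECT ltt.login_name, ltt.login_time
    FROM #loginTempTable ltt
    WHERE NOT EXISTS (SELECT 1 FROM loginHistory lh
                      WHERE lh.login_name = ltt.login_name
                        AND lh.login_time = ltt.login_time);

    -- Count logins per user over a date range
    SELECT login_name, COUNT(*) AS login_count
    FROM loginHistory
    WHERE login_time >= '20120801' AND login_time < '20120811'
    GROUP BY login_name;
    ```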

  • I forgot that the same could be handled with DDL (LOGON) trigger as well 😉
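
    A minimal sketch of such a logon trigger (the table name audit_logins is hypothetical, and note the caveats about logon triggers discussed further down this thread):

    ```sql
    USE master;
    GO
    CREATE TABLE dbo.audit_logins
    (
        login_name SYSNAME,
        login_time DATETIME NOT NULL DEFAULT GETDATE()
    );
    GO
    -- Fires once per successful login; the insert runs in the logon transaction,
    -- so if it fails the login itself is rolled back
    CREATE TRIGGER trg_audit_logon
    ON ALL SERVER
    FOR LOGON
    AS
    BEGIN
        INSERT INTO master.dbo.audit_logins (login_name)
        VALUES (ORIGINAL_LOGIN());
    END;
    GO
    ```

    Every login would need permission to insert into the table (or the trigger would need a WITH EXECUTE AS clause), which is exactly the kind of dependency that makes this approach fragile.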

  • Usman Butt (8/10/2012)


    I forgot that the same could be handled with DDL (LOGON) trigger as well 😉

    Definitely not - the problem with FOR LOGON triggers is that if something happens to the dependent objects (tables etc.) then you can have wide-ranging repercussions.

    Consider a FOR LOGON trigger that updates table 'audit_logins', one row for each new login.

    Now if any of the following happen:

    * The table is dropped

    * The table data becomes corrupted

    * Permissions to write to the table are changed/removed

    * The table cannot be updated (think IDENTITY having a 2bn upper row limit)

    Or any other event causing an error to be raised, then the trigger fails. If the trigger fails, the logon is rolled back and the login to the server fails.

    Consider, let's say, 10 application logins / sec. If this was a critical link in e.g. a manufacturing system you could see large financial loss very quickly.

    Not to mention the added headache of working out how to fix the problem (connect via the DAC, then drop/disable the trigger; it is listed in sys.server_triggers, for reference).

    I *really* don't like FOR LOGON triggers :-S


  • derek.colley (8/10/2012)


    Usman Butt (8/10/2012)


    I forgot that the same could be handled with DDL (LOGON) trigger as well 😉

    Definitely not - the problem with FOR LOGON triggers is that if something happens to the dependent objects (tables etc.) then you can have wide-ranging repercussions.

    Consider a FOR LOGON trigger that updates table 'audit_logins', one row for each new login.

    Now if any of the following happen:

    * The table is dropped

    * The table data becomes corrupted

    * Permissions to write to the table are changed/removed

    * The table cannot be updated (think IDENTITY having a 2bn upper row limit)

    Or any other event causing an error to be raised, then the trigger fails. If the trigger fails, the logon is rolled back and the login to the server fails.

    Consider, let's say, 10 application logins / sec. If this was a critical link in e.g. a manufacturing system you could see large financial loss very quickly.

    Not to mention the added headache of working out how to fix the problem (connect via the DAC, then drop/disable the trigger; it is listed in sys.server_triggers, for reference).

    I *really* don't like FOR LOGON triggers :-S

    I did say could be handled not should be handled 😉

    I agree there could be "wide-ranging repercussions", but in a controlled environment such situations may not occur. It is the same as when someone changes a user table without knowing the dependencies and then later no user is able to log into the system. It all depends upon the environment 😉
