March 6, 2015 at 10:07 am
I have a table that is part of a SuiteCRM installation. It contains approximately 1.5 million rows. When I do a "Select all rows" in SSMS it takes up to 5 minutes to execute the query. The problem appears to be the [description] field, which is nvarchar(max); it looks to contain 0 to ~1,600 characters.
I've run the Database Engine Tuning Advisor and applied the recommendations, which were all CREATE STATISTICS statements.
The same table is linked into an Access database, and when I open it there I can go to the last row in 2 or 3 seconds.
Table Def:
CREATE TABLE [dbo].[notes](
[assigned_user_id] [varchar](36) NULL,
[id] [varchar](36) NOT NULL,
[date_entered] [datetime] NULL,
[date_modified] [datetime] NULL,
[modified_user_id] [varchar](36) NULL,
[created_by] [varchar](36) NULL,
[name] [nvarchar](255) NULL,
[file_mime_type] [nvarchar](100) NULL,
[filename] [nvarchar](255) NULL,
[parent_type] [nvarchar](255) NULL,
[parent_id] [varchar](36) NULL,
[contact_id] [varchar](36) NULL,
[portal_flag] [bit] NULL,
[embed_flag] [bit] NULL,
[description] [nvarchar](max) NULL,
[deleted] [bit] NULL,
CONSTRAINT [notespk] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
Is there anything I can do to improve the performance or is it just the amount of data?
March 6, 2015 at 10:42 am
Check what the query is waiting on; you can use the following query. My guess is that the wait type for the query will be ASYNC_NETWORK_IO, which usually means the client (in this case SSMS) is taking too long to consume the results.
SELECT
[owt].[session_id],
[owt].[exec_context_id],
[ot].[scheduler_id],
[owt].[wait_duration_ms],
[owt].[wait_type],
[owt].[blocking_session_id],
[owt].[resource_description],
[es].program_name,
CASE [owt].[wait_type]
WHEN N'CXPACKET' THEN
RIGHT ([owt].[resource_description],
CHARINDEX (N'=', REVERSE ([owt].[resource_description])) - 1)
ELSE NULL
END AS [Node ID],
[est].text,
[er].[database_id],
[eqp].[query_plan],
[er].[cpu_time],
[es].memory_usage
FROM sys.dm_os_waiting_tasks [owt]
INNER JOIN sys.dm_os_tasks [ot] ON
[owt].[waiting_task_address] = [ot].[task_address]
INNER JOIN sys.dm_exec_sessions [es] ON
[owt].[session_id] = [es].[session_id]
INNER JOIN sys.dm_exec_requests [er] ON
[es].[session_id] = [er].[session_id]
OUTER APPLY sys.dm_exec_sql_text ([er].[sql_handle]) [est]
OUTER APPLY sys.dm_exec_query_plan ([er].[plan_handle]) [eqp]
WHERE
[es].[is_user_process] = 1
ORDER BY
[owt].[session_id],
[owt].[exec_context_id];
March 6, 2015 at 10:45 am
hi dschubel,
I would usually start troubleshooting an issue like this by adding SET STATISTICS IO ON and SET STATISTICS TIME ON to my script, turning on actual execution plans, and executing the script. You can then view the execution plan to see how SQL Server is executing the query and identify any issues from there.
https://msdn.microsoft.com/en-us/library/ms189562.aspx
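A minimal sketch of that approach, assuming the dbo.notes table from the original post:

```sql
-- Turn on per-query I/O and timing statistics, then run the slow query.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT * FROM dbo.notes;  -- the query under investigation

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

The Messages tab will then show logical and physical reads plus CPU and elapsed time, and Include Actual Execution Plan (Ctrl+M in SSMS) adds the plan alongside the results.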
Also, there is a great script called sp_WhoIsActive, written by Adam Machanic, that you can use to look at the query during execution.
http://sqlblog.com/blogs/adam_machanic/archive/2012/03/22/released-who-is-active-v11-11.aspx
It returns a couple of columns called "wait_info" and "blocking_session_id" that you can use to see whether the query is waiting on anything and whether another process is blocking it.
Check those out and let me know what you see. 🙂
March 6, 2015 at 10:59 am
So you are spooling 1M+ rows (fat or otherwise) back to the client, in this case SSMS?!? That isn't going to work out well for you!!
Declare variables for every field (of exactly the correct data type) and SELECT all fields into those variables; that shows you just how fast the server can read the data.
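A sketch of that test, abbreviated to a few of the sixteen columns rather than all of them. Assigning into variables forces SQL Server to read every row while sending almost nothing to the client, so it isolates server-side read speed from network and SSMS rendering time:

```sql
DECLARE @id          varchar(36),
        @name        nvarchar(255),
        @description nvarchar(max);

-- Reads the whole table but returns no result set to the client;
-- each variable simply ends up holding the value from the last row.
SELECT @id          = [id],
       @name        = [name],
       @description = [description]
FROM dbo.notes;
```

Compare the elapsed time of this against the 5-minute SELECT * in SSMS; a large gap points at result consumption rather than the storage engine.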
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 6, 2015 at 11:00 am
Oh, I should also ask WHY do you want to bring back the entire table to SSMS?
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 6, 2015 at 11:03 am
dschubel (3/6/2015)
I have a table that is part of a SuiteCRM installation. It contains approximately 1.5 million rows. When I do a "Select all rows" in SSMS it takes up to 5 minutes to execute the query. The problem appears to be the [description] field, which is nvarchar(max); it looks to contain 0 to ~1,600 characters. I've run the Database Engine Tuning Advisor and applied the recommendations, which were all CREATE STATISTICS statements.
The same table is linked into an Access database, and when I open it there I can go to the last row in 2 or 3 seconds.
Table Def:
(table definition snipped; see the original post above)
Is there anything I can do to improve the performance or is it just the amount of data?
I agree with what Robert said; it probably has nothing to do with the table itself but rather with your SSMS client consuming the results. To see for sure you can, from SSMS, go into Tools > Options > Query Results > SQL Server > Results to Grid, check "Discard results after execution", then run "SELECT * FROM dbo.notes". I suspect that, for 1.5M rows, the query will complete in a few seconds.
The Database Engine Tuning Advisor is a fantastic tool, but it will not help you here. It helps you determine what indexes you need over time.
-- Itzik Ben-Gan 2001
March 6, 2015 at 11:57 am
TheSQLGuru (3/6/2015)
Oh, I should also ask WHY do you want to bring back the entire table to SSMS?
I'm actually trying to troubleshoot poor performance with the SuiteCRM application. It is a web-based (PHP) app, and this table appears to be involved when performance is poor.
March 6, 2015 at 12:57 pm
Hopefully the CRM app rarely does "SELECT * FROM dbo.notes" on the whole table, so doing that in Management Studio is not a useful test. You're just stressing the network. Access can go to the last row very quickly, but so can Management Studio if you only ask for the last row (assuming that ids are sequential):
SELECT TOP (1) * FROM dbo.notes ORDER BY id DESC
You could look at the missing index DMVs to get suggestions on indexes that might improve performance. Or be an old-school DBA and use SQL Profiler to capture the worst-performing SQL statements, then use the Database Engine Tuning Advisor to get ideas.
You might also look at table fragmentation, particularly if ids are nonsequential. That VARCHAR(36) datatype doesn't mean it's a random GUID converted to text, does it?
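A rough sketch of both checks suggested above. The first query reads the standard sys.dm_db_missing_index_* DMVs; the second measures fragmentation of dbo.notes:

```sql
-- Missing-index suggestions, ranked by estimated benefit.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
     ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
     ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;

-- Fragmentation of the notes table's indexes (LIMITED scan mode).
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.notes'), NULL, NULL, 'LIMITED');
```

If the ids really are random GUIDs stored as varchar(36), expect high fragmentation on the clustered index, since inserts land at arbitrary points in the key order.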
March 6, 2015 at 6:24 pm
Scott Coleman (3/6/2015)
Hopefully the CRM app rarely does "SELECT * FROM dbo.notes" on the whole table, so doing that in Management Studio is not a useful test. You're just stressing the network. Access can go to the last row very quickly, but so can Management Studio if you only ask for the last row (assuming that ids are sequential):
SELECT TOP (1) * FROM dbo.notes ORDER BY id DESC
You could look at the missing index dmv to get suggestions on indexes that might improve performance. Or be an old-school DBA and use SQL Profiler to capture the worst-performing SQL statements, then use Database Tuning Advisor to get ideas.
You might also look at table fragmentation, particularly if ids are nonsequential. That VARCHAR(36) datatype doesn't mean it's a random GUID converted to text, does it?
I'm betting there are 6 GUIDs in that table! :w00t:
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 9, 2015 at 7:06 am
seshurtz (3/6/2015)
hi dschubel, I would usually start troubleshooting an issue like this by using the "SET STATISTICS IO ON" and "SET STATISTICS TIME ON" feature in my script, turning on actual execution plans and executing my script. You can then view the execution plan to see how SQL Server is executing the query and if there are any issues you can identify from there.
https://msdn.microsoft.com/en-us/library/ms189562.aspx
Also, there is a great script out there called SP_WhoIsActive written by Adam Machanic that you can use to look at the query during execution.
http://sqlblog.com/blogs/adam_machanic/archive/2012/03/22/released-who-is-active-v11-11.aspx
There are a couple columns returned by this called "wait_info" and "blocking_session_id" that you can use to see if the query is "waiting" on anything and if there is another process blocking it.
Check those out and let me know what you see. 🙂
wait_info: (2ms)PAGEIOLATCH_SH:suitecrm:1(*)
blocking_session_id: NULL
CPU: 4.656
reads: 593,112
physical_reads: 80,541
March 9, 2015 at 7:11 am
TheSQLGuru (3/6/2015)
Scott Coleman (3/6/2015)
Hopefully the CRM app rarely does "SELECT * FROM dbo.notes" on the whole table, so doing that in Management Studio is not a useful test. You're just stressing the network. Access can go to the last row very quickly, but so can Management Studio if you only ask for the last row (assuming that ids are sequential):
SELECT TOP (1) * FROM dbo.notes ORDER BY id DESC
You could look at the missing index dmv to get suggestions on indexes that might improve performance. Or be an old-school DBA and use SQL Profiler to capture the worst-performing SQL statements, then use Database Tuning Advisor to get ideas.
You might also look at table fragmentation, particularly if ids are nonsequential. That VARCHAR(36) datatype doesn't mean it's a random GUID converted to text, does it?
I'm betting there are 6 GUIDs in that table! :w00t:
Correct. I suspect the problem may actually be with the CRM application's queries, which I believe are created dynamically by the PHP code. I'm looking into capturing those through the app's logging functionality.
March 9, 2015 at 7:56 am
So you did 600K reads, with 80K being physical, and you are upset it takes 5 mins?? 😀
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service