February 28, 2007 at 9:40 pm
hmm, you also might want to consider adding a couple of indexes to your view. You need to start with a unique clustered index, but then you could add others, perhaps a nonclustered one on NetworkID. This would speed up the instantiation of the view (the populated indexes are retained by SQL Server, even though the populated view itself is not stored). The cost of SQL Server having to maintain the indexes should be checked: if your tables see significant updates/deletes, any view indexes would require corresponding updates/deletes. I've never actually experimented with indexed views, so I would steer you to BOL or another poster. Good luck, and at least you are a lot closer now 🙂
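Roughly the shape of it, for reference (a sketch only — the dbo.Employee base table and column names here are made up, and BOL lists further requirements, e.g. specific SET options at create time):

-- Indexed views must be schema-bound and use two-part object names
CREATE VIEW dbo.vwEmployeeDirectory
WITH SCHEMABINDING
AS
SELECT EmpNumber, NetworkID, EmailAddress
FROM dbo.Employee
GO
-- The first index on a view must be unique and clustered
CREATE UNIQUE CLUSTERED INDEX IX_vwEmployeeDirectory_EmpNumber
    ON dbo.vwEmployeeDirectory (EmpNumber)
GO
-- After that, nonclustered indexes (e.g. on NetworkID) are allowed
CREATE NONCLUSTERED INDEX IX_vwEmployeeDirectory_NetworkID
    ON dbo.vwEmployeeDirectory (NetworkID)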
March 1, 2007 at 7:35 am
At this point I can't do anything with indexes on my view. I am running SQL 2000 STD edition, which doesn't support that. Also, when we looked into indexing views on another system a couple of years ago, we found that the restrictions placed on the underlying table designs in order to index views built on them were prohibitive, at least in the case of the vendor's DB design. A properly designed DB would not have this problem. I can only wish that I could find a vendor that can design a database properly.
MS is supposed to get more involved today. I will post back if they can help.
Thanks
Chris
March 1, 2007 at 12:37 pm
I have run into similar performance gaps between a T-SQL statement and a sproc.
Run both queries in SSMS with the actual execution plan turned on. You should spot where the execution plans differ between the two queries. I usually tweak the indexes to 'convince' SQL Server to use an index it overlooked in the sproc.
Besides tweaking indexes, rearranging table order can have a large impact.
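If you can't use the GUI plan display, one way to capture plan information from a query window (a sketch; SET STATISTICS PROFILE also works in Query Analyzer against SQL 2000):

-- Capture actual plan rows for both forms, then compare them side by side
SET STATISTICS PROFILE ON
GO
-- run the raw SELECT here, then EXEC the stored procedure, and compare output
SET STATISTICS PROFILE OFF
GO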
2 cents
Daryl
March 5, 2007 at 9:23 am
Tweaking indexes may have slightly changed the execution plan but did not speed anything up.
MS is stumped so far. I have sent them a bunch of information they requested by way of a program they asked me to run against the SQL Server, and a copy of my DB along with the queries/stored procedures I am trying to run.
Thanks,
Chris
March 20, 2007 at 11:25 am
Update:
The MS response on this situation is that this is how the optimizer works. Because it does not have the actual parameter value at the time the procedure is compiled, even though it knows which field it will be searching, it generates a different execution plan than if the literal value of the argument were there. What this means is this:
DECLARE @NetworkID VARCHAR(255)
SELECT @NetworkID = 'psridhar@landam.com'

SELECT FirstName AS FirstName, LastName AS LastName, EmpNumber AS UserID,
       UnitNumber AS UnitNumber, EmailAddress AS EmailAddress, WorkPhone AS PhoneNumber
FROM VHRDevTest.dbo.vwlaEmployeeListVPS
WHERE EmploymentStatus = 'Active' AND EmailAddress = @NetworkID
Will run 10 times longer than this:
SELECT FirstName AS FirstName, LastName AS LastName, EmpNumber AS UserID,
       UnitNumber AS UnitNumber, EmailAddress AS EmailAddress, WorkPhone AS PhoneNumber
FROM VHRDevTest.dbo.vwlaEmployeeListVPS
WHERE EmploymentStatus = 'Active' AND EmailAddress = 'psridhar@landam.com'
It can be sped up a little by using query hints like:
OPTION (MAXDOP 1, FAST 100)
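The OPTION clause goes at the end of the statement; against the query above it would look like this (the comment is my gloss on the hints):

SELECT FirstName AS FirstName, LastName AS LastName, EmpNumber AS UserID,
       UnitNumber AS UnitNumber, EmailAddress AS EmailAddress, WorkPhone AS PhoneNumber
FROM VHRDevTest.dbo.vwlaEmployeeListVPS
WHERE EmploymentStatus = 'Active' AND EmailAddress = @NetworkID
OPTION (MAXDOP 1, FAST 100) -- single-threaded plan; optimize for the first 100 rows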
My final solution for this was to rewrite the stored procedure to use dynamic SQL:
CREATE PROCEDURE [dbo].[VPA_UserInfo_Retrieve2]
    @EmailID NVARCHAR(255)
AS
DECLARE @sql NVARCHAR(2000)

-- Inline the literal value so the plan is compiled for the actual value
-- rather than for a parameter
SET @sql = 'SELECT FirstName AS FirstName, LastName AS LastName, EmpNumber AS UserID,
    UnitNumber AS UnitNumber, EmailAddress AS EmailAddress, WorkPhone AS PhoneNumber
    FROM VHRDevTest.dbo.vwlaEmployeeListVPS
    WHERE EmploymentStatus = ''Active'' AND EmailAddress = ''' + @EmailID + ''''

--PRINT @sql
EXEC sp_executesql @sql
This runs in less than one second. I can't quantify the time any better because Query Analyzer's timer doesn't change from 0:00.
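For comparison, sp_executesql also accepts real parameters; a minimal sketch of that form follows. Note this is only a sketch: it hands the optimizer a parameter again, so it may well hit the same slow plan the inlined literal avoids.

-- Parameterized form of sp_executesql (sketch only; likely reintroduces
-- the parameterized-plan behavior that made the original proc slow)
EXEC sp_executesql
    N'SELECT FirstName AS FirstName, LastName AS LastName, EmpNumber AS UserID,
        UnitNumber AS UnitNumber, EmailAddress AS EmailAddress, WorkPhone AS PhoneNumber
      FROM VHRDevTest.dbo.vwlaEmployeeListVPS
      WHERE EmploymentStatus = ''Active'' AND EmailAddress = @Email',
    N'@Email NVARCHAR(255)',
    @Email = @EmailID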
The moral of the story, for me, is to test queries on an individual basis to see how they will run the fastest, as a stored procedure does NOT automatically run faster when parameters are involved.
Thanks
Chris
March 20, 2007 at 3:37 pm
Did you check that you were using the same SET options in both places? Some of the settings can result in different cached plans and very different performance.
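A quick way to compare them (ARITHABORT is the usual suspect; it often differs between application connections and the query tools, and different settings get different cached plans):

-- Run this from each connection and compare the lists of SET options in effect
DBCC USEROPTIONS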
March 21, 2007 at 6:49 am
Not sure what other place you speak of. All of this testing was done on one DB server and 2 DBs. The performance is identical between the 2 DBs, even though one is scrubbed and the other is a copy of Prod data. I ran the several different queries through the same connection window, so there wouldn't be any difference in connection options.
Thanks,
Chris
March 21, 2007 at 9:29 am
Dare I say parameter sniffing.
When I see the behavior where straight T-SQL runs fast but the procedure runs slow, it's usually because the query plan stored by the optimizer is not correct for whatever reason.
Try this; sometimes it works and sometimes not so much:
CREATE PROCEDURE [dbo].[VPA_UserInfo_Retrieve]
    @NetworkID NVARCHAR(100)
WITH RECOMPILE
AS
BEGIN
    DECLARE @EmailAddress VARCHAR(100) -- this should match the table datatype

    -- Copying the parameter into a local variable prevents the optimizer
    -- from sniffing the parameter value and making assumptions about the data
    SET @EmailAddress = @NetworkID

    SELECT FirstName AS FirstName, LastName AS LastName, EmpNumber AS UserID,
           UnitNumber AS UnitNumber, EmailAddress AS EmailAddress, WorkPhone AS PhoneNumber
    FROM VHRDevTest.dbo.vwlaEmployeeListVPS
    WHERE EmploymentStatus = 'Active' AND EmailAddress = @EmailAddress
END
GO
March 21, 2007 at 9:36 am
Recompile was one of the first things I tried. It did nothing for the performance. I saw the thread on the SQLPerformance.com forums about parameter sniffing, but have not yet tried that. It might be useful to use a default there so the execution plan gets built right.
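Something like this is the idea (a sketch only; 'user@domain.com' is just a placeholder default):

-- Sketch: give the parameter a default so the optimizer has a
-- representative value to consider at compile time
CREATE PROCEDURE [dbo].[VPA_UserInfo_Retrieve]
    @NetworkID NVARCHAR(255) = 'user@domain.com'
AS
SELECT FirstName AS FirstName, LastName AS LastName, EmpNumber AS UserID,
       UnitNumber AS UnitNumber, EmailAddress AS EmailAddress, WorkPhone AS PhoneNumber
FROM VHRDevTest.dbo.vwlaEmployeeListVPS
WHERE EmploymentStatus = 'Active' AND EmailAddress = @NetworkID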
Thanks,
Chris
March 21, 2007 at 2:19 pm
So I tried to give it a hint by adding a default to the email value.
It performed the same whether I used "user@domain.com" or a legitimate email address.
I also tried another suggestion: declaring a local variable inside the procedure, then assigning the value that was passed to the procedure to that variable. It was just as slow as all the other formats. So far sp_ExecuteSQL has been the winner, and it's blazing fast.
Thanks,
Chris
March 22, 2007 at 2:17 pm
Couldn't you do column level permissions on the table and select straight from it?
March 22, 2007 at 2:39 pm
I have to balance management with performance. The view in question comes from 4 different tables of HR information. I try to avoid column-level rights wherever possible, because while it is a novel approach, and can be useful, it can also cause many problems, especially if the application(s) accessing those tables do not understand column-level rights. I have encountered this before.
In this case, the view solved 2 problems: the security, and it allowed me to condense the 4 tables together.
Managing column-level rights can be another management headache, so that is another of my considerations. I try to keep things simple where possible, which allows me to get more done, in general.
Thanks,
Chris
March 22, 2007 at 3:02 pm
OH I absolutely agree. Column level permissions are the last thing I will do for security, only to be done when all else has failed. I was just thinking that if you really needed performance, it might be something to consider.
Once upon a time I ran across a company that stored the data a person was working on on a per-column basis, and needed a column level permission for each... i.e. person A stored data in Column 1, B in Column 2, etc. They ran into the max number of column level permissions you could have on a table, which at that time, with 6.5, was I think 237 columns. I tried to explain to them that there were a LOT better approaches to this, but they insisted on doing it this way. Thankfully I was a consultant and could leave asap...
March 22, 2007 at 4:34 pm
This is a long thread, so I may have missed it if someone already mentioned it, but this seems like it could be a parameter sniffing issue.
See: http://blogs.msdn.com/khen1234/archive/2005/06/02/424228.aspx
---------------------------------------
elsasoft.org