November 25, 2009 at 11:43 pm
Comments posted to this topic are about the item 4 Ways to Increase Your Salary (Using UPDATE)
November 26, 2009 at 2:39 am
Hi Arup,
I'm fairly new to SQL, so maybe it's a stupid question, but why do you use a temp table in the last example? I don't see the point, since both the emp table and the temp table share the same identity values. And the way the query is built, I think it wouldn't even work if the ids in the emp table were different (for example 1,2,5,6,7 after a delete).
Jochen
November 26, 2009 at 2:51 am
You say that salary * 115 / 100 and (salary * 115) / 100 return different results due to operator precedence - I can't see how operator precedence makes any difference in this scenario. Could you elaborate or provide an example?
Basically:
salary x (115/100) = (salary x 115)/100 so operator precedence should make no difference...
Actually, thinking about it, is integer division the issue? What I say above is true in a pure mathematical sense, but if you divide 115/100 as integers in SQL Server, you get 1. If, however, you divide 115 as a decimal by 100, you get 1.150000, and the calculation works in any layout. So technically it's operator precedence causing the integer divide to happen first that's the issue, I suppose; though it's the integer divide in its own right that creates any confusion in the first place.
declare @percentage decimal(3,0)
set @percentage = 115 -- Even works with an 'integer' decimal.
select 3000 * @percentage / 100
select (3000 * @percentage) / 100
select 3000 * (@percentage / 100.0)
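To make the failing case explicit, here is the same calculation with pure integer literals (no decimal variable at all), where the left-to-right evaluation is all that saves you:

select 3000 * 115 / 100   -- evaluated as (3000 * 115) / 100 = 3450
select 3000 * (115 / 100) -- 115 / 100 is an integer divide = 1, so result is 3000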
Cheers
-- Kev
November 26, 2009 at 3:45 am
I have to agree: there is no real difference in the calculations except in how the system interprets them. Int divided by int yields int, which causes the incorrect values. Simpler would be to change one of the ints to a decimal, or even remove one altogether.
What is wrong with using Salary * 1.15? After all, Salary is a float.
Remember, the key to performance is simplicity. Every level of complexity you add can significantly hurt your query performance.
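For example (a minimal sketch, assuming the article's emp table), the decimal literal 1.15 forces a non-integer calculation, so there is no integer divide to worry about whatever the evaluation order:

UPDATE emp SET salary = salary * 1.15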
November 26, 2009 at 4:50 am
There is actually another way to achieve row-by-row processing. I have been using it in different scenarios for quite a few years, and it has turned out to be very efficient.
BEGIN
    DECLARE @ID INT

    -- Seed with the lowest ID in the table
    SELECT @ID = MIN(ID) FROM emp

    -- MIN(ID) returns NULL once no higher ID exists, which ends the loop
    WHILE @ID IS NOT NULL
    BEGIN
        UPDATE emp SET salary = (salary * 115) / 100 WHERE emp.id = @ID
        SELECT @ID = MIN(ID) FROM emp WHERE ID > @ID
    END
END
In relation to the actual performance of this method I listed the times/cost on my server below. Please note that I also included the cost calculated by the execution plan:
Method                            Reads   Estimated Execution Cost   % of Direct SQL
Direct SQL                          5     0.0132935                   100%
Cursor                             69     0.1462060                  1099%
Temp table with TOP               254     0.2531007                  1904%
Temp table with IDENTITY column   121     0.0994881                   748%
Min Loop                           39     0.0861955                   648%
🙂
November 26, 2009 at 5:14 am
Martin's answer is a lot sweeter, and more robust if you need to loop through a data set, as there is no guarantee that the IDs are consecutive, e.g. record number 4 has been deleted for some reason (yes, it can happen), so the last record would never get updated. Assuming a 1% row deletion count, in an organisation with 1000 members, 10 won't get pay rises. (A quick demo of the gap problem is sketched below.)
As always, the advice is to use set-based queries rather than loops and cursors; in some cases you have to fall back on them, but those cases should be rare.
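A hypothetical demo of the gap problem (table and values invented for illustration). With IDs 1,2,5,6,7, a counter loop from 1 to COUNT(*) = 5 would only touch IDs 1, 2 and 5; Martin's MIN(ID) loop touches all five rows:

CREATE TABLE #gap_demo (id INT, salary FLOAT)
INSERT INTO #gap_demo (id, salary)
SELECT 1, 100 UNION ALL SELECT 2, 100 UNION ALL
SELECT 5, 100 UNION ALL SELECT 6, 100 UNION ALL SELECT 7, 100

DECLARE @id INT
SELECT @id = MIN(id) FROM #gap_demo
WHILE @id IS NOT NULL
BEGIN
    UPDATE #gap_demo SET salary = (salary * 115) / 100 WHERE id = @id
    SELECT @id = MIN(id) FROM #gap_demo WHERE id > @id
END

SELECT * FROM #gap_demo  -- all five rows updated, gap or no gap
DROP TABLE #gap_demo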
As an aside, one of my colleagues ran some internal tests on a somewhat larger data set (1000 rows, same number of columns) and found that the Cursor ran significantly faster than the last option.
November 26, 2009 at 5:14 am
nice one Martin,
That's a nice and easy way to do the updates.
...And independent of identity values in the table.
November 26, 2009 at 6:12 am
The last example makes assumptions about the identity column values in the main table.
You could simply add those values to the temp table instead (see the sketch below).
But Martin's solution is better.
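A sketch of that idea, with assumed names (#ids, emp_id): store the real emp IDs alongside a gap-free IDENTITY, then loop on the IDENTITY and update via the stored ID.

CREATE TABLE #ids (rn INT IDENTITY(1,1), emp_id INT)
INSERT INTO #ids (emp_id) SELECT id FROM emp

DECLARE @i INT, @last INT
SET @i = 1
SELECT @last = COUNT(*) FROM #ids

WHILE @i <= @last
BEGIN
    -- Join through the mapping table, so gaps in emp.id don't matter
    UPDATE e
    SET salary = (salary * 115) / 100
    FROM emp e
    JOIN #ids t ON t.emp_id = e.id
    WHERE t.rn = @i

    SET @i = @i + 1
END
DROP TABLE #ids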
November 26, 2009 at 6:22 am
Jason Lees-299789 (11/26/2009)
As an aside, one of my colleagues ran some internal tests on a somewhat larger data set (1000 rows, same number of columns) and found that the Cursor ran significantly faster than the last option.
Cursors and temp tables are very similar, since a cursor creates a temp table in tempdb (spooling all the data to disk before it is read back), just like a temp table does. A cursor takes an upfront hit when it allocates itself, but that overhead doesn't recur with each row fetch, and each row fetch is quite speedy.
The CPU cost per row is much greater in the identity temp table method (it does a COUNT(*) on the temp table on each iteration). With a large data set, a cursor will always start up slower but finish faster and with less CPU. On a small data set, a cursor will still start up slower, but finish slower as well.
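For reference, a typical shape for the cursor version being compared (a sketch only; the article's exact cursor code may differ):

DECLARE @id INT
DECLARE c CURSOR FAST_FORWARD FOR SELECT id FROM emp
OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE emp SET salary = (salary * 115) / 100 WHERE id = @id
    FETCH NEXT FROM c INTO @id
END
CLOSE c
DEALLOCATE c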
Bottom line: the last method makes certain assumptions about the identity column that can't always be made, so it isn't a universal solution. And of course, 99% of the time there is a set-based solution that will run circles around any procedural solution. I once tuned a stored procedure that took over 2 hours to complete by replacing a cursor with a set-based solution, bringing the execution time down to around 15 seconds. If you are forced into a procedural situation, first post your situation on a forum and let someone find a set-based solution to the problem (there's a great chance one exists); if none is found, use a method that suits the size of the data set. Martin's solution is the appropriate one here, no matter the size of the data set: it doesn't use a temp table.
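And the set-based version all of these loops stand in for (presumably what the table above calls Direct SQL): one statement, one pass over the table, no loop bookkeeping at all.

UPDATE emp SET salary = (salary * 115) / 100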
November 26, 2009 at 6:37 am
Hi All,
I tested with 24,200 rows: the cursor finished in 35 seconds and the table with the identity column in 84 seconds.
But when I changed the WHILE control to a constant, the final time was 35 seconds, equal to the cursor.
BEGIN
    CREATE TABLE #temp (id INT IDENTITY(1,1), name VARCHAR(32), salary FLOAT)

    INSERT INTO #temp
    SELECT name, salary FROM #emp

    DECLARE @i INT, @last INT

    -- Read the row count once, instead of re-evaluating it on every iteration
    SELECT @last = COUNT(id) FROM #temp
    SET @i = 1

    WHILE (@i <= @last)
    BEGIN
        -- Note: this still assumes #emp.id runs 1..@last with no gaps
        UPDATE #emp
        SET salary = (salary * 115) / 100
        WHERE #emp.id = @i

        SET @i = @i + 1
    END
END
November 26, 2009 at 7:38 am
I've been using Martin's solution for years, though slightly modified.
BEGIN
    DECLARE @min INT, @max INT

    SELECT @min = MIN(ID),
           @max = MAX(ID)   -- OR a user-defined max
    FROM emp

    -- Loop ends at @max, or when MIN(ID) returns NULL (no higher ID left)
    WHILE @min <= @max
    BEGIN
        UPDATE emp SET salary = (salary * 115) / 100 WHERE emp.id = @min
        SELECT @min = MIN(ID) FROM emp WHERE ID > @min
    END
END
November 26, 2009 at 8:30 am
Since / and * share the same precedence, does that not mean that if we have salary*115/100, the salary*115 part will happen first? Or is there no guarantee of the ordering?
Edit:
The precedence article that was linked to says: "When two operators in an expression have the same operator precedence level, they are evaluated left to right based on their position in the expression."
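A quick check of that left-to-right rule with integer literals:

SELECT 7 * 115 / 100   -- (7 * 115) / 100 = 805 / 100 = 8 (integer divide last)
SELECT 7 * (115 / 100) -- 115 / 100 = 1 first, so 7 * 1 = 7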
November 26, 2009 at 9:20 am
I found a very efficient way to DECREASE my salary.
I stayed in a government job for longer than I should have, and inflation did the rest 🙂
November 26, 2009 at 10:59 am
Actually, the author and the first two repliers are wrong about the order of precedence. If you fully understand the order of precedence, multiplication is evaluated before division. The order is as follows:
Parentheses -> Exponents -> Multiplication -> Division -> Addition -> Subtraction
This can be remembered using the following mnemonic: Please Excuse My Dear Aunt Sally. That's how I learned it in school.
But don't believe me. Try it out:
Declare @Salary int
Set @Salary = 85199
Select @Salary*115/100, (@Salary*115)/100
----------- -----------
97978 97978
(1 row(s) affected)
November 26, 2009 at 11:18 am
Despite the misinformation on operator precedence, just multiply by 1.15! Keep things DRY and simple 🙂