June 15, 2005 at 5:11 am
Before we installed SP4, a query like:
select * from tbl where id = 123456.00 took 0.01 s.
After SP4:
select * from tbl where id = 123456.00 takes 5 s, but
select * from tbl where id = 123456 takes 0.01 s.
We cannot truncate the .00 from the end of the number, because it is generated by the framework.
ID is DECIMAL(10,0) and is the primary key of table tbl.
Is there any way to solve this issue, or do we have to downgrade to SP3?
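For reference, a minimal sketch of how the two timings can be compared server-side, assuming SQL Server 2000 and a hypothetical table dbo.tbl with id DECIMAL(10,0) as its primary key:

-- compare server-side timings for the two literal forms
SET STATISTICS TIME ON;

SELECT * FROM dbo.tbl WHERE id = 123456;      -- integer literal: fast before and after SP4
SELECT * FROM dbo.tbl WHERE id = 123456.00;   -- decimal literal with scale 2: slow after SP4

SET STATISTICS TIME OFF;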
Thank you.
Ales
June 15, 2005 at 6:50 am
Check the plan of the proc. When you use 123456 you'll probably get a clustered index seek, but with 123456.00 you'll probably get a clustered index scan. The only way I can see to solve the problem would be to convert the parameter to an int or decimal(10,0).
Are you seeing this behavior in stored procs, or in the program with ad hoc queries?
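A sketch of both points, assuming SQL Server 2000 and the same hypothetical dbo.tbl as above:

-- show the actual plan for each form (on SQL Server 2000, STATISTICS PROFILE prints it)
SET STATISTICS PROFILE ON;
SELECT * FROM dbo.tbl WHERE id = 123456;      -- expect a clustered index seek
SELECT * FROM dbo.tbl WHERE id = 123456.00;   -- may show a clustered index scan
SET STATISTICS PROFILE OFF;

-- converting the literal to the column's exact type should restore the seek
SELECT * FROM dbo.tbl WHERE id = CAST(123456.00 AS DECIMAL(10,0));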
June 15, 2005 at 7:02 am
We use ad hoc queries. I know the solution is to send the number without the .00, but we cannot do that, because PowerBuilder sends it from its API (I think). I'm just surprised that with SP3 the result time was the same for both statements.
I think there is no other way but to downgrade to SP3. 🙁
Ales
June 15, 2005 at 8:13 am
How long would it take to modify the application?
Also, why are you using decimal(10,0)?
Can't you use a bigint or an int instead? It would probably solve the problem and also allow for a very big range of values.
June 15, 2005 at 8:36 am
Modifying the application would take about a month. Why are we using decimal(10,0) instead of bigint or int? Because a bigint column takes more disk space than a decimal (I think), or maybe for some other reason. Our application has been running since MSSQL 6.5.
The problem is that we are using PowerBuilder, which has a single number type covering int, decimal, and so on. When it creates the query, it ends the number with .00000000.
We are trying to find a solution to our problem on the PowerBuilder site.
Anyway thank you for your help.
Ales
June 15, 2005 at 8:51 am
Just one last note: bigint takes 8 bytes and handles numbers from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Decimal(10,?) takes at least 9 bytes (there might be some overhead associated with its use, but I'm not sure).
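A quick way to check this yourself, sketched below (DATALENGTH returns the number of bytes used to store the value; per the documentation, decimal with precision 10-19 is stored in 9 bytes):

-- compare the storage size of the two candidate types
SELECT DATALENGTH(CAST(123456 AS bigint))        AS bigint_bytes,        -- 8
       DATALENGTH(CAST(123456 AS decimal(10,0))) AS decimal_10_0_bytes;  -- 9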
June 16, 2005 at 12:07 am
Just my 2 cents.
Rule no. 1 in _any_ RDBMS: tell your system what you know! If you know it's an integer, don't declare a decimal; if you know it's a date, don't declare a char().
Rule no. 2 in _any_ RDBMS: use the same datatype (and precision) as defined in your table column when using a predicate, sort, group, ...
Any violation of these rules will be paid for. Maybe you will not notice it, but some time, mostly at a moment you don't expect it, it will bite you in the back.
For this single query it costs you 4.9 seconds; you can imagine what the rest may be doing (n times a minute/hour/day/week/month/year...).
Our DBMS engines keep getting more intelligent and will perform better if we provide the correct datatypes, indexes, normalisation, ...
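A small sketch of both rules; the table and column names are made up for illustration:

-- rule 1: declare what you know
CREATE TABLE dbo.sample_orders
(
    id       int           NOT NULL PRIMARY KEY,  -- an integer key declared as an integer
    sold_on  datetime      NOT NULL,              -- a date stored as a date, not as char()
    amount   decimal(10,2) NOT NULL               -- decimal only where fractions really occur
);

-- rule 2: the predicate uses the column's own type, so the index can be used directly
SELECT * FROM dbo.sample_orders WHERE id = 123456;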
Johan
June 16, 2005 at 12:45 am
to: alzbda
I know it is better to use the correct datatype, meaning the datatype from the table definition, but our application was written in PowerBuilder 6.5, and the PB runtime turns the number 1234 into 1234.000000000000 before it sends the query to the DB.
This is our problem.
One solution is to send '1234'; then PB does not add .00000000000 at the end of the number, and the query takes the same time as when we send the number without the .00000.
By the way, I was surprised when I tried:
select * from table where id = 1234        0.01 s
select * from table where id = 1234.00     5 s
select * from table where id = '1234.00'   0.01 s
I can't understand why the conversion from the wrong decimal type to the decimal type used in the table definition takes longer than the conversion from string to the decimal type used in the table definition :-(.
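One possible explanation, sketched with the same hypothetical dbo.tbl: by data type precedence a character literal is converted to the column's decimal type, so the seek on id is kept, whereas the decimal literal 1234.00 carries a different scale than decimal(10,0), and the SP4 optimizer appears to handle that mismatch by scanning:

-- the string has lower precedence, so it is converted to decimal(10,0): seek expected
SELECT * FROM dbo.tbl WHERE id = '1234.00';
-- the literal is a decimal with scale 2, not decimal(10,0): a scan was observed on SP4
SELECT * FROM dbo.tbl WHERE id = 1234.00;
-- stating the column's exact type on the literal side keeps the seek
SELECT * FROM dbo.tbl WHERE id = CAST(1234.00 AS decimal(10,0));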
Ales
June 16, 2005 at 1:29 am
I cannot help with PB.
This reply was meant to support you in getting approval for the app maintenance effort.
Maybe it would be easier to use parameterised queries to get control over the datatypes being used.
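For example, a parameterised call can be sketched on the server side with sp_executesql; because the parameter is declared as the column's exact type, the column itself never has to be converted:

EXEC sp_executesql
    N'SELECT * FROM dbo.tbl WHERE id = @id',  -- dbo.tbl is the hypothetical table from above
    N'@id decimal(10,0)',                     -- parameter typed to match the column
    @id = 123456;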
Johan
June 16, 2005 at 1:40 am
You are right, if we could use parameterised queries it would be easier to control the datatypes, but our app was written in PB :-(.
The current version of our application is in .NET (C#) and we have control over everything we send to the DB (MSSQL, ORA, Postgres, ...). There are many per-database exceptions for performance.
But our old app depends on the PB runtime :-(.
June 16, 2005 at 5:51 am
Ales,
Did you try dropping and rebuilding the index? There may have been some modification to the index during the SP4 upgrade. (I don't have anything concrete to say that is the problem; this is just a suggestion to see if it helps.)
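On SQL Server 2000 a rebuild could be sketched with DBCC DBREINDEX; the table and constraint names here are hypothetical:

-- rebuild all indexes on the table (empty index name = all, 0 = keep the original fill factor)
DBCC DBREINDEX ('dbo.tbl', '', 0);

-- or rebuild only the primary key's clustered index by its constraint name
DBCC DBREINDEX ('dbo.tbl', 'PK_tbl', 0);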
Mark
June 16, 2005 at 6:30 am
I can try it, but it is the primary key of the table, so I will try it on another column which is not the PK. Then I can report back.
Thanks