May 9, 2006 at 4:13 am
Morning,
Our company has recently had to convert the datastore for one of our clients from Sybase 5 to SQL Server 2000. This was done using DTS to copy the tables and their data into a blank SQL 2000 database.
For historical and budget reasons we have not been allowed to change the structure of the database in areas that we feel could do with improvement.
As a result there is still a very poorly implemented table in the schema. It has 100-plus columns, some of which can be large text fields etc., and the table itself holds over 700,000 rows.
This table should really be normalised into a patient and calls table but it can't be as I said above.
I have noticed a very strange problem with the table's behaviour within Enterprise Manager. If I open the table, edit a row and then move off it, the following message appears:
"The entire resultset must be returned before this row can be updated"
It then wants to retrieve ALL the rows from the table before it will update the row. Subsequent updates are instantaneous, but if I close and reopen the table the same thing happens again. This rapidly consumes the memory on my PC and I get Virtual Memory warnings from the operating system.
The table does have a primary key defined, but it also has some TEXT columns. I'm not sure whether it is the number of columns on this table, the number of rows, or the TEXT columns that is causing this. I'm totally lost as to the cause, as I am a developer, not a DBA.
Could any kind DBA please give me some advice on what is causing this?
It would be very much appreciated.
Chris Denslow
Pragma Systems
mailto:chris.denslow@pragma.co.uk
May 9, 2006 at 4:30 am
Hi Chris,
What compatibility level is set on that database? Make sure it is set to 80.
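For reference, on SQL Server 2000 the compatibility level can be checked and changed with the sp_dbcmptlevel system procedure ('YourDatabase' is just a placeholder name here):

```sql
-- With no level argument, reports the current compatibility level
EXEC sp_dbcmptlevel 'YourDatabase'

-- Set the database to SQL Server 2000 behaviour (level 80)
EXEC sp_dbcmptlevel 'YourDatabase', 80
```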
regards,
Holger
May 9, 2006 at 1:40 pm
The properties of the database say that it is already on 80. Could it be showing 80 while the database is actually on something else?
Thanks for answering Holger. Any more ideas?
Cheers
Chris
May 10, 2006 at 4:06 am
I wouldn't recommend editing data in tables opened through EM at all, and especially not if there are TEXT columns - although I admit that I don't know much about the possible errors and their causes. IMHO EM is not a tool designed to edit data, but to manage data structure.
I wouldn't be surprised if only part of a TEXT column (and maybe also of a large char/varchar column) is loaded into EM when you open a table, but then when you update a row it needs to retrieve the full contents of all columns... but that's just a guess. The only occasion when I edited data in EM was when I needed to change some descriptions (a VARCHAR column) in a small table with about 10 columns and 100 rows, and was too lazy to script a one-time update... What is your reason for editing in EM? Can't you do it some other way?
If you use Query Analyzer and script the update, it should run without any problems.
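A scripted update of that kind is just a plain statement keyed on the primary key - something like this sketch, where the table and column names are only illustrative:

```sql
-- Update one row, located by its primary key, instead of editing it in EM
UPDATE dbo.Patients
SET CallNotes = 'Follow-up call scheduled'
WHERE PatientID = 12345
```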
May 10, 2006 at 9:13 am
Here's an article in support of Vladan...
Ryan Randall
Solutions are easy. Understanding the problem, now, that's the hard part.
February 23, 2007 at 3:46 am
I am having the same problem, even when I am NOT using Enterprise Manager.
But when I do use it, I get the "The entire resultset ..." message the first time.
When I use a direct SQL command, an UPDATE of one record takes over 2 minutes the first time.
This would be faster by hand! What is going on in MSSQL? Please, does anyone have an idea?
(My table has a primary key index, 45,000 records and includes TEXT fields)
February 23, 2007 at 4:01 am
Hi Mads,
please post the DDL for your table (CREATE TABLE ... - you don't have to include all columns, just those that enter in any way into this update), and the actual query you are using to update the record that takes over 2 minutes.
I have limited experience with TEXT datatype, but I'm sure someone will be able to come up with ideas what to change.
February 23, 2007 at 5:29 am
The DDL looks like this:
CREATE TABLE [dbo].[Persondata] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[Tid] [timestamp] NULL ,
[Fornavn] [nvarchar] (80) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
....
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
Using the Update:
UPDATE Persondata SET Fornavn='ABC' WHERE ID=12344
Yes - it takes >2 minutes.
I have moved this table using a Transformation with "Enable Identity Insert" checked under Options. Maybe that has an effect, since ID is an identity key ...
Extraordinary quick reaction on my question - thank you!
February 23, 2007 at 6:00 am
Well, that doesn't even include any TEXT columns... and with such a simple update (no references to other tables, only one column in the WHERE clause), on a table with as few rows as yours it should run quickly even without proper indexes. Strange... did you really post the entire update, or was that simplified? If it was, please post the precise statement that takes 2 minutes to execute.
What did you find in execution plan in QA?
Are you running the update from Query Analyzer?
What index is on the column ID - clustered, nonclustered, unique? (I would probably use NONCLUSTERED UNIQUE in this case.)
Can you drop and recreate the index on column ID and try again, whether it makes any difference?
Enabling identity insert during the insert into the table should have no effect on subsequent updates, as far as I know.
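The index checks suggested above could look something like this in Query Analyzer (the constraint name PK_Persondata is an assumption - sp_helpindex will show the real one):

```sql
-- Show the existing indexes on the table
EXEC sp_helpindex 'dbo.Persondata'

-- Drop the primary key and recreate it as a nonclustered unique index
ALTER TABLE dbo.Persondata DROP CONSTRAINT PK_Persondata
ALTER TABLE dbo.Persondata ADD CONSTRAINT PK_Persondata
    PRIMARY KEY NONCLUSTERED (ID)
```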
February 23, 2007 at 6:13 am
I will look at your suggestions and return.
February 26, 2007 at 2:21 am
ID is a clustered primary key index.
A SQL statement that gives the 2 min. wait was
UPDATE Persondata SET Fornavn='John Test' WHERE ID=370
I run the SQL from QA, but have also tried from ODBC. The execution plan has a "Compute Scalar - Computing new values from existing values in a row" (the identity key?) as its heaviest operation: estimated cost 600%.
There were TEXT fields in the table.
Changing them to nvarchar(254) seems to have brought the time down from 2 min. to around 15 sec. That is better, but still slow. Testing to see if the improvement lasts.
February 26, 2007 at 2:32 am
I have tested the change to nvarchar, and am now back at 2 min.
February 26, 2007 at 2:56 am
Not sure, but I think the Compute Scalar deals with the column that will be updated (i.e. Fornavn). You might try converting the new value to NVARCHAR(80) explicitly and see if it makes any difference:
SET Fornavn = CAST('John Test' AS NVARCHAR(80))
I'm sorry, but I really have no idea why this happens to you, so I'm just trying to guess possible causes.
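Put into the full statement posted earlier, the explicit conversion would look like:

```sql
UPDATE Persondata
SET Fornavn = CAST('John Test' AS NVARCHAR(80))
WHERE ID = 370
```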
February 28, 2007 at 7:24 am
Problem solved - but still not understood.
Changing all nvarchar columns to varchar brought the response time down by a factor of 10,000 - why?
Anyway, thanks for working on the problem.
March 7, 2007 at 3:19 am
I've been away for a week and couldn't reply... I don't know WHY the conversion takes so long, but SQL Server performs a conversion internally when you compare two values that have different datatypes. It is always better to do an explicit conversion, especially of parameters - otherwise SQL Server might decide to convert the values of the column (for all rows) instead. I think this is what happened here, but even so it should not have been that slow.
Unless you work with some (for Europeans) exotic languages like Chinese or Japanese, VARCHAR should be preferred over NVARCHAR - so your solution is fine.
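One place this column-side conversion bites is the WHERE clause: NVARCHAR has higher datatype precedence than VARCHAR, so comparing a VARCHAR column to an N'...' literal converts the column for every row and defeats an index seek. A small sketch using the Fornavn column from the earlier DDL (after its change to VARCHAR):

```sql
-- Literal type matches the VARCHAR column: the optimizer can seek an index
SELECT ID FROM Persondata WHERE Fornavn = 'John'

-- N'...' is NVARCHAR, which has higher precedence, so the Fornavn column
-- is converted for every row and an index on it cannot be used for a seek
SELECT ID FROM Persondata WHERE Fornavn = N'John'
```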