November 4, 2009 at 5:36 am
Hi All,
We have a database named 'Track' running on 64-bit SQL Server 2005 Enterprise Edition.
Currently the Track DB occupies about 100 GB of space, with 77 GB of data and 16 GB of index, mainly in the PRIMARY filegroup.
The primary filegroup has one data file, with autogrowth enabled at 100 MB and no size limit. The log file is also set to autogrow.
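By the way, the growth settings can be double-checked with a standard catalog query like this (sizes come back in 8 KB pages, hence the conversion to MB):
-- Check file sizes and growth settings for the current database.
-- size and growth are reported in 8 KB pages, so multiply by 8 / 1024 for MB.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting,
       max_size  -- -1 means unrestricted growth
FROM sys.database_files;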
When I execute a 'select * into' query, the disk space is eaten up rapidly, mainly by the mdf file.
It doesn't complete after 30 minutes, so I have to stop the query.
Now the mdf file is 160 GB, an increase of about 60 GB.
The result of 'sp_spaceused' shows:
database_name | database_size | unallocated_space
Track | 163649.94MB | 70518.09MB
reserved | data | index_size | unused
94258216KB | 77720304KB | 16200240KB | 337672KB
Now I have to shrink the database data file to free the occupied space.
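The shrink I'm planning is along these lines ('Track_Data' is only my guess at the logical file name; the real one is in sys.database_files):
USE Track;
-- Shrink the data file back toward ~100 GB; the target size is in MB.
-- 'Track_Data' is an assumed logical file name.
DBCC SHRINKFILE (Track_Data, 102400);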
What's the problem?
November 4, 2009 at 7:55 am
pink.thereisnoif (11/4/2009)
Hi All, ... When I execute a 'select * into' query, the disk space is eaten up rapidly, mainly by the mdf file. It doesn't complete after 30 minutes, so I have to stop the query. Now the mdf file is 160 GB, an increase of about 60 GB. ... What's the problem?
The problem is your 'select * into' query.
That literally says to make a copy of your huge table and put the copy into another table. If that table is 100 GB, well... it'll take a while to grab the space and copy 100 GB of data.
Most likely you do not need a copy of the data at all; a CTE or just a tighter WHERE clause in your process would probably do.
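For example, something like this (table and column names here are just placeholders):
-- Instead of copying the whole table, copy only the slice you need.
-- 'big_table', 'smaller_copy', and the columns are placeholders for illustration.
SELECT col1, col2
INTO smaller_copy
FROM big_table
WHERE some_date_col >= '2009-10-01'
  AND some_date_col <  '2009-11-01';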
Lowell
November 9, 2009 at 12:53 am
Thanks Lowell,
Yes, the problem comes from the 'select into' query.
The query was executed by a careless dev, and it produced an enormous number of result rows...
The query is as below:
select b.id, a.amount into result
from log a, member b
where a.logtime < '2009-11-01'
There is no join condition between the two tables, so it's a Cartesian product: table 'log' has 2 million rows and 'member' has more than 30 million rows.
Imagine how many records the query will produce...
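What was presumably intended was an inner join on the member key, something like this ('member_id' is my assumption about the schema):
-- The original had no join predicate between 'log' and 'member',
-- so each qualifying log row paired with every member row.
-- With an explicit join condition the result is bounded by the
-- matching log rows. ('member_id' is an assumed column name.)
SELECT b.id, a.amount
INTO result
FROM log a
INNER JOIN member b ON b.id = a.member_id
WHERE a.logtime < '2009-11-01';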
October 6, 2010 at 10:22 am
There is no join condition between the two tables, so it's a Cartesian product: table 'log' has 2 million rows and 'member' has more than 30 million rows.
Imagine how many records the query will produce...
I'm imagining about 60,000,000,000,000 😉
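(2,000,000 log rows × 30,000,000 member rows = 60,000,000,000,000 rows, assuming every log row passes the date filter.)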
October 8, 2010 at 6:15 am
My question is: why do you need to select into another table in the same database?
November 10, 2010 at 11:00 am
sejal p gudhka (10/8/2010)
My question is: why do you need to select into another table in the same database?
lol. a question that every DBA has asked a DEV at least once 😛
November 10, 2010 at 11:12 am
sejal p gudhka (10/8/2010)
My question is: why do you need to select into another table in the same database?
I've done it on rare occasions to produce flattened results during overnight rebuilds for reporting systems. It's not a common practice, though, and would usually only be done during maintenance or data refreshes.
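As a sketch, that kind of overnight flatten looks something like this (all table and column names here are invented for illustration):
-- Rebuild a flattened reporting table during the overnight window.
-- All object names are illustrative, not from a real schema.
IF OBJECT_ID('report_flat', 'U') IS NOT NULL
    DROP TABLE report_flat;

SELECT o.order_id,
       o.order_date,
       c.customer_name,
       o.amount
INTO report_flat
FROM orders o
INNER JOIN customers c ON c.customer_id = o.customer_id
WHERE o.order_date >= DATEADD(month, -1, GETDATE());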
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA