Viewing 11 posts - 226 through 236 (of 236 total)
That's a really strange requirement, since generally you don't have the same tables in every database. It is possible, though it would have to be done through dynamic SQL....
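A minimal sketch of that approach, assuming a hypothetical dbo.AuditLog table and cleanup statement; the real statement would be whatever you need to run in each database:

DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

-- Build one batch that visits every user database and runs the same
-- statement wherever the (hypothetical) table exists
SELECT @sql = @sql
    + N'USE ' + QUOTENAME(name) + N'; '
    + N'IF OBJECT_ID(N''dbo.AuditLog'') IS NOT NULL '
    + N'DELETE dbo.AuditLog WHERE LoggedAt < DATEADD(dd, -90, GETDATE()); '
FROM sys.databases
WHERE database_id > 4;  -- skip master, tempdb, model, msdb

EXEC (@sql);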
April 20, 2010 at 12:50 pm
It is definitely possible. Best practice may dictate creating a table specifically for it, but I can't say for certain. Personally, I've always placed universal functions/procedures in the...
April 20, 2010 at 12:28 pm
Personally, I've always used user-defined scalar functions to return the first day of the year or quarter for a given date.
Example:
CREATE FUNCTION fn_DATE_FirstDayOfYear
(@date DATETIME)
RETURNS DATETIME
AS
BEGIN
    -- Count whole years from day 0 (1900-01-01) to @date, then add
    -- them back to day 0: January 1 of @date's year at midnight
    RETURN DATEADD(yy, DATEDIFF(yy, 0, @date), 0)
END
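The quarter version mentioned above follows the same DATEADD/DATEDIFF rounding pattern (sketched here, since the original post is cut off):

CREATE FUNCTION fn_DATE_FirstDayOfQuarter
(@date DATETIME)
RETURNS DATETIME
AS
BEGIN
    -- Count whole quarters from day 0 to @date, then add them back
    RETURN DATEADD(qq, DATEDIFF(qq, 0, @date), 0)
END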
April 20, 2010 at 12:06 pm
Well, ultimately breaking it into blocks gives me complete control over the transaction log size. In this specific case, disk space is more important than performance. I can...
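For anyone following along, you can watch how much log space each block consumes with the standard command:

-- Report current log size and percent used for every database
DBCC SQLPERF(LOGSPACE);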
April 20, 2010 at 10:12 am
I had trouble removing the clustered index on InvoicedDate (it was taking a long time), but I was finally able to do it. I tested the stored procedure with...
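For reference, the drop itself is a one-liner (index and table names here are made up); it takes so long because dropping a clustered index rebuilds the entire table as a heap:

DROP INDEX IX_FactInvoice_InvoicedDate ON dbo.FactInvoice;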
April 20, 2010 at 7:40 am
Lynn Pettis (4/19/2010)
April 19, 2010 at 2:02 pm
Lynn Pettis (4/19/2010)
April 19, 2010 at 1:51 pm
The easiest way seems to be looping based on an identity key. I've broken the transaction up into groups of 50,000 rows, issuing a CHECKPOINT manually (just...
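A minimal sketch of that loop; table and column names are hypothetical, and SIMPLE recovery is assumed so the manual CHECKPOINT lets log space be reused between batches:

DECLARE @minId INT, @maxId INT, @batchSize INT;
SET @batchSize = 50000;

SELECT @minId = MIN(InvoiceID), @maxId = MAX(InvoiceID)
FROM dbo.SourceInvoice;

WHILE @minId <= @maxId
BEGIN
    INSERT INTO dbo.FactInvoice (InvoiceID, InvoicedDate, Amount)
    SELECT InvoiceID, InvoicedDate, Amount
    FROM dbo.SourceInvoice
    WHERE InvoiceID >= @minId
      AND InvoiceID <  @minId + @batchSize;

    CHECKPOINT;  -- allow the log to clear before the next batch

    SET @minId = @minId + @batchSize;
END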
April 19, 2010 at 1:40 pm
Grant Fritchey (4/19/2010)
Another option would be to use a minimally logged operation such as bcp or BULK INSERT.
I'm confused as to how BCP or BULK INSERT would help, considering the data...
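For context, that suggestion would mean round-tripping the rows through a file, something like the sketch below (paths and names hypothetical; minimal logging also requires SIMPLE or BULK_LOGGED recovery and a heap or empty target):

-- 1) Export the source rows to a native-format file
-- bcp MyDB.dbo.SourceInvoice out C:\staging\invoice.dat -n -T

-- 2) Reload them with TABLOCK so the insert can be minimally logged
BULK INSERT dbo.FactInvoice
FROM 'C:\staging\invoice.dat'
WITH (TABLOCK, BATCHSIZE = 50000);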
April 19, 2010 at 12:32 pm
Danny Sheridan (4/19/2010)
April 19, 2010 at 12:29 pm
Lynn Pettis (4/19/2010)
The first INSERT statement moves about 800,000 rows from these tables. The second INSERT statement creates about 200,000 rows in the fact table by querying the...
April 19, 2010 at 12:26 pm