Viewing 15 posts - 841 through 855 (of 859 total)
Hi,
What is it you are exactly looking for?
You can use this to get a list of users:
SELECT * FROM sys.database_principals
WHERE TYPE='S'
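If you want Windows users and groups as well, a sketch along the same lines (type codes taken from sys.database_principals):
SELECT name, type_desc, create_date
FROM sys.database_principals
WHERE TYPE IN ('S', 'U', 'G') -- S = SQL user, U = Windows user, G = Windows group
ORDER BY name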
November 21, 2008 at 7:17 am
Hi,
Yes, it's possible. You should look at some of the High Availability methods. You can use any of these:
Replication - If you want to transfer the data over instantaneously in...
November 21, 2008 at 7:01 am
Is it true that one database can have the maximum size of 8GB in sql server 2005?
Which edition of SQL Server are you using?
Check this out:
http://www.microsoft.com/Sqlserver/2005/en/us/compare-features.aspx
November 21, 2008 at 6:36 am
Hi,
No, the transaction log does not get cleared just because you delete the data. It logs the transactions you performed; you need to truncate the transaction log. If the log...
November 21, 2008 at 6:20 am
Hi,
Yes, this is the fastest way to load bulk data into the table. It works smoothly when both the target table and the input file have the same number of columns....
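A minimal BULK INSERT sketch (the table name, file path, and header row are assumed for illustration):
BULK INSERT dbo.MyTable
FROM 'C:\data\input.csv'
WITH (
FIELDTERMINATOR = ',', -- column delimiter
ROWTERMINATOR = '\n', -- row delimiter
FIRSTROW = 2 -- skip the header row, if the file has one
)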
November 19, 2008 at 10:03 pm
P_DBA (11/19/2008)
Hi,You could look at using BULK INSERT specifying column and row delimiters...
this could be useful:
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/
before doing this you can look at changing the recovery model to bulk logged
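A sketch of the recovery model switch (database name assumed; remember to switch back to FULL and take a log backup afterwards):
ALTER DATABASE mydatabase SET RECOVERY BULK_LOGGED
GO
-- run the BULK INSERT here
ALTER DATABASE mydatabase SET RECOVERY FULL
GO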
November 19, 2008 at 7:59 pm
Hi,
You could look at using BULK INSERT specifying column and row delimiters...
this could be useful:
http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-bulk-insert-load-comma-delimited-file-into-sql-server/
November 19, 2008 at 7:55 pm
As you say, if you have a high level of transactions, taking LOG backups frequently keeps the log size to a minimum, and monitoring the size of the log file is a must.
I would create...
November 19, 2008 at 7:18 pm
Hi,
its my pleasure.
But you need to take some proactive measures so the transaction log does not fill up again.
Reasons:
Due to Uncommitted transactions
run DBCC OPENTRAN to check for any open transactions
Due to...
November 19, 2008 at 4:18 pm
What is the error you are getting? post it here
November 19, 2008 at 10:26 am
Why are you shrinking the file twice? Just back up your log first and then shrink the file as:
USE mydatabase
GO
BACKUP LOG mydatabase WITH TRUNCATE_ONLY
GO
then try to shrink the log file.....
DBCC...
November 19, 2008 at 9:52 am
Hi,
Yes, if you want to publish your database-level objects you can use (merge) Replication; if you want to mirror the database, use either log shipping or DB mirroring.
November 18, 2008 at 11:48 pm
Hi,
This really depends on what you want to implement. Since you have only two servers, DB Mirroring would be advisable, as you don't have to mirror it to multiple servers....
November 18, 2008 at 10:40 pm
You can use:
DBCC SHRINKFILE('logfilename') to shrink the log file, specifying the target file size....to do a once-off
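For example, a once-off shrink to a 100 MB target (the logical file name and size are assumed):
DBCC SHRINKFILE('mydatabase_log', 100)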
Please refer to this post, which describes your situation: ;)
http://www.sqlservercentral.com/Forums/Topic530467-146-1.aspx
The following proactive measures...
November 18, 2008 at 8:23 pm
run this statement:
SELECT log_reuse_wait FROM sys.databases WHERE name='mydatabase'
post the value of log_reuse_wait column
0 = Nothing; 1 = Checkpoint; 2 = Log backup; 3 = Active backup or restore; 4 = Active transaction; 5...
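Alternatively, the description column saves you decoding the number by hand:
SELECT log_reuse_wait, log_reuse_wait_desc FROM sys.databases WHERE name='mydatabase'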
November 18, 2008 at 8:14 pm