March 18, 2009 at 1:10 am
Comments posted to this topic are about the item Guest Editorial: Short Transactions
March 18, 2009 at 3:32 am
Of course, there are issues to deal with here, both in terms of the extra burden snapshot isolation places on tempdb, and the fact that swapping to this mode may cause some issues with existing applications that weren't designed with it in mind. However, as these issues are planned for and addressed, I envisage snapshot isolation becoming the norm.
Please! When is this going to happen? In fact, when is the shared tempdb going to be removed entirely? If anything stops SQL Server from scaling with multiple databases on the one instance, I think it's the tempdb.
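For reference, switching a database over is a pair of ALTER DATABASE options; "Sales" here is just a placeholder database name:

    ALTER DATABASE Sales SET ALLOW_SNAPSHOT_ISOLATION ON;  -- enables the SNAPSHOT isolation level on demand
    ALTER DATABASE Sales SET READ_COMMITTED_SNAPSHOT ON;   -- makes ordinary READ COMMITTED reads use row versions

Either option makes SQL Server start maintaining the row version store in tempdb, which is where the extra tempdb burden mentioned above comes from.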
March 18, 2009 at 7:02 am
A good reminder to look at the overall picture.
I have had to write a stored procedure to prepare invoices. There are several sub-tables, such as general_ledger_distribution, invoice_discount, sales_invoice_product, etc. If something fails, both the main table and all sub-tables MUST revert to their original state. This in turn calls for a lengthy transaction. When there are 800 invoices to be generated, this takes about 15 seconds, but it is only run, at best, once or twice a day. Mind you, the process is yet to be optimized, as there is an obvious lack of indexes on foreign keys, etc.
There is also a second stored procedure which takes an incoming customer purchase order containing 800 distinct orders. A lot of information and validation is required, such as comparing the EDI price against the customer_item_price table, etc.
Not using a transaction means going back to each table involved to delete the records left behind when another step fails. And should one of those DELETEs fail, additional recovery is needed to wipe out the unwanted records. A transaction will not help with that cleanup either: the failure of one step only results in all of the DELETE steps being rolled back, and all of the unwanted records remain in the database.
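For what it's worth, the all-or-nothing behaviour described above is exactly what a TRY/CATCH block around one transaction gives you. A minimal sketch; only the sub-table names come from the post above, while the main table sales_invoice, the columns, and the values are placeholders:

    DECLARE @invoice_id INT;
    SET @invoice_id = 12345;  -- placeholder value for illustration

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Main table first...
        INSERT INTO sales_invoice (invoice_id, invoice_date)
        VALUES (@invoice_id, GETDATE());

        -- ...then the sub-tables.
        INSERT INTO general_ledger_distribution (invoice_id, account_no, amount)
        VALUES (@invoice_id, '4000', 100.00);

        INSERT INTO invoice_discount (invoice_id, discount_pct)
        VALUES (@invoice_id, 5.0);

        INSERT INTO sales_invoice_product (invoice_id, product_id, quantity)
        VALUES (@invoice_id, 42, 10);

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- One failure reverts the main table and every sub-table in a single step.
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;

        RAISERROR('Invoice generation failed; all changes rolled back.', 16, 1);
    END CATCH;

Looping over 800 invoices inside one such transaction is what makes it lengthy, and that is exactly the trade-off the editorial is about.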
March 18, 2009 at 7:53 am
One of the great things about SQL Server is the simplicity of the database architecture, particularly as compared to Oracle. Do we want to go down the road of having some equivalent to redo logs? Doesn't Oracle return 'snapshot too old' errors if you run out of redo log space? So there are limitations to this form of locking strategy, and an imposed limit on how long a query can run.
I am not an expert on this, but I'm not sure this is a better way to handle concurrency once you take the extra load of maintaining the snapshot into account.
March 18, 2009 at 8:42 am
I am fairly new to the SQL Server world, having been in this environment for a few years now. Previously my company was using Oracle for 15 years or so. There are many aspects of SQL Server that I prefer over Oracle, especially ease of administration. I will say, however, that on a system with similar transactions I never once experienced a transaction/concurrency problem with the Oracle database; whereas with SQL Server we have struggled with this issue, having to resort to dirty reads for some of our reporting to avert concurrency problems. It is probably true that some of our transactions could be shorter or tuned better, but that was also the case in Oracle.
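For anyone unfamiliar, "dirty reads" here means reading uncommitted data, either session-wide or per table; dbo.SalesOrders and the column names are placeholders:

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- session-wide dirty reads
    SELECT order_id, order_total FROM dbo.SalesOrders;

    -- or the per-table equivalent:
    SELECT order_id, order_total FROM dbo.SalesOrders WITH (NOLOCK);

Reports stop blocking behind writers, at the cost of possibly seeing rows that are later rolled back.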
March 18, 2009 at 9:07 am
tempdb is something they're considering changing (multiple tempdbs per instance). For 90+% of instances it doesn't matter, but it definitely holds back scaling. Reporting Services already does this, and MS recognizes it is a bit of an issue.
March 18, 2009 at 9:10 am
I'm not as familiar with Oracle, but I do worry about having too much stuff dropped into tempdb. I'd rather have some sort of redo area per database to handle changed rows.
I also think that transactions are one of those areas where it's hard for people to learn what good practice is.
March 18, 2009 at 9:46 am
Seems I should have referred to 'undo' rather than 'redo'. See, I said Oracle was confusing.
If this area did not go into tempdb, it would have to go into each database, perhaps as a third type of file? Either way, it's getting more complex.
March 18, 2009 at 10:08 am
SQL Server has ambitions to match this ability with its new snapshot isolation level. When enabled, SQL Server will maintain in tempdb a time-stamped "version store" of all data changes. Instead of blocking when it encounters an exclusive lock, a reader transaction will simply "read around" the lock, retrieving the version of the row consistent with the time the transaction started.
SQL 2005 was released about 3.5 years ago now; this isn't all *that* new of an ambition 😉
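As a concrete illustration of the "read around" behaviour (a sketch assuming ALLOW_SNAPSHOT_ISOLATION is already ON and a hypothetical dbo.Accounts table):

    -- Session 1 (writer): updates a row and holds its exclusive lock.
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET balance = balance - 100 WHERE account_id = 1;

    -- Session 2 (reader): does not block; it gets the pre-update balance
    -- from the version store in tempdb.
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
    SELECT balance FROM dbo.Accounts WHERE account_id = 1;
    COMMIT TRANSACTION;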
March 18, 2009 at 10:20 am
whereas with SQL Server we have struggled with this issue, having to resort to dirty reads for some of our reporting to avert concurrency problems.
The main reason for transaction problems in SQL Server is that Microsoft provides many implementations of transactions, and its docs tell developers a resource manager is needed. That means when those transactions are propagated into SQL Server, code is needed to convert non-atomic transactions into the atomic version SQL Server expects. Few developers do that; instead they post useless bugs on the MSDN forums or Connect.
I have resolved all but a few of these threads when I find them, because I separate the transactions into product groups, resolve what I can, and send the user to the product owners within Microsoft. Microsoft knows its developers don't know the difference, so I think Microsoft needs to provide the atomic conversion code block in either ADO.NET or the SQL client point of connection.
But wait, it is already there: the ubiquitous MSDTC.
Kind regards,
Gift Peddie