November 26, 2013 at 7:21 am
Here's a mistake I see developers make a lot (full disclosure: I am both a .NET developer and a SQL developer with a touch of DBA experience): they think about SQL Server code, especially procedures, as if it were .NET code. So, it seems that 80% of the time, I end up seeing a cursor, and frequently XML processing where it just wasn't necessary.
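To make that concrete, here's a minimal sketch (table and column names are hypothetical) of the row-at-a-time habit versus the set-based equivalent; both produce the same result, but the second lets the engine do the work in one pass:

-- Cursor version (the .NET-style, row-at-a-time habit):
DECLARE @OrderID int;
DECLARE c CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'Pending';
OPEN c;
FETCH NEXT FROM c INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderID = @OrderID;
    FETCH NEXT FROM c INTO @OrderID;
END
CLOSE c;
DEALLOCATE c;

-- Set-based equivalent: one statement, one pass.
UPDATE dbo.Orders SET Status = 'Processed' WHERE Status = 'Pending';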
November 26, 2013 at 9:21 am
I know this is an old thread, but it was re-posted on Steve's site. The biggest problem with commercial software is that vendors love to woo executives and convince them that their software is really great. The executives or managers purchase the application without consulting the DBA team. They pay a lot of money for some of this software and assume the vendor really knows what they are talking about. Management assumes the vendor knows more than their internal DBAs, because the vendor is in the business of creating the software.
What works for a small or medium-sized business can be entirely inadequate for a larger enterprise with millions of transactions weekly. The vendor keeps suggesting larger and larger hardware when the root cause is poor database design. Eventually the data grows beyond the point where simply adding hardware resources fixes the problem. It's like trying to use QuickBooks to run a company the size of General Electric: it just will not scale.
My problem is convincing the executive team that they really need to consult the IT organization before they make these purchases; we could save them a lot of headaches down the road.
November 26, 2013 at 12:46 pm
Here's one that came up recently in my shop: the abuse of tempdb. Developers creating temp tables, then creating indexes on those temp tables, in a highly transactional environment. My logs are constantly filling up with deadlock entries related to schema locking in tempdb. Their response after we showed them the Microsoft documentation on what not to do: "Let's use SSD for tempdb"
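For what it's worth, one mitigation (a sketch, with hypothetical names): declare the key or index inline in the CREATE TABLE statement rather than as separate DDL afterwards. Running DDL against a temp table after it is created prevents SQL Server from caching the temp table definition, which adds to tempdb metadata contention:

-- Pattern that hurts: separate DDL after creation defeats temp table caching.
CREATE TABLE #work (id int, val varchar(50));
CREATE INDEX ix_work ON #work (id);  -- extra DDL, extra schema locks

-- Gentler pattern: declare the key inline and leave the constraint
-- unnamed, so the cached definition can be reused.
CREATE TABLE #work2 (id int PRIMARY KEY, val varchar(50));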
November 26, 2013 at 12:56 pm
We have one of those too!!! We must have a common vendor. One server was set up with a 4-SSD RAID 10 array. Another vendor, a data warehouse "expert", said his database tables were too high-performance for indexes. Seems the "expert" they hired had no idea how a relational database engine worked.
They threw more hardware at that issue too. A single index would have reduced his run time from minutes to seconds.
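To put the fix in perspective, it was on the order of a single statement like this (a sketch; the real table and column names differ):

-- One nonclustered index turned the minutes-long scan into a seek.
CREATE NONCLUSTERED INDEX IX_FactSales_CustomerID
    ON dbo.FactSales (CustomerID)
    INCLUDE (SaleDate, Amount);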
November 26, 2013 at 3:35 pm
1. Not using SSIS (and writing awful, single threaded non-tsql/non-SSIS code)
2. Not using SQL Server Service Broker (and writing awful polling jobs)
3. Not using SSRS (and destroying SQL servers instances because there's no caching or security)
4. Not using the native SQLS security model (and having huge quantities of weak code implementing anon security)
5. Not using SSAS (and introducing expensive 3rd party tools or weak open source ones)
6. Not understanding that triggers are great (and instead writing enormous t-sql sprocs and then cloning the logic everywhere; see the audit sketch after this list)
7. Not modularising code (and cutting and pasting the same code through hundreds of sprocs)
8. Not using CLR code (and writing string handling absurdities in t-SQL)
9. Not using Linked Servers (and writing non-tsql code to do the same)
10. Using ORM tools (and wondering why the SQL code is so slow or the data model on the SQLS makes no sense to anyone looking at the model at the database level)
11. Not understanding SQLS Replication (and writing huge amounts of code to do the same)
12. Not understanding the SQLS licensing model (and not using SQL Express where it should be used or delivering into production and then informing the customer of the huge unplanned additional licensing bill)
13. Not understanding the HA features (and saying it can't be done or ignoring the issue until too late)
14. Not understanding how snapshots work (and then creating huge locking issues on whole-of-database jobs and/or requiring the DB to be unavailable for hours)
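On point 6, a minimal sketch of the kind of centralisation I mean (table and column names are hypothetical): one trigger holds the audit logic that would otherwise be cloned into every sproc that touches the table.

CREATE TRIGGER trg_Accounts_Audit
ON dbo.Accounts
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- deleted holds the pre-update rows, inserted the post-update rows.
    INSERT INTO dbo.AccountsAudit (AccountID, OldBalance, NewBalance, ChangedAt)
    SELECT d.AccountID, d.Balance, i.Balance, SYSDATETIME()
    FROM deleted AS d
    JOIN inserted AS i ON i.AccountID = d.AccountID;
END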
... and at the dev level ...
1. Not understanding Indexes ('nuff said)
2. Not understanding Synonyms (and writing fragile unpromotable code)
3. Not keeping up-to-date on t-SQL syntax (e.g. MERGE)
4. Not using system views and generating t-sql from them
5. Not writing clean-up processes (and wondering why some tables are so huge!)
6. Not understanding their production environment (and writing code that only works on their local SQLS instance of a few hundred rows in the prime tables when production env has millions)
7. writing "database agnostic" apps (typically using ORMs) when there is no requirement for it (geez the customer has paid good money to standardise around SQLS)
8. Using no source control or fragile source control when good SC tools exist but they're too "busy" to set them up
9. Not implementing a consistent promotion procedure to get from dev env to production env (buy SQL Compare - please!)
10. Using 'N' (Unicode) char types when 8-bit will work fine
11. Not being able to read optimiser output (or even knowing it exists)
12. Not understanding how to use FOR XML (and writing multiple select statements and creating the XML using code off the SQL Server; see the sketch below)
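On point 12, a minimal FOR XML sketch (hypothetical names): one statement returns the shaped XML that would otherwise take several selects plus client-side assembly.

-- One round trip, shaped on the server.
SELECT c.CustomerID AS '@id',
       c.Name       AS 'Name',
       (SELECT o.OrderID   AS '@id',
               o.OrderDate AS 'Date'
        FROM dbo.Orders AS o
        WHERE o.CustomerID = c.CustomerID
        FOR XML PATH('Order'), TYPE)
FROM dbo.Customers AS c
FOR XML PATH('Customer'), ROOT('Customers');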
November 26, 2013 at 3:37 pm
One mistake is the overuse of cursors. In most cases, cursors can be replaced with set-based statements.
Another mistake is implicit conversion in queries because of wrongly declared variables and temp table columns.
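For example (a sketch with hypothetical names): if dbo.Customers.Code is varchar(10), an nvarchar variable wins on data type precedence, so the column side gets implicitly converted and an index seek can degrade into a scan.

DECLARE @code nvarchar(10) = N'ABC123';
SELECT * FROM dbo.Customers WHERE Code = @code;   -- column converted: scan

DECLARE @code2 varchar(10) = 'ABC123';
SELECT * FROM dbo.Customers WHERE Code = @code2;  -- matching type: seek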
Regards,
IgorMi
Igor Micev,My blog: www.igormicev.com
November 26, 2013 at 10:26 pm
One of my most favorite is the join condition
isnull(a.col1, '') = isnull(b.col1, '')
and if that is not helpful, people usually go for a custom solution like
udf_isnull(a.col1) = udf_isnull(b.col1)
😀
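For anyone wondering why that hurts: wrapping the join columns in a function (built-in or user-defined) makes the predicate non-SARGable, so no index on col1 can be seeked, and ISNULL to '' also silently treats NULL and empty string as equal. A sketch of a rewrite (hypothetical tables ta/tb) that keeps the NULL-matches-NULL semantics without hiding the columns:

SELECT a.id, b.id
FROM dbo.ta AS a
JOIN dbo.tb AS b
  ON (a.col1 = b.col1
      OR (a.col1 IS NULL AND b.col1 IS NULL));  -- columns stay bare, indexes usable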
November 27, 2013 at 12:38 am
My pet peeve about the developers concerns indexes and stored procedures.
We have a system with about 200 SPs, and it seems that the developers have followed SQL Server's suggestions for index creation for each SP - that is, the missing-index suggestion in the execution plan window. This is all very well and good, but it takes a very local view of the DB: it optimises their particular SP but makes all DML operations on the tables in question slower.
Our worst offender is a table with 8GB data and 23GB indexes and this table is one of the key tables in the DB.
In order to optimise it, I have to test *all* of the SPs that use this table (and there are about 20-30) while trying to synthesise load on my test DB, in order to achieve more of a balance between reading and writing (this is also the most heavily written-to table in the system...).
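When I triage a table like that, I start from the usage DMVs rather than testing blind; a sketch like this (dbo.BigTable is a stand-in name) shows which of the 23GB of indexes are actually read versus only maintained:

-- Reads vs. writes per index on the problem table. Counters reset at
-- instance restart, so judge over a representative period.
SELECT i.name,
       ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0) AS reads,
       ISNULL(s.user_updates, 0)                               AS writes
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id = i.object_id
      AND s.index_id = i.index_id
      AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.BigTable')
ORDER BY writes DESC;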
That being said, I found the developers very receptive to change when I point out the problems caused. They are usually happy to receive optimised SPs or queries as well as to learn new SQL tricks and statements. They are also appreciative when I give them feedback on the performance of their SPs. Glenn Berry's Diagnostic Information Queries [1] are great for serving up the top SPs by execution count, avg. elapsed time, total worker time and so forth.
[1] http://sqlserverperformance.wordpress.com/category/diagnostic-queries/
November 27, 2013 at 5:55 am
Call me crazy, but under certain conditions it does make sense to have an additional non-clustered index with the PK in it:
* You have a WHERE filter on the index to speed up filtering.
* You can find records by the PK, but also need a limited amount of space for include fields to make the index covering.
* A combination of the above!
This works because the B-Tree of the clustered index alone is not sufficient to cover the query and the IO cost of getting the actual fields from the clustered index might just be too much.
If the table is sufficiently large and the records sufficiently wide, the conditions are more relaxed. In that situation the clustered B-tree spans a large range of ordered PK values, and its full records still need to be read just to find out whether a certain key even exists. It is all about limiting IO and thereby also lowering memory pressure. Most applications are pretty read-heavy, and there it makes sense!
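A sketch of the shape I mean (hypothetical names): the clustered index is on OrderID, yet this narrow filtered index still pays for itself on a wide table.

-- Narrow, filtered, covering: existence checks and lookups on open
-- orders never touch the wide clustered-index rows.
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (OrderID)
    INCLUDE (CustomerID, Total)
    WHERE Status = 'Open';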