May 7, 2009 at 9:09 pm
Comments posted to this topic are about the item Checking Up on Developers
May 7, 2009 at 9:51 pm
The most common and worst mistake is failing to develop a normalized data model that fits the real world data.
Most developers seem to just throw tables together with little thought about how they actually model the real world data. Once a bad table "design" is in place, the application is doomed to an endless series of hacks to work around the design problems.
You can fix bad indexes and poorly written stored procedures, but a bad table design is with you forever.
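To make the point concrete, here's a purely hypothetical sketch (every table and column name is invented): the flat version repeats customer details on every order and crams products into a delimited string, while the normalised version models each fact once and makes the relationships explicit.

-- Hypothetical flat "design": customer details repeated on every order row
CREATE TABLE OrdersFlat (
    OrderID       int IDENTITY PRIMARY KEY,
    CustomerName  varchar(100),
    CustomerPhone varchar(20),     -- repeated for every order; update anomalies
    OrderDate     datetime,
    ProductList   varchar(8000)    -- comma-separated products; not queryable
);

-- Normalised alternative: each fact stored once, relationships explicit
CREATE TABLE Customers (
    CustomerID    int IDENTITY PRIMARY KEY,
    CustomerName  varchar(100) NOT NULL,
    CustomerPhone varchar(20)
);

CREATE TABLE Orders (
    OrderID    int IDENTITY PRIMARY KEY,
    CustomerID int NOT NULL REFERENCES Customers(CustomerID),
    OrderDate  datetime NOT NULL
);

CREATE TABLE OrderLines (
    OrderID   int NOT NULL REFERENCES Orders(OrderID),
    ProductID int NOT NULL,
    Quantity  int NOT NULL,
    PRIMARY KEY (OrderID, ProductID)
);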
May 7, 2009 at 10:13 pm
Well, SQL Injection is the biggie. Wish I had a buck for every time one of my customers found out that their vendor was exposing them to SQL Injection attacks.
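A minimal sketch of the difference, kept inside T-SQL (the table, variables and malicious input are all invented for illustration):

-- Vulnerable: user input concatenated straight into dynamic SQL
DECLARE @name nvarchar(100) = N'x''; DROP TABLE Customers; --';
DECLARE @sql nvarchar(max) =
    N'SELECT * FROM Customers WHERE LastName = ''' + @name + N'''';
EXEC (@sql);    -- attacker-controlled text becomes executable code

-- Safer: parameterised dynamic SQL, so the input stays data, never code
DECLARE @safeSql nvarchar(max) =
    N'SELECT * FROM Customers WHERE LastName = @LastName';
EXEC sp_executesql @safeSql, N'@LastName nvarchar(100)', @LastName = @name;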
[font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
Proactive Performance Solutions, Inc. [/font][font="Verdana"] "Performance is our middle name."[/font]
May 7, 2009 at 10:23 pm
Well, I'm on a new job, and am investigating performance problems. The first thing I'm looking at is everything that is causing deadlocks.
Every time, the root procedure is performing selects against multiple tables where the only indexes are on the PK and a rowguid for replication. That's it. Their queries are performing table/CI scans constantly. Adding a few indexes has helped tremendously with the stability of the server and their application.
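For what it's worth, a hypothetical version of the sort of fix being described (all object names invented):

-- Inside the offending procedure, a filter like this scans, because the only
-- indexes are the PK and the replication rowguid
SELECT OrderID, OrderDate, Status
FROM   dbo.Orders
WHERE  CustomerID = @CustomerID    -- @CustomerID: the procedure's parameter
  AND  Status = 'Open';

-- A supporting nonclustered index lets the optimiser seek instead
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Status
    ON dbo.Orders (CustomerID, Status)
    INCLUDE (OrderDate);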
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
May 8, 2009 at 12:47 am
Security considerations. Time after time, databases come to me with instructions on how to deliver the various database objects, but no consideration of how users are actually going to access them through the application. The worst offenders are the third-party vendors, where it just seems to be the norm to give the database user dbo privileges over the database.
Why - for the love of God - why dbo???? I'll just build another brick wall to knock my head against then!
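A sketch of the alternative worth pushing vendors towards (role, schema and user names are made up): a role that gets only what the application actually needs, instead of dbo.

-- Instead of: EXEC sp_addrolemember 'db_owner', 'AppUser';
CREATE ROLE app_executor;

-- Let the application call its stored procedures, nothing more
GRANT EXECUTE ON SCHEMA::dbo TO app_executor;

-- If ad-hoc reads are genuinely required, grant SELECT on specific objects
GRANT SELECT ON dbo.SomeLookupTable TO app_executor;

EXEC sp_addrolemember 'app_executor', 'AppUser';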
May 8, 2009 at 12:51 am
I'm a programmer/developer at a small software development house (1 programmer, 1 manager, 2 network support guys) and I enjoy learning about how to do things properly, whether it be database or application related. Thanks for this topic, I'll be following it closely!
May 8, 2009 at 1:23 am
- Some of the fields in the table are larger than 8 KB, and most of the time that causes page splits, which slows performance.
- Normalizing a table without considering the implications, such as too many joins when pulling the data out; designers need to understand the benefits of denormalization as well.
- Choosing nonclustered index columns based on good selectivity rather than random fields (see the sketch after this list).
- Lack of understanding of query plans, and of preferring index seeks over table scans.
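As a rough illustration of the selectivity point (table and column names invented, counts purely illustrative):

-- Roughly how selective is each candidate column?
SELECT  COUNT(DISTINCT Status)     AS DistinctStatuses,   -- e.g. 4 -> poor candidate
        COUNT(DISTINCT CustomerID) AS DistinctCustomers,  -- e.g. 90000 -> good candidate
        COUNT(*)                   AS TotalRows
FROM    dbo.Orders;

-- Index the selective column; the optimiser can then seek rather than scan
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);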
May 8, 2009 at 1:38 am
Personally, I think the most common mistake made by developers is also the most common one made by DBAs: being blinkered.
The app and the database are two separate tools that, when used in conjunction, provide a solution to a business problem. However, whilst they are separate tools, there's a large overlap in functionality - some actions you may want to perform could be done by either app or database, albeit not necessarily with equal efficiency. I've seen many developers who'll only think about offloading functionality to the database when they physically cannot perform it in the app. I've seen many DBAs and DB developers who want to do pretty much everything in the database and only see the app as a bit of prettification for the users to look at. Both approaches have ended up with hugely inefficient solutions. Just because you can do something in a particular place doesn't mean you should.
The most effective solutions I've seen have been put together by people who understand the strengths and weaknesses of both their programming environment and their database platform, and have distributed the work across both areas according to those strengths. It requires a particular attitude of open-mindedness, and for all too many developers and DBAs that means a rather uncomfortable cultural change.
OK, I'll get off my hobby horse now.
Semper in excretia, suus solum profundum variat
May 8, 2009 at 1:51 am
Good grief, sweeping generalisation about developers there, Steve. Seems to me that if a developer is implementing incorrect indexes etc., then it's actually the DBA's fault for letting them do it. DBAs, eh - tut!
May 8, 2009 at 1:55 am
I'll repeat this one until I turn blue. It is a terrible mistake to develop an application with an empty database, or one with just a little 'sample data' in it. Develop your database with the maximum amount of data in it that you can predict is going to be in the production database. Then your failure to provide the correct indexes will become flamingly obvious. If you've made a design error, it will hit you smack in the eye. All right, I'll admit that SQL Data Generator is very close to my heart, but the reason is that this, or a tool like it that generates correctly formatted and convincing data, will become your best ally when you are developing.
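Whether you use a dedicated data generator or roll your own, the idea is the same. A crude hand-rolled sketch, assuming a hypothetical dbo.Orders table and purely illustrative row counts:

-- Load a million plausible rows so indexing and design mistakes show up early
;WITH Numbers AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM   sys.all_objects a
    CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.Orders (CustomerID, OrderDate, Status)
SELECT n % 50000 + 1,                                  -- spread across 50,000 customers
       DATEADD(DAY, -(n % 730), GETDATE()),            -- two years of order dates
       CASE n % 4 WHEN 0 THEN 'Open' ELSE 'Closed' END
FROM   Numbers;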
Other crimes
Allowing direct table-access to applications (madness because of security issues)
Denormalisation (you'll live to regret it when the application expands)
Best wishes,
Phil Factor
May 8, 2009 at 2:36 am
Michael Valentine Jones (5/7/2009)
The most common and worst mistake is failing to develop a normalized data model that fits the real world data. Most developers seem to just throw tables together with little thought about how they actually model the real world data. Once a bad table "design" is in place, the application is doomed to an endless series of hacks to work around the design problems.
You can fix bad indexes and poorly written stored procedures, but a bad table design is with you forever.
That's exactly the issue I face daily - hastily built databases that are data stores for websites, put together with no real design.
Extended functionality leads to extra tables being added without any review of the impact on the database.
Also, LINQ gets thrown in and it all goes downhill, with the devs then having no idea how to navigate through the data.
But hey, they employ me to bail them out, so it's all good in a way 😉
Hiding under a desk from SSIS Implementation Work :crazy:
May 8, 2009 at 2:39 am
Phil Factor (5/8/2009)
Denormalisation (you'll live to regret it when the application expands)
Agreed.
All too often I find that when people say that they 'denormalised for performance' they actually mean 'didn't feel like normalising completely in the first place'
Denormalising for performance means that you've normalised fully, have then done tests on large volumes of data and have found that the fully normalised design is not capable of meeting the performance requirements. It does not mean "Oh, I think there are a few too many tables, let me denormalise a bit"
Oh, and I don't mean normalise to Domain-Key normal form. 3rd or BCNF is usually enough, maybe 4th if you've got odd tables that show that particular data anomaly.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 8, 2009 at 2:57 am
Phil Factor (5/8/2009)
Denormalisation (you'll live to regret it when the application expands)
Don't agree fully. Denormalization is very useful sometimes. For example, I have ~100 tables representing various entities. A few entity types are primary, and there are 5 complex searches with ~50 criteria. We decided to create a denormalized form of the data to support the searches. Note, we kept the old structure as primary but populate the denormalized data periodically (the denormalization takes ~4-5m per day). Search is working very fast...
Is there any way to avoid it without using 20-30 joins (5 of them very conditional: CASE WHEN, subqueries, etc.)?
I think it is a pretty good decision.
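For what it's worth, that pattern sounds like a periodically rebuilt search table; a stripped-down sketch of the refresh, with every name invented:

-- Nightly rebuild of a flattened search table from the normalised source tables
TRUNCATE TABLE dbo.EntitySearch;

INSERT INTO dbo.EntitySearch (EntityID, EntityType, Name, Region, StatusText)
SELECT e.EntityID,
       t.TypeName,
       e.Name,
       r.RegionName,
       CASE e.StatusCode WHEN 1 THEN 'Active' ELSE 'Inactive' END
FROM   dbo.Entities    e
JOIN   dbo.EntityTypes t ON t.TypeID   = e.TypeID
JOIN   dbo.Regions     r ON r.RegionID = e.RegionID;
-- ...plus whatever other joins/CASE logic the five searches need.
-- The searches then hit one wide, indexed table instead of 20-30 joins.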
May 8, 2009 at 3:09 am
Phil Factor (5/8/2009)
Denormalisation (you'll live to regret it when the application expands)
sasha.khomich
Don't agree fully. Denormalization is very useful sometimes. For example, I have ~100 tables representing various entities. A few entity types are primary, and there are 5 complex searches with ~50 criteria. We decided to create a denormalized form of the data to support the searches. Note, we kept the old structure as primary but populate the denormalized data periodically (the denormalization takes ~4-5m per day). Search is working very fast...
Ah. Actually, there are several techniques to get around this sort of problem with searches. I'm not sure that you are really denormalising as I understand the term, but rather using a type of inversion 'index' table. This is a classic approach (Gary Kildall developed it for searching CDs!). It is usually called the 'Inverted' or 'Inversion' index technique (see http://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices ). I've had an article sitting in a corner that I wrote a while back, awaiting a bit of a tidy-up. Maybe I'll dust it off...
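In case it helps to picture the technique, a bare-bones sketch of an inversion table (all names invented): one narrow, indexed table mapping each search term to the entities that contain it.

-- One row per (term, entity) pair; the clustered key makes term lookups a seek
CREATE TABLE dbo.SearchIndex (
    Term     varchar(100) NOT NULL,
    EntityID int          NOT NULL,
    PRIMARY KEY CLUSTERED (Term, EntityID)
);

-- A multi-term search becomes a join against the inversion table
SELECT e.*
FROM   dbo.Entities e
JOIN   dbo.SearchIndex s1 ON s1.EntityID = e.EntityID AND s1.Term = 'widget'
JOIN   dbo.SearchIndex s2 ON s2.EntityID = e.EntityID AND s2.Term = 'blue';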
Best wishes,
Phil Factor
May 8, 2009 at 3:34 am
I know it's not really specific to SQL Server, but in my experience developers often forget that someone has to use their end product, so they put too much effort into providing a technical solution: throwing in some random indexes for fun (because they clearly didn't consider what would be useful to the actual users), a bit of denormalisation because they think it will help, in spite of the bodge they had to do to make this module over here work, and don't worry, the users will never use that feature anyway.
Being a bit more serious, the biggest developer issue I hit is applications that allow end users to change key information, breaking the relationships between tables. Why oh why? I spend my life at the moment having to fix the end result, and it really bugs me.
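A minimal sketch of the guard rail that belongs in the database itself (table and column names invented): a declared foreign key, so the engine rather than the application decides whether a key change is allowed.

-- Orders cannot point at a customer that doesn't exist, and key edits that
-- would orphan orders are refused by the engine
ALTER TABLE dbo.Orders
    ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)
    ON DELETE NO ACTION     -- can't delete a customer that still has orders
    ON UPDATE NO ACTION;    -- can't change a customer key and orphan its orders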