December 3, 2003 at 12:00 am
Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/snandwani/bestpracticesinanadhocenvironment.asp
December 16, 2003 at 1:52 am
Really a nice article. It matches my working environment and has been a lot of help to me.
December 16, 2003 at 2:06 am
I would add that stored procedure names should not start with sp_, because SQL Server will search the MASTER database for the stored procedure first and only then the user database. Although the performance hit is minor, it is still a hit.
For similar reasons, always prefix object names with their owner, e.g. dbo.tblMyTable or dbo.vwMyView.
Always comment your code unless you want to be shackled to it for life.
Always carry out a QA stage on code.
If at all possible, make developers use stored procedures, not dynamic SQL.
Anyone who uses ADO should use the command object to call the stored procedure rather than connection.open "SELECT * FROM...etc"
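As a minimal sketch of those points (the procedure, table, and column names here are made up for illustration):

-- usp_ rather than sp_, created and referenced with the explicit dbo owner.
CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerID int
AS
SET NOCOUNT ON
SELECT OrderID, OrderDate, TotalDue
FROM dbo.tblOrders              -- owner-qualified object reference
WHERE CustomerID = @CustomerID
GO
-- From ADO, call this through a Command object (CommandType = adCmdStoredProc)
-- with a Parameters collection, rather than concatenating a SELECT string.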
========================
He was not wholly unaware of the potential lack of insignificance.
December 16, 2003 at 2:48 am
It is a good article for beginner DB programmers. Keep it up.
Prashant Thakwani
thakwani_prashant@yahoo.co.in
December 16, 2003 at 6:14 am
This is to some extent the beginning of a standards document and I think a good start. Some additional thoughts:
I think developers should be discouraged from creating a new database for each application, instead favoring databases for related needs (related in particular by where they are needed, replication needs, backup needs). This reduces the number of databases to maintain.
On a related note, if developers can create databases there needs to be a thorough process to make sure all the admin details are worked out -- how will it be backed up, reorged, etc. Otherwise it is very easy to end up with little islands of applications and discover one day that you can't recover one of them when something happens.
A whole 'nother big topic, but I think blanket recommendations against dynamic SQL are a mistake, especially depending on your definition of "ad hoc". If the code has parameters driven by user input and those parameters can vary radically (especially being null), AND if the queries will have long run times (more than seconds), then dynamic SQL is often preferred because it allows for better query plans. Complex queries driven by parameters often have lots of OR clauses in them (e.g. @Param IS NULL OR Field = @Param) which can be omitted if the SQL is built dynamically, and OR clauses usually make for lousy execution plans.
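As a rough sketch of that point (the table, columns, and parameters are invented for the example; in practice they would be parameters of the calling procedure):

-- @CustomerID and @Region stand in for user-driven parameters.
DECLARE @CustomerID int, @Region char(2), @sql nvarchar(4000)
SET @CustomerID = 42           -- pretend the user supplied only this one

-- Catch-all form: a single plan has to cover every parameter combination.
SELECT OrderID, OrderDate
FROM dbo.tblOrders
WHERE (@CustomerID IS NULL OR CustomerID = @CustomerID)
  AND (@Region     IS NULL OR Region     = @Region)

-- Dynamic form: build only the predicates that apply, and still pass the
-- values as parameters via sp_executesql.
SET @sql = N'SELECT OrderID, OrderDate FROM dbo.tblOrders WHERE 1 = 1'
IF @CustomerID IS NOT NULL SET @sql = @sql + N' AND CustomerID = @CustomerID'
IF @Region IS NOT NULL SET @sql = @sql + N' AND Region = @Region'
EXEC sp_executesql @sql, N'@CustomerID int, @Region char(2)', @CustomerID, @Region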
I believe any environment that allows or encourages development of databases by programmers also needs a consistent documentation environment -- how and where will data-dictionary-type information be stored? Make sure it is consistent among all databases, and preferably easily searchable across databases, to ensure that similar concepts are recorded similarly (or perhaps not recorded redundantly at all). Strong naming conventions are a great benefit if the number of contributors to database schemas is large.
December 16, 2003 at 7:00 am
I'm not entirely sure the advice to always use varchar rather than char is true.
Varchar columns are stored differently on the data page than char columns. While they give you the benefit of being only as long as the text you put in them, they are not stored in line with the fixed-length fields; the variable-length values sit at the end of the row and are located through an offset array, which costs a couple of extra bytes of row overhead per varchar column. Using varchar for short text can therefore take more processor time and space than a char would. Consider making every field under 10-15 characters a char field -- fields like state, gender, and any field where the length is known and short.
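A rough sketch of that guideline (the table is made up): char for short columns of known length, varchar where the length genuinely varies.

CREATE TABLE dbo.tblCustomer
(
    CustomerID  int          NOT NULL,
    StateCode   char(2)      NOT NULL,  -- always two characters, no varchar overhead
    Gender      char(1)      NULL,      -- always one character
    CompanyName varchar(100) NOT NULL   -- length varies widely, varchar saves space
)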
December 17, 2003 at 7:42 am
Just curious, has anyone in your organization addressed moving database development from the production environment to a development one...?
December 17, 2003 at 10:39 pm
Thanks everyone for taking the time to read the article and comment on it.
Here are a few of my thoughts.
David: Thanks for the input. The sp_ naming convention is commonly seen with developers; we learnt this the hard way over the last couple of years and now have naming conventions that take care of it. As Ferguson suggests, we have gone through the grind of developing naming conventions and implementing them in our environment. I agree with Adam's suggestion that varchar should be used judiciously, only when the character field is fairly long. I probably got biased by my own experience, as I have only come across cases at the other extreme, where large fields have been given char.
JWINER: As far as moving our development to a separate development environment is concerned, we have considered it for some assignments where the development cycle is long enough. To give you an idea of our organisation, we conduct a lot of HR surveys which involve huge amounts of data, and both the data and the analysis vary from survey to survey. Our environment does not fit very well into either a typical OLTP or a typical OLAP mould. However, we do have a certain set of techniques, VB components, and data structures available which we customise to suit our requirements. Some of the standard surveys also run on a single system and a single database, in which case we do follow a process of making changes in development and porting them to production, but for some applications which are built on the fly we really don't have that luxury.
December 18, 2003 at 10:57 am
I would like to add that I discourage the use of TRUNCATE statements in lieu of DELETE statements. While TRUNCATE TABLE is faster and consumes less log space, if the statement is called by someone who is not in the sysadmin role, not in the db_owner or db_ddladmin database roles, and not the owner of the table, the statement will fail. This provides less flexibility in allowing other users to use the procedure.
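For comparison, a minimal sketch (the table name is made up):

-- Fully logged, and available to any user with DELETE permission on the table:
DELETE FROM dbo.tblStaging

-- Faster and minimally logged, but limited to the table owner and members of
-- the sysadmin, db_owner, or db_ddladmin roles (it also resets any identity seed):
TRUNCATE TABLE dbo.tblStaging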
December 19, 2003 at 5:45 pm
I found that there were several factual errors in the article and several differences of opinion. The following is a list, in no particular order:
1) "where the developer has free access to the production servers..."
DO NOT let the developers have free access to the production server(s). We can debate how much access to grant them in the development environment but Production is for real. Hands Off!!!
2) "The developer may choose to create a database..."
DO NOT let the developers create databases. If they must be allowed to, do it through a script that you keep in some source code control mechanism (e.g., SourceSafe, et al.).
3) "The recovery model is by default full..."
The recovery model defaults through the MODEL database. MSDE defaults this to Simple; other editions default to Full.
4) "The transaction log settings should be kept to simple..."
This is a recipe for disaster. While the log files do grow in the other two modes, that is precisely what allows you to recover all (or most) of your data in the event of a failure. Regular transaction log backups should be run in order to prevent excessive growth of the log files (see the recovery sketch after this list).
5) CHAR vs VARCHAR - There are conditions where VARCHAR consumes MORE space than CHAR and results in slower performance.
6) "The transaction isolation level should be set to low wherever possible."
The isolation setting is a trade-off between speed and data integrity. There is no simple rule that can be universally applied; you could just as easily argue that the isolation level should be set as high as necessary (see the isolation sketch after this list).
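On points 3 and 4, a minimal sketch (the database name and backup path are made up): set the recovery model deliberately and control log growth with regular log backups rather than by dropping to Simple.

-- Set the recovery model explicitly instead of inheriting whatever model happens to have:
ALTER DATABASE MyAppDB SET RECOVERY FULL

-- Under Full (or Bulk-Logged) recovery, regular log backups let the log space
-- be reused instead of growing without bound:
BACKUP LOG MyAppDB TO DISK = 'D:\SQLBackups\MyAppDB_log.bak'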
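On point 6, the isolation level is a per-connection setting, so it can be chosen case by case rather than being uniformly "low" or "high" (a sketch; the table is made up):

-- A reporting query that can tolerate dirty reads:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT COUNT(*) FROM dbo.tblOrders

-- Back to the default for work that must see only committed data:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED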
Steve Hendricks
MCSD, MCDBA
Data Matrix
shendricks@afsconsulting.com
(949) 588-9800 x15
December 22, 2003 at 2:06 am
TRUNCATE also fails if you use foreign key constraints, regardless of whether they are violated.
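A small illustration (table names are made up) -- the TRUNCATE fails even when no rows would violate the constraint:

CREATE TABLE dbo.tblParent (ParentID int NOT NULL PRIMARY KEY)
CREATE TABLE dbo.tblChild
(
    ChildID  int NOT NULL PRIMARY KEY,
    ParentID int NOT NULL REFERENCES dbo.tblParent (ParentID)
)

TRUNCATE TABLE dbo.tblParent   -- fails: table is referenced by a FOREIGN KEY constraint
DELETE FROM dbo.tblParent      -- allowed (and checked row by row against the constraint)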
I fully agree that people should not be allowed to create databases willy-nilly.
If developers go creating databases then
========================
He was not wholly unaware of the potential lack of insignificance.
December 17, 2004 at 7:56 am
About primary keys:
Are they still a good idea no matter how complex they become or how irrelevant they are?
Example: I have this calculation table (when used with joins and GROUP BY it does wonders):
ComputedID as int,
SrcID as int,
scale as float,
offset as float
I could make a PK field, but why, except maybe to make it slightly easier to build an editor?
I also have a table that is something like
CustID int,
BudID int,
AcctID int,
BldgID int,
SrcID int,
Dt DateTime,
Reading float,
ReliabilityID int
The PK would be (CustID, BudID, AcctID, BldgID, SrcID, Dt)... but does it make sense to make a PK that complex, especially since I don't plan to pull things out in that order (mostly SELECT Dt, Reading, ReliabilityID ... WHERE xxxx ORDER BY Dt)?
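To make it concrete, the two shapes I'm weighing look roughly like this (constraint names are just placeholders):

-- Option 1: composite primary key exactly as listed above
CREATE TABLE dbo.tblReading
(
    CustID int NOT NULL, BudID int NOT NULL, AcctID int NOT NULL,
    BldgID int NOT NULL, SrcID int NOT NULL, Dt datetime NOT NULL,
    Reading float, ReliabilityID int,
    CONSTRAINT PK_Reading PRIMARY KEY (CustID, BudID, AcctID, BldgID, SrcID, Dt)
)

-- Option 2: narrow surrogate key, keeping the natural key unique anyway
CREATE TABLE dbo.tblReading2
(
    ReadingID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustID int NOT NULL, BudID int NOT NULL, AcctID int NOT NULL,
    BldgID int NOT NULL, SrcID int NOT NULL, Dt datetime NOT NULL,
    Reading float, ReliabilityID int,
    CONSTRAINT UQ_Reading UNIQUE (CustID, BudID, AcctID, BldgID, SrcID, Dt)
)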
Thanks,
Thor