December 1, 2005 at 7:34 am
I'm curious to hear how many are using or have considered this approach to building a SQL Server application: .NET/C# business logic running on web servers -- T-SQL generated on the fly by an object layer -- and virtually no use of stored procedures. The database schema was created by a modeling tool (Visio, I think), so we have extreme normalization with very long field names.
This is what we now have. Time will tell how it scales. If you chose this approach, why? If not, why not? Keep in mind that many argue that, done correctly, dynamic SQL is not unsafe, and that its execution plans are cached.
thx,Randy
December 1, 2005 at 7:54 am
I am not an expert on C# and .NET, but as far as SQL Server is concerned, sp_executesql can cache the plan for dynamic SQL. There are other factors that determine overall performance, though, such as the use of temporary tables, schema changes, interleaving DDL and DML statements, etc.
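For illustration, a minimal sketch of what plan-cacheable dynamic SQL looks like (the Orders table and column names here are hypothetical, not from the original post): with sp_executesql the parameter stays a parameter, so one cached plan can be reused across values instead of compiling a new ad hoc plan each time.

DECLARE @sql nvarchar(500);
SET @sql = N'SELECT OrderID, OrderDate
             FROM dbo.Orders
             WHERE CustomerID = @CustomerID';

-- The parameter is passed separately from the statement text,
-- so the same cached plan can serve any @CustomerID value.
EXEC sp_executesql @sql,
                   N'@CustomerID int',
                   @CustomerID = 42;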
No use of stored procedures is not good from a SQL Server performance and scalability perspective. Normalization is a very open-ended subject and needs a lot of testing to determine the best design. But of course long names mean more I/O, a small performance penalty, and more effort when writing SELECTs. Developers tend to use SELECT * instead of column names when the column names are long, which is not a good practice.
Dynamic SQL is not safe, but if the end user has limited rights and the schema is designed carefully with CHECK constraints, safety problems can be avoided.
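As a rough sketch of that idea (the object and user names here are assumptions, not from the post): the application's database user gets only the minimum rights it needs, and a CHECK constraint keeps bad values out no matter how the SQL was built.

-- Give the web application's database user only what it needs.
GRANT SELECT, INSERT ON dbo.Orders TO WebAppUser;
DENY DELETE ON dbo.Orders TO WebAppUser;

-- Enforce valid data at the schema level, independent of the front end.
ALTER TABLE dbo.Orders
    ADD CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0);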
What is best is determined by testing and testing only. Test both scenarios and determine which is the best one.
-Gary
December 1, 2005 at 8:06 am
Thanks for your response. The part about long field names affecting performance is interesting -- I hadn't heard that before. It reminds me of a feeling I've had for a long time about Microsoft's approach. There seems to be a mindset that says "we are creating the future here and will not be impeded by hardware -- to use our technology you must have the latest, most powerful, most expensive hardware and software available. We assume that in our designs."
They may be right in this. By hiding complexity behind GUIs and wizards, you can hire cheaper workers with less skill. But the GUIs and wizards require lots of processing power, unlike some of the older command-line-oriented systems. Of course, there you had to have highly skilled people (e.g. for UNIX command line work).
Randy
December 1, 2005 at 12:22 pm
Randy,
1. Stored procedures are pre-compiled (performance) and easier to use, with less room for errors and typos when writing application code.
2. Make use of table-valued functions, which combine the advantages of views and stored procedures (see the function sketch after this list).
3. Dynamic SQL may not be safe (in addition to what was correctly pointed out by Gagandeep Singh), because the application front-end code may not contain correct input checking even if CHECK constraints are implemented at the SQL level. The classic example: in the user name or password input of a Web or .NET control I enter --, which starts a comment in SQL. A dynamic statement that was meant to be composed as "where the password = myinput" ends up as "where the password =" and any other SQL after the -- is ignored. You can use your imagination to figure out how to compromise an sa or sysadmin login in this case (see the injection sketch after this list).
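As a small illustration of point 2 (the table and column names are made up for the example), an inline table-valued function can be queried like a view but takes parameters like a stored procedure:

CREATE FUNCTION dbo.OrdersForCustomer (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO

-- Used like a parameterized view: it can be joined, filtered, and so on.
SELECT o.OrderID, o.OrderDate
FROM dbo.OrdersForCustomer(42) AS o
WHERE o.TotalAmount > 100;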
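And a rough sketch of the injection problem in point 3 (the Users table and the input value are hypothetical): when the password value is concatenated straight into the statement text, a quote plus comment sequence in the input changes the meaning of the query, whereas passing it as a parameter to sp_executesql does not.

-- Vulnerable: user input concatenated into the statement text.
DECLARE @password nvarchar(50);
SET @password = N''' OR 1 = 1 --';   -- attacker-supplied input

DECLARE @sql nvarchar(500);
SET @sql = N'SELECT * FROM dbo.Users WHERE Password = ''' + @password + N'''';
-- @sql is now: SELECT * FROM dbo.Users WHERE Password = '' OR 1 = 1 --'
-- The trailing quote is commented out and the password filter is defeated.

-- Safer: the input stays a parameter value, never part of the SQL text.
EXEC sp_executesql
    N'SELECT * FROM dbo.Users WHERE Password = @password',
    N'@password nvarchar(50)',
    @password = @password;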
Regards,Yelena Varsha
December 2, 2005 at 6:43 am
Embedded SQL is a far easier target for hackers, and it has been reported that hackers are now targeting applications far more than the OS because it is easier.
Embedded SQL is not as efficient as stored procedures from a messaging point of view -- more traffic between front and back end. It does cache and reuse plans, but it's still not as quick as procs; that said, procs can be badly written too.
Poor or questionable database structures may not be an issue if the indexing and queries are good -- in the real world, apps are often far from ideal -- but hey, that's where the DBA comes in!!
I'd worry about security and scalability.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
December 2, 2005 at 7:45 am
I agree with the above comments about database.
Usually it's not a good idea to put business logic on the web server. It's better for it to be on a dedicated server protected by an internal firewall. The web service or ASP.NET layer is a very thin layer; it calls the business layer via .NET remoting. The architecture looks like:
WebPage-->Web server -->Business logic-->DB
All our apps run in this model.
December 2, 2005 at 11:33 am
Gimli,
I did not say that dynamic SQL statements, if used with sp_executesql or the EXECUTE statement, are not compiled. I did not say anything about that at all. I just wanted to point out that SPs leave less room for user error.
Additionally, the EXECUTE topic says: "Security Note: Before calling EXECUTE with a character string, validate the character string. Never execute a command constructed from user input that has not been validated."
Then it has a reference to the SQL Injection topic. I clicked there. What an example!!!! (I am referring to the 2005 CTP BOL.)
Regards,Yelena Varsha
December 2, 2005 at 12:18 pm
Gimli, you are correct. However, look at how cost is initially assigned to plans and how that affects how long they remain in cache. That's why stored procedures are typically better.
Also, the use of queries generated on the fly means you have to grant permissions on the base tables instead of relying on ownership chaining. It means you have a whole lot less control over how data is accessed and modified. This is another advantage of using views and stored procedures.
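A quick sketch of that ownership-chaining point (the proc, table, and user names are hypothetical): when the procedure and the table share an owner, the caller only needs EXECUTE on the proc and never gets direct rights on the table, which queries built on the fly can't offer.

CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerID int
AS
    SELECT OrderID, OrderDate
    FROM dbo.Orders          -- same owner as the proc, so the chain applies
    WHERE CustomerID = @CustomerID;
GO

-- The application user can run the proc but cannot touch the table directly.
GRANT EXECUTE ON dbo.GetCustomerOrders TO WebAppUser;
-- No GRANT SELECT ON dbo.Orders is needed (or given).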
K. Brian Kelley
@kbriankelley
December 5, 2005 at 3:48 am
@Yelena: I didn't want to offend you. I'm in a company with 1000+ IT people. Of course only a small part of them are actually using SQL Server, but most of the people who are developing with SQL Server think procs are compiled when they are created and dynamic SQL is compiled every time it is used (even the former lead SQL Server architect did). From your post those people could think they are right, so I had to reply. But reading your answer, I believe you meant dynamic SQL that is made up by a user; I meant dynamic SQL that is generated by an application.
@bkelley: I don't understand what you mean. I do not know exactly how the cache works. I thought it was LRU.
December 5, 2005 at 8:05 am
I really didn't want to start up the old argument about dynamic SQL versus procs. Yes, the dynamic SQL we're using now is automatically generated, not embedded in code or aspx pages. The dynamic/proc argument has kind of been argued to death -- study it, make a choice, and run with it.
I'm more interested in hearing who is using the approach of .NET/C# running on web servers, and particularly where the SQL database design was generated through object modeling via a tool such as Visio. This can result in a highly normalized database schema with long field names (I/O impact?). Extreme normalization seems to be recommended by everyone these days, but I'm still not convinced it's the best overall when it comes to humans having to work with and understand it. I'm also not convinced it's the best for performance and for loading from legacy systems.
Maybe I'm just missing the good old days when you could look at a schema and get a fair understanding of it in an hour. Some aspects of this "new" approach seem to relegate the database server to just a dumb data box. I don't like that either -- SQL Server Magazine articles this year on SQL 2005 have mentioned the CLR and moving mid-tier operations back to the database server.
Randy
December 6, 2005 at 3:16 pm
Yup, I sat through part of a Microsoft presentation (sorry guys, name forgotten, but he is one of the gods of computing) and he said that with the power of the new 64-bit architecture and the features of SQL 2005 -- CLR, web services, Service Broker, etc. -- it might be better to rethink n-tier and return to all on one box. I actually think it sometimes makes sense. I support a multi-tier prod environment -- COM+, Windows, web services, MSMQ and such -- and to be honest it's a bit of a nightmare to understand. I sometimes wonder if the complexity of multiple load-balanced tiers actually degrades performance, especially when I analyse the actual performance and utilisation of the server(s) < grin >
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
December 7, 2005 at 2:27 am
Sorry Randy,
We have a server (8GB of memory, fiber SAN) with SQL Server (getting 5.5GB of it), IIS, Reporting Services, and a .NET web service running on it. There are OLTP databases, and there is one database used to build the reports. It's no star schema; all the information is put in only 4 tables. It performs really BAD. We once looked at a query in SQL Profiler and it read 13 GB.
I know for sure that in this case normalization would really help, and the developers are making some adjustments, but it's a running system and the maintenance windows are small.
I think there are no rules of thumb. Sometimes you just have to test various options to get the best performance. But keep in mind: less I/O = better performance.
December 9, 2005 at 1:00 pm
The first step is always to normalize the database. This ensures data integrity standards are met. After that, denormalize where needed to regain performance; however, data integrity checks will need to be built in. While I agree that with faster and faster systems we may think there's no longer a need to follow this approach, I also know human nature. The faster it performs, the faster we ask it to perform. So we're always going to be chasing that horse.
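As a rough sketch of denormalizing while keeping the integrity checks (the tables, columns, and trigger below are hypothetical, not from the post): a redundant order total is stored on the header row for cheap reads, with a trigger keeping it consistent with the detail rows.

-- Denormalized column added purely for read performance.
ALTER TABLE dbo.OrderHeader ADD OrderTotal money NOT NULL DEFAULT (0);
GO

-- The built-in integrity check: keep OrderTotal in sync with the detail rows.
CREATE TRIGGER dbo.trg_OrderDetail_Total
ON dbo.OrderDetail
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    UPDATE h
    SET OrderTotal = ISNULL((SELECT SUM(d.Quantity * d.UnitPrice)
                             FROM dbo.OrderDetail AS d
                             WHERE d.OrderID = h.OrderID), 0)
    FROM dbo.OrderHeader AS h
    WHERE h.OrderID IN (SELECT OrderID FROM inserted
                        UNION
                        SELECT OrderID FROM deleted);
END;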
And you're right, long field names are unwieldy for us humans. Long field names also contribute to the dynamic SQL/stored procedures argument: the longer the names that have to go across the wire, the more advantageous stored procedures look.
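A tiny sketch of that difference (the verbose names below are invented for the example): the ad hoc statement repeats every long identifier in the text sent from the client, while the proc call sends only one short command.

-- Ad hoc SQL: the full statement, long names and all, travels over the wire.
SELECT CustomerAccountIdentificationNumber,
       CustomerPrimaryContactEmailAddress
FROM dbo.CustomerAccountMasterInformation
WHERE CustomerAccountIdentificationNumber = 42;

-- Stored procedure call: only this short line is sent from the client.
EXEC dbo.GetCustomerAccountByIdentificationNumber @CustomerAccountID = 42;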
K. Brian Kelley
@kbriankelley