October 28, 2014 at 3:41 pm
Nine years ago, when I was at a startup, we set up a CI environment. Sure, it took a bit of initial work, coordinating Subversion, CruiseControl, testing, etc. But from a code standpoint, it is, IMHO, the only way to manage and test a code base and to be sure you can deploy successfully.
Still, as others have stated, once the database is up and running, it does require some additional thought. Code is replaced; databases are modified. But with CI, you have the ability to properly test any new database changes. And realistically, most changes are to the code, not the database. Now, at the time I was chief software architect, so to me it was critical that the code was solid. But I was also the database developer, so I was the guy who had to work out the changes. It was a pain, but it was manageable. From my standpoint, I had to look at it from the perspective of the whole, not just from a database-only perspective.
What amazes me is that almost ten years later, it is not in use everywhere.
The more you are prepared, the less you need it.
October 28, 2014 at 3:59 pm
Meow Now (10/27/2014)
Developers love to push CI since it's simple and makes sense with application code. It's a whole new beast when you're trying to automate deployment of a terabyte database. I also struggle with the "leap of faith" it requires. In the end, I want to test, review, and test again everything deployed for the DB. I don't even fully trust Red Gate to generate scripts.
CI is supposed to be an early review that would completely preclude any kind of "leap of faith."
Also, I don't trust Red Gate scripts either, and I work for them. But, with a CI process, I can automate testing to verify that the scripts are correct, or find where they're broken.
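For example, here's a rough sketch of what one of those automated checks can look like, using the open-source tSQLt framework; the table, function, and test names are all invented for illustration:

EXEC tSQLt.NewTestClass 'DeployTests';
GO
-- A unit test the CI server runs after applying the generated change
-- script. If the deployment broke this object, the build fails here
-- instead of in QA.
CREATE PROCEDURE DeployTests.[test GetCustomerById returns the right customer]
AS
BEGIN
    -- Isolate the test from real data by faking the table.
    EXEC tSQLt.FakeTable 'dbo.Customer';
    INSERT INTO dbo.Customer (CustomerId, CustomerName)
    VALUES (42, N'Acme');

    DECLARE @actual NVARCHAR(100);
    SELECT @actual = CustomerName
    FROM dbo.GetCustomerById(42); -- hypothetical inline table-valued function

    EXEC tSQLt.AssertEquals @Expected = N'Acme', @Actual = @actual;
END;
GO
-- The CI job then just runs: EXEC tSQLt.Run 'DeployTests';

If the generated script dropped or mangled the object under test, the build goes red before anything reaches QA.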
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 28, 2014 at 4:00 pm
Iwas Bornready (10/27/2014)
My experience has been that projects are late because of unrealistic deadlines up front. To take a slower approach would only make them later. To be realistic about how long it will take would only doom the project before it ever got started. I don't know what the solution is. Whether you find the error early or late, you still have to fix it. The one advantage of finding it early is if it changes your development such that other errors are avoided, or the design is modified earlier, which is generally easier sooner than later.
The whole idea behind CI is to speed up the development process by ensuring that you identify easy-to-fix issues before you move the code to testing or production. It's supposed to be automatic, running in the background, so you won't slow down development; you'll just deliver higher-quality code, faster.
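To make "easy-to-fix issues" concrete, here's a sketch of one cheap, automatic check a database CI build can run after deploying to a scratch database: a scan for modules that reference dropped or renamed objects. It leans on SQL Server's sys.sql_expression_dependencies view; the filters would likely need tuning for your environment.

-- Fail the build if any module references an object that no longer exists.
IF EXISTS (
    SELECT 1
    FROM sys.sql_expression_dependencies AS d
    WHERE d.referenced_id IS NULL              -- reference didn't resolve
      AND d.referenced_database_name IS NULL   -- ignore cross-db references
      AND d.is_ambiguous = 0
)
BEGIN
    -- Listing the offenders makes the build log actually useful.
    SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
           OBJECT_NAME(d.referencing_id)        AS referencing_object,
           d.referenced_entity_name
    FROM sys.sql_expression_dependencies AS d
    WHERE d.referenced_id IS NULL
      AND d.referenced_database_name IS NULL
      AND d.is_ambiguous = 0;

    RAISERROR('CI check failed: modules reference missing objects.', 16, 1);
END;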
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 28, 2014 at 4:02 pm
Matt Miller (#4) (10/27/2014)
Iwas Bornready (10/27/2014)
My experience has been that projects are late because of unrealistic deadlines up front. To take a slower approach would only make them later. To be realistic about how long it will take would only doom the project before it ever got started. I don't know what the solution is. Whether you find the error early or late, you still have to fix it. The one advantage of finding it early is if it changes your development such that other errors are avoided, or the design is modified earlier, which is generally easier sooner than later.
CI should not be slower, unless your existing process is drastically broken. By that I mean: a change to continuous integration in an organization that does good, solid, ongoing QA on the deliverables simply shifts the QA test development earlier in the process. If anything, CI might end up going FASTER because it encourages a somewhat higher level of automation of some tests (the baseline you have to get over before you can check something in); you also find errors sooner, so you don't have the opportunity to leave issues in core pieces (and you don't find out that the whole thing doesn't work at the very end).
If, on the other hand, you happen to be in a "cowboy/wild west" environment where testing is lax or relegated to the very end, then yes, CI might slow you down the first time through (while you build up the inventory). But then again, this ends up being just another example of "you can build it once the right way, or you can build it wrong multiple times over".
It was a bloody mess the first time we used it, but by the time we were done with phase 1 we were already cruising along in our dev shop at MUCH better rates than we had achieved in previous projects.
That reflects my own experience with it. It really sped up development... after we had a good test suite built. That's the one pain point in getting it implemented.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 28, 2014 at 4:08 pm
Evil Kraig F (10/27/2014)
CI works in the app development world because of a particular idea: non-persistence of the layer. I'm not discussing the concept of CI, but the reality of its implementation.
Due to this, function and proc changes (possibly even views) can be included in the usual app CI paradigm, *as long as they affect no storage schema*. The way I usually help communicate the issue is: "Now, imagine you have to modify your .ini files for this change. What is the difference in your approach? Now, we have to do that for every table you touch."
Once you affect schema, you need to have a hard version change. This needs to be a roll-forward/roll-back point. The reason is that no two changes to schema happen alike. Sometimes you need a script that stores the data you're removing from a table when you drop a column (or move it all out to a lookup table). Sometimes you just back up the table and move the data into the new piece. Some new columns need one-shot defaults; some have permanent ones you can leave in the schema.
My point is that once you remove the Lego-like swapping that is expected in CI (because you have to persist the past), standard CI implementation is no longer something we should be applying. We should work with the devs and help them know which version of the persisted data their CI needs to be applied to, but the actual storage changes shouldn't be done using the typical techniques.
Why am I being particular in how I'm stating that? Because I work with CI teams, and we work well together, but I DON'T implement storage changes the same way they play 'swap the code' with the rest of the pieces. Data changes happen *once*, and from that point the version is hardwired to the new one. When you move the foundation of a house around, you need to stop screwing around with which pretty window you want in the wall for a bit while the entire foundation is re-checked.
Excellent points. I agree completely. I set up two CI processes. One works just like code, what you call the Lego swapping; it validates the stuff that can be validated that way: procs, etc. Then I set up a second process that runs once a day, and it does a build against a database that has data, in order to validate that we can generate a change script out of source control. If we can't, it's time to figure out why not and what I have to do about it. This way I get as close as I can to a full set of tests, and I do it in a way that simulates a real deployment. Frequently the artifacts I build out of that are what I use for QA and other deployments.
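And on the schema side, those run-once, hard-versioned scripts you describe are exactly what that daily process has to exercise. For illustration, here's a sketch of one such step, covering the "keep the data you're dropping" and "one-shot default" cases; every object name here is invented:

-- Run-once migration step, gated behind a hard version change.
-- Step 1: park the data from the column we're about to drop.
SELECT CustomerId, LegacyRegionCode
INTO dbo.Customer_LegacyRegion_Archive
FROM dbo.Customer
WHERE LegacyRegionCode IS NOT NULL;
GO
-- Step 2: drop the old column now that its data is preserved.
ALTER TABLE dbo.Customer DROP COLUMN LegacyRegionCode;
GO
-- Step 3: add the new column with a one-shot default, which backfills
-- the existing rows...
ALTER TABLE dbo.Customer
    ADD RegionId INT NOT NULL
    CONSTRAINT DF_Customer_RegionId DEFAULT (1);
GO
-- ...then drop the default so it doesn't live on in the schema.
ALTER TABLE dbo.Customer DROP CONSTRAINT DF_Customer_RegionId;
GO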
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 28, 2014 at 4:10 pm
cdesmarais 49673 (10/27/2014)
It's not magic. It's also not a free lunch. There are tradeoffs even beyond the extra up-front work of TDD and automation. CI is poor at vetting up-front architecture. Deploying on a "good enough for my current cases" data store or schema design has bitten more than one team I've worked with. The problem is that CI conceptually focuses on code, which is trivially replaceable. Once data is persisted, there is nothing trivial about changing it. Yay, job security, but I'd rather it got done right the first time.
That's why, like others, I am in favor of a hybrid model. We deploy code-like objects through a decentralized CI pipeline, where individual teams manage their own procs, etc., all the way to production, but the database engineering team still vets schema changes and non-trivial data updates and manages a separate deployment pipeline for those. It's not perfect. The dev teams experience friction getting their changes through, and it's not always obvious to them how to coordinate the code changes in their pipeline with the schema changes in the one they don't own, but hopefully it mostly works.
That's actually really interesting. I'd love to hear more about how it works. Do you have a document you can share?
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 28, 2014 at 5:02 pm
Quote from Blog
I hold out hope that anyone producing software, from managers to developers, and everyone in between, would want to do a better job with their next project.
So do I, Steve. I have operated all these years understanding that with each project I will learn and mature, so that the next project will be better. I find it unacceptable to push and strive for mediocrity when much higher levels of excellence are attainable.
M.
Not all gray hairs are Dinosaurs!
October 29, 2014 at 11:22 am
Miles Neale (10/28/2014)
... I find it unacceptable to push and strive for mediocrity when much higher levels of excellence are attainable.
M.
And easily attainable. Often it's just a bit of an attitude change rather than much more work.
October 29, 2014 at 11:38 am
Grant Fritchey (10/28/2014)
cdesmarais 49673 (10/27/2014)
It's not magic. It's also not a free lunch. There are tradeoffs even beyond the extra up-front work of TDD and automation. CI is poor at vetting up-front architecture. Deploying on a "good enough for my current cases" data store or schema design has bitten more than one team I've worked with. The problem is that CI conceptually focuses on code, which is trivially replaceable. Once data is persisted, there is nothing trivial about changing it. Yay, job security, but I'd rather it got done right the first time. That's why, like others, I am in favor of a hybrid model. We deploy code-like objects through a decentralized CI pipeline, where individual teams manage their own procs, etc., all the way to production, but the database engineering team still vets schema changes and non-trivial data updates and manages a separate deployment pipeline for those. It's not perfect. The dev teams experience friction getting their changes through, and it's not always obvious to them how to coordinate the code changes in their pipeline with the schema changes in the one they don't own, but hopefully it mostly works.
That's actually really interesting. I'd love to hear more about how it works. Do you have a document you can share?
Unfortunately, I don't. When I crawl out from under our 2014 upgrade, I may write something up. Conceptually, each team has their own git repo, which holds the code objects they are solely responsible for. Their process deploys those as part of their CI/CD pipeline. Part of that pipeline also syncs their objects into a master repo owned by the database engineering team. The master repo also holds all the shared objects, and we have our own manually triggered pipeline to deploy from that into the different environments (CI, QA, Prod).
October 29, 2014 at 12:11 pm
cdesmarais 49673 (10/29/2014)
Grant Fritchey (10/28/2014)
cdesmarais 49673 (10/27/2014)
It's not magic. It's also not a free lunch. There are tradeoffs even beyond the extra up-front work of TDD and automation. CI is poor at vetting up-front architecture. Deploying on a "good enough for my current cases" data store or schema design has bitten more than one team I've worked with. The problem is that CI conceptually focuses on code, which is trivially replaceable. Once data is persisted, there is nothing trivial about changing it. Yay, job security, but I'd rather it got done right the first time. That's why, like others, I am in favor of a hybrid model. We deploy code-like objects through a decentralized CI pipeline, where individual teams manage their own procs, etc., all the way to production, but the database engineering team still vets schema changes and non-trivial data updates and manages a separate deployment pipeline for those. It's not perfect. The dev teams experience friction getting their changes through, and it's not always obvious to them how to coordinate the code changes in their pipeline with the schema changes in the one they don't own, but hopefully it mostly works.
That's actually really interesting. I'd love to hear more about how it works. Do you have a document you can share?
Unfortunately, I don't. When I crawl out from under our 2014 upgrade, I may write something up. Conceptually, each team has their own git repo, which holds the code objects they are solely responsible for. Their process deploys those as part of their CI/CD pipeline. Part of that pipeline also syncs their objects into a master repo owned by the database engineering team. The master repo also holds all the shared objects, and we have our own manually triggered pipeline to deploy from that into the different environments (CI, QA, Prod).
Very interesting. If you do get a chance to share more, please pass it on. grant -at- scarydba -dot- com (unobfuscate as necessary). Thanks.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning