One of the things that DevOps asks software developers to do is experiment. Try new ideas out, get feedback quickly, and then choose how to grow or stop your experiment. This is great for features, and it works well for application software.
The general flow for this is to talk to customers, and then decide what to build. In some sense, this can work, but as I heard at the DOES Summit recently, if Henry Ford had asked his early customers what to build, they'd have asked for a faster horse.
Customers are limited by their current experience. That includes not only end users, but also, for us database pros, the developers who build software. When they want to experiment, they often need some backing from the database to store information and query it.
If we want to help enable experiments and allow our software to evolve, there are two areas we need to handle. The first is schema changes, whether through new data buckets in tables or through programmable objects such as views, functions, and procedures. Adding these, or removing them when experiments aren't useful, can be cumbersome and difficult. It's amazing how quickly we create dependencies and how slow we are to remove them.
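To make that concrete, here's a minimal sketch using Python's built-in sqlite3 module, with hypothetical table and column names. An additive, nullable column is a low-risk way to run an experiment because existing queries keep working, but notice that the view we add on top immediately becomes a dependency we have to remember to remove.

```python
import sqlite3

# Hypothetical experiment: trial a new "loyalty_tier" attribute on customers
# without breaking any existing queries against the table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (name) VALUES ('Ann'), ('Bob')")

# Additive change: a nullable column is backward compatible, so code that
# only selects id and name is unaffected while the experiment runs.
conn.execute("ALTER TABLE customers ADD COLUMN loyalty_tier TEXT")

# A view gives the experiment a stable query surface for developers...
conn.execute("""CREATE VIEW v_loyalty AS
                SELECT name, COALESCE(loyalty_tier, 'none') AS tier
                FROM customers""")
print(conn.execute("SELECT name, tier FROM v_loyalty ORDER BY name").fetchall())
# → [('Ann', 'none'), ('Bob', 'none')]

# ...but it is also a brand-new dependency: the view has to be dropped
# before the experimental column can be cleanly removed later.
conn.execute("DROP VIEW v_loyalty")
```

The same additive-first pattern applies in any relational engine; it's the cleanup step, dropping the view and then the column once the experiment ends, that tends to get skipped.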
The other area is ensuring that we handle resource usage appropriately. Do we go back and tune queries, or restructure the way we've indexed items, to ensure that our system works optimally? Some tuning can be done early, and should be, but some requires feedback to understand query patterns or data loads.
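Some of that feedback can be gathered cheaply before committing to a change. As a sketch, again in Python's sqlite3 with made-up table names, comparing the query plan before and after adding an index shows the kind of restructuring that observed query patterns can justify:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = ?"

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the whole table
    # or searches it via an index; the detail text is the last column.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql, (42,)))

before = plan(query)   # e.g. "SCAN orders": a full table scan
# Feedback from real query patterns might justify this index.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
print(before)
print(after)
```

The exact plan wording varies by engine and version, but the workflow is the point: measure how the query actually runs, then decide whether the index earns its keep.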
Today, I'm wondering how, or if, you experiment in database work. What works for you, or what doesn't? Or do you hate the idea of experiments in the database world and want more specification up front? Let me know with a comment.