July 7, 2008 at 1:19 pm
I have a similar situation, but it has another layer of complexity... any thoughts on how to refresh QA data with the following:
Each QA person needs a specific login ID to test against. The idea is that we would grab all related data for the specific ID and bring it down to an empty QA DB. It has been proposed that each QA person gets their own local environment; I would rather keep their IDs all in one instance.
Here is where it gets seriously crazy for us...
After a time, a single QA analyst would need to refresh the data back to the baseline. Not everyone, just this one QA person.
Thoughts on how I could accomplish this? We are looking at these IDs across more than 7 databases, and inside those DBs there are hundreds of tables.
Any guidance would be appreciated.
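A minimal sketch of that per-ID extract, assuming entirely hypothetical database and table names (Prod_Main, QA_Subset, dbo.Users, dbo.Orders, dbo.OrderLines) and a simple FK chain:

-- Every database, table, and column name below is a hypothetical illustration.
DECLARE @LoginID varchar(50);
SET @LoginID = 'qa_analyst_01';

-- Level 0: the login itself
INSERT INTO QA_Subset.dbo.Users
SELECT u.* FROM Prod_Main.dbo.Users AS u
WHERE u.LoginID = @LoginID;

-- Level 1: rows that reference the login
INSERT INTO QA_Subset.dbo.Orders
SELECT o.* FROM Prod_Main.dbo.Orders AS o
JOIN QA_Subset.dbo.Users AS u ON u.UserID = o.UserID;

-- Level 2: rows that reference level 1, and so on for each FK level
INSERT INTO QA_Subset.dbo.OrderLines
SELECT l.* FROM Prod_Main.dbo.OrderLines AS l
JOIN QA_Subset.dbo.Orders AS o ON o.OrderID = l.OrderID;

With hundreds of tables across 7 databases, a script like this would have to be generated from the FK metadata rather than written by hand, and any identity columns would need SET IDENTITY_INSERT ON per table.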
July 7, 2008 at 1:27 pm
All changes have to be tested together, so there should be only one QA environment for everyone who uses it. It is called Integration Testing. And we are talking about both DATA and CODE changes.
Regards,
Yelena Varsha
July 7, 2008 at 1:34 pm
I have the director of QA asking for this "golden copy" of data that they wish to test against and then revert back to after testing.
Is this a possibility? If one QA analyst is working on one project and another QA analyst is on another project, and QA analyst #1 is done early and needs their data "reset", how can I accomplish this easily?
July 7, 2008 at 2:08 pm
Christopher Favero (7/7/2008)
I have the director of QA asking for this "golden copy" of data that they wish to test against and then revert back to after testing. Is this a possibility? If one QA analyst is working on one project and another QA analyst is on another project, and QA analyst #1 is done early and needs their data "reset", how can I accomplish this easily?
Snapshots, multiple copies of databases (a pain to have to adjust applications), or multiple instances ... any of these should accomplish that.
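For the snapshot route, a sketch of the create-and-revert cycle; the database name QA_Main, the logical file name QA_Main_Data, and the file path are all assumptions:

-- Take the baseline; NAME must match the logical data file name of the source DB.
CREATE DATABASE QA_Main_Baseline
ON ( NAME = QA_Main_Data, FILENAME = 'D:\Snapshots\QA_Main_Baseline.ss' )
AS SNAPSHOT OF QA_Main;

-- When analyst #1 is done early, revert just that database to the baseline.
-- Note: reverting requires that this be the ONLY snapshot of QA_Main.
USE master;
ALTER DATABASE QA_Main SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE QA_Main FROM DATABASE_SNAPSHOT = 'QA_Main_Baseline';
ALTER DATABASE QA_Main SET MULTI_USER;

The revert only rewrites pages that changed since the snapshot was taken, so it is usually far faster than a full restore.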
July 7, 2008 at 3:06 pm
Can't easily do snapshots or copies, as the data size of some of our DBs is over 300 GB. With thousands of IDs, we would need 3 or 4 levels deep of data for one ID, which is a lot less than the 300 GB.
And we would need this for all related and shared DBs.
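To get a feel for how far those 3 or 4 levels actually fan out, the FK catalog views can walk the chain; dbo.Users as the root table is an assumption:

-- Recursive walk of sys.foreign_keys starting from a hypothetical root table,
-- capped at the 4 levels being copied.
WITH fk_tree AS (
    SELECT fk.parent_object_id AS child_id, 1 AS fk_level
    FROM sys.foreign_keys AS fk
    WHERE fk.referenced_object_id = OBJECT_ID('dbo.Users')
    UNION ALL
    SELECT fk.parent_object_id, t.fk_level + 1
    FROM sys.foreign_keys AS fk
    JOIN fk_tree AS t ON fk.referenced_object_id = t.child_id
    WHERE t.fk_level < 4
)
SELECT OBJECT_SCHEMA_NAME(child_id) AS schema_name,
       OBJECT_NAME(child_id)        AS table_name,
       MIN(fk_level)                AS nearest_level
FROM fk_tree
GROUP BY child_id
ORDER BY nearest_level, table_name;

Run per database, this at least puts a number on how many tables a subset script would have to touch.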
July 8, 2008 at 12:01 am
It's been a while, but I've seen a solution for mainframe DB2 where you could prepare a full set of (related) data to be loaded into an environment.
I cannot recall its name, but I can recall its pain.
First of all, setting up the baseline data is drudgery.
The huge problem with this is maintenance. As the system (schema) grows and is modified, those changes also need to be reflected in your baseline data.
On top of the user-related data, there is always a reporting part that uses all the data in your database or tables. That is a big caveat!
Back in the day, there was a dedicated team for this.
After the test period, they switched back to one QA environment for all, timed and coordinated by a QA team.
That QA environment was set up with a full copy of the production data and lived under the rules for production data (security, ...).
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
July 8, 2008 at 5:53 am
Christopher Favero (7/7/2008)
Can't easily do snapshots or copies, as the data size of some of our DBs is over 300 GB. With thousands of IDs, we would need 3 or 4 levels deep of data for one ID, which is a lot less than the 300 GB. And we would need this for all related and shared DBs.
Snapshots are sparse files though.
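Since the sparse file only stores pages changed after the snapshot was taken, a snapshot of a 300 GB database starts out near-empty; its real footprint can be checked with a DMV (the snapshot name QA_Main_Baseline is carried over from the hypothetical example above):

-- Actual bytes used on disk by the snapshot's sparse file(s);
-- this grows only as pages in the source database are modified.
SELECT DB_NAME(database_id)           AS snapshot_db,
       file_id,
       size_on_disk_bytes / 1048576.0 AS size_on_disk_mb
FROM sys.dm_io_virtual_file_stats(DB_ID('QA_Main_Baseline'), NULL);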
July 8, 2008 at 6:07 am
True, but I forgot to add that, by virtue of QA testing, the application data will change, and snapshots are read-only.
I am trying to explain that the complexity of doing this with a subset of data is starting to get out of control.
We have a very shared database architecture, with one main DB being the "1 db to rule them all" DB 😉 In total 14 DBs with a grand total of 1 TB of data, and all are interdependent, as our apps are rife with cross-database joins.
I hope this is readable as I am typing this on my crackberry.
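Given those cross-database joins, any baseline would have to cover all 14 databases as a set, taken in one quiet window. A sketch that loops over them; the QA_ naming pattern and the snapshot path are assumptions:

-- Build and run a CREATE DATABASE ... AS SNAPSHOT for every QA database;
-- one ON entry is needed per data file, with matching logical names.
DECLARE @db sysname, @sql nvarchar(max);
DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE name LIKE N'QA[_]%';
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @sql = N'CREATE DATABASE ' + QUOTENAME(@db + N'_Baseline') + N' ON '
        + STUFF((SELECT N', (NAME = ' + QUOTENAME(f.name)
                      + N', FILENAME = ''D:\Snapshots\' + @db + N'_' + f.name + N'.ss'')'
                 FROM sys.master_files AS f
                 WHERE f.database_id = DB_ID(@db)
                   AND f.type = 0   -- data files only; snapshots carry no log
                 FOR XML PATH('')), 1, 2, N'')
        + N' AS SNAPSHOT OF ' + QUOTENAME(@db) + N';';
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;

Reverting would be the same loop issuing RESTORE DATABASE ... FROM DATABASE_SNAPSHOT for every database in the set, so they stay consistent with each other.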