The calendar tells us that once again we have reached the second Tuesday of the month. In the SQL community, as many of you may already know, this means a little party. This is the T-SQL Tuesday Party.
This month marks the 56th installment of the party. The institution was started by Adam Machanic (b|t), and this month's edition is hosted by Dev Nambi (b|t).
The topic chosen for this month is all about the art of assuming.
In many circles, to assume something carries a negative connotation. It is less drastic when you have at least a bit of evidence to support the assumption; in that case, it is closer to a presumption. I will not be discussing either of those connotations.
What is this Art?
Before getting into this art, I want to share a little background story.
Let’s try to paint a picture of a common theme I have seen in environment after environment. There are eight or nine different teams, and among them are several that support different data environments. These could include a warehouse team, an Oracle team, and a SQL team.
As a member of the SQL team, you have the back-end databases that support the most critical application for your employer/client. One of your responsibilities is to ingest data from the warehouse or from the Oracle environment.
Since this is a well-oiled machine, you have standards defined for the ingestion, the source data, and the destination. Right here we could throw out a presumption (it is well founded) that the standards will be followed.
Another element to consider is the directive from management that the data being ingested is not to be altered by the SQL team to make it conform to standards. That responsibility lies squarely on the shoulders of the team providing the data. Should bad data arrive, it is to be sent back to the team that provided it.
Following this mandate, you find that bad data is sent to the SQL team on a regular basis, and you report it back so the data, the process, or both can be fixed. The next time the data comes, it appears clean. Problem solved, right? Then it happens again, and again, and yet again.
Now it is up to you. Do you continue to just report that the data could not be imported yet again due to bad data? Or do you now assume the responsibility and change your ingestion process to handle the most common data mistakes that you have seen?
I am in favor of assuming the responsibility. Take the opportunity to make the ingestion process more robust. Take the opportunity to add better error handling. Take the opportunity to continue reporting back that there was bad data. All of these things can be done in most cases to make the process more seamless and to have it perform better.
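To make this concrete, here is a minimal T-SQL sketch of what assuming that responsibility could look like: land the incoming data in a staging table, log any rows that break the agreed-upon standards, and import only the rows that pass. The object names (dbo.StageOrders, dbo.Orders, dbo.IngestRejectLog) and the specific validation rules are hypothetical placeholders, not anything from a real environment; your own standards would dictate the actual checks.

-- Minimal sketch: validate staged rows, quarantine the bad ones, import the rest.
-- All object names and validation rules below are hypothetical placeholders.
BEGIN TRY
    BEGIN TRANSACTION;

    -- Quarantine rows that violate the agreed-upon standards instead of failing the whole load.
    INSERT INTO dbo.IngestRejectLog (OrderID, OrderDate, Amount, RejectReason, LoggedAt)
    SELECT s.OrderID, s.OrderDate, s.Amount,
           CASE
               WHEN s.OrderID IS NULL THEN 'Missing OrderID'
               WHEN TRY_CONVERT(date, s.OrderDate) IS NULL THEN 'Invalid OrderDate'
               ELSE 'Invalid Amount'
           END,
           SYSDATETIME()
    FROM dbo.StageOrders AS s
    WHERE s.OrderID IS NULL
       OR TRY_CONVERT(date, s.OrderDate) IS NULL
       OR TRY_CONVERT(decimal(18, 2), s.Amount) IS NULL;

    -- Import only the rows that pass validation.
    INSERT INTO dbo.Orders (OrderID, OrderDate, Amount)
    SELECT s.OrderID,
           TRY_CONVERT(date, s.OrderDate),
           TRY_CONVERT(decimal(18, 2), s.Amount)
    FROM dbo.StageOrders AS s
    WHERE s.OrderID IS NOT NULL
      AND TRY_CONVERT(date, s.OrderDate) IS NOT NULL
      AND TRY_CONVERT(decimal(18, 2), s.Amount) IS NOT NULL;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW; -- surface the failure so it can be reported back to the providing team
END CATCH;

The point of a pattern like this is that the bad rows are quarantined and logged rather than silently corrected or silently dropped, which keeps the mandate to report bad data back to the providing team intact.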
By assuming the responsibility to make the process more robust and to add better reporting/logging to your process, you can only help the other teams make their processes better too.
While many may condemn assumptions, I say proceed with your assumptions. Assume more responsibility. Assume better processes by making them better yourself. If it means rocking the boat, go ahead – these are good assumptions.
If you don’t, you are applying the wrong form of assumption. By not assuming the responsibility, you are assuming that somebody else will, or that the process is good enough. That is bad in the long run, and it is the real “elephant in the room”.
From here, it is up to you. How are you going to assume in your environment?