I have posted this before elsewhere. However, given how often I come across this, I feel it’s worth reiterating…
This unfortunate situation arises all too often, and there is a relatively simple solution to it.
Often, everything performs well in your development/test environment because you have relatively few records in it! Sure, if I only had 30 records in my main table, my application would perform wonderfully too! Once it gets into the real world, and a few hundred thousand or a few million records are input or imported, the need for performance tuning becomes critically apparent. Like any defect, this type of problem is far more expensive to fix once it makes it to Production, making your grumpy boss even grumpier.
The solution is simple: create the same number of records in your development environment as you’d expect to have in the first year of operation. If that is not possible, at least load a proportional amount of data to simulate a decent load. If you have importable data (say, from legacy systems or third-party feeds), then pump it in. Be sure to scrub any private or otherwise legally sensitive information, as development environments are rarely protected from prying eyes. If you have no importable data, there are many open source utilities to create mock data that closely conforms to your actual data. If you are the creative type, you can write your own code to populate the tables with "dummy data", as sketched below. Regardless of how you fill the tables, the point is that now, as you’re first writing your queries, you’ll have a really good idea of whether you need to concentrate on a particular query to avoid future headaches and counseling bills.
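For illustration, here is a minimal sketch of the "write your own dummy data" approach, using Python’s built-in sqlite3 module as a stand-in for a real development database. The customers table, its columns, and the index name are all hypothetical; the point is simply that with half a million rows in place, a quick EXPLAIN QUERY PLAN exposes a full table scan while the query is still cheap to fix.

```python
import random
import sqlite3
import string

# An in-memory SQLite database stands in for a real development schema;
# the table, columns, and index name below are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, last_name TEXT, region TEXT)"
)

rng = random.Random(42)  # seeded so the "dummy data" is reproducible

def fake_last_name() -> str:
    # Eight random letters, capitalized -- crude, but shaped like real data.
    return "".join(rng.choices(string.ascii_lowercase, k=8)).capitalize()

# Simulate a first year of operation rather than a 30-row test table.
rows = [(fake_last_name(), rng.choice(["N", "S", "E", "W"])) for _ in range(500_000)]
conn.executemany("INSERT INTO customers (last_name, region) VALUES (?, ?)", rows)
conn.commit()

query = "SELECT id FROM customers WHERE last_name = ?"

# Without an index, the plan reports a full scan of all 500,000 rows.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, ("Smith",)):
    print(row)  # detail column reads something like 'SCAN customers'

conn.execute("CREATE INDEX idx_customers_last_name ON customers (last_name)")

# With the index in place, the planner switches to an index search.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, ("Smith",)):
    print(row)  # e.g. 'SEARCH customers USING INDEX idx_customers_last_name ...'
```

Run against a table with only 30 rows, that same full scan would be invisible in the timings; against 500,000 rows it shows up immediately, which is exactly the feedback you want before the query ever reaches Production.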