If you are in Management Studio and have seen a suggested index, or if you have used a resource such as Pinal Dave’s Missing Indexes Script, you may be tempted to apply the suggested index without exercising any discretion.
Please, be careful! I have seen a table with 10GB of data pages and 50GB+ of index pages. This is a terrifying thing for write performance, and often, a little bit of analysis is in order.
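To find out whether your own database has this problem, you can compare data pages to index pages per table. The following is a minimal sketch using the standard `sys.dm_db_partition_stats` DMV (pages are 8 KB each); the exact thresholds that should worry you are a judgment call:

```sql
-- Rough comparison of data size vs. nonclustered index size per user table.
-- index_id 0 = heap, 1 = clustered index (the data itself); > 1 = nonclustered.
SELECT
    OBJECT_NAME(ps.object_id) AS table_name,
    SUM(CASE WHEN ps.index_id IN (0, 1)
             THEN ps.used_page_count ELSE 0 END) * 8 / 1024 AS data_mb,
    SUM(CASE WHEN ps.index_id > 1
             THEN ps.used_page_count ELSE 0 END) * 8 / 1024 AS index_mb
FROM sys.dm_db_partition_stats AS ps
WHERE OBJECTPROPERTY(ps.object_id, 'IsUserTable') = 1
GROUP BY ps.object_id
ORDER BY index_mb DESC;
```

A table whose index_mb dwarfs its data_mb is a good candidate for the analysis described below.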
A common anomaly is to see multiple indexes with identical key columns but different sets of included columns. Consolidating these into a single covering index is low-hanging fruit: simply drop the duplicate indexes and replace them with one index that has all of the necessary included columns. Most of the time, anyway.
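As a sketch of that consolidation, suppose two indexes on a hypothetical `dbo.Orders` table share the key column `CustomerId` but include different columns (all object names here are invented for illustration):

```sql
-- Hypothetical duplicates:
--   IX_Orders_CustomerId_A: key (CustomerId) INCLUDE (OrderDate)
--   IX_Orders_CustomerId_B: key (CustomerId) INCLUDE (TotalDue)
DROP INDEX IX_Orders_CustomerId_A ON dbo.Orders;
DROP INDEX IX_Orders_CustomerId_B ON dbo.Orders;

-- One covering index with the union of the include columns.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, TotalDue);
```

Queries that were covered by either original index are still covered by the merged one, but writes now maintain a single structure instead of two.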
What if all of those included columns add up to most or all of the table’s columns? You are then keeping a second copy of the table inside the nonclustered index, and every write to the table incurs extra I/O to maintain it.
In that case, the best bet, most of the time, is to eliminate all of the included columns and keep just the key columns as one index. You may, however, have an outlier query that requires included columns to evade a performance bottleneck. How do you determine this? The best way is with the missing-index DMVs included in SQL Server. Rank the missing indexes by estimated impact multiplied by the number of executions that asked for the missing index, and look at the highest values of that product.
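That ranking can be sketched directly against the missing-index DMVs. This computes an impact score per suggestion as average estimated user impact times the number of seeks and scans that wanted the index; the exact weighting is one reasonable choice, not the only one:

```sql
-- Rank missing-index suggestions by (estimated impact x demand).
SELECT TOP (25)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.avg_user_impact * (migs.user_seeks + migs.user_scans) AS impact_score
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY impact_score DESC;
```

Keep in mind that these DMVs reset on instance restart, so the counts only reflect the workload since the server last came up.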
Pinal Dave’s script calculates this number for you. Again, however, you have to analyze each recommendation: just throwing the indexes at the database may solve a read issue while creating a write or disk-space issue.
We are DBAs. We thrive on using the simplest solution and moving on. In many cases, however, we must do a little digging to determine which solution is truly the simplest.