Hi,
I'm trying to understand why a particular query can change from a clustered index seek to a clustered index scan. I've pulled the code below out of one of our stored procedures because it was generating a large number of logical reads.
SELECT th.TransactionId, i.InstrumentDescription, td.Quantity, td.Price
FROM dbo.TransactionHeader th
INNER JOIN dbo.TransactionDetail td ON td.TransactionId = th.TransactionId
INNER JOIN dbo.Instrument i ON i.InstrumentId = th.InstrumentId
WHERE i.AccountId IN (10, 19, 26, 31);
I've modified the table and column names, but other than that it's an accurate representation of the real query.
The table I'm having issues with is TransactionDetail. It has a clustered index on the TransactionId column and a nonclustered index (also the primary key) on an identity column, which isn't used in this query. There are just under 1 million rows in the table.
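In case it helps, this is roughly what TransactionDetail looks like. The column names and data types here are illustrative; only the index layout matches what we actually have:

-- Sketch of the table as described above (names/types are placeholders)
CREATE TABLE dbo.TransactionDetail
(
    TransactionDetailId INT IDENTITY(1,1) NOT NULL,   -- identity column, nonclustered PK
    TransactionId       INT NOT NULL,                 -- clustering key
    Quantity            DECIMAL(18,4) NOT NULL,
    Price               DECIMAL(18,4) NOT NULL,
    CONSTRAINT PK_TransactionDetail PRIMARY KEY NONCLUSTERED (TransactionDetailId)
);

-- Non-unique clustered index; one TransactionHeader row can have many detail rows
CREATE CLUSTERED INDEX CIX_TransactionDetail_TransactionId
    ON dbo.TransactionDetail (TransactionId);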
As written, this query produces a clustered index seek on TransactionDetail. All of the accounts in the IN list have data.
If I add another AccountId to the IN list that has no data, the plan changes to a clustered index scan. When this happens, I also notice the query plan switches to a parallel plan.
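For reference, this is how I reproduce the scan. The value 42 is just a stand-in for the empty account, and I've added OPTION (RECOMPILE) while testing to rule out plan reuse:

SET STATISTICS IO ON;

SELECT th.TransactionId, i.InstrumentDescription, td.Quantity, td.Price
FROM dbo.TransactionHeader th
INNER JOIN dbo.TransactionDetail td ON td.TransactionId = th.TransactionId
INNER JOIN dbo.Instrument i ON i.InstrumentId = th.InstrumentId
WHERE i.AccountId IN (10, 19, 26, 31, 42)   -- 42 has no transactions
OPTION (RECOMPILE);                          -- force a fresh plan for each test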
I'm curious to know why having an AccountId with no client transactions in the IN list makes a difference to the type of index access the engine chooses.
Thanks