I have a table with roughly 200 million rows. The table contains a number of columns, but at the moment the only indexes are the clustered primary key and a non-clustered index on the datetime column.
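For reference, the layout looks roughly like this (table, column, and constraint names are genericized placeholders):

```sql
-- Approximate DDL; actual names and column list are genericized
CREATE TABLE GenericTable (
    GenericId   BIGINT IDENTITY(1,1) NOT NULL,
    GenericDate DATETIME NOT NULL,
    -- ... other columns ...
    CONSTRAINT PK_GenericTable PRIMARY KEY CLUSTERED (GenericId)
);

CREATE NONCLUSTERED INDEX IX_GenericTable_GenericDate
    ON GenericTable (GenericDate);
```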
This first query returns zero rows in less than a second:
SELECT *
FROM GenericTable
WHERE GenericDate > '01-01-1753' AND GenericDate <= '01-29-1753'
This query takes an excessively long time, approximately two minutes, to return the same zero rows:
DECLARE @startDate DATETIME, @endDate DATETIME
SET @startDate = '01-01-1753'
SET @endDate = '01-29-1753'
SELECT *
FROM GenericTable
WHERE GenericDate > @startDate AND GenericDate <= @endDate
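Would forcing a statement-level recompile, so the optimizer can see the actual values of the variables at execution time, be expected to change the plan? Something like this (untested on this table):

```sql
-- Untested idea: OPTION (RECOMPILE) should let the optimizer use the
-- runtime values of @startDate/@endDate instead of a generic estimate
SELECT *
FROM GenericTable
WHERE GenericDate > @startDate AND GenericDate <= @endDate
OPTION (RECOMPILE);
```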
Using a date range that does contain data, the gap is similar: the first query returns thousands of rows in less than a second, while the second query still takes 30 seconds or more to return the same data.
EDIT: I also looked at the execution plans, and the second query is not using the non-clustered index:
1st Query:
Select <- Nested Loops (Inner Join) - 0% <- Index Seek (NonClustered) - 0%
<- Key Lookup (Clustered) - 100%
2nd Query:
Select <- Parallelism (Gather Streams) - 10% <- Clustered Index Scan (Clustered) - 90%