SQL Server Performance for .NET Developers: Execution Plans, Index Strategy & Query Store

Posted on: 4/23/2026 3:15:01 AM

Your query runs fine on a dev machine with 1,000 rows but times out in production with 10 million. You add indexes "just in case," yet INSERTs keep getting slower. Nearly every .NET developer hits these problems as an application starts to scale. This article digs into the three pillars of SQL Server performance: reading Execution Plans, designing a sound Index Strategy, and leveraging Query Store for continuous monitoring.

~80% of slow queries are caused by missing or wrong indexes
10-100x improvement possible with Covering Indexes
Zero cost: Query Store has been built in since SQL Server 2016
.NET 10 / EF Core 10 with compiled queries

Execution Plans — The GPS Map of Every Query

An Execution Plan is the strategy SQL Server's Query Optimizer creates to execute your query. It tells you exactly: where data is read from (table scan vs. index seek), which join algorithm is used (nested loop, hash match, or merge join), and the cost of each step. Without reading Execution Plans, performance tuning is just guesswork.

graph LR
    A["SQL Query"] --> B["Query Parser"]
    B --> C["Query Optimizer"]
    C --> D["Execution Plan"]
    D --> E["Storage Engine"]
    E --> F["Result Set"]

    C --> G["Statistics"]
    C --> H["Indexes Available"]

    style A fill:#e94560,stroke:#fff,color:#fff
    style D fill:#4CAF50,stroke:#fff,color:#fff
    style G fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style H fill:#f8f9fa,stroke:#e94560,color:#2c3e50
Figure 1: SQL Server query processing flow — the Optimizer uses Statistics and Indexes to choose the optimal Execution Plan

How to Read Execution Plans

In SSMS, press Ctrl+L for the Estimated Plan or Ctrl+M then run the query for the Actual Plan. With EF Core, you can enable logging to capture the SQL for analysis:

// Get SQL from EF Core for plan analysis
var query = context.Orders
    .Where(o => o.CustomerId == customerId && o.Status == "Active")
    .OrderByDescending(o => o.CreatedDate)
    .Take(20);

// Log SQL to console
var sql = query.ToQueryString();
Console.WriteLine(sql);

Or use T-SQL directly:

-- Enable Actual Execution Plan via T-SQL
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT TOP 20 o.Id, o.TotalAmount, o.CreatedDate, c.FullName
FROM Orders o
INNER JOIN Customers c ON c.Id = o.CustomerId
WHERE o.CustomerId = 12345 AND o.Status = N'Active'
ORDER BY o.CreatedDate DESC;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

Key Operators to Recognize

- Table Scan / Clustered Index Scan: reads the entire table; acceptable for small tables (under ~10K rows), a red flag for large ones.
- Index Seek: jumps straight to the matching rows in the index; the fastest access path.
- Key Lookup (Bookmark Lookup): after seeking on a non-clustered index, SQL Server goes back to the clustered index for additional columns; a silent performance killer when it runs thousands of times per query.
- Hash Match: joins via an in-memory hash table; memory-intensive but efficient for large datasets without suitable indexes.

Reading STATISTICS IO — Numbers That Don't Lie

When SET STATISTICS IO ON is enabled, SQL Server returns logical reads — the number of data pages (8KB each) read from the buffer pool. This is the most reliable metric because it's unaffected by cache state or system load. The optimization goal is to minimize logical reads:

-- Sample output:
-- Table 'Orders'. Scan count 1, logical reads 4521, physical reads 0
-- Table 'Customers'. Scan count 0, logical reads 2, physical reads 0

-- After adding a covering index:
-- Table 'Orders'. Scan count 1, logical reads 3, physical reads 0  ← 1500x reduction!
-- Table 'Customers'. Scan count 0, logical reads 2, physical reads 0

Index Strategy — The Art of Balancing Reads and Writes

An index is like a book's table of contents: it speeds up finding information, but every time the content changes, the index must be updated too. Too many indexes slow down INSERT/UPDATE; too few indexes slow down SELECT. The key lies in choosing the right columns, in the right order, with the right index type.

graph TB
    subgraph CI["Clustered Index (B-Tree)"]
        direction TB
        R["Root Page"] --> I1["Intermediate"]
        R --> I2["Intermediate"]
        I1 --> L1["Leaf = Data Row"]
        I1 --> L2["Leaf = Data Row"]
        I2 --> L3["Leaf = Data Row"]
        I2 --> L4["Leaf = Data Row"]
    end

    subgraph NCI["Non-Clustered Index"]
        direction TB
        NR["Root"] --> NI1["Intermediate"]
        NR --> NI2["Intermediate"]
    NI1 --> NL1["Leaf = Key + Locator"]
    NI2 --> NL2["Leaf = Key + Locator"]
        NL1 -.->|"Key Lookup"| L2
    end

    style R fill:#e94560,stroke:#fff,color:#fff
    style NR fill:#2c3e50,stroke:#fff,color:#fff
    style L1 fill:#f8f9fa,stroke:#e0e0e0,color:#2c3e50
    style L2 fill:#f8f9fa,stroke:#e0e0e0,color:#2c3e50
    style L3 fill:#f8f9fa,stroke:#e0e0e0,color:#2c3e50
    style L4 fill:#f8f9fa,stroke:#e0e0e0,color:#2c3e50
    style NL1 fill:#f8f9fa,stroke:#2c3e50,color:#2c3e50
    style NL2 fill:#f8f9fa,stroke:#2c3e50,color:#2c3e50
Figure 2: Clustered Index structure (data lives at the leaf) vs Non-Clustered Index (leaf contains a pointer back to the clustered index)

Clustered Index — Each Table Has Only One

The clustered index determines the physical storage order of data on disk. By default, the Primary Key is the clustered index. For most transactional tables, IDENTITY (int/bigint auto-increment) is the best choice for the clustering key because inserts always append to the end — no page splits. Using a GUID as the clustering key is the most common anti-pattern because random values cause constant page splits:

-- ANTI-PATTERN: GUID as clustered key
CREATE TABLE Orders (
    Id UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWID(),  -- Random → page splits!
    ...
);

-- BETTER: If you must use GUID, use NEWSEQUENTIALID()
CREATE TABLE Orders (
    Id UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWSEQUENTIALID(),
    ...
);

-- BEST for most scenarios: IDENTITY
CREATE TABLE Orders (
    Id BIGINT IDENTITY(1,1) PRIMARY KEY,
    ExternalId UNIQUEIDENTIFIER DEFAULT NEWID() UNIQUE,  -- GUID for API exposure
    ...
);

Non-Clustered Index — Fast SELECTs but Slower INSERTs

A non-clustered index is a separate B-tree structure containing key columns plus a pointer to the clustered index. Design principles:

- Leftmost Prefix: an index on (A, B, C) serves queries filtering on A, A+B, or A+B+C, but not on B or C alone. Example: WHERE CustomerId = ? AND Status = ? → index (CustomerId, Status).
- Selectivity First: place the columns with more unique values first in a composite index. Example: CustomerId (100K distinct values) before Status (5 values).
- INCLUDE Columns: store extra columns at the leaf level without adding them to the key; this creates a Covering Index and eliminates Key Lookups. Example: INCLUDE (TotalAmount, CreatedDate).
- Filtered Index: index only a subset of rows, making the index smaller and faster. Example: WHERE Status = 'Active' on a table where 90% of rows are inactive.
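The filtered index from the last principle can be sketched in T-SQL. The index, table, and column names below follow the Orders examples used throughout this article, so treat them as assumptions and adjust to your schema:

```sql
-- Sketch: filtered index covering only the ~10% of rows that are Active
CREATE NONCLUSTERED INDEX IX_Orders_Active_Customer
ON Orders (CustomerId, CreatedDate DESC)
INCLUDE (TotalAmount)
WHERE Status = N'Active';
```

One caveat: a parameterized predicate such as WHERE Status = @Status cannot reliably use this index, because the optimizer cannot prove the parameter value matches the filter at compile time. Filtered indexes work best when the query repeats the filter as a literal.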

Covering Index — The Ultimate Weapon

A Covering Index contains all columns a query needs, so SQL Server reads only the index without going back to the base table (Key Lookup). This is the most effective optimization technique, often yielding 10-100x improvements:

-- Most common query pattern
SELECT o.Id, o.TotalAmount, o.CreatedDate
FROM Orders o
WHERE o.CustomerId = @CustomerId AND o.Status = N'Active'
ORDER BY o.CreatedDate DESC;

-- Perfect Covering Index for the query above
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Status
ON Orders (CustomerId, Status, CreatedDate DESC)
INCLUDE (TotalAmount);
-- Key: CustomerId, Status (WHERE), CreatedDate DESC (ORDER BY)
-- Include: TotalAmount (SELECT — leaf-only, doesn't participate in seek)

Check Missing Indexes from the Execution Plan

SQL Server automatically suggests missing indexes right in the Execution Plan (green "Missing Index" hint). You can also query the DMV:

SELECT
    CONVERT(DECIMAL(18,2), gs.avg_user_impact) AS avg_improvement_pct,
    gs.user_seeks + gs.user_scans AS total_usage,
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns
FROM sys.dm_db_missing_index_groups g
JOIN sys.dm_db_missing_index_group_stats gs ON g.index_group_handle = gs.group_handle
JOIN sys.dm_db_missing_index_details d ON g.index_handle = d.index_handle
WHERE d.database_id = DB_ID()
ORDER BY gs.avg_user_impact * (gs.user_seeks + gs.user_scans) DESC;

Index Maintenance — Don't Forget Fragmentation

Indexes become fragmented over time with INSERT/UPDATE/DELETE operations. Fragmentation > 30% warrants a REBUILD; 10-30% warrants a REORGANIZE:

-- Check fragmentation
SELECT
    OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10 AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- REORGANIZE: online, minimal locking, suitable for 10-30% fragmentation
ALTER INDEX IX_Orders_Customer_Status ON Orders REORGANIZE;

-- REBUILD: offline (or ONLINE with Enterprise), suitable for > 30%
ALTER INDEX IX_Orders_Customer_Status ON Orders REBUILD WITH (ONLINE = ON);
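Fragmentation maintenance pairs naturally with statistics maintenance: stale statistics mislead the optimizer's row estimates even on a freshly rebuilt index. A minimal sketch, reusing the index name from the examples above:

```sql
-- Refresh statistics for one index (table/index names from the examples above)
UPDATE STATISTICS Orders IX_Orders_Customer_Status WITH FULLSCAN;

-- Or refresh all statistics in the database (sampled, cheaper)
EXEC sp_updatestats;
```

Note that REBUILD already refreshes that index's statistics with a full scan, while REORGANIZE does not touch statistics at all.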

Query Store — 24/7 Performance Monitoring

Query Store is a built-in feature since SQL Server 2016 that acts as a "flight recorder" for your database. It automatically captures and stores information about every query: execution plans, runtime statistics (CPU, duration, logical reads), and plan change history over time. It's the most powerful tool for detecting query regression — queries that suddenly become slow due to plan changes.

graph TB
    A["Query Execution"] --> B["Query Store"]
    B --> C["Plan Store"]
    B --> D["Runtime Stats Store"]
    B --> E["Wait Stats Store"]

    C --> F["Plan Forcing"]
    D --> G["Regression Detection"]
    E --> H["Wait Analysis"]

    F --> I["Stable Performance"]
    G --> I
    H --> I

    style A fill:#e94560,stroke:#fff,color:#fff
    style B fill:#2c3e50,stroke:#fff,color:#fff
    style I fill:#4CAF50,stroke:#fff,color:#fff
    style F fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style G fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style H fill:#f8f9fa,stroke:#e94560,color:#2c3e50
Figure 3: Query Store captures Plans, Runtime Stats, and Wait Stats — helping detect regressions and force good plans

Enabling Query Store

-- Enable Query Store (recommended for every production database)
ALTER DATABASE [MyAppDb] SET QUERY_STORE = ON (
    OPERATION_MODE = READ_WRITE,
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    INTERVAL_LENGTH_MINUTES = 60,
    MAX_STORAGE_SIZE_MB = 1024,
    QUERY_CAPTURE_MODE = AUTO,        -- Only capture noteworthy queries
    SIZE_BASED_CLEANUP_MODE = AUTO,
    WAIT_STATS_CAPTURE_MODE = ON
);
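After enabling it, it is worth verifying that Query Store is actually in READ_WRITE mode, because it silently switches to READ_ONLY once MAX_STORAGE_SIZE_MB fills up:

```sql
-- Verify Query Store state and storage usage
SELECT actual_state_desc,
       desired_state_desc,
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;
```

If actual_state_desc shows READ_ONLY while desired_state_desc shows READ_WRITE, increase MAX_STORAGE_SIZE_MB or tighten the cleanup settings.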

Finding Top Resource-Consuming Queries

-- Top 20 queries by logical reads (past week)
SELECT TOP 20
    qt.query_sql_text,
    q.query_id,
    p.plan_id,
    rs.avg_logical_io_reads,
    rs.avg_duration / 1000.0 AS avg_duration_ms,
    rs.avg_cpu_time / 1000.0 AS avg_cpu_ms,
    rs.count_executions,
    rs.avg_logical_io_reads * rs.count_executions AS total_logical_reads
FROM sys.query_store_runtime_stats rs
JOIN sys.query_store_plan p ON rs.plan_id = p.plan_id
JOIN sys.query_store_query q ON p.query_id = q.query_id
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
WHERE rs.last_execution_time > DATEADD(DAY, -7, GETUTCDATE())
ORDER BY total_logical_reads DESC;

Force Plan — Lock a Good Plan When Regression Hits

When you detect a query suddenly slowing down because the optimizer chose a worse plan, you can "force" the previous (better) plan to stabilize performance while investigating the root cause:

-- Force a good plan for a specific query
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Check currently forced plans
SELECT q.query_id, p.plan_id, p.is_forced_plan, qt.query_sql_text
FROM sys.query_store_plan p
JOIN sys.query_store_query q ON p.query_id = q.query_id
JOIN sys.query_store_query_text qt ON q.query_text_id = qt.query_text_id
WHERE p.is_forced_plan = 1;

-- Unforce after fixing the root cause
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;

Parameter Sniffing — The Hidden Enemy

Parameter sniffing occurs when SQL Server builds an execution plan based on the parameter values it sees at the moment a query is first compiled. If the first call passes @CustomerId = 1 (a customer with 5 orders), the optimizer builds a plan around an Index Seek. When @CustomerId = 99999 (500,000 orders) comes along, that cached plan is completely unsuitable: a scan would be faster, yet SQL Server still performs the Index Seek plus 500K Key Lookups.

Signs of Parameter Sniffing

- A query runs fast with one parameter value but extremely slowly with another, even though both are valid.
- Estimated rows in the plan differ wildly from actual rows (e.g., 5 estimated vs. 500K actual).
- Performance anomalies appear after SQL Server restarts or plan cache clears.
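One way to surface sniffing and regression candidates is to look for queries that have accumulated more than one plan in Query Store. A sketch using the built-in catalog views:

```sql
-- Queries with multiple plans: candidates for sniffing/regression analysis
SELECT q.query_id,
       COUNT(DISTINCT p.plan_id) AS plan_count,
       MAX(qt.query_sql_text) AS sample_text
FROM sys.query_store_query q
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_query_text qt ON qt.query_text_id = q.query_text_id
GROUP BY q.query_id
HAVING COUNT(DISTINCT p.plan_id) > 1
ORDER BY plan_count DESC;
```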

Solutions for Parameter Sniffing

-- Solution 1: OPTIMIZE FOR UNKNOWN — average plan for all values
SELECT * FROM Orders WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);

-- Solution 2: RECOMPILE — new plan every time (costs CPU but plan is always accurate)
SELECT * FROM Orders WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- Solution 3: Query Store Plan Forcing (recommended for production)
-- Force the most suitable plan based on collected data
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

In EF Core, note that TagWith only prepends a SQL comment to the generated query; it does not apply a query hint by itself. The tag is still valuable because it makes the query easy to find in Query Store, and a DbCommandInterceptor can detect the marker and append the actual hint:

// EF Core: tag a query suffering from parameter sniffing.
// TagWith emits a leading SQL comment (visible in Query Store);
// an interceptor can use the marker to append OPTION (RECOMPILE).
var orders = await context.Orders
    .Where(o => o.CustomerId == customerId)
    .TagWith("UseRecompile")
    .ToListAsync();
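A command interceptor that turns such a tag into a real hint could look like the sketch below. The tag name "UseRecompile" and the class name are assumptions for illustration; the DbCommandInterceptor base class and its ReaderExecuting overrides are the standard EF Core interception API:

```csharp
using System;
using System.Data.Common;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Appends OPTION (RECOMPILE) to any command tagged with "UseRecompile".
public class RecompileInterceptor : DbCommandInterceptor
{
    private static void AppendHint(DbCommand command)
    {
        // TagWith comments appear at the top of the generated SQL
        if (command.CommandText.Contains("UseRecompile"))
            command.CommandText += Environment.NewLine + "OPTION (RECOMPILE)";
    }

    public override InterceptionResult<DbDataReader> ReaderExecuting(
        DbCommand command, CommandEventData eventData,
        InterceptionResult<DbDataReader> result)
    {
        AppendHint(command);
        return result;
    }

    public override ValueTask<InterceptionResult<DbDataReader>> ReaderExecutingAsync(
        DbCommand command, CommandEventData eventData,
        InterceptionResult<DbDataReader> result,
        CancellationToken cancellationToken = default)
    {
        AppendHint(command);
        return ValueTask.FromResult(result);
    }
}

// Registration (in DbContext configuration):
// optionsBuilder.UseSqlServer(connString)
//               .AddInterceptors(new RecompileInterceptor());
```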

Common Anti-Patterns in EF Core

EF Core is an excellent ORM, but without care it can generate extremely inefficient queries. Here are the most common anti-patterns:

1. N+1 Query Problem

// ❌ ANTI-PATTERN: N+1 queries — 1 query for Orders + N queries for Customer
var orders = await context.Orders.ToListAsync();
foreach (var order in orders)
{
    Console.WriteLine(order.Customer.FullName);  // Lazy load = 1 query/order!
}

// ✅ FIX: Eager loading with Include
var orders = await context.Orders
    .Include(o => o.Customer)
    .ToListAsync();

// ✅ BETTER: Projection — only fetch needed columns
var orderDtos = await context.Orders
    .Select(o => new OrderDto
    {
        Id = o.Id,
        Total = o.TotalAmount,
        CustomerName = o.Customer.FullName
    })
    .ToListAsync();
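A related pitfall: when Include pulls a large collection navigation, the single joined SQL statement duplicates the parent row for every child. AsSplitQuery (available since EF Core 5) asks EF Core to issue separate queries instead. A sketch, where o.Items is a hypothetical collection navigation not defined in the examples above:

```csharp
// Sketch: split query for collection includes. Reference navigations
// like o.Customer do not cause duplication and don't need this.
var orders = await context.Orders
    .Include(o => o.Items)
    .AsSplitQuery()
    .ToListAsync();
```

The trade-off is multiple roundtrips and a small risk of inconsistent reads between the queries, so reserve it for includes that genuinely explode the row count.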

2. Client-Side Evaluation

// ❌ ANTI-PATTERN: FormatDate() can't translate to SQL
// → EF Core loads entire table into memory then filters!
var orders = await context.Orders
    .Where(o => FormatDate(o.CreatedDate) == "2026-04-23")
    .ToListAsync();

// ✅ FIX: Use SQL-translatable expressions
var targetDate = new DateTime(2026, 4, 23);
var orders = await context.Orders
    .Where(o => o.CreatedDate.Date == targetDate)
    .ToListAsync();

3. SELECT * When You Only Need a Few Columns

// ❌ Fetching full entity (possibly 30+ columns, including Body nvarchar(max))
var posts = await context.Posts
    .Where(p => p.Status == "Published")
    .OrderByDescending(p => p.CreatedDate)
    .Take(20)
    .ToListAsync();

// ✅ Projection — only fetch display columns, significantly reduces IO
var posts = await context.Posts
    .Where(p => p.Status == "Published")
    .OrderByDescending(p => p.CreatedDate)
    .Take(20)
    .Select(p => new PostListItem
    {
        Id = p.Id,
        Title = p.Title,
        CreatedDate = p.CreatedDate,
        ViewCount = p.ViewCount
    })
    .ToListAsync();

4. Not Using AsNoTracking for Read-Only Queries

// ❌ Change Tracker monitors every entity — wastes memory
var products = await context.Products.ToListAsync();

// ✅ AsNoTracking — no tracking, ~30% faster for read-only
var products = await context.Products
    .AsNoTracking()
    .ToListAsync();

// ✅✅ Compiled queries for hot paths (available since EF Core 2.0)
private static readonly Func<AppDbContext, int, Task<Order?>> _getOrderById =
    EF.CompileAsyncQuery((AppDbContext ctx, int id) =>
        ctx.Orders.AsNoTracking().FirstOrDefault(o => o.Id == id));

// Usage: invoke like a delegate; the LINQ translation cost is paid only once
// var order = await _getOrderById(context, orderId);
- N+1 Queries: thousands of DB roundtrips. Fix: Include() or projection.
- Client-Side Evaluation: loads the entire table into memory. Fix: SQL-translatable expressions.
- SELECT *: reads excess columns, wasting IO and memory. Fix: projection with Select().
- No AsNoTracking: the Change Tracker adds 30%+ memory overhead. Fix: AsNoTracking() for read-only queries.
- GUID Clustered Key: page splits and constant fragmentation. Fix: IDENTITY or NEWSEQUENTIALID().
- Missing Indexes: Table Scans on million-row tables. Fix: composite and covering indexes.

SQL Server Production Optimization Checklist

10 Steps to Check Before Deploying

1. Enable Query Store on every production database
2. Review Execution Plans for the top 10 queries (by frequency and total reads)
3. Check the Missing Index DMV and create appropriate covering indexes
4. Remove unused indexes (indexes that exist but nobody uses — only adding write cost)
5. Check index fragmentation and schedule REBUILD/REORGANIZE
6. Ensure Statistics are up-to-date (AUTO_UPDATE_STATISTICS = ON, AUTO_CREATE_STATISTICS = ON)
7. Use AsNoTracking() for all read-only queries in EF Core
8. Use Projection (Select) instead of loading full entities when possible
9. Monitor Parameter Sniffing via Query Store regressed queries report
10. Set up alerts for query duration > threshold (e.g., > 5 seconds)
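For step 4 of the checklist, sys.dm_db_index_usage_stats shows which indexes are actually read versus merely maintained. A sketch (the counters reset on instance restart, so only judge after representative uptime):

```sql
-- Indexes with writes but no reads since the last restart: removal candidates
SELECT OBJECT_NAME(i.object_id) AS TableName,
       i.name AS IndexName,
       us.user_updates,
       us.user_seeks + us.user_scans + us.user_lookups AS total_reads
FROM sys.indexes i
JOIN sys.dm_db_index_usage_stats us
  ON us.object_id = i.object_id AND us.index_id = i.index_id
WHERE us.database_id = DB_ID()
  AND i.index_id > 1   -- skip heaps and clustered indexes
  AND us.user_seeks + us.user_scans + us.user_lookups = 0
ORDER BY us.user_updates DESC;
```

Note the inner join only finds indexes that have been written to; indexes the DMV has never seen at all require a LEFT JOIN to surface.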

Conclusion

SQL Server performance tuning doesn't have to be a "dark art" reserved for professional DBAs. With 3 core tools — Execution Plans for diagnosis, Index Strategy for treatment, and Query Store for continuous monitoring — you can significantly improve database performance without changing your system architecture. Start by enabling Query Store, reviewing your top resource-consuming queries, and creating covering indexes for them. Those 3 steps alone can improve response times 5-10x for most .NET applications.
