Database Indexing — The Art of Query Optimization for Production Systems

Posted on: 4/20/2026 11:12:43 AM

Have you ever seen a simple query that took 30 seconds on production drop to 5ms after adding a single CREATE INDEX line? That's not magic — that's the power of database indexing. However, indexing isn't "the more, the better." Misunderstanding indexes can make your system slower, waste space for nothing, and turn every INSERT into a nightmare.

This article dives deep into how database indexes actually work under the hood, the common index types in SQL Server and PostgreSQL, how to read Execution Plans, and indexing strategies for production systems handling millions of queries per day.

  - 1000×: query speed improvement possible with proper indexing
  - 80%: of DB performance issues are index-related
  - 10-30%: extra storage per index
  - O(log n): B-Tree index search complexity

1. How Does an Index Work?

Imagine a 1,000-page book. To find the word "Deadlock," you have two options: flip through every page (Table Scan) or open the index at the back (Index Lookup). A database index works on exactly the same principle as that book index.

In a database, an index is a separate data structure maintained alongside the original table. When you create an index on the Email column, the database creates a "condensed copy" containing the sorted email values along with pointers to the original rows.

1.1. B-Tree — The Foundation of Most Indexes

B-Tree (Balanced Tree) is the most common data structure for database indexes. What makes B-Trees special is their very low height — a B-Tree holding 1 million records is usually only 3-4 levels deep. That means just 3-4 disk reads (page reads) to find any record.

graph TD
    Root["Root Node: 30 | 60"]
    I1["Internal: 10 | 20"]
    I2["Internal: 40 | 50"]
    I3["Internal: 70 | 80 | 90"]
    L1["Leaf: 1,3,5,8,10"]
    L2["Leaf: 12,15,18,20"]
    L3["Leaf: 22,25,28,30"]
    L4["Leaf: 32,35,38,40"]
    L5["Leaf: 42,45,48,50"]
    L6["Leaf: 55,58,60"]
    Root --> I1
    Root --> I2
    Root --> I3
    I1 --> L1
    I1 --> L2
    I1 --> L3
    I2 --> L4
    I2 --> L5
    I2 --> L6

B-Tree Index structure — Root → Internal → Leaf nodes. Finding value 45 takes only 3 steps: Root → Internal (40|50) → Leaf

Each node in a B-Tree holds multiple keys and pointers. When searching for the value 45:

  1. Root node: 45 sits between 30 and 60 → go down the middle branch
  2. Internal node: 45 sits between 40 and 50 → descend to the corresponding leaf
  3. Leaf node: Find 45, return the pointer to the row in the table

Why is B-Tree efficient?

With a branching factor of 100 (each node holding up to 100 keys), a 4-level B-Tree can index up to 100 million records. Each page read on SSD takes about 0.1ms, so searching 100 million records takes ~0.4ms instead of minutes of sequential scanning.
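That arithmetic is easy to sanity-check. A small Python sketch, using the same assumptions as above (fanout of 100, ~0.1 ms per SSD page read; real fanouts depend on key size and page size):

```python
import math

def btree_levels(rows: int, fanout: int) -> int:
    """Minimum B-Tree depth needed to index `rows` keys when each
    node holds up to `fanout` keys (idealized, fully packed tree)."""
    return max(1, math.ceil(math.log(rows, fanout)))

levels = btree_levels(100_000_000, 100)   # 4 levels for 100M rows
lookup_ms = levels * 0.1                  # one page read per level on SSD
```

Doubling the data adds far less than one level; that logarithmic growth is why a billion-row table is still only one page read deeper than a ten-million-row one.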

2. Clustered vs Non-Clustered Index

This is the most important concept to grasp, especially in SQL Server. These two types of indexes behave completely differently at the physical level.

| Property | Clustered Index | Non-Clustered Index |
|---|---|---|
| Physical data | Reorders rows in the table by key | Separate structure holding row pointers |
| Max per table | 1 | 999 (SQL Server) / unlimited (PostgreSQL) |
| Leaf node holds | The entire data row | Index key + RID or Clustered Key |
| Range scan speed | Very fast (data is physically adjacent) | Slower (random I/O back to the table) |
| INSERT cost | Higher (must preserve physical order) | Lower |
| Default | Primary Key (SQL Server) | Must be created manually |
graph LR
    subgraph CI["Clustered Index"]
        direction TB
        CR["Root: 50"]
        CL1["Leaf: Rows 1-50 (full data)"]
        CL2["Leaf: Rows 51-100 (full data)"]
        CR --> CL1
        CR --> CL2
    end
    subgraph NCI["Non-Clustered Index"]
        direction TB
        NR["Root: 'M'"]
        NL1["Leaf: Email A-M + Clustered Key"]
        NL2["Leaf: Email N-Z + Clustered Key"]
        NR --> NL1
        NR --> NL2
    end
    NL1 -.->|Key Lookup| CL1
    NL2 -.->|Key Lookup| CL2

Non-Clustered Index needs a Key Lookup to retrieve data not in the index → additional I/O cost

In PostgreSQL, the concept is slightly different: every table is a heap (not physically ordered), and every index is "non-clustered" in SQL Server terms. However, PostgreSQL has the CLUSTER command to reorder data once by an index — but it doesn't maintain that order afterward.

-- SQL Server: Clustered Index is automatic with Primary Key
CREATE TABLE Users (
    Id INT IDENTITY(1,1) PRIMARY KEY, -- Clustered Index automatically
    Email NVARCHAR(255),
    FullName NVARCHAR(500),
    CreatedDate DATETIME2
);

-- Non-Clustered Index on Email
CREATE NONCLUSTERED INDEX IX_Users_Email
ON Users (Email);

-- PostgreSQL: Equivalent
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255),
    full_name VARCHAR(500),
    created_date TIMESTAMPTZ
);

CREATE INDEX idx_users_email ON users (email);

3. Composite Index and the Leftmost Prefix Rule

Composite indexes (indexes on multiple columns) are the most powerful weapon in your toolkit — and also the easiest place to make mistakes. The golden rule: the column order in a composite index determines which queries benefit from it.

-- Create a composite index
CREATE INDEX IX_Orders_Status_Date_Customer
ON Orders (Status, OrderDate, CustomerId);

With an index on (Status, OrderDate, CustomerId), the following queries can use the index:

| WHERE clause | Uses index? | Explanation |
|---|---|---|
| WHERE Status = 1 | ✅ Index Seek | Uses the first column |
| WHERE Status = 1 AND OrderDate > '2026-01-01' | ✅ Index Seek | Uses the first 2 columns (leftmost prefix) |
| WHERE Status = 1 AND OrderDate > '2026-01-01' AND CustomerId = 5 | ✅ Index Seek | Uses all 3 columns |
| WHERE OrderDate > '2026-01-01' | ❌ Index Scan or Table Scan | Skips the first column |
| WHERE CustomerId = 5 | ❌ Not usable | Skips the first 2 columns |
| WHERE Status = 1 AND CustomerId = 5 | ⚠️ Partial Seek | Seeks on Status, filters the rest |
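The table above can be verified empirically. A minimal sketch using Python's bundled sqlite3 module; SQLite applies the same leftmost-prefix rule, and its EXPLAIN QUERY PLAN output shows SEARCH ... USING INDEX for a seek versus SCAN for a full pass (the table and index names here are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (
        status      INTEGER,
        order_date  TEXT,
        customer_id INTEGER,
        total       REAL  -- extra column so the index is not covering
    );
    CREATE INDEX ix_orders_status_date_customer
        ON orders (status, order_date, customer_id);
""")

def plan(sql: str) -> str:
    """Concatenate the EXPLAIN QUERY PLAN detail lines for `sql`."""
    return " | ".join(r[3] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

# Leftmost prefix present -> index seek
seek = plan("SELECT * FROM orders "
            "WHERE status = 1 AND order_date > '2026-01-01'")

# First column skipped -> full table scan
skip = plan("SELECT * FROM orders WHERE order_date > '2026-01-01'")
```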

Common composite index mistake

Many developers order composite index columns to match the table definition rather than the way queries actually filter. Selectivity matters, but only among columns that are actually present in the WHERE clause: a Status column with just 3 values still belongs before OrderDate (millions of distinct values) if every query filters on Status with equality, because the leftmost prefix must match the predicates queries always supply.

3.1. Column-Ordering Strategy for Composite Indexes

Priority order when designing composite indexes (the ESR rule, expanded in section 9.1):

  1. Equality first: columns compared with = go first
  2. Sort next: columns used in ORDER BY come before range columns, so the index can deliver rows already sorted
  3. Range last: columns filtered with >, <, BETWEEN go last; index columns after a range predicate can no longer be used for seeking or ordering

-- Query pattern:
SELECT * FROM Orders
WHERE Status = 'shipped'        -- equality
  AND OrderDate >= '2026-01-01' -- range
ORDER BY CustomerId;            -- sort

-- Optimal index: equality → sort → range
CREATE INDEX IX_Orders_Optimal
ON Orders (Status, CustomerId, OrderDate);

4. Covering Index — Avoiding Key Lookups

This is the most powerful optimization technique that many developers overlook. When a query needs to return columns not in the index key, the database performs an extra Key Lookup (going back to the table to fetch the data). Each Key Lookup is a random I/O — for a query returning 10,000 rows, that's 10,000 additional random reads.

2.3s vs 12ms: regular index + Key Lookup vs Covering Index on a 5-million-row table returning 8,000 results

Covering Index solves this by using INCLUDE to add the required columns to the index leaf node — it doesn't affect sort order but completely eliminates Key Lookups.

-- SQL Server: INCLUDE clause
CREATE NONCLUSTERED INDEX IX_Orders_Status_Covering
ON Orders (Status, OrderDate)
INCLUDE (CustomerId, TotalAmount, ShippingAddress);

-- Now this query is fully covered:
SELECT CustomerId, TotalAmount, ShippingAddress
FROM Orders
WHERE Status = 'shipped' AND OrderDate >= '2026-01-01';
-- → Index Seek, NO Key Lookup

-- PostgreSQL: Equivalent (v11+)
CREATE INDEX idx_orders_status_covering
ON orders (status, order_date)
INCLUDE (customer_id, total_amount, shipping_address);
graph LR
    subgraph Without["Without Covering Index"]
        Q1["Query"] --> IS1["Index Seek: find key"]
        IS1 --> KL["Key Lookup: 10,000 random I/O"]
        KL --> R1["Result"]
    end
    subgraph With["With Covering Index"]
        Q2["Query"] --> IS2["Index Seek: key + INCLUDE data"]
        IS2 --> R2["Result"]
    end

Covering Index eliminates the Key Lookup entirely — significantly reducing I/O for queries returning many rows
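SQLite has no INCLUDE clause, but appending the extra columns to the index key emulates a covering index well enough to see the effect in a plan: EXPLAIN QUERY PLAN reports "USING COVERING INDEX" when the table is never touched. A sketch with illustrative names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (status TEXT, order_date TEXT,
                         customer_id INTEGER, total_amount REAL);
    -- no INCLUDE in SQLite: put the needed columns in the key instead
    CREATE INDEX ix_covering
        ON orders (status, order_date, customer_id, total_amount);
""")

rows = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT customer_id, total_amount FROM orders "
    "WHERE status = 'shipped' AND order_date >= '2026-01-01'"
).fetchall()
detail = " | ".join(r[3] for r in rows)  # expect 'USING COVERING INDEX'
```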

When to use a Covering Index?

Check the Execution Plan: if you see Key Lookup consuming >50% of the plan cost, that's a clear sign you need a Covering Index. However, don't INCLUDE too many columns — the index will bloat, use more storage, and slow down INSERT/UPDATE.

5. Filtered Index — Conditional Indexes

Filtered Index (SQL Server) or Partial Index (PostgreSQL) lets you index only a subset of the data. Especially useful when you often query a small portion of the data.

-- SQL Server: Filtered Index
-- Only index orders that haven't been processed (5% of total data)
CREATE NONCLUSTERED INDEX IX_Orders_Pending
ON Orders (OrderDate, CustomerId)
WHERE Status = 'pending';

-- Query using the filtered index:
SELECT * FROM Orders
WHERE Status = 'pending' AND OrderDate > '2026-04-01';

-- PostgreSQL: Partial Index
CREATE INDEX idx_orders_pending
ON orders (order_date, customer_id)
WHERE status = 'pending';
  - 95%: less storage than a full index
  - 20×: faster for queries on the subset
  - ~0: INSERT impact for data outside the filter
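SQLite supports the same partial-index syntax as PostgreSQL, which makes the behavior easy to demonstrate: the index is only considered when the query's WHERE clause implies the index predicate. A sketch with illustrative names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (status TEXT, order_date TEXT, customer_id INTEGER);
    CREATE INDEX idx_orders_pending
        ON orders (order_date, customer_id)
        WHERE status = 'pending';
""")

def plan(sql: str) -> str:
    return " | ".join(r[3] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

# Predicate implied by the query -> partial index usable
hit = plan("SELECT * FROM orders "
           "WHERE status = 'pending' AND order_date > '2026-04-01'")

# Different status value -> partial index unusable, falls back to a scan
miss = plan("SELECT * FROM orders "
            "WHERE status = 'shipped' AND order_date > '2026-04-01'")
```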

6. Specialized Indexes in PostgreSQL

PostgreSQL provides many index types beyond B-Tree, each optimized for a specific use case.

| Index Type | Use Case | Example |
|---|---|---|
| B-Tree | Comparison operators (=, <, >, BETWEEN) | Most cases |
| Hash | Equality only (=) | Exact lookups by ID |
| GIN | Full-text search, JSONB, array contains | WHERE tags @> '{vue}' |
| GiST | Geometric, range, nearest-neighbor | ORDER BY location <-> point |
| BRIN | Naturally ordered data (time-series) | Log table sorted by timestamp |
| SP-GiST | Unbalanced data (IP, phone prefix) | Trie for prefix search |
-- GIN Index for JSONB (PostgreSQL)
CREATE INDEX idx_products_attrs ON products
USING GIN (attributes jsonb_path_ops);

-- Efficient JSONB query:
SELECT * FROM products
WHERE attributes @> '{"color": "red", "size": "L"}';

-- BRIN Index for time-series data
-- Extremely small (~0.1% of B-Tree size) but efficient
-- when data is inserted in chronological order
CREATE INDEX idx_logs_created ON logs
USING BRIN (created_at) WITH (pages_per_range = 32);

-- Full-text Search with GIN
CREATE INDEX idx_articles_fts ON articles
USING GIN (to_tsvector('english', title || ' ' || body));

BRIN Index — A Secret Weapon for Huge Tables

BRIN (Block Range Index) stores min/max values per block range (128 pages by default). For a 500-million-row log table, a BRIN index on created_at takes only ~200KB compared to ~5GB for a B-Tree. Most effective when data is inserted sequentially — exactly the pattern of log/event tables.
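A rough sizing sketch of that comparison. The rows-per-page density and per-range summary size below are assumptions for illustration; real sizes depend on row width and index tuple overhead:

```python
ROWS            = 500_000_000
ROWS_PER_PAGE   = 100      # assumed heap density (8 KB pages, narrow rows)
PAGES_PER_RANGE = 128      # PostgreSQL default for BRIN
SUMMARY_BYTES   = 24       # assumed: min + max timestamp + tuple overhead

heap_pages = ROWS // ROWS_PER_PAGE
ranges     = -(-heap_pages // PAGES_PER_RANGE)   # ceiling division
brin_bytes = ranges * SUMMARY_BYTES              # one summary per range

# A B-Tree stores roughly key + pointer (~16 B) per ROW, not per range:
btree_bytes = ROWS * 16
ratio = btree_bytes / brin_bytes
```

With these assumptions the BRIN index lands under 1 MB against several GB for the B-Tree: the same orders-of-magnitude gap as the figures quoted above, even if the exact numbers differ per table.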

7. Reading and Understanding Execution Plans

An Execution Plan is the "map" showing how the database executes a query. Reading plans is a mandatory skill for index optimization.

-- SQL Server: View Estimated Plan
SET SHOWPLAN_XML ON;
GO
SELECT * FROM Orders WHERE Status = 'pending';
GO
SET SHOWPLAN_XML OFF;

-- Or Actual Plan (runs the query)
SET STATISTICS XML ON;
SELECT * FROM Orders WHERE Status = 'pending';
SET STATISTICS XML OFF;

-- PostgreSQL: EXPLAIN ANALYZE
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders WHERE status = 'pending';

7.1. Key Operators in an Execution Plan

graph TD
    TS["Table Scan / Seq Scan: reads the entire table ❌ slowest"]
    IS["Index Scan: reads the entire index ⚠️ better than Table Scan"]
    IK["Index Seek / Index Only Scan: jumps to the exact position ✅ fastest"]
    KL["Key Lookup / Heap Fetch: returns to the table for missing columns ⚠️ extra cost"]
    TS -.->|"Add index"| IS
    IS -.->|"Selective WHERE"| IK
    IK -.->|"Needs columns outside index"| KL
    KL -.->|"INCLUDE columns"| IK

Optimization journey: Table Scan → Index Scan → Index Seek → Covering Index (eliminates Key Lookup)

-- Sample PostgreSQL EXPLAIN output:
EXPLAIN (ANALYZE) SELECT email, full_name FROM users WHERE status = 'active';

-- RESULT WITHOUT INDEX:
Seq Scan on users  (cost=0.00..25432.00 rows=50000 width=64)
                   (actual time=0.021..245.123 rows=48532 loops=1)
  Filter: (status = 'active')
  Rows Removed by Filter: 951468
Planning Time: 0.085 ms
Execution Time: 267.456 ms      ← SLOW

-- RESULT AFTER ADDING A COVERING INDEX:
-- CREATE INDEX idx_users_status ON users (status) INCLUDE (email, full_name);
Index Only Scan using idx_users_status on users
                   (cost=0.42..1523.45 rows=50000 width=64)
                   (actual time=0.031..8.234 rows=48532 loops=1)
  Index Cond: (status = 'active')
  Heap Fetches: 0               ← No table fetches!
Planning Time: 0.112 ms
Execution Time: 12.567 ms       ← 21x FASTER

8. Index Anti-patterns — Mistakes to Avoid

8.1. Applying Functions to Indexed Columns

-- ❌ WRONG: The index on Email isn't used
SELECT * FROM Users WHERE UPPER(Email) = 'ADMIN@EXAMPLE.COM';
SELECT * FROM Users WHERE YEAR(CreatedDate) = 2026;

-- ✅ RIGHT: Keep the indexed column untouched
SELECT * FROM Users WHERE Email = 'admin@example.com'; -- with a case-insensitive (CI) collation
SELECT * FROM Users
WHERE CreatedDate >= '2026-01-01' AND CreatedDate < '2027-01-01';

-- ✅ PostgreSQL: Expression Index if really needed
CREATE INDEX idx_users_email_upper ON users (UPPER(email));
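SQLite also supports expression indexes, so the fix can be demonstrated directly: the plain index is blind to UPPER(email), while an index on the expression turns the scan back into a seek. A sketch with illustrative names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
con.execute("CREATE INDEX idx_users_email ON users (email)")

def plan(sql: str) -> str:
    return " | ".join(r[3] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT * FROM users WHERE UPPER(email) = 'ADMIN@EXAMPLE.COM'"

wrapped = plan(q)   # the function hides the column -> full scan

con.execute("CREATE INDEX idx_users_email_upper ON users (UPPER(email))")
fixed = plan(q)     # expression index matches the wrapped form -> seek
```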

8.2. Over-indexing — Too Many Indexes

Every index is a copy of the data that must be updated on INSERT/UPDATE/DELETE. A table with 10 indexes means every INSERT writes 11 times (1 table + 10 indexes).

INSERT 0.5ms vs 8ms: a table with 0 indexes vs 15 indexes, 16× slower for every write operation

8.3. Indexing Low-Cardinality Columns

-- ❌ Inefficient: Gender has only 2-3 values
CREATE INDEX IX_Users_Gender ON Users (Gender);
-- The index returns 50% of rows → Optimizer picks Table Scan instead

-- ✅ Better: Composite index or Filtered Index
CREATE INDEX IX_Users_Gender_Date ON Users (Gender, CreatedDate);
-- Or if the common query filters to just one value:
CREATE INDEX IX_Users_Active ON Users (CreatedDate) WHERE IsActive = 1;

8.4. Implicit Conversion Kills Indexes

-- ❌ UserId is VARCHAR but an INT is passed → implicit conversion
-- Database must convert EVERY ROW → Table Scan
SELECT * FROM Users WHERE UserId = 12345;

-- ✅ Pass the correct type
SELECT * FROM Users WHERE UserId = '12345';

8.5. LIKE with Leading Wildcard

-- ❌ Index unusable: wildcard at the start
SELECT * FROM Products WHERE Name LIKE '%phone%';

-- ✅ Index usable: wildcard only at the end
SELECT * FROM Products WHERE Name LIKE 'phone%';

-- ✅ Solution for full-text: use a Full-Text Index
-- SQL Server:
CREATE FULLTEXT INDEX ON Products (Name) KEY INDEX PK_Products;
SELECT * FROM Products WHERE CONTAINS(Name, 'phone');

-- PostgreSQL:
CREATE INDEX idx_products_fts ON products USING GIN (to_tsvector('simple', name));
SELECT * FROM products WHERE to_tsvector('simple', name) @@ to_tsquery('phone');

9. Index Strategy for Production

9.1. The 4-Step Process

Step 1: Collect Query Patterns

Analyze the slow query log and real workload. SQL Server has Query Store; PostgreSQL has pg_stat_statements. Focus on the top 20 resource-consuming queries — they typically account for 80% of total load.

-- SQL Server: Top queries by CPU
SELECT TOP 20
    qs.total_worker_time / qs.execution_count AS avg_cpu,
    qs.execution_count,
    SUBSTRING(qt.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY avg_cpu DESC;

-- PostgreSQL: Top queries (requires pg_stat_statements enabled)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
Step 2: Analyze Execution Plans

For each slow query, read its Execution Plan to identify: Table Scan? Key Lookup? Expensive sort? From there decide whether to create a new index or extend an existing one.

Step 3: Design Indexes Using the ESR Rule

Equality → Sort → Range. Order the columns in a composite index as: equality columns first, sort columns in the middle, range columns last. Add INCLUDE for columns that need to be SELECTed but not filtered/sorted.

Step 4: Measure and Refine

Create indexes on staging, benchmark with production-like data, then deploy. Watch index usage stats to catch indexes that aren't being used.

9.2. Finding Unused and Missing Indexes

-- SQL Server: Indexes that are never used
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name AS index_name,
    s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    AND (s.user_seeks + s.user_scans + s.user_lookups) = 0
    AND s.user_updates > 0
ORDER BY s.user_updates DESC;

-- SQL Server: Missing index suggestions
SELECT
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.avg_user_impact,
    s.user_seeks
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats s ON g.index_group_handle = s.group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;

-- PostgreSQL: Indexes that are never used
SELECT
    schemaname, tablename, indexname,
    idx_scan AS times_used,
    pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

9.3. Periodic Index Maintenance

Indexes become fragmented over time as data is INSERTed/UPDATEd/DELETEd. A fragmented index means leaf pages aren't contiguous on disk, increasing I/O.

-- SQL Server: Check fragmentation
SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    i.name AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- 10-30% fragmentation: REORGANIZE (online, light)
ALTER INDEX IX_Orders_Status ON Orders REORGANIZE;

-- > 30% fragmentation: REBUILD (more expensive but thorough)
ALTER INDEX IX_Orders_Status ON Orders REBUILD WITH (ONLINE = ON);

-- PostgreSQL: REINDEX (equivalent to REBUILD; CONCURRENTLY requires v12+)
REINDEX INDEX CONCURRENTLY idx_orders_status;

-- PostgreSQL: VACUUM + ANALYZE (routine maintenance)
VACUUM ANALYZE orders;

Caveats when REBUILDing indexes in production

In SQL Server Enterprise, use ONLINE = ON to avoid table locks. Standard Edition doesn't support online rebuild — schedule a maintenance window. In PostgreSQL, always use CONCURRENTLY to avoid exclusive locks.

10. Common Index Patterns

10.1. Efficient Pagination

-- ❌ Large OFFSET = slow (must read then skip N rows)
SELECT * FROM Products ORDER BY CreatedDate DESC
OFFSET 100000 ROWS FETCH NEXT 20 ROWS ONLY;

-- ✅ Keyset Pagination (cursor-based) + Index
-- Index: CREATE INDEX IX_Products_Date ON Products (CreatedDate DESC, Id DESC);
-- T-SQL has no row-value comparison, so expand it explicitly:
SELECT TOP 20 * FROM Products
WHERE CreatedDate < @lastDate
   OR (CreatedDate = @lastDate AND Id < @lastId)
ORDER BY CreatedDate DESC, Id DESC;

-- PostgreSQL equivalent:
SELECT * FROM products
WHERE (created_date, id) < ($1, $2)
ORDER BY created_date DESC, id DESC
LIMIT 20;
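The keyset pattern can be exercised end to end in SQLite, which supports the row-value comparison shown in the PostgreSQL query (3.15+). A sketch with illustrative data, showing that page two resumes exactly where page one ended, with no OFFSET:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, created_date TEXT)")
con.executemany("INSERT INTO products VALUES (?, ?)",
                [(i, f"2026-01-{i % 28 + 1:02d}") for i in range(1, 101)])
con.execute("CREATE INDEX ix_products_date ON products (created_date DESC, id DESC)")

PAGE = ("SELECT id, created_date FROM products {} "
        "ORDER BY created_date DESC, id DESC LIMIT 20")

page1 = con.execute(PAGE.format("")).fetchall()
last_id, last_date = page1[-1]

# Row-value comparison resumes after the last row seen -- no rows skipped
page2 = con.execute(PAGE.format("WHERE (created_date, id) < (?, ?)"),
                    (last_date, last_id)).fetchall()
```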

10.2. Soft Delete Pattern

-- Most queries only need active records
-- A Filtered Index shines here:
CREATE INDEX IX_Users_Active
ON Users (Email, CreatedDate)
WHERE IsDeleted = 0;

-- 95% of queries only touch non-deleted data
-- The index takes only 5% of the size of a full index

10.3. Multi-tenant SaaS

-- TenantId is always the leading column in composite indexes
CREATE INDEX IX_Orders_Tenant
ON Orders (TenantId, Status, OrderDate)
INCLUDE (TotalAmount);

-- Each tenant only scans a small subset of the table
SELECT Status, COUNT(*), SUM(TotalAmount)
FROM Orders
WHERE TenantId = @tenantId AND OrderDate >= '2026-01-01'
GROUP BY Status;

11. Conclusion

Database indexing isn't just CREATE INDEX and done. It's a continuous process: analyze the workload → design indexes → measure → refine. Key takeaways:

ESR Equality → Sort → Range: column order in composite indexes
INCLUDE Eliminate Key Lookup with Covering Indexes
30% Fragmentation threshold for REBUILD
DMV Dynamic Management Views: find unused/missing indexes

Pre-deploy Checklist for a New Index

  1. Tested on staging with production-like data volume?
  2. Does the index cover the SELECT/WHERE/ORDER BY columns?
  3. Is the composite index ordered by ESR?
  4. Are there duplicate indexes, or existing indexes that are subsets of the new one?
  5. Have you quantified the write-performance impact?

An index is a trade-off between read performance and write cost + storage. There's no perfect index for every case — only one that fits your specific workload. Start with the slow query log, read Execution Plans, and apply the ESR principle. The results will speak for themselves.
