The symptom
Your /posts endpoint takes 40ms locally. You push to staging with realistic data and the same endpoint takes 3 seconds. You're not doing anything fancy — just a LINQ query with some navigation properties. Welcome to the N+1 query problem, EF Core's most expensive footgun.
What's actually happening
var posts = await db.Posts.ToListAsync();
foreach (var p in posts)
{
    Console.WriteLine($"{p.Title} — {p.Author.Name}");
}
With lazy-loading proxies enabled, that innocuous p.Author.Name triggers a separate database round-trip per post (without proxies, Author would simply be null, since EF Core does not lazy-load by default). 1000 posts = 1001 queries. On localhost with in-memory latency this is invisible. On production with 2ms network latency, it costs 2 seconds per page render.
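To reproduce the loop above as written, lazy loading has to be switched on explicitly. A minimal sketch, assuming the Microsoft.EntityFrameworkCore.Proxies package is installed and your navigation properties are virtual:

```csharp
// Opt in to lazy loading: navigation access now triggers a query.
builder.Services.AddDbContext<AppDb>(opt => opt
    .UseLazyLoadingProxies()
    .UseSqlServer(cs));

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; } = "";
    // Must be virtual so the runtime proxy can intercept access
    // and issue the extra SELECT behind your back.
    public virtual Author Author { get; set; } = null!;
}
```

That "behind your back" quality is exactly why lazy loading is off by default in EF Core: it makes N+1 trivially easy to write and hard to spot in code review.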
Catch it before production
Turn on EF's sensitive logging in development and watch the console while hitting your endpoints:
builder.Services.AddDbContext<AppDb>(opt => opt
    .UseSqlServer(cs)
    .LogTo(Console.WriteLine, LogLevel.Information)
    .EnableSensitiveDataLogging());
Any endpoint whose SELECT count scales with the number of rows returned is an N+1. Run your highest-traffic pages through and count the SELECTs.
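If you want the count as a number rather than eyeballing console output, EF Core's interceptor API can do it. A sketch using DbCommandInterceptor (QueryCounter is a hypothetical helper name; in a test you would assert the count stays flat as row counts grow):

```csharp
using System.Data.Common;
using System.Threading;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Counts every query command EF sends to the database.
public class QueryCounter : DbCommandInterceptor
{
    private int _count;
    public int Count => _count;

    public override InterceptionResult<DbDataReader> ReaderExecuting(
        DbCommand command,
        CommandEventData eventData,
        InterceptionResult<DbDataReader> result)
    {
        Interlocked.Increment(ref _count);
        return result; // don't alter EF's behavior, just observe
    }
}

// Registration alongside the other options:
// opt.UseSqlServer(cs).AddInterceptors(new QueryCounter());
```

A test that loads 1000 posts and asserts Count stays in the single digits will catch a reintroduced N+1 long before staging does.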
Fix it
var posts = await db.Posts
    .Include(p => p.Author)
    .Include(p => p.Tags)
    .AsSplitQuery()
    .ToListAsync();
Include(...) tells EF to JOIN the related tables. AsSplitQuery() breaks one huge JOIN into a few smaller SELECTs, which is usually faster when you include multiple collections: a single JOIN across several collections multiplies rows (the "cartesian explosion" EF warns about in its logs).
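For read-only endpoints there is one more cheap win to stack on top: skip the change tracker entirely. A sketch of the same query with AsNoTracking():

```csharp
// Read path: EF skips snapshotting every entity for change
// detection, which saves memory and CPU on large result sets.
var posts = await db.Posts
    .Include(p => p.Author)
    .AsNoTracking()
    .ToListAsync();
```

Safe whenever you only display the data; drop it if you intend to modify and SaveChanges() the loaded entities.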
The nuclear option
For hot code paths, stop materializing full entities and write projection queries that return exactly the DTO shape you need:
var result = await db.Posts
    .Select(p => new PostListDto(p.Id, p.Title, p.Author.Name))
    .ToListAsync();
One SELECT, only the columns you need, no change-tracker overhead. Keep EF for your commands, use projections for your queries.
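For completeness, the DTO referenced above might look like this (PostListDto is a name assumed for illustration; adjust the property types to your schema):

```csharp
// Positional record: immutable, terse, and EF can translate the
// constructor call in the Select(...) into a single SQL projection.
public record PostListDto(int Id, string Title, string AuthorName);
```

Because the projection never materializes Post or Author entities, there is nothing for the change tracker to track and nothing extra for lazy loading to trip over.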
Takeaway
Assume every Include and every navigation property is a potential N+1. Log-watch your dev console. Benchmark with production-sized data — not the 10 rows in your test DB.