gRPC and Protocol Buffers on .NET 10 — High-Performance Microservice Communication
Posted on: 4/18/2026 2:10:16 PM
Table of contents
- 1. What Is gRPC and Why Use It?
- 2. gRPC vs REST vs GraphQL
- 3. Protocol Buffers — The API Definition Language
- 4. The Four gRPC Communication Patterns
- 5. Implementing gRPC on .NET 10
- 6. Interceptors — Cross-cutting Concerns
- 7. Health Checks and gRPC Reflection
- 8. Schema Evolution — Evolving APIs Safely
- 9. gRPC Architecture in a Microservices System
- 10. Performance Tuning and Best Practices
- 11. gRPC-Web — When You Need Browser Access
- 12. When NOT to Use gRPC?
- Conclusion
- References
In the microservices world, making services "talk" to each other quickly and reliably is make-or-break. REST with JSON has served us well for years, but when a system scales to hundreds of services handling millions of requests per second, the overhead of a text-based protocol becomes a real bottleneck. gRPC — Google's high-performance RPC framework built on HTTP/2 and Protocol Buffers — is the solution Netflix, Spotify, Square, and thousands of other companies rely on for internal communication.
1. What Is gRPC and Why Use It?
gRPC is an open-source RPC framework, originally developed at Google, that uses HTTP/2 as its transport and Protocol Buffers (Protobuf) as both Interface Definition Language (IDL) and serialization format. Instead of sending JSON text over HTTP/1.1 like REST, gRPC transmits binary data across multiplexed HTTP/2 streams — faster, leaner, and more type-safe.
flowchart LR
A["📱 gRPC Client
.NET / Go / Java"] -->|"HTTP/2 Binary Frame"| B["⚡ gRPC Server
ASP.NET Core"]
B -->|"Protobuf Response"| A
C["🌐 REST Client
Browser / Mobile"] -->|"HTTP/1.1 JSON Text"| D["🖥️ REST API
ASP.NET Core"]
D -->|"JSON Response"| C
style A fill:#e94560,stroke:#fff,color:#fff
style B fill:#e94560,stroke:#fff,color:#fff
style C fill:#f8f9fa,stroke:#2c3e50,color:#2c3e50
style D fill:#f8f9fa,stroke:#2c3e50,color:#2c3e50
Core advantages of gRPC:
- Contract-first API: Define APIs in a .proto file, then auto-generate code for both client and server — ensuring type safety end-to-end.
- HTTP/2 multiplexing: Multiple requests/responses run in parallel over one TCP connection, without HTTP/1.1's head-of-line blocking.
- Binary serialization: Protobuf encodes data into compact binary, dramatically faster than JSON parsing.
- Native streaming: Supports server streaming, client streaming, and bidirectional streaming — no WebSocket or SSE workarounds needed.
- Code generation: Auto-generates strongly-typed client/server code for 10+ languages from the same .proto file.
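To make the "compact binary" claim concrete, here is an illustrative sketch (plain Python, no Protobuf library) that hand-encodes a small product record in the Protobuf wire format and compares it with the equivalent JSON. The `Product` shape and field numbers are hypothetical, chosen to match the examples later in this post:

```python
import json
import struct

def varint(n: int) -> bytes:
    """Encode a non-negative integer as a Protobuf varint (7 bits per byte)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def tag(field_number: int, wire_type: int) -> bytes:
    # Each field is prefixed with the varint of (field_number << 3) | wire_type
    return varint((field_number << 3) | wire_type)

# Hypothetical schema: message Product { int32 id = 1; string name = 2; double price = 4; }
proto = (
    tag(1, 0) + varint(42)                  # id = 42       (wire type 0: varint)
    + tag(2, 2) + varint(3) + b"SSD"        # name = "SSD"  (wire type 2: length-delimited)
    + tag(4, 1) + struct.pack("<d", 99.99)  # price = 99.99 (wire type 1: 64-bit)
)
as_json = json.dumps({"id": 42, "name": "SSD", "price": 99.99}).encode()

print(len(proto), len(as_json))  # prints: 16 41
```

Even on this tiny record the binary payload is well under half the JSON size, because field names and punctuation never appear on the wire.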
2. gRPC vs REST vs GraphQL
Each protocol has its strengths. The table below compares them in detail to help you pick the right tool for the right problem:
| Criterion | gRPC | REST | GraphQL |
|---|---|---|---|
| Transport | HTTP/2 | HTTP/1.1 or HTTP/2 | HTTP/1.1 or HTTP/2 |
| Payload format | Protobuf (binary) | JSON (text) | JSON (text) |
| Contract | .proto file (strict) | OpenAPI/Swagger (loose) | Schema (strict) |
| Streaming | Native bi-directional | No (need SSE/WebSocket) | Subscriptions (need WebSocket) |
| Browser support | Requires gRPC-Web proxy | Native | Native |
| Latency (P99, indicative) | ~25ms | ~120ms | ~95ms |
| Throughput (indicative) | ~8,500 RPS | ~2,200 RPS | ~2,800 RPS |
| Best for | Internal service-to-service | Public API, CRUD | Flexible frontend queries |
The most common hybrid architecture
Most large production systems use both: REST or GraphQL for external APIs (browser, mobile) and gRPC for internal service-to-service communication. The API Gateway acts as the "translator" between the two worlds.
3. Protocol Buffers — The API Definition Language
Protocol Buffers (Protobuf) is the heart of gRPC. You define data structures and service interfaces in .proto files, and the protoc compiler generates code for your target language.
3.1. Basic Syntax
syntax = "proto3";
package ecommerce;
option csharp_namespace = "ECommerce.Grpc";
// Message definitions
message Product {
int32 id = 1;
string name = 2;
string description = 3;
double price = 4;
ProductCategory category = 5;
repeated string tags = 6;
optional string image_url = 7;
google.protobuf.Timestamp created_at = 8;
}
enum ProductCategory {
PRODUCT_CATEGORY_UNSPECIFIED = 0;
ELECTRONICS = 1;
CLOTHING = 2;
BOOKS = 3;
FOOD = 4;
}
message ProductFilter {
optional ProductCategory category = 1;
optional double min_price = 2;
optional double max_price = 3;
int32 page = 4;
int32 page_size = 5;
}
message ProductList {
repeated Product products = 1;
int32 total_count = 2;
}
// Service definition
service ProductService {
// Unary RPC
rpc GetProduct (GetProductRequest) returns (Product);
// Server streaming — server sends many responses
rpc ListProducts (ProductFilter) returns (stream Product);
// Client streaming — client sends many requests
rpc BulkCreateProducts (stream Product) returns (BulkCreateResponse);
// Bidirectional streaming
rpc SyncInventory (stream InventoryUpdate)
returns (stream InventoryStatus);
}
message GetProductRequest {
int32 id = 1;
}
message BulkCreateResponse {
int32 created_count = 1;
repeated string errors = 2;
}
message InventoryUpdate {
int32 product_id = 1;
int32 quantity_change = 2;
}
message InventoryStatus {
int32 product_id = 1;
int32 current_stock = 2;
bool low_stock_alert = 3;
}

Important Protobuf rules:
- Field numbers are permanent: Numbers 1-15 use 1-byte encoding; 16-2047 use 2 bytes — reserve 1-15 for your most frequently used fields.
- Never reuse a field number: When removing a field, use reserved to "lock" the number (and optionally the old name).
- Pick the right modifier: repeated for arrays/lists, optional for nullable fields, map&lt;K,V&gt; for dictionaries.
- Enums must have a 0 value: The convention is UNSPECIFIED = 0 to handle default values.
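The 1-byte versus 2-byte rule is a direct consequence of how field tags are encoded: the tag is the varint of (field_number << 3) | wire_type, and a varint carries 7 payload bits per byte, so field numbers up to 15 fit in a single tag byte. An illustrative sketch (plain Python, no Protobuf library):

```python
def tag_size(field_number: int, wire_type: int = 0) -> int:
    """Bytes needed for a field's tag: the varint of (field_number << 3) | wire_type."""
    key = (field_number << 3) | wire_type
    size = 1
    while key > 0x7F:  # each varint byte holds 7 payload bits
        key >>= 7
        size += 1
    return size

print(tag_size(15))    # 1 — (15 << 3) = 120 still fits in 7 bits
print(tag_size(16))    # 2 — (16 << 3) = 128 needs a second byte
print(tag_size(2047))  # 2
print(tag_size(2048))  # 3
```

This is why hot fields belong in slots 1-15: on a message sent millions of times per second, one tag byte per field adds up.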
4. The Four gRPC Communication Patterns
gRPC supports 4 communication patterns, each suited to a different kind of problem:
flowchart TB
subgraph U["1️⃣ Unary RPC"]
U1["Client"] -->|"1 Request"| U2["Server"]
U2 -->|"1 Response"| U1
end
subgraph SS["2️⃣ Server Streaming"]
SS1["Client"] -->|"1 Request"| SS2["Server"]
SS2 -->|"N Responses"| SS1
end
subgraph CS["3️⃣ Client Streaming"]
CS1["Client"] -->|"N Requests"| CS2["Server"]
CS2 -->|"1 Response"| CS1
end
subgraph BD["4️⃣ Bidirectional"]
BD1["Client"] -->|"N Requests"| BD2["Server"]
BD2 -->|"M Responses"| BD1
end
style U fill:#f8f9fa,stroke:#e94560
style SS fill:#f8f9fa,stroke:#4CAF50
style CS fill:#f8f9fa,stroke:#ff9800
style BD fill:#f8f9fa,stroke:#2c3e50
style U1 fill:#e94560,stroke:#fff,color:#fff
style U2 fill:#e94560,stroke:#fff,color:#fff
style SS1 fill:#4CAF50,stroke:#fff,color:#fff
style SS2 fill:#4CAF50,stroke:#fff,color:#fff
style CS1 fill:#ff9800,stroke:#fff,color:#fff
style CS2 fill:#ff9800,stroke:#fff,color:#fff
style BD1 fill:#2c3e50,stroke:#fff,color:#fff
style BD2 fill:#2c3e50,stroke:#fff,color:#fff
| Pattern | Description | Real-world use case |
|---|---|---|
| Unary | 1 request → 1 response (like REST) | CRUD operations, auth, single read/write |
| Server Streaming | 1 request → N responses over time | Real-time feeds, large file download, push notifications |
| Client Streaming | N requests → 1 aggregated response | Chunked uploads, batch inserts, telemetry data |
| Bidirectional | N requests ↔ M responses concurrently | Chat, game state sync, collaborative editing |
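The four shapes map naturally onto plain functions and generators. This toy sketch (no network, hypothetical handler names) mirrors only the request/response cardinality of each pattern, not the real gRPC API:

```python
from typing import Iterable, Iterator

# 1. Unary: one request in, one response out
def get_product(product_id: int) -> dict:
    return {"id": product_id, "name": f"product-{product_id}"}

# 2. Server streaming: one request in, many responses out
def list_products(category: str) -> Iterator[dict]:
    for i in range(3):
        yield {"id": i, "category": category}

# 3. Client streaming: many requests in, one aggregated response out
def bulk_create(products: Iterable[dict]) -> dict:
    return {"created_count": sum(1 for _ in products)}

# 4. Bidirectional: a stream of responses interleaved with a stream of requests
def sync_inventory(updates: Iterable[dict]) -> Iterator[dict]:
    stock: dict[int, int] = {}
    for u in updates:
        stock[u["product_id"]] = stock.get(u["product_id"], 0) + u["quantity_change"]
        yield {"product_id": u["product_id"], "current_stock": stock[u["product_id"]]}

print(get_product(42)["id"])                       # 42
print(len(list(list_products("books"))))           # 3
print(bulk_create([{}, {}])["created_count"])      # 2
print([s["current_stock"] for s in sync_inventory(
    [{"product_id": 1, "quantity_change": 5},
     {"product_id": 1, "quantity_change": -2}])])  # [5, 3]
```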
5. Implementing gRPC on .NET 10
5.1. Creating a gRPC Server
On .NET 10, the gRPC server is integrated into ASP.NET Core — just add the NuGet package and register the service:
// Program.cs — gRPC Server
var builder = WebApplication.CreateBuilder(args);
// Add gRPC services
builder.Services.AddGrpc(options =>
{
options.MaxReceiveMessageSize = 16 * 1024 * 1024; // 16MB
options.MaxSendMessageSize = 16 * 1024 * 1024;
options.EnableDetailedErrors = builder.Environment.IsDevelopment();
});
// Health checks for gRPC
builder.Services.AddGrpcHealthChecks()
.AddCheck("database", () => HealthCheckResult.Healthy());
// gRPC Reflection (for dev tools like grpcurl)
builder.Services.AddGrpcReflection();
var app = builder.Build();
// Map gRPC services
app.MapGrpcService<ProductServiceImpl>();
app.MapGrpcHealthChecksService();
if (app.Environment.IsDevelopment())
{
app.MapGrpcReflectionService();
}
app.Run();

5.2. Implementing the Service
// Services/ProductServiceImpl.cs
public class ProductServiceImpl : ProductService.ProductServiceBase
{
private readonly IProductRepository _repo;
private readonly ILogger<ProductServiceImpl> _logger;
public ProductServiceImpl(
IProductRepository repo,
ILogger<ProductServiceImpl> logger)
{
_repo = repo;
_logger = logger;
}
// Unary RPC
public override async Task<Product> GetProduct(
GetProductRequest request,
ServerCallContext context)
{
var product = await _repo.GetByIdAsync(
request.Id, context.CancellationToken);
if (product is null)
{
throw new RpcException(new Status(
StatusCode.NotFound,
$"Product {request.Id} not found"));
}
return MapToProto(product);
}
// Server Streaming — send each product over the stream
public override async Task ListProducts(
ProductFilter request,
IServerStreamWriter<Product> responseStream,
ServerCallContext context)
{
var products = _repo.GetFilteredAsync(request);
await foreach (var product in products
.WithCancellation(context.CancellationToken))
{
await responseStream.WriteAsync(MapToProto(product));
}
}
// Client Streaming — receive a batch from the client
public override async Task<BulkCreateResponse> BulkCreateProducts(
IAsyncStreamReader<Product> requestStream,
ServerCallContext context)
{
var created = 0;
var errors = new List<string>();
await foreach (var proto in requestStream
.ReadAllAsync(context.CancellationToken))
{
try
{
await _repo.CreateAsync(MapFromProto(proto));
created++;
}
catch (Exception ex)
{
errors.Add($"Product '{proto.Name}': {ex.Message}");
}
}
return new BulkCreateResponse
{
CreatedCount = created,
Errors = { errors }
};
}
// Bidirectional Streaming — realtime inventory sync
public override async Task SyncInventory(
IAsyncStreamReader<InventoryUpdate> requestStream,
IServerStreamWriter<InventoryStatus> responseStream,
ServerCallContext context)
{
await foreach (var update in requestStream
.ReadAllAsync(context.CancellationToken))
{
var newStock = await _repo.UpdateStockAsync(
update.ProductId, update.QuantityChange);
await responseStream.WriteAsync(new InventoryStatus
{
ProductId = update.ProductId,
CurrentStock = newStock,
LowStockAlert = newStock < 10
});
}
}
}

5.3. Creating a gRPC Client
// Client registration in the DI container
builder.Services.AddGrpcClient<ProductService.ProductServiceClient>(options =>
{
options.Address = new Uri("https://product-service:5001");
})
.ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler
{
PooledConnectionIdleTimeout = Timeout.InfiniteTimeSpan,
KeepAlivePingDelay = TimeSpan.FromSeconds(60),
KeepAlivePingTimeout = TimeSpan.FromSeconds(30),
EnableMultipleHttp2Connections = true
})
.AddInterceptor<ClientLoggingInterceptor>();
// Use the client from another service
public class OrderService
{
private readonly ProductService.ProductServiceClient _productClient;
public OrderService(ProductService.ProductServiceClient productClient)
{
_productClient = productClient;
}
public async Task<OrderResult> CreateOrder(OrderRequest order)
{
// Unary call
var product = await _productClient.GetProductAsync(
new GetProductRequest { Id = order.ProductId });
// Server streaming — read the product list
var products = new List<Product>();
using var stream = _productClient.ListProducts(
new ProductFilter { Category = ProductCategory.Electronics });
await foreach (var p in stream.ResponseStream.ReadAllAsync())
{
products.Add(p);
}
return new OrderResult { /* ... */ };
}
}

6. Interceptors — Cross-cutting Concerns
Interceptors in gRPC are similar to middleware in ASP.NET Core — they let you hook into the request/response pipeline to add logging, authentication, metrics, and retry logic without touching business logic.
flowchart LR
A["Client Request"] --> B["Auth
Interceptor"]
B --> C["Logging
Interceptor"]
C --> D["Metrics
Interceptor"]
D --> E["Service
Handler"]
E --> D
D --> C
C --> B
B --> F["Client Response"]
style A fill:#f8f9fa,stroke:#e94560,color:#2c3e50
style B fill:#e94560,stroke:#fff,color:#fff
style C fill:#4CAF50,stroke:#fff,color:#fff
style D fill:#ff9800,stroke:#fff,color:#fff
style E fill:#2c3e50,stroke:#fff,color:#fff
style F fill:#f8f9fa,stroke:#e94560,color:#2c3e50
6.1. Server Interceptor — Logging & Metrics
public class ServerLoggingInterceptor : Interceptor
{
private readonly ILogger<ServerLoggingInterceptor> _logger;
public ServerLoggingInterceptor(
ILogger<ServerLoggingInterceptor> logger) => _logger = logger;
public override async Task<TResponse> UnaryServerHandler<TRequest, TResponse>(
TRequest request,
ServerCallContext context,
UnaryServerMethod<TRequest, TResponse> continuation)
{
var sw = Stopwatch.StartNew();
var method = context.Method;
try
{
var response = await continuation(request, context);
sw.Stop();
_logger.LogInformation(
"gRPC {Method} completed in {ElapsedMs}ms",
method, sw.ElapsedMilliseconds);
return response;
}
catch (RpcException ex)
{
sw.Stop();
_logger.LogError(ex,
"gRPC {Method} failed with {StatusCode} in {ElapsedMs}ms",
method, ex.StatusCode, sw.ElapsedMilliseconds);
throw;
}
}
}
// Register in Program.cs
builder.Services.AddGrpc(options =>
{
options.Interceptors.Add<ServerLoggingInterceptor>();
});

6.2. Client Interceptor — Retry & Deadline
public class ClientRetryInterceptor : Interceptor
{
public override AsyncUnaryCall<TResponse> AsyncUnaryCall<TRequest, TResponse>(
TRequest request,
ClientInterceptorContext<TRequest, TResponse> context,
AsyncUnaryCallContinuation<TRequest, TResponse> continuation)
{
// Add a deadline if none is set
if (context.Options.Deadline == null)
{
var options = context.Options
.WithDeadline(DateTime.UtcNow.AddSeconds(30));
context = new ClientInterceptorContext<TRequest, TResponse>(
context.Method, context.Host, options);
}
return continuation(request, context);
}
}
// Or use the built-in retry policy (recommended)
builder.Services.AddGrpcClient<ProductService.ProductServiceClient>(o =>
{
o.Address = new Uri("https://product-service:5001");
})
.ConfigureChannel(o =>
{
o.ServiceConfig = new ServiceConfig
{
MethodConfigs =
{
new MethodConfig
{
Names = { MethodName.Default },
RetryPolicy = new RetryPolicy
{
MaxAttempts = 3,
InitialBackoff = TimeSpan.FromMilliseconds(500),
MaxBackoff = TimeSpan.FromSeconds(5),
BackoffMultiplier = 2,
RetryableStatusCodes =
{
StatusCode.Unavailable,
StatusCode.DeadlineExceeded
}
}
}
}
};
});

7. Health Checks and gRPC Reflection
In production environments with Kubernetes or load balancers, health checks are mandatory. gRPC has its own health checking protocol standard (grpc.health.v1.Health), and .NET 10 integrates it directly with the ASP.NET Core Health Checks system.
// Health Checks configuration
builder.Services.AddGrpcHealthChecks()
.AddAsyncCheck("database", async ct =>
{
try
{
await using var conn = new SqlConnection(connectionString);
await conn.OpenAsync(ct);
return HealthCheckResult.Healthy();
}
catch (Exception ex)
{
return HealthCheckResult.Unhealthy(ex.Message);
}
})
.AddCheck<RedisHealthCheck>("redis");
// Map the endpoint
app.MapGrpcHealthChecksService();
// Kubernetes liveness/readiness probes use grpc_health_probe:
// livenessProbe:
// grpc:
// port: 5001
// initialDelaySeconds: 5
// periodSeconds: 10

gRPC Reflection — An essential dev tool
gRPC Reflection lets clients discover services and methods on the server without having a .proto file. Combined with grpcurl or grpcui, you can test gRPC services just like using Postman for REST. Only enable it in development.
# Install grpcurl
# brew install grpcurl (macOS) or scoop install grpcurl (Windows)
# List all services
grpcurl -plaintext localhost:5001 list
# Describe a service
grpcurl -plaintext localhost:5001 describe ecommerce.ProductService
# Call a Unary RPC
grpcurl -plaintext -d '{"id": 42}' \
localhost:5001 ecommerce.ProductService/GetProduct
# Server streaming
grpcurl -plaintext -d '{"category": "ELECTRONICS", "page_size": 10}' \
localhost:5001 ecommerce.ProductService/ListProducts

8. Schema Evolution — Evolving APIs Safely
One of Protobuf's greatest strengths is the ability to evolve schemas without breaking running clients/servers. This is critical in microservices, where services deploy independently.
flowchart TD
A["v1: Product has 5 fields"] --> B{"Add a new field?"}
B -->|"✅ Safe"| C["v2: Add optional field
with new field number"]
B -->|"⚠️ Careful"| D["Change field type
requires compatible types"]
B -->|"❌ Dangerous"| E["Remove field or
reuse field number"]
C --> F["Old client ignores the new field
New client reads it"]
E --> G["Binary corruption
Data loss"]
style A fill:#f8f9fa,stroke:#e94560,color:#2c3e50
style B fill:#2c3e50,stroke:#fff,color:#fff
style C fill:#4CAF50,stroke:#fff,color:#fff
style D fill:#ff9800,stroke:#fff,color:#fff
style E fill:#e94560,stroke:#fff,color:#fff
style F fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
style G fill:#f8f9fa,stroke:#e94560,color:#2c3e50
Golden Rules for Schema Evolution
| Action | Safe? | Explanation |
|---|---|---|
| Add a new optional field | ✅ Safe | Old clients ignore it, new clients read it |
| Add a new repeated field | ✅ Safe | Defaults to an empty list |
| Rename a field | ✅ Safe | Binary encoding uses field numbers, not names |
| Delete a field + reserve the number | ⚠️ Careful | Must use reserved to lock the field number |
| Change int32 → int64 | ⚠️ Compatible | int32 values are still readable in int64 |
| Reuse a deleted field number | ❌ DO NOT | Corrupts data when old clients send messages |
| Change string → int32 | ❌ DO NOT | Incompatible wire types |
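The "rename is safe, reuse is deadly" rules all follow from one property: the wire format carries only field numbers and wire types, never names, and decoders simply skip numbers they do not recognize. A minimal illustrative decoder in Python (handles only varint and length-delimited fields, which is enough to show the behavior):

```python
def read_varint(buf: bytes, i: int) -> tuple[int, int]:
    """Read one varint starting at offset i; return (value, next offset)."""
    shift = value = 0
    while True:
        byte = buf[i]; i += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return value, i

def decode_known(buf: bytes, known_fields: set[int]) -> dict:
    """Decode wire types 0 (varint) and 2 (length-delimited),
    silently skipping any field number not in known_fields."""
    out, i = {}, 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wire_type = key >> 3, key & 7
        if wire_type == 0:
            value, i = read_varint(buf, i)
        elif wire_type == 2:
            length, i = read_varint(buf, i)
            value, i = buf[i:i + length], i + length
        else:
            raise ValueError("sketch only handles wire types 0 and 2")
        if field in known_fields:
            out[field] = value
    return out

# A "new" writer sends fields 1, 2 and a brand-new field 9
msg = bytes([0x08, 42]) + bytes([0x12, 3]) + b"SSD" + bytes([0x4A, 2]) + b"hi"
# An "old" reader that only knows fields 1 and 2 still decodes cleanly
print(decode_known(msg, {1, 2}))  # {1: 42, 2: b'SSD'}
```

This is also exactly why reusing a retired number corrupts data: the old reader would happily decode field 9's bytes as whatever type it used to mean.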
// Example of safe schema evolution
message Product {
int32 id = 1;
string name = 2;
// Field 3 was removed — LOCK it
reserved 3;
reserved "old_description";
double price = 4;
ProductCategory category = 5;
repeated string tags = 6;
optional string image_url = 7;
// v2: Add new fields with new numbers
optional double discount_percent = 8;
optional string brand = 9;
// v3: Add a sub-message
optional ProductDimensions dimensions = 10;
}
message ProductDimensions {
double weight_kg = 1;
double width_cm = 2;
double height_cm = 3;
double depth_cm = 4;
}

Use Proto-Break in CI/CD
Integrate tools like buf breaking or Proto-Break into your CI pipeline to automatically detect breaking changes before merge. One breaking change leaking into production can cause cascade failures across the entire microservices mesh.
9. gRPC Architecture in a Microservices System
Below is a typical architecture when integrating gRPC into a .NET 10 microservices system:
flowchart TB
subgraph External["External Clients"]
Web["🌐 Web App
(Vue.js)"]
Mobile["📱 Mobile App"]
end
subgraph Gateway["API Gateway Layer"]
GW["API Gateway
(YARP / Envoy)"]
end
subgraph Internal["Internal Services (gRPC)"]
PS["Product Service
gRPC + .NET 10"]
OS["Order Service
gRPC + .NET 10"]
IS["Inventory Service
gRPC + .NET 10"]
NS["Notification Service
gRPC + .NET 10"]
end
subgraph Infra["Infrastructure"]
DB1[("PostgreSQL")]
DB2[("PostgreSQL")]
MQ["Message Broker"]
end
Web -->|"REST / GraphQL"| GW
Mobile -->|"REST / GraphQL"| GW
GW -->|"gRPC"| PS
GW -->|"gRPC"| OS
OS -->|"gRPC"| PS
OS -->|"gRPC"| IS
OS -->|"gRPC"| NS
IS -->|"Bidirectional
Streaming"| PS
PS --> DB1
OS --> DB2
IS --> DB1
NS --> MQ
style Web fill:#f8f9fa,stroke:#e94560,color:#2c3e50
style Mobile fill:#f8f9fa,stroke:#e94560,color:#2c3e50
style GW fill:#e94560,stroke:#fff,color:#fff
style PS fill:#2c3e50,stroke:#fff,color:#fff
style OS fill:#2c3e50,stroke:#fff,color:#fff
style IS fill:#2c3e50,stroke:#fff,color:#fff
style NS fill:#2c3e50,stroke:#fff,color:#fff
style DB1 fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
style DB2 fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
style MQ fill:#f8f9fa,stroke:#ff9800,color:#2c3e50
10. Performance Tuning and Best Practices
10.1. Connection Management
// Optimized connection pool
builder.Services.AddGrpcClient<ProductService.ProductServiceClient>(o =>
{
o.Address = new Uri("https://product-service:5001");
})
.ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler
{
// Keep connections alive — avoid repeated TCP handshakes
PooledConnectionIdleTimeout = Timeout.InfiniteTimeSpan,
// HTTP/2 keep-alive ping
KeepAlivePingDelay = TimeSpan.FromSeconds(60),
KeepAlivePingTimeout = TimeSpan.FromSeconds(30),
// Allow multiple HTTP/2 connections when needed
EnableMultipleHttp2Connections = true,
// Connection lifetime — rotate for load balancing
PooledConnectionLifetime = TimeSpan.FromMinutes(5)
});

10.2. Compression
// Server-side: enable Gzip compression
builder.Services.AddGrpc(options =>
{
options.ResponseCompressionAlgorithm = "gzip";
options.ResponseCompressionLevel = CompressionLevel.Optimal;
options.CompressionProviders = new List<ICompressionProvider>
{
new GzipCompressionProvider(CompressionLevel.Optimal)
};
});
// Client-side: send requests with compression
var callOptions = new CallOptions(
writeOptions: new WriteOptions(WriteFlags.NoCompress)); // disable
// or enable via channel option
var channel = GrpcChannel.ForAddress("https://server:5001",
new GrpcChannelOptions
{
CompressionProviders = new[]
{
new GzipCompressionProvider(CompressionLevel.Fastest)
}
});

10.3. Deadlines & Cancellation
Always set a deadline for every gRPC call
No deadline = wait forever. In microservices, one hung service drags down the entire call chain. Rule: every RPC call must have a deadline, and deadlines must be propagated to downstream services.
// Set a deadline when calling
var deadline = DateTime.UtcNow.AddSeconds(5);
var product = await client.GetProductAsync(
new GetProductRequest { Id = 42 },
deadline: deadline);
// Propagate the deadline in the server
public override async Task<OrderResult> CreateOrder(
CreateOrderRequest request,
ServerCallContext context)
{
// Use the caller's deadline; don't create a longer one
var remainingTime = context.Deadline - DateTime.UtcNow;
var childDeadline = DateTime.UtcNow.Add(remainingTime * 0.8);
var product = await _productClient.GetProductAsync(
new GetProductRequest { Id = request.ProductId },
deadline: childDeadline);
// ...
}

10.4. Best Practices Checklist
| Practice | Description | Importance |
|---|---|---|
| Always set deadlines | Every RPC call needs a clear timeout | 🔴 Required |
| Use CancellationToken | Propagate the token through the entire async chain | 🔴 Required |
| Retry policy | Retry on Unavailable, DeadlineExceeded | 🔴 Required |
| Health checks | Implement grpc.health.v1 for K8s probes | 🔴 Required |
| Logging interceptors | Log method, duration, status code | 🟡 Recommended |
| Compression | Gzip for payloads > 1KB | 🟡 Recommended |
| Connection pooling | Reuse connections, enable multiplexing | 🟡 Recommended |
| Buf/Proto-Break in CI | Detect breaking schema changes | 🟡 Recommended |
| gRPC-Web proxy | If browser access is needed via Envoy | 🟢 Optional |
| Reflection dev-only | Disable reflection in production | 🟢 Optional |
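As a sanity check on the retry settings from section 6.2 (MaxAttempts = 3, InitialBackoff = 500 ms, BackoffMultiplier = 2, MaxBackoff = 5 s), this illustrative sketch computes the nominal backoff schedule. Note that the gRPC retry specification randomizes each actual delay in the range [0, nominal], so these are upper bounds, not exact waits:

```python
def backoff_schedule(max_attempts: int, initial_ms: float,
                     multiplier: float, max_ms: float) -> list[float]:
    """Nominal delay before each retry attempt; the first attempt has no delay."""
    delays, delay = [], initial_ms
    for _ in range(max_attempts - 1):
        delays.append(min(delay, max_ms))  # capped at MaxBackoff
        delay *= multiplier
    return delays

print(backoff_schedule(3, 500, 2, 5000))  # [500, 1000]
print(backoff_schedule(6, 500, 2, 5000))  # [500, 1000, 2000, 4000, 5000]
```

With MaxAttempts = 3, a failing call costs at most the original attempt plus roughly 1.5 s of backoff before the client gives up, which is why retries must always be paired with a deadline.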
11. gRPC-Web — When You Need Browser Access
Standard gRPC doesn't work directly in browsers because browser HTTP APIs expose neither raw HTTP/2 framing nor response trailers, both of which gRPC depends on. gRPC-Web solves this by encoding gRPC frames inside regular HTTP/1.1 or HTTP/2 requests.
// Server: enable gRPC-Web
builder.Services.AddGrpc();
var app = builder.Build();
app.UseGrpcWeb(); // Middleware that translates gRPC-Web ↔ gRPC
app.MapGrpcService<ProductServiceImpl>()
.EnableGrpcWeb(); // Enable per-service
// Client side: generate a browser client with the grpc-web or connect-web
// toolchains (note: @grpc/grpc-js is Node-only, not usable in browsers)
// Or use an Envoy proxy to translate gRPC-Web → gRPC

Connect Protocol — A modern alternative
If you need better browser compatibility, consider Connect (connectrpc.com) — a gRPC-compatible protocol that also supports JSON over HTTP/1.1, making it easier to debug and usable with any HTTP client without special proxies.
12. When NOT to Use gRPC?
gRPC is not a silver bullet. The cases below are where you should consider another protocol:
- Public APIs for third parties: REST with OpenAPI docs is much more approachable — developers just need curl or Postman.
- Simple browser-to-server: If you don't need streaming, REST/GraphQL is simpler than gRPC-Web + a proxy.
- Small teams, few services: The overhead of the Protobuf toolchain isn't worth it for just 2-3 services.
- Need HTTP caching: gRPC can't leverage HTTP caches (CDN, reverse proxy) because every request uses POST.
- Payloads are mostly text/human-readable: If you need to inspect requests/responses often, JSON is easier to debug than binary.
Conclusion
gRPC on .NET 10 offers the perfect blend of performance (binary serialization, HTTP/2 multiplexing), type safety (Protobuf contracts), and developer experience (code generation, interceptors, built-in health checks). For internal microservices communication — where low latency and high throughput are essential — gRPC is the best choice today.
Start by defining a .proto file for your most critical service, implement Unary RPC first, then expand to streaming as real needs arise. Always set deadlines, implement health checks, and establish schema evolution rules from day one — that's the foundation for your gRPC system to scale sustainably.
References
- Microsoft Learn — gRPC services with ASP.NET Core
- gRPC Official Documentation — Guides
- GitHub — grpc-dotnet repository
- Microsoft Learn — gRPC Interceptors on .NET
- Microsoft Learn — gRPC Health Checks in ASP.NET Core
- Microsoft Learn — Versioning gRPC services
- Calmops — gRPC vs REST Complete Comparison 2026
- Microservices with gRPC and Protocol Buffers for Scalability 2026