
Jetbrains .NET Days 2021 - Notes


Here are some of the notes I took while attending the JetBrains .NET Days 2021 online event, covering select sessions from Days 1 and 2. Specifically, I skipped one session from Day 1 (React/CosmosDB) and only watched one session from Day 2 (AWS/Kubernetes). This is similar to my earlier MS Build notes articles for Day 1 and Day 2. The main purpose is to provide a quick overview of the topics covered, in case you do not have the hours to watch the full videos. I tried to make it coherent, but some inaccuracies may have slipped in - use at your own risk.

Where it was necessary to explain or give more context, I put « Victor:, followed by my thoughts. These were not directly mentioned by the presenter, but they might help the reader understand why / how to better apply the technology in question.

Event details and the full agenda can be found here. The recording for Day 1 can be watched on YouTube (almost 9 hours of video). And here is the one for Day 2 (9 more hours).


Source Generators

By Andrey Dyatlov, ReSharper developer at JetBrains.

Source generators are a relatively new feature of the C# compiler. They are free to use.

When there is lots of boilerplate (for example, INotifyPropertyChanged for MVVM), we no longer need to copy-paste it into hundreds of files. The code generation process can be debugged by placing Debugger.Launch() in the generator code. Generated code can be found in Visual Studio by navigating to the type and selecting the generated portion (a partial class).
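For illustration (the PersonViewModel class is invented, not from the talk), this is the kind of INotifyPropertyChanged boilerplate a generator typically emits into the generated half of a partial class:

```csharp
using System;
using System.ComponentModel;

// Hand-written half: only the fields you care about.
public partial class PersonViewModel
{
    private string _name;
}

// The half a source generator would emit: property plus change notification.
public partial class PersonViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get => _name;
        set
        {
            if (_name == value) return;
            _name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}

class Demo
{
    static void Main()
    {
        var vm = new PersonViewModel();
        vm.PropertyChanged += (_, e) => Console.WriteLine(e.PropertyName);
        vm.Name = "Ada"; // prints "Name"
    }
}
```

With a generator in place, only the first half is written by hand; the second is regenerated on every build.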

You can also implement partial methods, which are like abstract methods except that the implementation can be provided by the generator. « Victor: So, you could, in theory, use some kind of NLP-based AI to infer what the code should be by looking at the method name and context. This, of course, can only work with proper DDD, where business logic is built from high-level building blocks.
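A minimal sketch of the mechanism (names are invented): one part of a partial class declares the method, and a generator can emit the implementing part. Since C# 9, a partial method with an accessibility modifier must be implemented somewhere, or compilation fails.

```csharp
using System;

// Declaring half - e.g. hand-written code that relies on a generated body.
public partial class Greeter
{
    public partial string Greet(string name);
}

// Implementing half - what a source generator might emit into another file.
public partial class Greeter
{
    public partial string Greet(string name) => $"Hello, {name}!";
}

class Demo
{
    static void Main() => Console.WriteLine(new Greeter().Greet("world")); // Hello, world!
}
```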

To improve code generation performance on large solutions, the generator class can subscribe to syntax notifications inside Initialize by registering a SyntaxReceiver instance there (custom code required).

Generators can only add new files. Other technologies like IL weaving can do more (but are more complex). Also look at PostSharp, a paid tool for aspect-oriented programming.

Debugging .NET apps

By Tess Ferrandez-Norlander, Principal Developer at Microsoft

The scope of this session is finding and fixing complex issues in code. Complex issues are those where step-by-step debugging is not enough. Among the most typical issues are memory leaks. There are several ways to find them.

You can start with dotnet-counters and look at memory usage.

Next, there are dumps. You can collect a .NET dump or a regular process dump, which will also include native code. To collect a GC dump:

dotnet-gcdump collect -p {processId}

Then open this dump file in VS. Or, if troubleshooting your own app, use the Diagnostic Tools window in VS, which opens when you start a debugging session.

Other useful commands:

dotnet-dump collect -p {processId}
dotnet-dump analyze {dumpName}

Depending on the dump analysis context, you can use the gcroot command, which shows what is keeping an object alive and can point you to the code causing a memory leak.

Memory leaks can be analyzed using the command line, but if you prefer a GUI, there is WinDbg.


When you are developing for the cloud, Azure has a bunch of diagnostic tools to perform similar debugging and more.

Labs for this session can be found here.

Writing high-performance C# and .NET code

By Steve Gordon, Microsoft MVP, Pluralsight author and Senior .NET Engineer with Elastic.

For the purpose of this session, performance is defined as a combination of execution time, throughput and memory allocations. Performance is contextual. And there is always a performance/readability tradeoff, which means that readability might be more important.

The best approach to improving performance is iterative: work in small increments and be practical. To measure, you can use VS Diagnostic Tools (which start with a debugging session). You can also use VS Profiling, PerfView (free, but with a steep learning curve), dotTrace or dotMemory. Steve recommends dotTrace and dotMemory (not just because they are from JetBrains).

In an attempt to improve performance, it might be worth looking at the generated code using ILSpy / JustDecompile / dotPeek. The idea here is that if we can reduce the number of generated lines or their complexity, overall performance might improve.

Performance monitoring can be used to detect regressions in production. If your performance baseline suddenly changes, perhaps a regression was introduced recently.

There is a tool called BenchmarkDotNet, which is used by .NET teams at Microsoft. You can use it to compare the performance of various implementations side by side. Not only does it show CPU time, it also shows memory allocations per execution. It requires setting custom attributes on the methods under test (and sometimes on class definitions).
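A sketch of what a BenchmarkDotNet comparison looks like (this assumes the BenchmarkDotNet NuGet package; the string-building scenario is my own illustration, not from the talk):

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // adds allocation columns to the results table
public class ConcatBenchmarks
{
    [Params(10, 100)] // each benchmark runs once per parameter value
    public int N;

    [Benchmark(Baseline = true)]
    public string StringConcat()
    {
        var s = "";
        for (int i = 0; i < N; i++) s += "x";
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < N; i++) sb.Append('x');
        return sb.ToString();
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<ConcatBenchmarks>();
}
```

Run it in Release mode; the output is a table with mean time, allocations, and a ratio column against the baseline.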

Efficient memory allocations

If you have never heard of Span<T>, it is what made ASP.NET Core run a lot faster. It creates a read/write view over a contiguous region of memory. It's extremely lightweight - almost no overhead vs raw array access. Usually we should not be using it directly, but sometimes it can save a lot of unnecessary memory allocations. Use Span<T>.Slice to create a span view over a subarray.

A simple operation like selecting a sub-array with Span<T> is a constant-time operation regardless of array size, vs LINQ (Skip, Take), which takes progressively more memory and CPU time as the array grows.

There is also ReadOnlySpan<char>, with which you can pass substrings around without extra allocations.
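A small sketch of both points (the arrays and strings are my own examples, assuming nothing beyond the BCL):

```csharp
using System;

public static class SpanDemo
{
    // A Span<T> is a view over the array, not a copy: writes go through.
    public static int[] SliceAndMutate()
    {
        int[] data = { 1, 2, 3, 4, 5, 6 };
        Span<int> middle = data.AsSpan().Slice(2, 3); // view over {3, 4, 5}, no allocation
        middle[0] = 30;                               // mutates data[2]
        return data;                                  // {1, 2, 30, 4, 5, 6}
    }

    // ReadOnlySpan<char> passes a "substring" around without allocating a new string.
    public static int PrefixLength()
    {
        ReadOnlySpan<char> prefix = "performance".AsSpan(0, 7); // "perform"
        return prefix.Length;
    }

    public static void Main()
    {
        Console.WriteLine(string.Join(",", SliceAndMutate())); // 1,2,30,4,5,6
        Console.WriteLine(PrefixLength());                     // 7
    }
}
```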

Span<T> lives on the stack, so it cannot be used as an argument or local variable inside async methods. Instead, use Memory<T> - slightly slower than Span<T>, but it works on the heap instead of the stack.
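A sketch of the usual pattern (the example is mine): accept Memory<T> in the async method and touch the underlying Span only from synchronous code.

```csharp
using System;
using System.Threading.Tasks;

public static class MemoryDemo
{
    // Span<T> cannot be an argument or local of an async method, but Memory<T> can;
    // hop back into synchronous code before touching the underlying Span.
    public static async Task<int> SumAsync(Memory<int> buffer)
    {
        await Task.Delay(1); // some asynchronous work
        return Sum(buffer.Span);
    }

    private static int Sum(ReadOnlySpan<int> values)
    {
        int sum = 0;
        foreach (int n in values) sum += n;
        return sum;
    }

    public static void Main() =>
        Console.WriteLine(SumAsync(new[] { 1, 2, 3 }.AsMemory()).GetAwaiter().GetResult()); // 6
}
```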

When arrays are created and destroyed frequently, you can use ArrayPool<T> to significantly reduce memory allocations vs creating new arrays.
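A minimal rent/return sketch (the sizes are arbitrary):

```csharp
using System;
using System.Buffers;

public static class PoolDemo
{
    public static int RentAndReturn(int minLength)
    {
        // Rent reuses pooled arrays instead of allocating a new one each time.
        int[] buffer = ArrayPool<int>.Shared.Rent(minLength);
        try
        {
            return buffer.Length; // may be larger than requested
        }
        finally
        {
            ArrayPool<int>.Shared.Return(buffer); // forgetting this slowly drains the pool
        }
    }

    public static void Main() =>
        Console.WriteLine(RentAndReturn(1024) >= 1024); // True
}
```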

System.IO.Pipelines is a technology created by the ASP.NET team to improve Kestrel; on average it improves I/O performance by 2x vs streams. It works internally on ArrayPool and manages its buffers as a linked list of segments.

System.Text.Json library

System.Text.Json – part of .NET Core 3.0 and higher:

  • Low-level – Utf8JsonReader and Utf8JsonWriter – can optimize json processing by 3-4 orders of magnitude in terms of both CPU and memory performance, in cases where it’s not necessary to read the whole JSON file to determine the outcome.
  • Mid-level – JsonDocument.
  • High-level – JsonSerializer.
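A sketch of the low-level reader (the payload and property name are invented): scan for one property and bail out early instead of deserializing everything.

```csharp
using System;
using System.Text;
using System.Text.Json;

public static class JsonScan
{
    // Scans a UTF-8 payload token by token for one property, without
    // materializing the whole document into an object graph.
    public static int ReadSize(byte[] utf8Json)
    {
        var reader = new Utf8JsonReader(utf8Json);
        while (reader.Read())
        {
            if (reader.TokenType == JsonTokenType.PropertyName && reader.ValueTextEquals("size"))
            {
                reader.Read();            // advance to the property value
                return reader.GetInt32(); // stop early; the rest is never parsed
            }
        }
        throw new InvalidOperationException("'size' property not found");
    }

    public static void Main() =>
        Console.WriteLine(ReadSize(Encoding.UTF8.GetBytes("{\"name\":\"demo\",\"size\":3}"))); // 3
}
```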

How to convince business

So how to convince business to improve performance? If you tell them it’s possible to save 100 nanoseconds per function call, they might not be so impressed. Instead, create proof of concept (POC) of the optimization, estimate cost savings, and then talk with decision makers.

Useful resources on the topic

Steve recommends this book:

It has over 1000 pages on how memory works.

All source code for this demo is accessible here on GitHub. You can find the slides here.

Embracing gRPC in .NET

By Irina Scurtu, Microsoft MVP, Software Architect at Endava.

Using RPC makes it look like the code runs on the same machine, but it does not. This approach is prone to errors.

gRPC originated at Google, whose internal RPC infrastructure dates back to the early 2000s. It was open sourced in 2015, gRPC v1.0 came out in 2016, and since 2019 it has been a first-class citizen in .NET Core. gRPC is contract based and uses HTTP/2 by default, which means it's faster.

It uses protocol buffers for serialization, which results in smaller payloads. gRPC is available in many languages. It also supports code generation for statically typed languages.

gRPC is using a .proto file for its definition (protocol buffers format).

Visual Studio has a template to create a gRPC service; it's called ASP.NET Core gRPC Service.

gRPC calls can be unary or streaming. Unary is when the client sends a request and gets a single response, similar to a normal function call. There are also client-streaming and server-streaming modes, and bi-directional streaming, where both sides can send messages independently over an established connection. Bi-directional is more complex to implement.
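A sketch of the four call shapes in a .proto contract (the service and message names below are invented for illustration):

```protobuf
syntax = "proto3";

service Telemetry {
  // Unary: one request, one response - like a normal function call.
  rpc GetReading (ReadingRequest) returns (Reading);

  // Server streaming: one request, a stream of responses.
  rpc WatchReadings (ReadingRequest) returns (stream Reading);

  // Client streaming: a stream of requests, one response.
  rpc UploadReadings (stream Reading) returns (UploadSummary);

  // Bi-directional streaming: both sides send messages independently.
  rpc Sync (stream Reading) returns (stream Reading);
}

message ReadingRequest { string sensor_id = 1; }
message Reading { string sensor_id = 1; double value = 2; }
message UploadSummary { int32 count = 1; }
```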

All gRPC requests are of POST type.

Strengths vs REST:

  • Action based.
  • Programming semantics.
  • Tighter coupling.
  • Binary based (= faster).

gRPC is the future of Web API*.

« Victor: From a quick Google search, gRPC is roughly 7 times faster than REST when receiving data & roughly 10 times faster than REST when sending data for this specific payload. gRPC payload is 5 times smaller.

Building modern applications with GraphQL and Blazor

By Michael Staib, Developer, ChilliCream.

Benefits of GraphQL

Without GraphQL, flexibility on the front end is heavily dependent on proper API design. If wrong decisions are made on the back end, it can result in a cumbersome and non-performant front end, because there is no other way to handle it.

With GraphQL, the front-end developer is in the driver's seat. It works on trees of data, returning only what we need. We can use the connections in the graph to get related data.

Terminology and usage

The fragment concept lets us avoid field repetition in request definitions. It's like an interface declaration that can later be referenced with the spread syntax (…FragmentName).
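A minimal sketch of the fragment syntax (the schema and field names are invented for illustration, not from the session):

```graphql
# Declared once...
fragment SpeakerInfo on Speaker {
  name
  company
}

# ...then reused via the spread syntax wherever those fields are needed.
query {
  sessions {
    title
    speaker {
      ...SpeakerInfo
    }
  }
}
```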

GraphQL is accessible via a single endpoint, so the client only needs to make a single request. There is no over- or under-fetching, which we typically have with REST, where multiple endpoints need to be pulled and aggregated in some way.

GraphQL has a type system = predictable.

When just starting with GraphQL, we can use LINQ with source generators. It’s not the preferred way, but allows more developers to get into it. « Victor: Here is a tutorial I found.


Blazor for web developers means writing less JS and more .NET (ideally, no JS at all), which gives us near-native performance. It is built on WASM, which is supported by all modern browsers.

Blazor comes in two flavors - Server and WebAssembly. Server came with .NET Core 3.0 (works on top of SignalR). WebAssembly is relatively new (May 2020).

Like React and other popular JS frameworks, Blazor uses DOM diffing (a virtual DOM) to speed up rendering.

Microsoft is experimenting with Blazor for progressive web apps, for Electron (.NET 5), for desktop (.NET 6), etc. There is a good chance Blazor will be omnipresent in the near future.

Migrating .NET/dapper from SQL to NoSQL (Couchbase)

By Matthew D Groves, Microsoft MVP, Product Marketing Manager for Couchbase.

Matthew is the author of a SQL Server to Couchbase conversion tool, which is available on GitHub.

According to him, Couchbase is the most relational-friendly NoSQL DB.

SQL-to-Couchbase Dictionary

| SQL Server | Couchbase | Note |
| --- | --- | --- |
| Server | Cluster | + Scalability / High Availability / Built-in caching |
| Schema | Scope | Often just “dbo” |
| Table | Collection | - Pre-defined columns/constraints |
| Row | Document | + Flexible JSON |
| Primary Key | Document Key | - Compound keys / - No keys |

Couchbase doesn't have pre-defined columns/constraints, and its primary key (the Document Key) does not support compound keys. We can work around the latter - just use a delimiter.
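For illustration (the `::` delimiter and the key parts are my own convention, not from the talk), a compound SQL key can be flattened into a single document key:

```csharp
using System;

public static class KeyDemo
{
    // Couchbase document keys are single strings, so a compound primary key
    // can be approximated by joining its parts with a delimiter.
    public static string MakeKey(string type, params string[] parts) =>
        type + "::" + string.Join("::", parts);

    public static void Main() =>
        Console.WriteLine(MakeKey("order", "1001", "2")); // order::1001::2
}
```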

The N1QL language makes it possible to move SQL queries over almost without modification (or with simple changes). All relational operators are supported. The Couchbase SDK for .NET is very similar to Dapper.

JetBrains DataGrip has a hierarchical display of DB objects (better than SQL Server Management Studio).

There is no Entity Framework support yet, but we can use LINQ.

What can’t be automated yet

| Views / Sprocs / Triggers / Functions | Migrate Queries |
| --- | --- |
| Couchbase M/R Views are the closest analog to SQL Server Views | There are no foreign key constraints or unique constraints (other than the document key) in Couchbase |
| Couchbase UDFs are the closest analog to Sprocs and Functions | Can be approximated with Couchbase Eventing / key construction |
| Couchbase Eventing is the closest analog to Triggers | Some queries should be replaced by K/V lookup |

Once you are in the NoSQL realm, Key/Value lookup might give better performance than parsing SQL queries. Although not required, some rewrite might make sense.

Other Tools for other SQL Databases

Contact Matthew Groves: Email, Twitter, LinkedIn.

Containerize .NET Apps and deploy to Kubernetes

By Martin Beeby, Principal .NET Advocate, AWS

Suppose your client has an unstable app but doesn't want to spend money on engineering to fix it. Martin explained how to make apps more stable and reliable by just moving the infrastructure to the cloud.

The general idea is this: take an existing app and containerize it, then move the containers to the cloud (sometimes referred to as lift-and-shift). Old technology is harder to containerize than new, but it is still possible.

The Rider IDE can add “Docker Support” from a right-click menu. It adds a Dockerfile that, for a .NET project, is configured for a multi-stage Docker build of an ASP.NET Core app. You can also create a new ASP.NET Core solution with Docker support from the start.

To get the most out of containers, keep the container size as small as possible; a multi-stage build helps with that. First, build using the .NET SDK base image, then run using the aspnet base image. You could make a simpler build by using the SDK image both to build and to run, but it would result in a larger image.
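A minimal sketch of such a multi-stage Dockerfile (the image tags and the MyApp.dll entry point are placeholders for illustration):

```dockerfile
# Build stage: the full SDK image (large, but only used to compile).
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: the much smaller aspnet base image.
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Only the second stage ends up in the final image; the SDK layers are discarded after the build.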

AWS tools for containers and IDE support

There is a tool called AWS App2Container (A2C). You can use it if you need to convert a legacy app for which you don't have source code - it can generate a container from binaries.

Amazon provides a container registry service called Amazon ECR (Elastic Container Registry), which can be private or public. There is a plugin for the Rider IDE called AWS Explorer, where you can browse your ECR repositories.

Rider can also do live debugging of apps running in Elastic Container Service.

Benefit of containers vs VMs

VMs are heavy and usually take a long time to start (Martin gave ~5 min as a baseline). While a VM is loading, the other VMs are running under increased load. If that increased load is caused by an activity spike, it could cause a cascading failure of the whole cluster.

Containers are much lighter. Similar to managed VMs, an orchestrator will restart failed containers if necessary, but here it only takes ~100 ms. The numbers are approximate; the main point is a few orders of magnitude difference in favor of containers. Because containers are quicker to start, the overall system becomes more reliable.

Container terminology

There is Amazon ECS (older tech, proprietary) and EKS (managed Kubernetes). You probably want to use the latter to avoid vendor lock-in.

Node – smallest unit of computing for Kubernetes (logically, like a VM).

Containers are placed on nodes. If one of the nodes fails, Kubernetes will take care of it (restart, move apps to other nodes, etc.). A cluster of nodes can span multiple availability zones, which are usually far away from each other geographically. It is unlikely for multiple availability zones to fail at the same time.

A Pod is a set of containers that make up a microservice, which itself is a portion of the app.

A pool of nodes is called a cluster.

AWS and Kubernetes

Pricing structure for Amazon EKS - you pay for control plane, and for each node.

kubectl is the standard Kubernetes command-line tool to work with clusters; it is not specific to AWS (AWS-specific operations use the AWS CLI). In AWS, a node is an EC2 virtual machine.

To add an application:

kubectl apply -f filename

Where you point it to a JSON or YAML file, or a URL. This file contains a definition of which images to run, how to run them etc.
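Such a file might look like this minimal Deployment manifest (all names, the image URL and the replica count are illustrative only):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                 # how many copies of the container to run
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          # an image previously pushed to ECR (account/region are placeholders)
          image: <account>.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
          ports:
            - containerPort: 80
```

Running kubectl apply -f deployment.yaml then creates or updates the Deployment to match the file.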

AWS also offers a service called AWS Fargate: a cluster managed by Amazon, which gives you the option to pay only for resources (CPU and memory). You do not need to manage (or pay for) the VMs or the control plane.

Victor Zakharov
Web Developer (Angular/.NET)