
Converting to .NET 6 Core


Suppose we need to convert a legacy .NET 4.5 MVC or API project to .NET 6. It might sound like a daunting task at first, and if you have never done it before, it can cause headaches for days, weeks, or even months, depending on the size of the project.

In this article I will do my best to reduce that headache by giving you starter info and some follow-up articles to learn more about the topic. Feel free to jump to any section of interest using the table of contents; that path is for advanced folks who are just missing a few pieces of the puzzle. Otherwise, buckle up and let's go through it top to bottom.

Assumptions

  • Conversion target is a real production service directly or indirectly used by thousands or millions of customers. We are not converting a demo app just for fun.

  • You are familiar with .NET 4.5, and have some basic knowledge of .NET Core, such as what goes into Program.cs, how dependency injection (DI) works in general, service lifetime and .NET middleware.

  • You have good knowledge of enterprise development practices and development lifecycle. Advice will be given with enterprise in mind, and might not apply to a pet project.

  • Your setup may vary, and some advice might not fit your situation. So, no guarantees you will be able to convert your app without issues using the ideas below.

Where To Start

A good starting point, in my opinion, is the try-convert utility. It converts all library projects to .NET Core with minimal effort (just running a command). The API project needs to be converted manually: create a default .NET Core API project and copy the files over. After that it's pretty much a manual process to wire it all up. I will cover some of the common patterns below.
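The basic workflow with try-convert looks something like this (a sketch; `MySolution.sln` is a placeholder for your own solution file):

```shell
# Install the try-convert global tool from the dotnet/try-convert repo
dotnet tool install -g try-convert

# Convert the projects in a solution; -w points at the workspace
# (a solution or project path). Run on a clean branch so you can diff.
try-convert -w ./MySolution.sln
```

Review the diff carefully afterwards; the tool handles the .csproj format conversion, not behavioral differences.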

Keep the old and new projects side by side in case you need to compare how things were implemented before your changes. Unload the old project so it doesn't cause build errors.

Important! If you want to keep git history, which is a good idea when coding for the enterprise, it's best to rename the new project to the old name once you are done with the conversion. You should also squash your changes into a single commit; that way file history will be mapped correctly.

General Architecture

DI is good, but it's best used only at the integration point: the API project itself. Libraries should not contain cross-cutting concerns such as logging, DI or configuration (like web.config). This makes them a lot easier to reason about.

Use factory classes, sometimes referred to as facades, to instantiate library classes that need configuration passed into them. Get the configuration from IConfiguration via DI.

public class MyClassFactory : IMyClassFactory
{
    private readonly IConfiguration _config; // or use a custom IAppConfig

    public MyClassFactory(IConfiguration config)
    {
        this._config = config;
    }

    public IMyClass CreateMyClass()
    {
        string param = this._config["Param"]; // or this._config.Param if using custom IAppConfig
        return new MyClass(param);
    }
}
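The factory itself then gets registered with the container at startup. A minimal sketch, assuming the .NET 6 minimal-hosting `Program.cs` and the illustrative `IMyClassFactory`/`MyClassFactory` names from above:

```csharp
// In Program.cs -- register the factory so consumers can take IMyClassFactory
// via constructor injection. Singleton works here because the factory holds
// no per-request state; it only reads configuration.
builder.Services.AddSingleton<IMyClassFactory, MyClassFactory>();
```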

A good idea is to have a proxy class that talks to IConfiguration and provides strong typing to your consumer classes. Because reading config is a cross-cutting concern, it's very important to consider proxying it via your own class.

This not only lets you change later how your configuration works, but also provides living documentation of where each setting is used (find references). Find All References works with plain strings too, but it's more difficult, involves more steps and is prone to human error.
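A minimal sketch of such a proxy, assuming a hypothetical `IAppConfig` with a single setting; all names here are illustrative:

```csharp
using Microsoft.Extensions.Configuration;

public interface IAppConfig
{
    string Param { get; }
}

// Thin proxy over IConfiguration: the rest of the codebase depends on
// IAppConfig, so "find references" on Param shows every consumer of
// that setting.
public class AppConfig : IAppConfig
{
    private readonly IConfiguration _config;

    public AppConfig(IConfiguration config)
    {
        this._config = config;
    }

    // The string key lives in exactly one place; renaming the setting
    // or changing its source is a one-line change here.
    public string Param => this._config["Param"];
}
```

Registered once, e.g. `builder.Services.AddSingleton<IAppConfig, AppConfig>();`, it can then be injected into the factory from the previous example instead of the raw IConfiguration.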

Use interfaces everywhere. It is not a strict requirement, but a good practice to get used to. You don't need to copy/paste code to create interface files; modern IDEs have an "Extract Interface" feature. Both VS Pro and Rider have it.

Rider can also "pull members up", which lets you update an existing interface as the implementation evolves. Very useful, since you will be dealing with lots of boilerplate.

NuGet Packages

There is no need to convert everything to NuGet, but it's a good idea in general. You can move versions up or down very easily, which is sometimes needed to fix critical vulnerabilities or to pin a specific version due to breaking changes. It also lets you stop storing DLLs in source control.
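With PackageReference in the new SDK-style projects, bumping a version is a one-line change in the .csproj. The package and version below are just an example:

```xml
<ItemGroup>
  <!-- Pin an exact version; bump it here to pick up a security fix -->
  <PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
</ItemGroup>
```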

Check whether the NuGet packages you are using were designed for .NET Core. Sometimes you will find old packages targeting .NET 4.5, for example, and while they often work fine, they might have edge cases or not offer the same level of performance. VS Pro shows those dependencies with an exclamation mark. Not all IDEs detect old .NET version targets; JetBrains Rider, for example, does not.

Serialization

You were likely using Newtonsoft's JSON serializer in the old project. It might be a good idea to set it up in .NET Core as well and avoid the native System.Text.Json serializer, which is very strict by default. Yes, you can extend it and customize its behavior, but it's not worth doing at scale. Instead, you can do this:

// Requires the Microsoft.AspNetCore.Mvc.NewtonsoftJson package
// and: using Newtonsoft.Json.Serialization;
builder.Services.AddControllers().AddNewtonsoftJson(opt =>
{
    // Without this override, Newtonsoft uses camelCase for serialization
    opt.SerializerSettings.ContractResolver = new DefaultContractResolver();
});

Private and Public Keys

Your legacy application might keep private/public keys in App_Data. While there are better approaches, you can keep everything as is, at least as an iteration checkpoint (get everything working first, before thinking about improvements). For this, inject IWebHostEnvironment and combine the path with its ContentRootPath, like this:

string privateKey = File.ReadAllText(Path.Combine(this._env.ContentRootPath, "App_Data/PrivateKey.xml"));

If you are starting from scratch, here is a good article that also shows how to generate the key pair.
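If you do go that route, the built-in System.Security.Cryptography APIs can generate the pair; a minimal sketch, using XML output only to match the App_Data/*.xml convention above:

```csharp
using System.Security.Cryptography;

// Generate a 2048-bit RSA key pair once, then persist the results
// (e.g. under App_Data as a checkpoint, or better, in a secret store).
using RSA rsa = RSA.Create(2048);
string privateKeyXml = rsa.ToXmlString(includePrivateParameters: true);
string publicKeyXml = rsa.ToXmlString(includePrivateParameters: false);
```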

Authorization

Authorization can be implemented in many ways, but if your old project has legacy auth code, you probably want to keep it. You can do that via a policy.

builder.Services.AddAuthorization(configure =>
{
    configure.AddPolicy(SomePolicyKey, policy =>
    {
        policy.Requirements.Add(new YourAuthRequirement());
    });
});

and then:

builder.Services.AddScoped<IAuthorizationHandler, YourAuthHandler>();

Check this article on MSDN for more details. You'll need to inject IHttpContextAccessor if you need to peek at the HTTP request and make authorization decisions based on it, for example to conditionally force HTTPS in local dev via app config.

builder.Services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
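A sketch of what such a handler might look like, assuming the `YourAuthRequirement` from above; the HTTPS check is purely illustrative:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;

public class YourAuthHandler : AuthorizationHandler<YourAuthRequirement>
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public YourAuthHandler(IHttpContextAccessor httpContextAccessor)
    {
        this._httpContextAccessor = httpContextAccessor;
    }

    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, YourAuthRequirement requirement)
    {
        HttpContext? httpContext = this._httpContextAccessor.HttpContext;

        // Illustrative check only -- this is where your ported legacy
        // auth logic would inspect the request (headers, scheme, etc.).
        if (httpContext is not null && httpContext.Request.IsHttps)
        {
            context.Succeed(requirement);
        }

        // Not calling Succeed leaves the requirement unmet and the
        // request ends up forbidden.
        return Task.CompletedTask;
    }
}
```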

Unit Tests

I have been using xUnit lately and it's pretty good, but I recommend picking the test framework you are most familiar with. For mocking dependencies I like to use Moq, a popular framework with a somewhat steep learning curve.

Because of that, I recommend keeping Moq out of your unit test bodies and putting it behind helper methods as a plumbing layer. This separates business logic from plumbing, reducing human error and simplifying debugging. It also leaves junior developers less room for mistakes when changing the code.

This advice also applies to other test frameworks: try to capture only the core part of the test in any method. For example, if you pass a table with one row, your output should be some value derived from that row; if you pass no rows, then return null or throw an exception. Resist the temptation to check that all methods were called with the expected arguments.

Purists would disagree, but I think that draws attention away from the business purpose. Cognitive complexity should be the determining factor. Even if you tested 100% of the code, arguments included, it still would not guarantee the end-to-end result. So favor quick iteration cycles with less code over perfect coverage.

using Moq;
// ...top of the file

[Fact]
public void Should_Do_Smth()
{
    var mockService = MockMyService("test");    
    var svc = new TestService(mockService);
    var result = svc.TestMethod(null!); // null forgiving operator ;)
    
    Assert.Equal("test value", result);
}

private IMyService MockMyService(string returnValue)
{
    var mockService = new Mock<IMyService>();
    mockService.Setup(x => x.MyMethod(It.IsAny<string>())).Returns(returnValue);
    return mockService.Object;
}  

You might have noticed the null-forgiving operator in the above code block. It's there to enable testing where your IDE would normally issue a warning if you try to pass a null value where non-null is expected.

This concludes my list of thoughts related to my recent .NET 6 conversion work. Good luck and happy coding!


WRITTEN BY
Victor Zakharov
Web Developer (Angular/.NET)