by Chris Simon
Principal Consultant at CMD Solutions

In part 1 of this series, we explored the reasons to modernise and upgrade a .NET Framework application to .NET.  In part 2, we’ll explore some architectural strategies to help with planning the migration of large applications.  Then, in part 3, we’ll dig into the nitty-gritty of working with .NET code to execute a migration.

Progressive Modernisation

When attempting the migration of a legacy codebase, especially one that is tightly coupled, organisations may be tempted to modernise the whole system at once – the so-called ‘big bang’.  This might be because it seems more efficient, or because it is too difficult to identify any other way.

However, this approach is not recommended: inevitably, after doing so much work, there will be issues – and by then you will have made so many changes that it will be very difficult to isolate which change is responsible for any given error.

Big Bang Migrations are Unlikely to Succeed

In contrast, the practices of continuous delivery in software development demonstrate the value of executing large changes in small bite-sized steps, ensuring that the overall system is fully functional after each change.

In fact, you should be able to deploy to production with a mixture of modernised and not-yet-modernised code.

This approach mitigates risk by providing regular feedback on the progress of the migration, which allows you to confirm that each step is working before proceeding to the next step.

Progressive Migrations Reduce Risk

It is well worth taking the time to identify potential incremental steps, and we’ll explore a few options to consider below.

Safety & Quality

The first step to a successful migration is ensuring you have the tools in place to allow you to make changes safely and maintain quality.

These tools are known as Continuous Integration / Continuous Delivery (CI/CD) systems.

They are designed around a series of stages that give you progressively more confidence in the quality of the current state of the code:

  1. A build stage that confirms your code compiles
  2. A test stage that executes an automated test suite within the build environment and possibly runs other checks such as linting and static security analysis
  3. A deployment process that deploys the system into a production-like test environment (e.g. a Docker container into an Amazon ECS Fargate cluster)
  4. A further test stage that executes an automated test suite against the boundary of the system in the test environment (e.g. testing APIs or UIs)
  5. Possibly further deployments to dedicated environments and tests, such as data migration tests, load/performance tests, security tests etc.
  6. Optionally deployment to production followed by production smoke tests
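
To make the first two stages concrete, here is a minimal sketch of a build-and-test workflow for GitHub Actions (one of the platforms discussed below).  The solution name is hypothetical, and a real pipeline would add the deployment and environment-test stages:

    name: ci
    on: [push]

    jobs:
      build-and-test:
        runs-on: windows-latest   # .NET Framework projects need a Windows build agent
        steps:
          - uses: actions/checkout@v4

          - uses: actions/setup-dotnet@v4
            with:
              dotnet-version: '6.0.x'

          # Stage 1: confirm the code compiles
          # (non-SDK-style .NET Framework projects may need msbuild instead)
          - run: dotnet build MySolution.sln --configuration Release

          # Stage 2: execute the automated test suite in the build environment
          - run: dotnet test MySolution.sln --configuration Release --no-build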

In the above scenario, there are three major components:

  1. The CI/CD Platform
  2. Automated Deployment Processes
  3. Automated Tests

Using a CI/CD platform has never been easier, with a range of high-quality commercial options available both as SaaS platforms and self-hosted, including GitLab, GitHub Actions, Azure DevOps, AWS Developer Tools and many others.

Depending on your infrastructure model, implementing automated deployment processes is fairly standard these days and should not present too many challenges.

However, the quality and utility of the various test suites are at the heart of what makes the whole process useful, and many legacy systems will not have a high-quality suite of tests.  Without such tests, teams need to manually test each change.  Due to the overhead of manually re-testing every function, there will be an urge to batch a large number of changes into each test run – leading back to the ‘big bang’ approach we warned against at the start.

For these reasons, one of the best things you can do to ensure a successful migration is to establish at least a minimal set of automated tests.

If you currently have no tests, a viable stepping stone is to start with ‘snapshot tests’ (using a library such as Snapshooter.net) – tests that don’t explicitly check for valid behaviour, but rather commit a ‘snapshot’ of the output (e.g. HTML, or API JSON) and simply check that the output continues to match the snapshot.  If you capture the snapshots before the migration, you can have confidence at each step that the changes you’ve made have not had an impact on the expected output.  You can think of these tests as a ‘safety harness’ around your system.
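
As a minimal sketch of what this can look like with Snapshooter and xUnit (the TestHost helper and the endpoint are hypothetical):

    using System.Threading.Tasks;
    using Snapshooter.Xunit;
    using Xunit;

    public class ProductApiSnapshotTests
    {
        [Fact]
        public async Task GetProduct_Response_MatchesSnapshot()
        {
            // Hypothetical helper that calls the endpoint under test in-process
            string json = await TestHost.GetAsync("/api/products/42");

            // The first run records the snapshot to disk;
            // subsequent runs fail if the output differs from it
            Snapshot.Match(json);
        }
    }

Capture the snapshots while the system is still on .NET Framework; any behavioural drift introduced during the migration then shows up as a failing test.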

As you go, it’s recommended to progressively add more tests that explicitly define the expected functionality for specific scenarios and requirements, as these will help you identify and resolve issues far more rapidly when they crop up.

Architectural Approaches

If you’re tackling the migration of a monolithic application, depending on the size and existing architecture, there are a number of approaches to consider.

  1. If your existing application follows a ‘layered architecture’, and you are happy to keep it as a ‘monolithic’ architecture, then you can adopt a ‘layer by layer’ approach
  2. If your existing application is relatively unstructured (aka the ‘big ball of mud’), or you are hoping to decompose into autonomous services, then the Strangler Fig Pattern may be your best bet

Not Sure?

If you’re not sure which of the above approaches might work, the AWS Microservices Extractor for .NET could help.  Through a combination of source code analysis and runtime metrics, it can identify the natural boundaries in your application, which can help you determine how to proceed.

Layer-By-Layer

For medium-sized applications operated by a single team, it may be more appropriate to retain the monolithic architectural style for your system.  Decomposition into services can add extensive operational and maintenance overheads, which can be burdensome for small teams.

And if you’re fortunate enough to already have a fairly structured application that is made up of well separated layers, the layer-by-layer approach can be an effective and straightforward modernisation pathway.

To explain the layer-by-layer approach, imagine a typical assembly architecture from the .NET Framework era: an ASP.NET MVC assembly at the top of the dependency graph, with a shared “Common Infrastructure” assembly at the bottom.

The process to follow:

  1. Identify your assembly dependency graph and identify the assemblies that don’t depend on any other assemblies – these are the ‘bottom’ of your dependency graph.
    1. In the example above, this would be the “Common Infrastructure”
  2. Migrate the ‘bottom’ assembly to .NET Standard 2.0 – a build target that is compatible with both .NET Framework and .NET and is intended for class libraries (see the project file sketch after this list).
  3. Identify the next dependency in the chain – the assembly that now ONLY depends on .NET Standard 2.0 assemblies – and repeat from step 2.
  4. When you get to the top-most assembly – the MVC layer – migrate it to an executable target, such as .NET 6.0, and you’re ready to deploy a fully modernised application!
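
For step 2, migrating an assembly to .NET Standard 2.0 usually means converting it to an SDK-style project file.  A minimal sketch for the “Common Infrastructure” example:

    <!-- CommonInfrastructure.csproj – the 'bottom' of the dependency graph -->
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
      </PropertyGroup>
    </Project>

Because .NET Framework 4.7.2 and later can consume .NET Standard 2.0 libraries, the assemblies above this one can keep targeting .NET Framework until their turn comes.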

Following the continuous delivery model, you should test and deploy your application to production after each layer is migrated.

If your application is currently a single assembly, start by identifying dependencies between the namespaces that represent each layer.  Set up a .NET Standard 2.0 class library and, step by step, migrate namespaces into the class library following a similar pattern to the above.

Strangler Fig

The Strangler Fig pattern was coined by Martin Fowler following a trip to Australia, where he observed the way strangler figs gradually take over a host tree.  He decided it was a great metaphor for the process of rewriting a complex system – starting by taking over functionality at the edges until, gradually, all functionality is replaced by the new system.

Feature by Feature

Rather than ‘layer by layer’, the Strangler Fig approach modernises ‘feature by feature’.  As well as updating our code to .NET 6.0, this is an opportunity to remove technical debt and adopt a more modern architectural approach as each feature is reimplemented.

Asset Capture

One effective approach can be to utilise ‘asset capture’.  The terminology here considers each entity (e.g. each user, or each client organisation) as an ‘asset’.  When planning the modernisation of a feature, there may be a very large number of scenarios and edge cases that need handling.

The goal is to identify a subset of the feature scenarios which suit a particular subset of the assets, and to migrate just those assets (e.g. just those users) over to the modern implementation for that feature.

All other assets continue to use the old implementation until the modern implementation of the feature is fully capable of supporting all their needs.
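
A minimal sketch of how asset capture can look in code, assuming a hypothetical invoicing feature and a registry that records which customers have been captured by the modern implementation:

    public record CustomerId(string Value);
    public record Invoice(CustomerId Customer, decimal Total);

    public interface IInvoiceService
    {
        Invoice GenerateInvoice(CustomerId customer);
    }

    // Hypothetical registry of assets already captured by the modern system
    public interface ICapturedAssetRegistry
    {
        bool IsCaptured(CustomerId customer);
    }

    // Routes each asset (here, a customer) to the legacy or modern implementation
    public class InvoiceServiceRouter : IInvoiceService
    {
        private readonly IInvoiceService _legacy;
        private readonly IInvoiceService _modern;
        private readonly ICapturedAssetRegistry _captured;

        public InvoiceServiceRouter(
            IInvoiceService legacy, IInvoiceService modern, ICapturedAssetRegistry captured)
        {
            _legacy = legacy;
            _modern = modern;
            _captured = captured;
        }

        public Invoice GenerateInvoice(CustomerId customer)
        {
            // Captured assets use the modern code path; everyone else stays on legacy
            var target = _captured.IsCaptured(customer) ? _modern : _legacy;
            return target.GenerateInvoice(customer);
        }
    }

As the modern implementation matures, more assets are marked as captured, until the legacy code path is unused and can be deleted.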

Event Interception

One of the key elements of the Strangler Fig pattern is known as ‘event interception’ – the idea that a stream of inbound ‘events’ can be intercepted and certain events re-routed to the modern implementation.

This can be done in two ways:

  1. A request router, which routes inbound HTTP requests to the appropriate backend.  This could be on a basis as simple as the path of the request, or it could involve content inspection to identify the asset.  Services that can help with this include Application Load Balancers (with path-based routing rules) and Amazon API Gateway.
  2. An event stream – within the legacy application, identify specific points in the code from which to publish asynchronous events.  The modern implementation can subscribe to those events via an event bus and, when consuming them, start to take ownership of the asset that each event relates to.  Services that can help with this include Amazon EventBridge, Amazon SNS and Amazon SQS (a publishing sketch follows this list).
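
As a sketch of the event-stream option, here is what publishing from an interception point in the legacy code might look like if you chose Amazon EventBridge (via the AWSSDK.EventBridge NuGet package; the bus name and event shape are illustrative):

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Amazon.EventBridge;
    using Amazon.EventBridge.Model;

    public class LegacyOrderEvents
    {
        private readonly IAmazonEventBridge _eventBridge = new AmazonEventBridgeClient();

        // Called from an identified interception point in the legacy code
        public async Task PublishOrderPlacedAsync(string orderId, string customerId)
        {
            await _eventBridge.PutEventsAsync(new PutEventsRequest
            {
                Entries = new List<PutEventsRequestEntry>
                {
                    new PutEventsRequestEntry
                    {
                        EventBusName = "modernisation-bus",   // illustrative bus name
                        Source = "legacy.orders",
                        DetailType = "OrderPlaced",
                        // A real implementation would use a JSON serializer here
                        Detail = $"{{\"orderId\":\"{orderId}\",\"customerId\":\"{customerId}\"}}"
                    }
                }
            });
        }
    }

The modern implementation subscribes to these events with an EventBridge rule and begins taking ownership of the assets they relate to.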

Decomposing into Services

The nice thing about this approach is that by implementing the request router and the event bus, you have already started down the path of a distributed system.  This means that decomposing your application into autonomous microservices becomes much more straightforward – you can gradually extract functionality from the legacy application and allocate it to the appropriate microservice(s).

The key challenge with taking this step is no longer technical, but rather a matter of logical architecture – identifying which services should own which features (or which parts of features), sometimes known as identifying the ‘service boundaries’.  This video presents a very helpful way of thinking about service decomposition, and you may find Event Storming useful for understanding your system and finding the logical boundaries.

Multi-targeting Shared Libraries

In all the above approaches, if you identify ‘shared library’ code that could be useful in both the legacy and the modern service (perhaps common infrastructure code, DTO/contract classes for events, or a data model or persistence layer), it’s a good idea to leverage .NET’s multi-targeting build feature, so that you can target both .NET Framework and .NET and use the appropriate build output in each deployed service.
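
A minimal sketch of a multi-targeted project file (the specific framework monikers are illustrative):

    <!-- SharedContracts.csproj – builds once per target framework -->
    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <!-- Note the plural property name: TargetFrameworks -->
        <TargetFrameworks>net48;net6.0</TargetFrameworks>
      </PropertyGroup>

      <!-- Conditional references for APIs that only one target needs -->
      <ItemGroup Condition="'$(TargetFramework)' == 'net48'">
        <Reference Include="System.Web" />
      </ItemGroup>
    </Project>

The legacy service consumes the net48 build output while the modern service consumes the net6.0 output, both from the same source code.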

Note that if you are sharing a data model or persistence layer, this is only a good idea if you are retaining the monolith and utilising a single shared database. If you are decomposing into services, it’s strongly recommended to give each service its own data model/database to ensure that the services are truly autonomous and not coupled in hidden ways in the database.

Summary

Every application is different, and a modernisation effort requires careful thought and planning.  This post has explored a few architectural approaches that permit progressive modernisation, which is the best way to manage risk and start seeing value from the modernisation effort more quickly.

The key recommendations are:

  1. Establish an automated test “safety harness”, starting with snapshot tests if need be
  2. Practice Continuous Integration and Delivery
  3. Modernise progressively (step by step) and ensure the modernised components are used in production as soon as they are ready
  4. Have the discipline to complete the modernisation – don’t let it get stuck ‘halfway done’.

What’s Next?

The logical architectural patterns described above rely on rock-solid infrastructure capabilities.  A trusted and experienced partner like CMD Solutions can help with best-practice adoption of mission-critical services for hosting, event messaging, request routing, CI/CD and more, as well as provide advice on logical decomposition approaches and Strangler Fig planning.  Feel free to reach out if you’d like to explore this further.