
Public sector organisation

Journey from WebLogic to AWS Fargate without losing delivery control

A public-sector case study on taking a customer and order platform from Ant, SVN, JSP scriptlets, and manual releases to a cleaner engineering baseline that could support real modernisation.

March 2026 · Legacy modernisation · Gradle · GitHub · WebLogic

This series is based on a real migration delivered for a large public sector organisation. For security reasons, the client, domain language, and some delivery details are anonymised here. We describe the platform as Atlas Orders: a customer and order management application that captures the technical shape of the work without exposing the original service.

The starting point was a classic enterprise Java estate that had become expensive to change and expensive to keep standing still.

It was built as an EAR and deployed to WebLogic. The web tier used JSP and scriptlets. The integration layer relied on EJB, JAXB, and JAX-WS. The application was wired through web.xml and weblogic.xml. The build used Ant. Dependencies were poorly controlled, with generated classes and third-party libraries effectively living inside the source tree. Source control sat in SVN. Deployments happened from a locked-down virtual machine into an on-premise data centre using a release process that had become more ritual than engineering.

Nothing about that application was unusual for its era. The problem was that every improvement had become expensive.

Why this mattered commercially: the organisation was carrying the cost of legacy release friction, specialist runtime knowledge, and a platform where even low-risk changes pulled in infrastructure and operational overhead. The real problem was not age alone. It was delivery drag.
[Figure] Transformation map showing the major migration shifts: SVN to Git, Ant to Gradle, WebLogic to Spring Boot, Oracle to Postgres, and on-premise to AWS.
The programme was not one change. It was five linked shifts across source control, build, runtime, data, and hosting, each needed to move the platform away from legacy operating costs.

The first objective was not cloud. It was control.

Before we could talk about containers, AWS, or Spring Boot, we needed to put the platform on a better engineering footing without breaking a live customer and order flow.

So Phase 1 was deliberately conservative.

We focused on three things:

  • replacing the build spine
  • moving the codebase from SVN to GitHub
  • making the source tree easier to reason about

What the client needed from this phase

This was not a “developer happiness” exercise.

The client needed a path where modernisation could start without forcing a destabilising cutover or a rewrite commitment the programme had not yet earned. That meant Phase 1 had to create immediate engineering order while preserving business continuity.

The commercial test was simple:

  • can we reduce the cost of change without interrupting live service?
  • can we make future phases cheaper and safer to deliver?
  • can we do it without asking the organisation to bet everything on one release?

Why we introduced Gradle first

We chose Gradle because it gave us a safer bridge between the old world and the target state.

That mattered.

If we had picked a tool that only worked well once the application was already modern, we would have created a migration dependency before we had earned it. Gradle let us improve the build incrementally while still accommodating the awkward parts of the estate.

It was the right choice for a few reasons:

  • it was flexible enough to wrap or reproduce Ant-style tasks where legacy packaging still depended on them
  • it gave us proper dependency management instead of a growing pile of checked-in binaries
  • it supported multi-step build orchestration cleanly, which was useful for EAR packaging, code generation, and environment-specific outputs
  • it gave developers a more standard local workflow and easier IDE integration
  • it created a better route into linting, test automation, and later CI/CD
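The first of those points is worth making concrete. A minimal sketch of how Gradle can wrap a legacy Ant build is shown below; the Ant file location (`legacy/build.xml`) and target name (`package-ear`) are hypothetical stand-ins, not the real estate.

```groovy
// build.gradle -- illustrative sketch only; the Ant file path and
// target names ('legacy/build.xml', 'package-ear') are hypothetical.

plugins {
    id 'java'
}

// Import the legacy Ant build so its targets become Gradle tasks,
// prefixed to avoid clashes with Gradle's own task names.
ant.importBuild('legacy/build.xml') { antTargetName ->
    'ant-' + antTargetName
}

// Legacy packaging still runs through the imported Ant target,
// but now as an ordinary node in the Gradle task graph, so later
// phases can hang validation and CI steps off it.
tasks.register('legacyEar') {
    dependsOn 'ant-package-ear'
    description = 'Builds the EAR via the wrapped Ant target.'
}
```

The point of the wrapper is that nothing about the packaging has to change on day one; the Ant logic keeps working while Gradle becomes the single entry point developers and, later, CI actually invoke.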

In other words, Gradle was not just a build replacement. It was the first piece of engineering discipline we could introduce without demanding a full rewrite.

Technically, it also gave us a better place to:

  • model third-party libraries as actual dependencies rather than ad hoc checked-in assets
  • prepare for generated-source handling in later SOAP work
  • define repeatable packaging paths for both legacy and modern runtime targets
  • start introducing build validation that would eventually support CI
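Modelling third-party libraries as declared dependencies looked roughly like the sketch below. The coordinates shown are examples chosen for illustration, not the application's real dependency set.

```groovy
// build.gradle -- sketch of replacing checked-in jars with declared
// dependencies; the coordinates are examples, not the real estate.

repositories {
    mavenCentral()
    // In a locked-down public-sector estate an internal mirror or
    // proxy repository would normally sit here instead.
}

dependencies {
    // Previously a jar committed under lib/ in the SVN tree:
    implementation 'commons-lang:commons-lang:2.6'

    // Container-provided APIs are compile-only so they never leak
    // into the packaged artefact that WebLogic already supplies:
    compileOnly 'javax:javaee-api:7.0'
}
```

Once every binary has a name and a version in the build file rather than a file in the tree, upgrades, vulnerability audits, and the later runtime migration all become tractable questions instead of archaeology.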

Moving from SVN to GitHub without losing the story of the codebase

The source control migration had to be more careful than a simple export and import.

We wanted the benefits of GitHub, but we also wanted to preserve as much useful history as possible so the team could still trace why changes had happened, when risky code had entered the system, and which branches or release tags were worth keeping.

The migration path was well understood:

  1. Audit the SVN repository structure, especially trunk, branches, and tags.
  2. Freeze changes for a window long enough to avoid history drifting during the migration.
  3. Map SVN authors to Git identities so commit history remained readable.
  4. Use a structured migration approach such as git svn or svn2git to import history rather than copying a snapshot.
  5. Review the imported branches and tags, removing noise that had no value in the GitHub model.
  6. Validate the result with the delivery team before declaring GitHub as the new source of truth.

That gave us a repository with meaningful history instead of a fresh repo that pretended the past did not exist.

At this stage we still had no new deployment pipeline.

Builds and releases were still performed in the traditional way. A controlled VM still pushed artefacts to WebLogic in the on-premise estate. That was intentional. We were modernising the delivery foundations without entangling that work with runtime change too early.

Cleaning the code before the bigger migration steps

Once the code was in GitHub, we started standardising the codebase.

This was not cosmetic work.

Legacy repositories usually contain a lot of drag:

  • dead classes that nobody trusts enough to remove
  • inconsistent formatting that obscures the real change in a review
  • unused imports and commented-out code that distract from actual risk
  • multiple styles of doing the same thing depending on when a file was last touched

So we introduced linting and formatting aligned to a simple Google-style Java baseline that developers could also enforce in their IDEs.
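One common way to enforce that baseline from the Gradle build is the Spotless plugin; the sketch below shows the idea, though the exact tooling and version here are illustrative rather than a record of the client configuration.

```groovy
// build.gradle -- sketch of a formatting baseline using the Spotless
// plugin; plugin version and exact rules are illustrative.

plugins {
    id 'java'
    id 'com.diffplug.spotless' version '6.25.0'
}

spotless {
    java {
        googleJavaFormat()        // Google Java Style baseline
        removeUnusedImports()     // strips the dead imports noted above
        trimTrailingWhitespace()
    }
}

// Developers run './gradlew spotlessApply' to fix formatting locally
// and './gradlew spotlessCheck' to verify; wiring the check into the
// build means a badly formatted change fails before it reaches review.
```

Because the same rules run in the build and in the IDE, formatting stops being a matter of personal habit and reviews can focus on the change itself.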

That gave us a few immediate benefits:

  • reviews became easier because formatting noise disappeared
  • refactoring diffs became smaller and more legible
  • developers stopped colliding over whitespace and brace style
  • we could identify genuinely unused or suspicious code more quickly

We also used that pass to remove low-value code, clean up obviously stale fragments, and make the build outputs more predictable.

That sounds small until you are migrating a live enterprise application. Cleaner formatting and easier reviewability reduce migration risk because teams can tell the difference between structural modernisation and accidental noise. That saves time in code review, in defect investigation, and in governance conversations with stakeholders who want evidence that change is controlled.

What changed for users at the end of Phase 1

From a user perspective, almost nothing.

Customers could still browse their accounts, view orders, and work through the same WebLogic-hosted application they had before. The operational model was still old-style. The deployment target was still the same.

But the internal position had changed materially.

By the end of Phase 1 we had:

  • replaced the brittle Ant-only build with Gradle as the new control point
  • moved the code from SVN into GitHub with usable history preserved
  • established a common formatting and linting baseline
  • reduced some of the day-to-day friction that had made every change harder than it needed to be

That was enough to begin the next stage properly.

The platform still looked like a WebLogic application. That was fine. The point was that it was now a WebLogic application we could move with more confidence.