Public sector organisation
Cut over cleanly and retire WebLogic with confidence
How this public-sector migration finished: a controlled Route53 cutover, temporary parallel running, JSP replaced with Thymeleaf, and a lower-cost, better-managed platform with stronger uptime.
The last phase of a migration often looks simple on architecture diagrams.
It rarely feels simple when you are doing it.
By the time Atlas Orders reached this point, the application was already running successfully in AWS, but that did not make the cutover risk disappear. It just changed the type of risk. Now the challenge was moving traffic safely, validating the new platform under real user conditions, and keeping the old estate available long enough to absorb surprises.
Cutting over with controlled DNS change
The cutover itself was primarily a routing decision.
We used Route53 to shift traffic away from the WebLogic-hosted estate and towards the ECS cluster. Around that, we kept communication simple and pragmatic: an information page to set expectations, a clear maintenance window, and a plan that assumed a degree of volatility rather than pretending DNS behaves with perfect punctuality everywhere.
That is worth saying plainly.
DNS cutover always contains some uncertainty because propagation is not fully under your control. What you can control is preparation, observation, fallback planning, and how quickly you can correct a mistake if one appears.
That is why we treated the switchover as an operational change problem, not merely a DNS update. The technical move was simple on paper. The real work was in reducing the blast radius if anything behaved unexpectedly.
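One common way to keep the blast radius small during a Route53 switchover is weighted routing with short TTLs, so traffic can be shifted in stages and reversed in minutes rather than hours. Whether or not that was the exact mechanism used here, the shape of it is worth sketching. A minimal Terraform sketch, with every name (zone, hostname, variables) invented for illustration:

```hcl
# Hypothetical records: the same hostname resolves to both estates,
# and the weights are adjusted during the maintenance window.

resource "aws_route53_record" "legacy" {
  zone_id        = var.zone_id
  name           = "orders.example.gov.uk"
  type           = "A"
  set_identifier = "weblogic-legacy"

  weighted_routing_policy {
    weight = 90 # shrink towards 0 as the cutover proceeds
  }

  ttl     = 60 # short TTL so a rollback takes effect quickly
  records = [var.legacy_ip]
}

resource "aws_route53_record" "ecs" {
  zone_id        = var.zone_id
  name           = "orders.example.gov.uk"
  type           = "A"
  set_identifier = "ecs-new"

  weighted_routing_policy {
    weight = 10 # grow towards 100 as confidence builds
  }

  alias {
    name                   = var.alb_dns_name # load balancer in front of ECS
    zone_id                = var.alb_zone_id
    evaluate_target_health = true
  }
}
```

The rollback path is the same mechanism in reverse: swap the weights back and wait out the TTL, which is exactly why keeping the legacy estate alive briefly matters.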
Keeping the old system alive briefly on purpose
We did not switch the traffic and immediately destroy the old environment.
WebLogic stayed alive for a short period after the AWS cutover so the team had a practical safety net while real usage settled onto the new platform. That made sense operationally. If there had been a serious issue immediately after go-live, the recovery options were much clearer than if the legacy estate had already been dismantled in the same maintenance window.
Shortly after confidence was established, we decommissioned the old servers.
That was the real end of the legacy runtime, not the first Terraform apply or the first successful container deployment.
Using the move to improve the interface layer
Once the platform was running in a faster, better-managed environment, we could finally turn more attention to the interface layer itself.
The first obvious target was JSP.
JSP and scriptlets had served their purpose historically, but they carried a lot of presentation-layer friction:
- UI logic and rendering concerns were mixed together
- page behaviour was harder to reason about
- styling and markup changes carried more delivery overhead than they should have
So we started moving from JSP towards Thymeleaf.
That change mattered for more than just code neatness. It improved the separation between template logic, CSS, and JavaScript, which made it easier for interaction designers and engineers to work from the same understanding of how the application should look and behave.
That reduced the maintenance burden of the interface layer while making it easier to improve the user experience over time.
It also created a much better working relationship between engineering and design. With less presentation logic buried in JSP scriptlets, changes to layout, styling, and page behaviour became easier to reason about and cheaper to deliver.
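The difference is easiest to see side by side. A hypothetical order listing, first as a scriptlet-heavy JSP fragment and then as its Thymeleaf equivalent (the `orders` model attribute and `Order` type are invented for illustration):

```html
<%-- JSP with scriptlets: Java logic interleaved with markup --%>
<% List<Order> orders = (List<Order>) request.getAttribute("orders");
   for (Order o : orders) { %>
  <tr><td><%= o.getReference() %></td></tr>
<% } %>

<!-- Thymeleaf equivalent: valid HTML a designer can open directly in a
     browser, with iteration and binding expressed as th:* attributes -->
<tr th:each="order : ${orders}">
  <td th:text="${order.reference}">ORD-0001</td>
</tr>
```

Because the Thymeleaf version is a natural template, the placeholder value (`ORD-0001`) renders when the file is opened statically, which is a large part of why designers and engineers could finally work from the same artefact.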
What the migration delivered in practical terms
By the end of the programme, the application was not just in a different hosting location. It had a meaningfully better operating model.
Compared with the original WebLogic estate, the new platform gave the organisation:
- lower operating cost because capacity could be scaled more appropriately and fewer bespoke legacy servers had to be maintained
- lower maintenance cost because build, deploy, and environment management were now far more standardised
- better uptime because the platform was no longer concentrated around a brittle legacy runtime footprint
- faster recovery because infrastructure and deployment steps were defined, repeatable, and easier to rehearse
- better security hygiene because dependency management had improved and tools such as Dependabot could flag new vulnerabilities continuously
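The Dependabot point can be made concrete. A minimal `.github/dependabot.yml` for a service like this, assuming a Maven build (a sketch, not the actual configuration):

```yaml
version: 2
updates:
  - package-ecosystem: "maven"   # scan pom.xml dependencies
    directory: "/"               # location of the manifest in the repo
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5  # cap the noise from update PRs
```

With this in place, vulnerable or stale dependencies arrive as ordinary pull requests that flow through the same build and deploy pipeline as any other change, which is what makes the hygiene continuous rather than a periodic audit.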
Those are exactly the kinds of outcomes organisations with older Java estates tend to need:
- fewer people tied up in keeping old infrastructure alive
- lower friction around release planning
- a stronger basis for uptime and recovery
- a platform new engineers can join without inheriting years of accidental complexity
Those gains are often more important than the slogan of “moving to cloud”.
The point of the migration was not to collect modern tooling. It was to create a platform that was cheaper to operate, easier to change, and less likely to punish the business every time delivery needed to move.
Why this kind of migration works best in phases
Looking back, the success of the programme came from sequencing.
We did not start by asking the team to swallow Docker, Spring Boot, Terraform, PostgreSQL, GitHub Actions, Redis, and UI modernisation all in one release train.
We:
- took control of the build and source control model
- upgraded the frameworks
- created a dual-runtime transition
- built the AWS landing zone
- validated internally
- cut over deliberately
- improved the interface layer once the platform risk was lower
That sequence let a difficult legacy application become a well-managed, containerised platform without pretending the original estate could be skipped over.
For customer and order systems in particular, that matters.
These platforms are usually too important to replace carelessly and too expensive to leave untouched forever. The best route is often a disciplined series of changes that steadily removes legacy drag while keeping the business moving.
That was the real lesson from this engagement. The success did not come from one technology choice. It came from sequencing architecture, delivery, and infrastructure work in a way that let the organisation keep operating while the platform itself was being transformed.