Public sector organisation
Create a container path without breaking WebLogic
How the public-sector case study was prepared to run both in legacy WebLogic and in a newer Tomcat and Spring Boot shape, without forcing a risky cutover before the infrastructure was ready.
Once the base upgrades were in place, the work stopped being just about framework versions and started becoming about runtime strategy.
The team needed to hold two truths at once:
- the live system still had to work in WebLogic
- the destination was a containerised Spring Boot application running entirely outside WebLogic
That is where many migrations become awkward. Teams often feel pressure to pick the final runtime too early, then discover half the application still assumes the old one.
We chose a more deliberate transition.
Designing for two destinations
Rather than tying the application permanently to one runtime while we were in the middle of moving it, we separated the application from its packaging destination as much as we could.
That led to an important pattern in the build:
- one Docker path based on a WebLogic image
- another based on a Tomcat image
The point was not that both would be long-term production targets. The point was that both gave us a way to keep delivery moving while we reduced our dependency on the old model.
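The two paths can be pictured as a pair of minimal Dockerfiles. The base image tags, registry paths, and artefact names here are illustrative, not the project's actual ones:

```dockerfile
# Tomcat path: official Tomcat base image (tag is illustrative)
FROM tomcat:9.0-jdk11
# Deploy the WAR produced by the Gradle build as the root application
COPY build/libs/orders.war /usr/local/tomcat/webapps/ROOT.war
```

```dockerfile
# WebLogic path: Oracle's WebLogic image (requires Oracle registry access;
# tag and staging directory are illustrative)
FROM container-registry.oracle.com/middleware/weblogic:12.2.1.4
# Stage the EAR where the domain configuration can pick it up
COPY build/libs/orders.ear /u01/deployments/orders.ear
```

The useful property is that both images consume artefacts from the same Gradle build, so neither path forks the codebase.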
Gradle helped again here.
Because Gradle was already the build control point, we could use it to translate or template legacy deployment artefacts where necessary. That included handling configuration and mapping differences that were still rooted in web.xml, while letting newer packaging move closer to a runtime that did not want those legacy assumptions.
Architecturally, this was the moment where the application stopped being identical to its container. We were isolating business code from deployment destination, which is one of the most useful shifts you can make in a legacy Java estate.
Allowing the live application to keep evolving
This mattered operationally.
The customer and order platform was not frozen for the sake of the migration. The business still needed fixes and occasional features. By creating a controlled dual-runtime approach, we avoided a common trap where a migration branch becomes a detached science project while the live branch continues to accumulate more drift.
Instead, the application could keep moving.
Developers could continue supporting the WebLogic-hosted estate while also validating the newer container path behind the scenes. That significantly reduced the political as well as technical risk of the migration.
For organisations stuck on older platforms, this is often the hardest thing to picture: you do not necessarily have to choose between “freeze the estate” and “rewrite everything”. A controlled dual-runtime strategy can buy the room needed to modernise safely.
Replacing XML configuration where it actually helped
This phase also included a more selective move away from XML-heavy configuration.
We did not replace XML with annotations just because newer code looks tidier in presentations. We replaced it where it removed friction and made the new runtime shape more natural.
That meant:
- switching from XML-based wiring to annotation configuration where it reduced duplication
- preparing the Tomcat-oriented build to run with embedded container support
- removing the need for web.xml in the Tomcat build path
- keeping web.xml in place for the WebLogic build path, because the live estate still depended on it
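The XML-to-annotation move can be sketched as follows: a bean previously wired in an XML context file becomes a @Configuration class. Class, package, and bean names here are illustrative, not the project's own:

```java
// Uses org.springframework.context.annotation.{Configuration, Bean, ComponentScan}
//
// Before (XML): <bean id="orderService" class="com.example.orders.OrderService">
//                 <constructor-arg ref="orderRepository"/>
//               </bean>
// After: the same wiring expressed as annotation-based configuration
@Configuration
@ComponentScan("com.example.orders")   // package name is illustrative
public class OrderConfig {

    @Bean
    public OrderService orderService(OrderRepository orderRepository) {
        // Constructor injection replaces the XML constructor-arg element
        return new OrderService(orderRepository);
    }
}
```

The duplication is removed rather than moved: the relationship between the two beans now lives once, in code, instead of being restated in XML.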
This is another important migration pattern: temporary duplication is sometimes the price of safer change.
In a clean-sheet build you would not preserve WebLogic-era deployment descriptors. In a real transition, you preserve exactly what you must for as long as you must, and no longer.
Moving into Spring Boot without breaking WebLogic too soon
The switch from standard Spring to Spring Boot starters was the point where the application began to look obviously more modern to engineers.
But even here, the team had to resist over-simplifying the transition.
We still did not have the AWS runtime ready. That meant the platform needed to become Spring Boot-capable while still remaining deployable inside WebLogic. So we packaged the application as a WAR and made sure that WAR could still participate in the EAR packaging expected by the old environment.
That required discipline in the build and in the application bootstrap. Spring Boot was introduced as an acceleration layer and configuration model, not as permission to pretend the old runtime had already gone away.
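That bootstrap discipline typically takes the shape of the standard Spring Boot WAR pattern: the application class extends SpringBootServletInitializer so the same code can be started by an external container (WebLogic via the EAR, or Tomcat) and, later, run standalone. The class name here is illustrative:

```java
// Bootstraps the application both as a WAR inside a servlet container
// and, eventually, as a standalone Spring Boot application.
@SpringBootApplication
public class OrdersApplication extends SpringBootServletInitializer {

    // Used by the servlet container path (WAR/EAR deployment)
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        return builder.sources(OrdersApplication.class);
    }

    // Used when running as an executable jar on the new platform
    public static void main(String[] args) {
        SpringApplication.run(OrdersApplication.class, args);
    }
}
```

Both entry points converge on the same application context, which is what keeps the two runtimes honest with each other.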
By this point we had also introduced automated build pipelines, which mattered more than it might seem on paper. Developers no longer had to remember which build to run where, or which machine had the right combination of steps and dependencies. The build became repeatable, visible, and easier to trust.
Just as importantly, that did not force the client to change the operational process too early. The old estate still did not have automated deployment, so releases into WebLogic remained manual. But the release team still received the same kind of artefact they were used to: an EAR file that could be taken and deployed through the existing controlled process. From their perspective, the deployment shape stayed familiar even while the engineering underneath it was improving.
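Producing that familiar EAR from the modern build can be sketched with Gradle's built-in ear plugin. Project, module, and artefact names here are illustrative:

```groovy
// build.gradle of an illustrative packaging module
plugins {
    id 'ear'
}

dependencies {
    // The Spring Boot-capable WAR module goes into the EAR as a web module
    deploy project(path: ':orders-web', configuration: 'archives')
}

ear {
    // Keep the artefact name the release team already expects
    archiveBaseName = 'orders'
    deploymentDescriptor {
        applicationName = 'orders'
        webModule 'orders-web.war', '/orders'
    }
}
```

From the release team's side nothing changes: the pipeline hands over an EAR, and the existing manual WebLogic process takes it from there.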
That kept the old runtime viable for long enough to finish the infrastructure side properly.
At the end of this phase we had:
- automated pipelines producing consistent build outputs instead of relying on local build knowledge
- a release process that still handed operations an EAR file for manual WebLogic deployment
- a codebase that could support both old and new runtime assumptions during transition
- Docker images representing both the WebLogic and Tomcat paths
- less dependency on XML-centric configuration in the newer path
- Spring Boot idioms entering the application without forcing an immediate final cutover
The commercial value of this phase was straightforward: we had reduced migration risk while keeping delivery moving. Engineers gained a more reliable build process, operations did not have to absorb a brand new deployment model overnight, and the client was funding controlled progress rather than platform theatre.
What we did not yet have was the new production platform.
That came next.