An Example of Workload Migration from On-Premises to the Cloud Using AWS Application Migration Service

Andre Marzulo
5 min read · Mar 1, 2022

The SoftwareOne approach for a cost-effective solution using AWS Application Migration Service


When migrating a workload from on-premises to the cloud, your application should at least be re-platformed, and ideally re-factored, in order to start benefiting from the new cloud environment. Even though a rehost can bring advantages in terms of cost, a refactor usually brings ROI much faster, not to mention the benefit of paying down technical debt, a topic I can share my thoughts on in another post.

Even so, there are other points to take into consideration when deciding on the right migration approach. This evaluation for each application, together with the business case, will lead us to choose one of the paths listed below (the 7 Rs):

  1. Rehost: simply replicate the workload as-is (e.g. Oracle DB to EC2)
  2. Replatform: make small changes to migrate (e.g. update the OS version or move the database to RDS)
  3. Repurchase: the typical "drop and shop" scenario, such as selecting a SaaS or off-the-shelf product to replace the workload
  4. Refactor: rewrite and redesign the application; here we start to pay down the technical debt! (e.g. migrate an Oracle DB to AWS RDS for PostgreSQL)
  5. Retire: decommission the software because it is not needed anymore
  6. Retain: keep it in the source location to revisit in the future
  7. Relocate: hypervisor-level lift and shift

A customer asked us to advise them on migrating their internal insurance system from their existing hosting provider to AWS.

After an inventory of all servers, applications and satellite infrastructure, we discovered that a considerable amount of re-architecting and re-platforming was required.

Some servers ran outdated Oracle Linux and Windows versions, plus legacy applications written in unfamiliar frameworks, so we could see there was a significant amount of technical debt to solve, but no time to consider the Refactor approach.

The project itself was not that difficult and the requirements were clear: the final infrastructure would essentially be EC2 instances and two AWS RDS for Oracle databases. The client decided not to port any application to containers due to their team's lack of container experience.
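
Just as a rough illustration of the target side (not the customer's actual configuration: the identifier, engine edition, instance class and storage values below are hypothetical), one of the RDS for Oracle instances could be provisioned with boto3 along these lines:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

# Provision a target RDS for Oracle instance (hypothetical values throughout).
response = rds.create_db_instance(
    DBInstanceIdentifier="insurance-core-db",   # hypothetical name
    Engine="oracle-se2",                        # or oracle-ee, depending on licensing
    LicenseModel="license-included",
    DBInstanceClass="db.m5.large",              # hypothetical sizing
    AllocatedStorage=200,                       # GiB, hypothetical
    MasterUsername="admin",
    MasterUserPassword="change-me-please",      # use Secrets Manager in practice
    MultiAZ=True,
    StorageEncrypted=True,
)
print(response["DBInstance"]["DBInstanceStatus"])
```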

The main focus was on the cutover window, so the migration strategy was designed to guarantee minimal outage during the switchover. We had 30 calendar days available to prepare, sync, test the cutover time, and be ready for the big day.

Murphy’s law entered the chat:

During the migration phase we discovered some external dependencies on the old provider that had not been mapped during the assessment phase, such as:

  • A central FTP server at the old provider served some files to the BI system;
  • Specific configurations on the load balancer that were not mapped because no one was aware of them;
  • A shared NFS server used as an integration point between the old provider's services and the client's applications.

Fortunately, we could identify all these points during the test phase with AWS Application Migration Service (formerly CloudEndure, now being absorbed as a native AWS service). We quickly extended our Landing Zone with:

  • AWS Transfer Family (FTP): we decided to keep FTP instead of moving to SFTP because the time needed to change the application would not fit our timeline;
  • ALB: we preferred an Application Load Balancer because some rules use sticky sessions, other rules balance per URL (path-based routing), and we wanted the option to use Lambda as a target in case we found something new during the tests (see the sketch after this list);
  • EFS: we chose EFS for its resilience, but we are considering changing the application to remove this dependency.
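
As a rough sketch of the ALB part (the ARNs, path and cookie duration below are hypothetical, not the customer's real values), path-based routing and sticky sessions can be configured with boto3 along these lines:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # region is an assumption

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/example/..."   # hypothetical
BI_TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/bi/..."  # hypothetical

# Path-based rule: send /bi/* traffic to the BI target group.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/bi/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": BI_TARGET_GROUP_ARN}],
)

# Sticky sessions (ALB-generated cookie) on the target group that needs them.
elbv2.modify_target_group_attributes(
    TargetGroupArn=BI_TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```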

AWS Application Migration Service is awesome because we could sync the data between the provider's premises and AWS and run as many cutover rehearsals as we wanted until we hit the downtime target given by the client. Not only that: in projects where refactoring would take longer than the time available to migrate, it provides a solid, reliable solution that gives control over the project and leaves the client confident to take the first step into the cloud.
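
As a minimal sketch of that rehearsal loop (the region is an assumption and the real flow also involves the replication agent installed on each source machine), the test and cutover launches can be driven through the service's API with boto3:

```python
import boto3

mgn = boto3.client("mgn", region_name="us-east-1")  # region is an assumption

# List the source servers currently replicating into the staging area.
servers = mgn.describe_source_servers(filters={"isArchived": False})["items"]
server_ids = [s["sourceServerID"] for s in servers]

# Launch test instances for a dry-run cutover rehearsal.
mgn.start_test(sourceServerIDs=server_ids)

# ...validate the test instances, then on the big day launch the cutover instances.
mgn.start_cutover(sourceServerIDs=server_ids)

# Once a cutover instance is validated, finalize it to stop replication.
mgn.finalize_cutover(sourceServerID=server_ids[0])
```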

Below is the (high-level) AWS workflow for this service:

Extracted from AWS service reference page

And below is our version of the workflow for this service:

The process we used in this migration; it can also be applied in general to Rehost and Replatform scenarios

Caveats:

  1. A few servers were running Oracle Linux 4, an operating system not supported by this service, so in those cases we migrated using rsync to a server with a newer OS version (see the sketch after this list). Fortunately the applications were Apache Tomcat based and it was easy to adjust the libraries.
  2. Most of the servers were migrated to EC2 instances in the c5 family, except for a BI system running on Windows that required the m4 family.
  3. The strategy adopted for Oracle 19c was a direct Data Pump export/import between source and destination, since the database was not big (around 100 GB uncompressed, 20 GB compressed); we migrated it in a couple of hours and landed the data on AWS RDS for Oracle.
  4. Our cutover plan allowed for a 6-hour outage, and almost everything went fine, except that while finishing the last sync the source started its daily backup, which slowed us down a little; even so, we could start validating the destination after 4 hours of outage.
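
For the unsupported Oracle Linux 4 servers, a minimal sketch of the rsync-based copy (host name, paths and key file below are hypothetical) could look like this, run from the old server toward the new EC2 instance:

```python
import subprocess

# Hypothetical values: the target EC2 host, SSH key, and Tomcat application path.
SOURCE_DIR = "/opt/tomcat/webapps/"
TARGET = "ec2-user@new-app-server.example.com:/opt/tomcat/webapps/"
SSH_KEY = "/home/migration/keys/migration.pem"

# Archive mode, compression, and delete-on-target keep both sides in sync
# between rehearsals; re-run right before cutover for the final delta copy.
subprocess.run(
    [
        "rsync", "-az", "--delete",
        "-e", f"ssh -i {SSH_KEY} -o StrictHostKeyChecking=no",
        SOURCE_DIR, TARGET,
    ],
    check=True,
)
```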

Final thoughts:

This type of project frequently faces unexpected situations, and that is normal. But with the right tooling and approach, "fixing the plane in the air" is easy. Changing requirements and requests to add features are very common as well. Having a good Landing Zone with a good strategy for connecting services in the cloud will always help you better architect changes on the fly.

This migration was only the first step, since we have already started some refactoring of specific applications using ECS, but I will share the details of this AppModernization delivery using The Twelve-Factor App in the next post… :-)


Andre Marzulo

Cloud enthusiast, AppModernization and DevOps consultant, AWS SA Professional Certified