
5 tips for tackling cloud migration

Derek Swanson, CTO, Silk

A pandemic-driven surge in cloud adoption is driving global business and technology. Cloud-native startups and legacy companies alike are investing and scaling to leverage the cloud’s agility, reach, and customer-focused capabilities.

Still, deploying the cloud while operational and governance models change takes practical understanding, and moving your business-critical data is especially tricky. Traditional implementation models aren't always viable, and it will take time to shift from legacy, capital expenditure–based, fixed on-premises IT models to more agile, operational expenditure–based, infrastructure-as-code (IaC) designs in the cloud.

Spending on public cloud services is predicted to exceed $480 billion in 2022, Gartner says, and will account for more than 45% of enterprise IT spending by 2026, up from less than 17% in 2021. Executive teams seem confident, too; McKinsey says that by 2030, the cloud will have an estimated $1 trillion in business value.

As the cloud has evolved, complexity has increased. While early migrations were relatively straightforward, moving mission-critical data presents new challenges. Some workloads demand more performance than standard cloud infrastructure delivers, and if performance is compromised, the results can prove ruinous. Projects face large cost overruns, spiraling timelines, and spotty service.

Mission-critical applications must be handled with multiple safeguards: thorough planning, rigorous testing, and airtight business continuity (BC) and disaster recovery (DR) plans.

Consider the following when moving mission-critical data to the cloud.

1. Not all data is created equal

Migration plans depend on an organization's needs, workloads, goals, infrastructure mapping, and budget. Consider the 80/20 rule: 80% of data is fairly easy to migrate, but the remaining 20% is often problematic.

Referred to as “anchor” workloads, that 20% frequently has high data gravity—it pulls in other applications and services. It can be migrated, but it creates friction. Identify it as early as possible in the planning phase to avoid errors that affect the applications relying on it. Anchor workloads are often the most crucial to operations and typically run on the most expensive and complex infrastructure.

There are four common paths for migrating mission-critical workloads and databases and for shifting from a CapEx to an OpEx model.

Refactoring to PaaS

Refactoring applications for the cloud entails rewriting them for PaaS (platform as a service) to improve compatibility with the cloud environment. Refactoring alleviates technical debt, provides a managed framework for expansion and innovation, and protects against go-live issues such as a decline in performance.

Companies benefit from cloud API features and extra flexibility, improving efficiency and functionality, but complex applications can take years to refactor, requiring major disruptive changes to the core code. While every organization wants to preserve the integrity of its mission-critical applications, refactoring requires a large development shop and a significant budget.

Refactoring is wise if the existing application is resource-intensive, lives on a legacy system, or involves intensive data processing.

Lift and shift

This process reinstalls an application (installer, file system, and data) in the cloud on an infrastructure-as-a-service (IaaS) platform (typically Windows or Linux). It's generally faster and simpler and presents less risk and cost than refactoring. However, it lacks the features and benefits of a full refactor, such as cloud-native APIs, managed frameworks, and scalability.

Luckily, not all workloads need robust features and scalability. Soon-to-be rationalized legacy applications with dwindling lifespans will function "as-is" in a new environment.

While lift and shift is easy to model, it's hard to test peak loads. Scale is crucial in the cloud, and this particular model may not meet performance needs.
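If you do lift and shift, defining the target IaaS footprint as code keeps the move repeatable and easy to retest. The snippet below is a minimal, hypothetical sketch assuming Pulumi's Python SDK with its AWS provider; the AMI ID, instance size, and tags are placeholders rather than recommendations.

```python
# Minimal IaC sketch (assumes Pulumi and the pulumi_aws provider are installed
# and cloud credentials are configured). All values are placeholders.
import pulumi
import pulumi_aws as aws

# A single IaaS VM that will receive the lifted application install.
app_server = aws.ec2.Instance(
    "lifted-app-server",
    ami="ami-0123456789abcdef0",   # placeholder image ID
    instance_type="m5.2xlarge",    # size to the measured on-prem footprint
    tags={"workload": "lift-and-shift", "tier": "app"},
)

# Export the address so migration tooling can target it.
pulumi.export("app_server_private_ip", app_server.private_ip)
```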

Containers

Containers combine refactoring with lifting and shifting, ideally allowing the gradual migration of an application to the cloud without a complete app refactor.

Containers are simpler and more cost-effective than refactoring, and more lightweight than a full lift and shift. They don't require a total rewrite but offer many cloud benefits. Note, though, that containers aren't the answer if you need more performance than a cloud-native app can deliver—for example, for the 20% of anchor workloads discussed earlier.
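As a rough illustration of that middle ground, the sketch below starts an existing application image as a container without touching its code. It assumes the Docker SDK for Python (the docker package) and a running Docker daemon; the image name, port, and environment values are hypothetical.

```python
# Minimal container sketch (assumes the docker Python package and a running
# Docker daemon). Image name and settings are illustrative only.
import docker

client = docker.from_env()

# Run the existing app image as-is; no refactor, just a container boundary.
container = client.containers.run(
    "registry.example.com/legacy-orders-app:1.0",   # hypothetical image
    detach=True,
    ports={"8080/tcp": 8080},
    environment={"DB_HOST": "orders-db.internal"},  # hypothetical config
)

print(container.short_id, container.status)
```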

Serverless microservices

Serverless microservices are a newer architecture that eliminates considerations such as server provisioning. They utilize only what's needed, and customers are billed for only what they use.

Small independent services run across as many servers as needed to provide the application and data services. Serverless architecture lowers the barriers to entry for app development and requires less ongoing maintenance and optimization.

Serverless architecture should be avoided if you're not already using the cloud. For high-performance computing, bulk provisioning the servers needed to handle the workloads proves less expensive. Long-running functions can also raise the cost of serverless computing, and latency can be an issue.
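To give a sense of the unit of work involved, here is a minimal sketch of a single serverless function, assuming an AWS Lambda-style Python handler; the function logic and event shape are illustrative only.

```python
# Minimal serverless sketch: one small, stateless function per task.
# Assumes an AWS Lambda-style Python runtime; the event shape is illustrative.
import json

def handler(event, context):
    # Each invocation handles exactly one request; the platform scales the
    # number of concurrent instances and bills per invocation.
    order_id = event.get("order_id", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "received"}),
    }

if __name__ == "__main__":
    # Local smoke test with a sample event (no cloud required).
    print(handler({"order_id": "12345"}, None))
```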

2. Reduce risk

Organizations expect cloud resources to deliver at least the same performance as on-premises resources. A critical app disruption from a shortage of cloud resources can halt operations, causing financial losses, decreased productivity, loss of brand authority, and reduced customer trust.

On premises, most applications run fine on "normal" hardware. But some critical workloads are run on specialty equipment to achieve sufficient performance, resilience, availability, and enterprise monitoring. 

You can achieve this level of service by investing in bare metal or dedicated hosts running as dedicated tenants in the cloud, but these options are expensive, with spotty availability. Many customers ultimately decide against single tenancy due to both cost and complexity; the option is not very cloudlike, being more of a colocated/hosted solution.

So, while resources that enable sufficient performance, resiliency, availability, and enterprise monitoring do exist in the cloud, they often aren't a viable strategy, and we won't include them here.

To reduce risk during migration, consider these factors.

Performance  

This will determine where and how a mission-critical application is situated. CPU and memory footprint, network speed, and storage together determine performance, and to significantly increase one, you typically must increase the others as well.

This makes efficient sizing a challenge, since you can end up over-provisioned or resource-starved. For workloads running on big virtual-machine shapes, guaranteeing low data latency (typically response times of less than 1 ms) combined with high data throughput (typically over 2 GB/s) is critical.

Insufficient throughput or IOPS (input/output operations per second), or high latency, can devastate customer-facing applications. As response times rise, customer experience degrades. Consider where your workloads live and determine whether you can increase performance before moving environments; you may need a third-party platform.
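One way to ground those numbers before a move is to measure what your current storage actually delivers. The sketch below is a crude, standard-library-only probe of write latency and throughput; the block size, sample count, and file path are arbitrary, and a dedicated benchmarking tool will give far more rigorous results.

```python
# Crude storage write-latency/throughput probe (standard library only).
# Block size, sample count, and target path are arbitrary choices.
import os
import statistics
import time

BLOCK = b"\0" * (1024 * 1024)    # 1 MiB per write
SAMPLES = 256                    # ~256 MiB total
TARGET = "latency_probe.bin"     # hypothetical scratch file

latencies = []
fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
try:
    start = time.perf_counter()
    for _ in range(SAMPLES):
        t0 = time.perf_counter()
        os.write(fd, BLOCK)
        os.fsync(fd)             # force the write to stable storage
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)
    os.remove(TARGET)

throughput_mb_s = (SAMPLES * len(BLOCK)) / (1024 * 1024) / elapsed
print(f"avg latency: {statistics.mean(latencies) * 1000:.2f} ms")
print(f"p95 latency: {statistics.quantiles(latencies, n=100)[94] * 1000:.2f} ms")
print(f"throughput:  {throughput_mb_s:.0f} MB/s")
```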

Data mobility

This is about getting workloads running effectively on the appropriate infrastructure. Providing simple data mobility requires decoupling the data from the underlying infrastructure. Lift and shift and containers work well for high data mobility if you have a platform or tool set to drive and manage the movement.

High availability

Architect a no-single-point-of-failure design so that no component failure can cause a full system outage. Rock-solid resiliency, 99.9999% availability, transparent data replication, and self-healing capabilities simplify DR and BC, covering zone-level and even regional outages. Cross-cloud availability offers the ultimate multi-cloud failsafe. These capabilities can be implemented at the platform or application level.
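At the application level, even a small piece of failover logic illustrates the no-single-point-of-failure idea. This minimal sketch uses only the Python standard library; the endpoint URLs are hypothetical, and a production design would lean on platform-level replication and load balancing rather than ad hoc client code.

```python
# Minimal application-level failover sketch (standard library only).
# Endpoint URLs are hypothetical.
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://primary.region-a.example.com/health",
    "https://replica.region-b.example.com/health",
    "https://replica.other-cloud.example.com/health",
]

def first_healthy(endpoints, timeout=2.0):
    """Return the first endpoint that answers its health check, else None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, OSError):
            continue  # try the next replica
    return None

if __name__ == "__main__":
    print("serving from:", first_healthy(ENDPOINTS))
```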

3. Eliminate data silos

Siloed data is a risk. Keeping multiple copies of production data in different environments mitigates this risk but introduces others. Separating data across multi-cloud or hybrid cloud systems can introduce performance differences and make it difficult to get a clear view of where data resides and what is live.

A single platform and management tool can provide a unified view of the global environment.

A common data platform across clouds is instrumental to eliminating silos and valuable for mission-critical data and its accompanying stacks. Data must be quickly and easily moved from one cloud to another, avoiding the need to refactor for separate vendors and supporting total production workloads at peak while maintaining user experience.

4. Test and test again

Establishing an average and peak performance baseline sets expectations for the required cloud architecture, avoiding post-migration slowdowns or outages as user load scales. Test to establish service-level objectives and monitor them automatically.
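As a simple example of turning test results into a service-level objective, the sketch below derives a p95 response-time baseline from measured samples and flags any later run that breaches it; the headroom factor and sample values are placeholders.

```python
# Sketch: derive a p95 response-time baseline from load-test samples and
# flag later runs that breach the objective. Numbers are placeholders.
import statistics

def p95_ms(samples_ms):
    """95th-percentile response time from a list of measurements (ms)."""
    return statistics.quantiles(samples_ms, n=100)[94]

def check_slo(baseline_samples_ms, new_samples_ms, headroom=1.2):
    """Objective: new p95 must stay within `headroom` of the baseline p95."""
    objective = p95_ms(baseline_samples_ms) * headroom
    observed = p95_ms(new_samples_ms)
    return observed <= objective, objective, observed

if __name__ == "__main__":
    baseline = [12.0, 14.5, 13.2, 15.8, 40.1, 12.9, 14.0, 13.7]   # peak-load test
    post_migration = [13.1, 15.0, 14.2, 16.4, 55.3, 13.5, 14.8, 14.1]
    ok, objective, observed = check_slo(baseline, post_migration)
    print(f"SLO p95 <= {objective:.1f} ms, observed {observed:.1f} ms, pass={ok}")
```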

Part of rigorous testing may involve pulling historical reports from production systems covering past peak periods that can't realistically be emulated. This often applies to systems that require an extra layer of security and privacy, which bars you from simulating a real-world load effectively.

Even so, testing remains key and should be applied as extensively as the environment allows.

Remember that the cloud is a shared environment, though it appears nonshared to the user. Performance will vary based on time, region, utilization, maintenance events, and even "noisy neighbors." Test results in one region or zone may not match those in another.

5. Post-migration monitoring

Use proactive monitoring to ensure workloads are performing well after migration. Business leaders will look for key performance indicators such as resource utilization and costs. It's impossible to achieve 100% efficiency, but managing resources without incurring excessive overhead is an ongoing indicator of success.
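A lightweight utilization check like the following can feed those indicators. It's a minimal sketch that assumes the psutil package is installed; the thresholds are illustrative, not recommended values.

```python
# Minimal post-migration utilization check (assumes the psutil package).
# Thresholds are illustrative, not recommendations.
import psutil

THRESHOLDS = {"cpu": 85.0, "memory": 90.0, "disk": 80.0}  # percent

def snapshot():
    return {
        "cpu": psutil.cpu_percent(interval=1),     # sampled over 1 s
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }

def breaches(sample, thresholds=THRESHOLDS):
    """Return the metrics that exceed their thresholds."""
    return {k: v for k, v in sample.items() if v > thresholds[k]}

if __name__ == "__main__":
    sample = snapshot()
    over = breaches(sample)
    print("utilization:", sample)
    print("breaches:", over or "none")
```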

Enterprise data services that provide "free" data copies—such as instant zero-footprint clones, thin provisioning, inline compression, deduplication, and replication—are essential for maximizing resource efficiency and controlling costs. Those are important considerations if your test/dev requires data copies.

Ready, set, don't go … yet

Before kick-off, ensure that the right people and processes are in place. This includes the usual suspects: CIOs and CTOs, cloud architects, and perhaps IT and database administrators. Today's cloud culture includes app developers, cloud IaC engineers, data scientists, R&D, DevOps, and security operations personnel. If there are gaps in your organizational skill sets, look to a solution integrator to provide the necessary expertise and capabilities to help you along the way.

Finally, caring for your mission-critical data is the key element. Of course, migration planning focuses on applications. But for a successful migration, the data performance of each application is critical. Where data resides can significantly impact latency and monthly cloud costs.

Migrations give enterprises the opportunity to shift their data's center of gravity away from legacy, CapEx-driven, on-premises data centers and toward the OpEx-based managed cloud.

Migration is the necessary first step to capturing cloud value. It provides scalability, agility, resilience, security, and lower total cost of ownership—unlocking new value for the people in the business and their customers.
