
The hybrid-cloud mainframe: What it means for enterprise apps

David Linthicum Chief Cloud Strategy Officer, Deloitte Consulting

Migrating to the cloud? What will you do with your mainframe-based applications? Traditional options include refactor/rewrite, rip-and-replace, or rehosting. Luckily, a new approach is coming into vogue, one where you can partition, refactor, and then move mainframe apps. Parts of the application run on the mainframe, and parts run as cloud-native applications.

Using this technique, you're running a distributed application that is essentially a hybrid-cloud mainframe. On the cloud side you can leverage modern development platforms such as containers/Kubernetes, serverless, and CI/CD tools, or even use cloud-native databases as augmented storage or as a complete replacement.

This is becoming a popular technique. Mainframe applications, or any "traditional platform" for that matter, are broken apart, or partitioned. Then the pieces are moved to the cloud using one of the approaches listed below, while the leftover pieces remain on the mainframe. Typically, the pieces that stay are those that won't or can't move easily, such as applications written in older mainframe assembler, or ones built with languages and tools for which the skills are no longer available or no longer understood.

Here's what you need to know to take advantage of this new hybrid-cloud mainframe method.

A quick rundown of mainframe-app options

Core to cloud migration is how you deal with different types of applications and data. While there are many options, the basics for mainframe applications are:


Lift and shift

Find a platform analog on the public cloud and port the code and the data with few or no modifications. Considering that the mainframe has few or no native platform analogs in the cloud, this is typically not a practical option.


Refactor and containerize

This means taking the code from a mainframe application and refactoring it so it can run within a container, perhaps running that container within an orchestration engine such as Kubernetes. This works for mainframe applications, but it also means heavy lifting, since the application is essentially being rewritten to work within containers.

Move and wrap

You can move mainframe applications, and then recompile the code to run in a mainframe emulator (a.k.a. "wrapping") hosted in a cloud machine instance. This approach works and takes a minimum amount of effort. 

This process typically starts with moving applications to a cloud-based mainframe platform emulator, and then migrating the database to a cloud-based database. 

While this approach requires the least amount of work other than doing nothing, this also means that your application runs within two layers: the platform emulator, and the native platform on the cloud. Most cloud architects argue that this is an inefficient approach, considering the resulting app's potential performance issues and its lack of direct access to cloud-native features. 

Stay put   

You might determine that moving mainframe apps is too complex, too costly, and/or too risky—for now at least—so you keep the applications and data on the on-premises mainframe. Thus, doing nothing can become a more attractive alternative when compared to moving other workloads that have direct, native paths to the cloud, and are much easier to modernize. 

Partition, migrate, and modernize well 

The objective here is to understand how to leverage modern cloud development approaches and platforms—such as containers, serverless, and other cloud-native development services—to modernize mainframe applications. 

Moreover, you want to do this by using a mix of refactored and migrated applications, also considering whether to keep on-premises mainframe applications where they currently exist for cost efficiency reasons and to reduce risk. 

Core to this effort is focusing on how to partition applications, so they can be refactored for the cloud with minimum impact to the core value of the system. In short, you are selecting the parts or components of mainframe-based applications that make sense to move, and the aspects that do not.

This results in a hybrid IT approach to mainframe modernization, allowing the parts that remain on the mainframe to work and play well with the migrated and modernized components in the cloud.

Core to this objective is defining a repeatable process that allows you to evaluate each mainframe-based application for possible relocation to the cloud. Or, more likely, you'll be looking to relocate and modernize application components, as well as any data that may be linked to those applications.

How to define your process

While you may have some special requirements, typically the methodology includes these phases.

Application assessment  

This is usually the most overlooked step, but it's the most critical. It's where you determine what the application is, what it does, how it works, and what resources it depends on.

While there are code- and data-scanning tools that you can use, for mainframe-based applications this is typically best accomplished by doing two things:  

  • Talk to those maintaining these applications. They typically have the best understanding of how the above questions are answered, and often, that information will be different from anything the code- or data-scanning tools will reveal.  
  • Review the code and data design yourself and make your own independent assessment. Anyone reviewing these applications should have a hybrid understanding of both the mainframe and the cloud development environments and should be able to assess the costs and risks of making the move.

Logical decomposition 

Simply put, this means breaking the application down logically into parts and pieces. For example, an inventory application may be broken down into the following:

  • User interface processing
  • Data access
  • Data validation
  • Data security
  • Depletion analysis
  • Reorder processing
  • Audit processing

In many instances, these pieces were all written in the same language on the same mainframe-based development platform. More often than not, though, they leverage different types of development and enabling technologies, which may make a given logical part of the application a candidate for moving to the cloud.

For example, in many instances data analytics services for access, validation, and security are "bolt-on" technology for mainframe applications; they may have analogs on the public cloud, or are just easier to move as a partition of the application.  

What's core here is considering how the application could be partitioned by looking at how it's designed, including the enabling technology and which development technology to leverage. The output is a logical decomposition of the application and the data, which then informs the physical partitioning decisions and the potential benefits of creating a hybrid-cloud mainframe.

Physical partitioning

While this sounds difficult, if you do the previous step correctly, the application is going to be physically partitioned or broken apart with parts residing on the public cloud and on the on-premises mainframe.  

Keep in mind that work needs to be done on both sides. If the mainframe application is broken apart into components that are partitioned between the mainframe and the cloud, then both sides will need to be refactored, tested, and redeployed before the two halves can work together.

Also keep in mind that you’ll need a layer of middleware between the mainframe and the public cloud to allow the parts of the application to work and play well together. These may be asynchronous, synchronous, or both depending on the needs of the application.
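The asynchronous-versus-synchronous distinction can be made concrete with a small sketch. This is a hypothetical integration layer, with an in-memory queue standing in for the durable message broker or API gateway a real deployment would use; the function names and the stubbed mainframe reply are assumptions for illustration.

```python
import json
import queue

# Hypothetical middleware layer between the cloud and mainframe partitions.
# In practice this role is played by a message broker, MQ bridge, or API
# gateway; queue.Queue here is just a stand-in for the async channel.

reorder_queue = queue.Queue()

def submit_reorder_async(item_id, quantity):
    """Asynchronous path: enqueue a reorder for the mainframe batch
    component. The cloud-side caller does not wait for processing."""
    reorder_queue.put(json.dumps({"item": item_id, "qty": quantity}))

def check_stock_sync(item_id):
    """Synchronous path: a blocking request the mainframe partition answers
    immediately. Stubbed here; really an HTTP or request/reply MQ call."""
    return {"item": item_id, "on_hand": 0}  # stubbed mainframe response

submit_reorder_async("A-100", 50)
print("queued messages:", reorder_queue.qsize())
print("stock check:", check_stock_sync("A-100"))
```

Which channel each call uses follows from the application's needs: reorders can tolerate eventual processing, so they go async; a stock check that a user is waiting on has to be synchronous.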

Finally, you’ll need to pick where the data needs to reside and on what database technology. While a best practice is to migrate all data to the public cloud and leverage a cloud-native database, this may not be possible given the latency requirements of the application parts that you’re leaving on the mainframe that need access to the data.

Enabling technology

This is the part of the process where you select the best approach for allowing the mainframe application parts to run on the cloud partition. Your choices here are wrapping, containerization, serverless, and other development tools that will allow the mainframe application parts to run in the cloud. 

You're not limited to one type of technology. Indeed, shops often mix and match technologies based on the requirements of the application parts and the partition as a whole. Containerization and serverless are often used together, for example. The idea is to migrate and modernize parts of mainframe applications in the cloud that are purpose-built to work with any remaining on-premises mainframe components.
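As an example of mixing technologies, consider the depletion-analysis component from the earlier inventory decomposition: rewritten as a pure, stateless function, it fits a serverless runtime, while the stateful data-access component it depends on stays in a container. The function below is a hypothetical sketch of that idea, not actual inventory logic from any real system.

```python
# Hypothetical: the depletion-analysis part of the inventory example,
# rewritten as a pure, stateless function suitable for a serverless
# runtime. Inputs would come from the containerized data-access service.

def depletion_analysis(on_hand, daily_usage, lead_time_days, safety_stock=0):
    """Return True if stock is projected to fall below the safety-stock
    level before a reorder placed today could arrive."""
    if daily_usage <= 0:
        return False  # no consumption, no depletion risk
    projected = on_hand - daily_usage * lead_time_days
    return projected < safety_stock

# 120 units on hand, burning 10/day, 14-day lead time: depletion risk.
print(depletion_analysis(on_hand=120, daily_usage=10, lead_time_days=14))
```

Because the function holds no state, it can be deployed, scaled, and versioned independently of the containerized services and the mainframe components it ultimately serves, which is exactly the property that makes a part "purpose-built" for the hybrid partition.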

Also keep in mind that for most of this, you're recreating or rewriting the cloud-based application components from scratch. It's just going to be too difficult to work with COBOL on the cloud longer term, so you might as well modernize the applications that are going to reside on the cloud partition. 

More work? Yes. Less trouble and cost over time? Also yes.

Should you attempt it?

The core question is not whether you can do this; surely you can with enough time and money. The real issue is whether you should.  

The reality is that we're more than 10 years into cloud computing, and the relative costs of running applications and data in the cloud versus a traditional mainframe are not even close. Given the specialized skills required for mainframes, the fact that you'll need a data center with special power, and the way that mainframe software is licensed, cloud is typically going to be a lower-risk, lower-cost alternative. 

Of course, there are other options, such as managed service providers that offer mainframe services, or you could even use a co-location service provider if you're willing to do-it-yourself within somebody else's data center. But even those strategies may just be delaying the inevitable.  

So this becomes more about when than why. The hybrid approach should allow you to move to cloud sooner and do so with less cost and less risk.  While you can't shut down all of your mainframes just yet, you can reduce the number of workloads on them, and thus reduce cost at the same time to pay for the migration. You need to figure out a path from the mainframe to the cloud, even if the first step is only a half-step.
