

How containers, microservices help maintain core apps

David Linthicum Chief Cloud Strategy Officer, Deloitte Consulting
 

We’ve all heard the term "legacy." It strikes fear in the hearts of millennial developers who have no experience with systems over 15 years old. However, most enterprises can't eliminate the older systems entirely, at least not right away. And so, these systems that dwell in data centers are a part of our past, present, and future.   

At the same time, cloud is taking off. The relocation of modern applications and data to public and hybrid clouds is revolutionizing the way we think about platforms—and IT in general. Application deployment can happen in a matter of minutes. There are no more worries about outdated concepts such as capacity management, and scaling is automatic. 

Also, advanced cloud application development-enabling technologies—including containers, container orchestration, microservices, and serverless technology—provide opportunities to make those core systems look and act modern. While they may never be fully part of DevOps and cloud computing, older systems that leverage modern interfaces will hide any hint of "legacy" from developers, admins, and ops. Only the cloud computing architects will know for sure. 

So how do you make your workloads easy to deal with in the era of the cloud? It's not only possible, it's easy, if you're willing to adopt new technologies and spend time figuring out the best ways to manage your applications and data.

Here's how new tech such as containers and microservices makes older tech easier to deal with, even those systems for which you must purchase parts on eBay.

The ops dilemma

If you have older systems to operate, you need people with the skills to run them. That means dealing with best practices for business continuity (BC), disaster recovery (DR), and capacity planning. The worst of it is dealing with security, when modern requirements meet less-than-modern security systems.

The objective of ops modernization is to make older systems appear more like your current systems, some cloud-based and some not. This means using modern interfaces via APIs or services that speak the language of modern ops tools, with much of the complexity hidden from those who operate systems through services- and microservices-based interfaces.
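As a rough sketch of this wrapping idea, here is a minimal Python example that hides a hypothetical legacy, command-string-driven ops interface behind a modern, service-style API returning structured data. The commands, responses, and class names are illustrative assumptions, not any real system's interface.

```python
import json

# Hypothetical legacy interface: ops commands issued as terse command
# strings, returning raw text. This is an assumption for illustration;
# real legacy interfaces vary widely.
def legacy_cli(command: str) -> str:
    fake_responses = {
        "DSPJOB BATCH01": "JOB BATCH01 STATUS(ACTIVE) CPU(00:12:31)",
        "ENDJOB BATCH01": "JOB BATCH01 ENDED",
    }
    return fake_responses.get(command, "CPF0001 COMMAND NOT RECOGNIZED")

class LegacyJobService:
    """Modern, service-style wrapper that hides the legacy command syntax."""

    def get_job_status(self, job_name: str) -> dict:
        raw = legacy_cli(f"DSPJOB {job_name}")
        if "NOT RECOGNIZED" in raw:
            return {"job": job_name, "error": "unknown command"}
        # Pull the value out of the STATUS(...) field in the raw text.
        status = raw.split("STATUS(")[1].split(")")[0]
        return {"job": job_name, "status": status}

    def stop_job(self, job_name: str) -> dict:
        raw = legacy_cli(f"ENDJOB {job_name}")
        return {"job": job_name, "stopped": "ENDED" in raw}

svc = LegacyJobService()
print(json.dumps(svc.get_job_status("BATCH01")))
```

An ops tool calling `get_job_status()` sees clean, structured data and never touches the legacy command syntax, which is the point of the abstraction.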

Cloud ops versus older, core ops

Legacy ops is hard and complex: almost nothing is automated, and command-line interfaces are the common standard. Cloud ops, by contrast, is where the majority of ops technology providers' R&D investment has been focused for the past several years. It's where you'll find the most automated abstraction of complex interfaces, and resilience to the point where some platforms have gone years without downtime. Legacy systems are well behind that curve.

Core to this effort is to move away from siloed ops. If you believe that cloud ops is more aligned with best practices and more effective ops technology, then abstracting legacy systems under cloud ops, and avoiding siloed ops, is an emerging best practice. Most enterprises have yet to discover this, however.

Truth be told, legacy systems are a hodgepodge of any platform and system technology that is considered "legacy," which is just a matter of opinion. Thus, you may have one to ten types of legacy systems, ranging from traditional mainframes to minicomputers, to proprietary systems that are difficult to categorize. 

Indeed, you may have some odd things to operate as well, including early attempts at IoT, dedicated and proprietary servers that support a critical business application, or a client/server application running on PC platforms and a LAN. 

Trade in legacy systems for microservices

The ability to manage legacy systems as a single set of microservices or containers will remove the complexity of dealing with several types of legacy systems, and make them manageable using newer cloud ops tools. This means you can operate both older and newer cloud- and non-cloud-based systems using the same ops tool sets. 

Another key benefit is cost savings, which can justify the expense of "wrapping" older ops interfaces as modern interfaces. With microservices, or even with traditional APIs, in most instances the newly wrapped old ops systems will operate at 30% to 40% of the cost of traditional ops. The new interface will also save on the downtime business impact of legacy systems. 

Finally, if a cloud-based database is coupled to a legacy system, then the ability to manage those systems using the same microservices interfaces allows for automation of ops to occur between the systems, which significantly reduces operational costs. 

Get service-oriented

The best way to describe what we're attempting to do here is common points of management. This includes the ability to "fake out" the operational tools into thinking they are managing a homogeneous set of systems. The idea is to place complex systems that all use different interfaces for operations behind a single service-oriented layer of abstraction. The core idea is to enable all systems, inside and outside of the cloud, to communicate using microservices interfaces, whether the applications are containerized or not.
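One way to picture a common point of management: every system, cloud-native or legacy, presents the same small management contract, with adapters doing the translation for the old systems. A minimal Python sketch, with the system names and the legacy status code invented for illustration:

```python
from abc import ABC, abstractmethod

class ManagedSystem(ABC):
    """Common management contract every system presents, so ops
    tooling sees a homogeneous fleet."""

    @abstractmethod
    def health(self) -> dict: ...

class CloudService(ManagedSystem):
    """A modern service that speaks the contract natively."""
    def health(self) -> dict:
        return {"system": "orders-api", "healthy": True}

class MainframeAdapter(ManagedSystem):
    """Adapter translating a legacy status code into the common shape.
    The code and its meaning are hypothetical."""
    def health(self) -> dict:
        legacy_code = 0  # assume 0 means OK on the old system
        return {"system": "billing-mainframe", "healthy": legacy_code == 0}

fleet = [CloudService(), MainframeAdapter()]
report = [s.health() for s in fleet]
# One loop, one interface: the ops tooling cannot tell old from new.
```

The design choice here is the classic adapter pattern: the ops layer depends only on `ManagedSystem`, so new legacy systems are onboarded by writing one adapter, not by changing the tooling.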

Containerization and microservices for ops simplification are what really need to happen, and that's a big ask for most enterprises, because no best practices yet explain how applications, databases, platforms, devices, or other older technology should be microservice-enabled.

However, there are some guidance points that should be followed before you select your specific solution, including the following:

Applications that can be containerized should be containerized (i.e., modernized)  

When apps are placed into the container, you're forced to do a few helpful things. First, you must leverage microservices-based architecture to get the most from a single application, or you must reuse behavior and data from other applications that already leverage a microservices-based architecture. Remember, the use of services within any architecture is about the reuse of those services. 

The management interface to ops tools is a secondary benefit. That said, the use of containers is not indicated for all applications, and many applications will have to undergo a significant amount of refactoring. Thus arises the cost tradeoff that enterprises need to consider.

Legacy systems are typically problematic and expensive to interface with 

You'll need specialized tools, such as those that use the High-Level Language Application Programming Interface (HLLAPI) for 3270-to-microservices conversions, or that can microservice-enable older databases. For some larger core ERP systems, interfaces are set by the providers, and these vary from not helpful to complete microservices management solutions.

Provide a common set of interfaces

Create interfaces that behave and work together without any additional programming or augmentation to those interfaces. If you can’t leverage the interfaces in this way, the value of abstracting legacy systems goes down the drain.    

Toward the single management interface

Managing generations of systems using a single interface is the obvious objective. However, reality sets in when faced with the amount of work to be done to both modernize the applications and APIs, as well as bind them to the cloud ops tools in such a way that the tools remove most of the complexity. 

Enterprises tend to get in trouble when they don't account for specialized ops processes that must be dealt with for legacy systems, and then attempt to force-fit them with cloud ops processes. 

For example, killing background processes must be done manually on some legacy systems but is automatic on more modern ones. Other things to consider are BC/DR, security, and governance processing, all of which may or may not be automatable from cloud ops when dealing with legacy systems through a microservices layer of abstraction.
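One pragmatic way to handle this mix is for the abstraction layer to declare, per system, which ops processes are automated and which remain manual, so cloud ops tooling knows what it can drive directly and what needs a human runbook. A hypothetical sketch; the system names and process list are assumptions:

```python
# Each system behind the abstraction layer declares which ops
# processes it automates. Names and processes are illustrative.
CAPABILITIES = {
    "cloud-inventory-svc": {
        "kill_background_jobs": "automated",
        "dr_failover": "automated",
        "security_patching": "automated",
    },
    "legacy-billing-mainframe": {
        "kill_background_jobs": "manual",   # operator intervention needed
        "dr_failover": "manual",
        "security_patching": "automated",
    },
}

def manual_tasks(system: str) -> list:
    """Processes the ops tooling must hand off to a human."""
    caps = CAPABILITIES.get(system, {})
    return sorted(p for p, mode in caps.items() if mode == "manual")

print(manual_tasks("legacy-billing-mainframe"))
# -> ['dr_failover', 'kill_background_jobs']
```

Surfacing these flags through the common interface keeps the cloud ops tools honest: they automate what they can and route the rest to people, instead of force-fitting legacy processes into cloud processes.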

How to measure success

Measuring the success of legacy operations simplification and integration via more modern operational technology such as cloud ops is not yet an exact science. This is really a true application of the term "time will tell." Simply enabling the legacy systems for operations does not automatically equate to long-term success. 

If you see failures in the making, they're typically due to overlooked specialized ops tasks required for mainframe systems, as discussed above.

However, if done correctly, with operations centralized under a microservices and container blanket, the ROI can reach 1,000%. You can do more with fewer tools and people, and do so with more uptime and higher user satisfaction.

Indeed, all aspects of leveraging cloud computing, such as DevOps and the ability to integrate modern systems with legacy, make the use of both cloud computing and legacy that much easier and more cost effective. 

While this is still a learning process for most enterprises, best practices and enabling technologies are quickly evolving. Watch this space closely.
