

How dev and ops can prepare for computing's next big leap

David Linthicum, Chief Cloud Strategy Officer, Deloitte Consulting

Computing is changing. Need proof? Consider HPE's Memory-Driven Computing architecture, which combines photonic data transmission with non-volatile memory (NVM) that retains information even when it isn't drawing power. Its system-on-a-chip design packages processors and memory together to greatly speed data processing. HPE calls this "The Machine," which it claims is "the world's largest single-memory computer," a system with 160TB of memory.

The questions that come up next are: What development approach will support these emerging systems? What skills will be required? And how much change should you expect? 

In the past, the answers were relatively easy. Development and platforms were largely decoupled; developers treated the target platform as just another fixed input to the "waterfall." Moreover, platform evolution mostly meant more of the same: faster processors, bigger disks, the same basic architecture.

Here's a deeper look at the next generation of computing systems and challenges—and three steps to prepare your dev and ops teams.

Serverless, machine learning, and NoOps

These days, that's no longer the case. The industry has responded to the flattening evolution of processors, memory, and storage with new approaches to computing. Serverless computing dominates application development in public clouds. And DevOps has led to "NoOps," the ability to automate platforms so they largely take care of themselves.

Of course, there are direct changes in the platform as well. In addition to The Machine described above, quantum computing is now cheap enough to be within reach of the Global 2000. The same is true for a whole new mix of Internet of Things devices, from sensors with advanced processing and storage capabilities to edge computing, which pushes processing out of centralized public clouds and down to the devices themselves.

So, what changes in the world of DevOps as we move to these more advanced platforms? The answers range from "Not much" to "A great deal," depending on which advanced platforms you're talking about.

In the world of serverless computing, the idea is to free the developer from dealing with infrastructure. Whatever resources the code needs are provisioned on demand and released when the work is done, and the developer pays only for the resources actually used.

When you consider how serverless computing integrates with DevOps, continuous delivery, continuous improvement, and the links to operations, serverless could be the Holy Grail for DevOps and the business agility it aims for. The reason is that both sides, Dev and Ops, are greatly simplified.
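To make that concrete, here is a minimal sketch of a serverless function, written as an AWS Lambda-style Python handler. The order payload and pricing logic are hypothetical; the point is what the code does not contain: no server types, no OS, no instance counts.

```python
import json

# A minimal serverless function sketch, written as an AWS Lambda-style
# Python handler. The "order" payload and pricing logic are hypothetical.
# Note what is absent: no servers, OS, or instance counts anywhere.
def handler(event, context):
    order = json.loads(event.get("body", "{}"))
    total = sum(i["price"] * i["qty"] for i in order.get("items", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"orderTotal": round(total, 2)}),
    }

# Local smoke test; in production the platform supplies event and context.
if __name__ == "__main__":
    event = {"body": json.dumps({"items": [{"price": 9.99, "qty": 3}]})}
    print(handler(event, None))
```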

Eliminate infrastructure concerns

If DevOps is really about an automated process that moves applications through testing, integration, staging, and then deployment with the coolest DevOps tools, serverless technology can actually get you there, for both the humans and the automation, because you no longer deal with infrastructure at all. Eliminated tasks include choosing the types of servers you need, the OS, and other platform configurations, as well as provisioning the correct number of servers.

Indeed, the most common mistake in DevOps is provisioning machine instances in the wrong number or with the wrong configuration. Serverless technology removes that issue from the process entirely.
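Deployment shows the same effect. Here is a hedged sketch using the AWS SDK for Python (boto3); the function name, role ARN, and zip package are placeholders. Notice that the only capacity-like setting is memory size; there is no server count or instance type to get wrong.

```python
import boto3

# Hedged deployment sketch using the AWS SDK (boto3). The function name,
# role ARN, and packaged zip file are placeholders for illustration.
client = boto3.client("lambda")

with open("order_function.zip", "rb") as package:
    client.create_function(
        FunctionName="order-total",  # hypothetical name
        Runtime="python3.12",
        Role="arn:aws:iam::123456789012:role/lambda-exec",  # placeholder ARN
        Handler="app.handler",  # module.function from the sketch above
        Code={"ZipFile": package.read()},
        MemorySize=256,  # the only capacity-like knob: memory, not servers
        Timeout=15,      # seconds per invocation
    )
```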

While serverless makes the Dev side less complex, the Ops side benefits even more. Serverless-based applications automatically manage their own infrastructure, which means the Ops or CloudOps team no longer deals with infrastructure monitoring, budgeting, and planning. Those tasks do not go away simply by moving to a public cloud, but they do go away with serverless-based applications.

Machine learning, while bringing learning capabilities to application development, does add complexity to DevOps processes. The key question is the centralization of the learning models: will each model be coupled to a single application, or will it serve many? The latter provides the most benefit, since knowledge gathered from many different types of workloads and data sets can be combined into a single knowledge engine.

Moreover, machine-learning-based applications need to be tested and deployed differently, and the operations side changes a bit as well. For instance, the knowledge models require a different type of regression testing, where, in essence, you run an automated lie detector on the knowledge engine. And once in operations, you must take special care to ensure that the machine-learning knowledge engine runs decoupled from the applications that leverage it, for both performance and disaster-recovery reasons.
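One hedged way to picture that "automated lie detector" is as a regression gate in the pipeline: score the candidate model against a fixed golden data set and fail the build if accuracy drifts below the stored baseline. The data, file name, and tolerance below are hypothetical stand-ins, not a prescribed tool.

```python
import json

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

TOLERANCE = 0.02  # hypothetical: max accuracy drop allowed vs. the baseline

# Stand-in data; in practice the golden set would be a fixed, curated
# sample of real inputs with known-correct answers.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_golden, y_train, y_golden = train_test_split(X, y, random_state=0)

candidate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
score = accuracy_score(y_golden, candidate.predict(X_golden))

# The "lie detector": fail the pipeline if the model's answers regress.
try:
    with open("baseline.json") as f:
        baseline = json.load(f)["accuracy"]
except FileNotFoundError:
    baseline = 0.0  # first run: accept the candidate and record it

assert score >= baseline - TOLERANCE, (
    f"knowledge model regressed: {score:.3f} vs. baseline {baseline:.3f}"
)
with open("baseline.json", "w") as f:
    json.dump({"accuracy": score}, f)
```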

3 steps for success

So, lots of changes coming to technology mean lots of changes coming to DevOps. How do organizations prepare? Most, even those all-in on DevOps, are only partway down the path right now, still missing automated capabilities in areas such as testing, deployment, and integration.

The good news is that integrating new capabilities to support new technology is much easier at that "partway" stage than retrofitting a process that has already stabilized.

Let's take a look at the specific steps to prepare for this change.

Step 1: Understand short- and long-term DevOps requirements  

Since DevOps adoption is typically a work in progress that is itself in the midst of change, requirements are key. Indeed, "continuous requirements" seems to be the new buzzword: the ability to look one to five years down the line, anticipate where technology is heading, and help DevOps acquire the capabilities to support it.

Step 2: Define a DevOps process that's changeable 

DevOps is about agility. That means having not only the agility to quickly crank out or change applications, but also the ability to change the DevOps process itself as requirements change.

For example, when new database capabilities come to the enterprise, you need to accommodate them with new testing and integration components that become part of the DevOps process. The same can occur with serverless application development, or even with future use of The Machine.
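As a sketch of what a changeable process might look like in practice, consider a pipeline modeled as an ordered registry of stage functions: accommodating a new capability, such as a serverless smoke test, means registering one more stage rather than rebuilding the pipeline. The stage names and checks below are hypothetical.

```python
from typing import Callable, Dict, List

# A changeable DevOps pipeline modeled as an ordered registry of stages.
# Each stage takes and returns a build-context dict; names are hypothetical.
Stage = Callable[[Dict], Dict]
PIPELINE: List[Stage] = []

def stage(fn: Stage) -> Stage:
    """Register a stage; adding a capability means adding one function."""
    PIPELINE.append(fn)
    return fn

@stage
def unit_tests(ctx: Dict) -> Dict:
    ctx["unit_tests"] = "passed"
    return ctx

@stage
def integration_tests(ctx: Dict) -> Dict:
    ctx["integration_tests"] = "passed"
    return ctx

# A new requirement arrives (say, serverless): register one more stage,
# leaving the rest of the process untouched.
@stage
def serverless_smoke_test(ctx: Dict) -> Dict:
    ctx["serverless_smoke_test"] = "passed"
    return ctx

def run(ctx: Dict) -> Dict:
    for s in PIPELINE:
        ctx = s(ctx)
    return ctx

print(run({"commit": "abc123"}))
```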

Step 3: Implement continuous training  

Skills must arrive along with the technology. Machine learning, for example, takes months of training and hands-on experience before teams can leverage these cloud- and non-cloud-based systems. The same can be said of serverless systems on public clouds, and of each new wave of technology that comes down the pike.

The best approach to continuous training is to keep a skills inventory going within the DevOps organization and to make sure that you send people to training in time to use the new technology. Even today, DevOps organizations can be found in need of skills as they look to leverage technology such as containers. Most DevOps organizations learn on the run, which proves less than optimal. 

The best advice is to be proactive with training. A clear best practice is to maintain a skills inventory that links to a training plan. An issue can arise, however, with the number of skills you have to keep current over time. There is a risk of over-adopting technology, what we used to call "managing by magazine." If you find yourself there, you need to understand the difference between adopting every trend and picking and choosing the new technologies that should become part of enterprise IT and DevOps.
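Here is a hedged sketch of how a skills inventory might drive that training plan: record the skills on hand and the skills each roadmap technology requires, then compute the gap in time to schedule training. All names, skills, and dates are invented for illustration.

```python
# Hedged sketch: a skills inventory that drives a training plan.
# Team members, skills, and target dates are hypothetical examples.
team_skills = {
    "ana": {"containers", "ci-cd"},
    "raj": {"ci-cd", "python"},
}

# Skills each planned technology will require, with its adoption date.
roadmap = {
    "serverless": ({"functions-as-a-service", "ci-cd"}, "2025-Q3"),
    "machine-learning": ({"python", "model-testing"}, "2026-Q1"),
}

have = set().union(*team_skills.values())
for tech, (needed, when) in roadmap.items():
    gap = needed - have
    if gap:
        print(f"Train for {tech} before {when}: missing {sorted(gap)}")
```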

Get proactive

Technology never stops evolving. While there are opportunities to leverage new technology to increase the value of the enterprise, you need to understand the impact on DevOps, including the automation tools and the people involved.

Dealing with requirements needs to become second nature. You need to see beyond the hype around new technology, understand what is needed and how DevOps should change, and then plan for what is most likely to happen at least five years down the road.

Zero mistakes and clearly defined best practices are never guaranteed with new technology and the changes it brings. However, you can be proactive about absorbing change into your processes when needed and take a big step toward ensuring future success. 
