Taking DevOps to the next level: When bare metal hits parity with the cloud

Frances Guida, Manager, HPE OneView Ecosystem Program, Hewlett Packard Enterprise

DevOps has changed the way organizations develop, deploy, and manage software. DevOps and microservices accelerate development and enable companies to be more agile, but they also require a more dynamic network—a composable infrastructure. This is an exceptional challenge for organizations that already have an established data center and need to be able to leverage their existing investments in hardware in addition to cloud-based services.

The traditional server and network ecosystem suffers from several fatal flaws when it comes to DevOps. Physical resources are finite, which means they don't scale well. When demand spikes, you can't just magically make new servers or network bandwidth appear. Cloud services and virtualization effectively address the need for scalability, but they have inherent issues of their own around privacy and data security, and they can leave existing investments in physical servers and network resources sitting idle.

Organizations that embrace DevOps often end up running both. In many cases, companies maintain and manage two separate infrastructures using separate tools and processes. Developers and IT operations teams must learn different user interfaces, processes, and management requirements depending on whether they're dealing with physical hardware in a data center or virtual resources in a cloud platform. That duplication works directly against the DevOps goal of greater efficiency.

A new approach

What businesses need is a way to put the traditional physical infrastructure on a level playing field with virtualization and cloud technologies—a platform that addresses both the technical and organizational challenges they face in making a transition to DevOps.

In order to embrace DevOps effectively, organizations need to implement composable infrastructure. They need tools that are inherently automated, software-defined, and intuitive to use. The tools need to unify existing server and network resources with new virtual and cloud technologies in a way that can scale to meet demand without adding layers of management complexity. Finally, the solution should also be open, with a broad API ecosystem that enables it to integrate and work with other tools to fit the needs of the customer.

So how can organizations meet these needs? How can they effectively merge the traditional physical network infrastructure with a virtual cloud ecosystem in a way that simplifies and streamlines IT rather than adding complexity?

Game-changer

It's important for vendors to work together wherever possible, partnering to build interoperability that helps customers do things better rather than confusing the market with proprietary solutions that paint customers into a corner. We launched Project Synergy to work with other vendors and find the collaborations that make sense for composable infrastructure.

One of our biggest goals has been to give customers a "public cloud" experience that can also leverage the hardware in their own data centers. What does that mean? It means giving customers the tools to automate the provisioning of server and network resources and treat everything as "infrastructure-as-code"—including physical hardware. The Chef Provisioning Driver for HP OneView is a significant step in that direction because it lets customers configure and manage a composable infrastructure that spans both the local data center and cloud services.
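To make the infrastructure-as-code idea concrete, here is a minimal sketch in plain Python: a declarative spec for a physical server that lives in version control, and a stand-in `provision` function where a real driver (such as the Chef Provisioning Driver for OneView) would call a management API. All class and function names here are hypothetical illustrations, not any vendor's actual API.

```python
from dataclasses import dataclass

# A declarative spec for a physical server, kept in version control
# alongside application code -- the essence of infrastructure-as-code.
@dataclass(frozen=True)
class ServerSpec:
    name: str
    template: str   # e.g. the name of a server profile template
    os_image: str
    cpu_count: int
    ram_gb: int

def provision(spec: ServerSpec) -> dict:
    """Hypothetical stand-in for a driver call that applies a spec to
    bare metal. A real driver would talk to a management appliance;
    here we simply return the desired state it would enforce."""
    return {
        "name": spec.name,
        "profile": spec.template,
        "os": spec.os_image,
        "state": "provisioned",
    }

# The same code path provisions one server or fifty: scaling up is a
# matter of declaring more specs, not clicking through consoles.
fleet = [
    ServerSpec("web01", "Web Server Template", "CentOS-7", 8, 64),
    ServerSpec("web02", "Web Server Template", "CentOS-7", 8, 64),
]
results = [provision(s) for s in fleet]
print([r["name"] for r in results])  # ['web01', 'web02']
```

The point of the sketch is that the specs are data: they can be reviewed, diffed, and replayed just like application code, whether the target is a VM, a cloud instance, or a physical blade.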

The ability to treat bare-metal resources the same as virtual and cloud resources, and to manage it all as infrastructure-as-code, is game changing. Hardware can be grouped based on attributes such as the number of processors, RAM, or storage capacity so that it can be provisioned and used most effectively.
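Attribute-based grouping can be sketched in a few lines. The inventory and selection function below are purely illustrative (not any vendor's API): workloads state minimum requirements, and the selector returns the machines that fit, best fit first.

```python
# Illustrative hardware inventory: each entry describes one physical server.
inventory = [
    {"id": "blade-1", "cpus": 16, "ram_gb": 128, "storage_tb": 2},
    {"id": "blade-2", "cpus": 32, "ram_gb": 256, "storage_tb": 4},
    {"id": "blade-3", "cpus": 16, "ram_gb": 64,  "storage_tb": 1},
]

def select_hardware(pool, min_cpus=0, min_ram_gb=0, min_storage_tb=0):
    """Return servers meeting the minimum attribute requirements,
    ordered so workloads land on the smallest machine that fits."""
    matches = [s for s in pool
               if s["cpus"] >= min_cpus
               and s["ram_gb"] >= min_ram_gb
               and s["storage_tb"] >= min_storage_tb]
    # Prefer the least over-provisioned match (best fit first).
    return sorted(matches, key=lambda s: (s["cpus"], s["ram_gb"]))

candidates = select_hardware(inventory, min_cpus=16, min_ram_gb=100)
print([s["id"] for s in candidates])  # ['blade-1', 'blade-2']
```

A composable-infrastructure platform does the same matching at scale, against live hardware, through an API rather than an in-memory list.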

It would be too complex and cumbersome for infrastructure-as-code if the hard connections to the network had to be changed to meet the needs of a composable infrastructure. We've addressed that challenge with a solution that enables the network connectivity to be handled through virtual ports.
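One way to picture that virtual-port approach: network attachments become records that software can remap at any time, so recomposing infrastructure never means re-cabling. The toy model below is an illustration of the concept, not a real SDN or OneView API.

```python
# Toy model: physical uplinks stay fixed, while servers attach to
# networks through virtual ports that software can remap at will.
class VirtualSwitch:
    def __init__(self):
        self._ports = {}  # virtual port id -> (server, network)

    def attach(self, port_id, server, network):
        """Bind a server to a network through a virtual port."""
        self._ports[port_id] = (server, network)

    def remap(self, port_id, new_network):
        """Move an attachment to a different network -- a software
        operation, not a cable pull."""
        server, _ = self._ports[port_id]
        self._ports[port_id] = (server, new_network)

    def network_of(self, port_id):
        return self._ports[port_id][1]

vswitch = VirtualSwitch()
vswitch.attach("vport-7", "web01", "dev-vlan")
# Promoting web01 from dev to production changes only the mapping.
vswitch.remap("vport-7", "prod-vlan")
print(vswitch.network_of("vport-7"))  # prod-vlan
```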

The future of DevOps

A customer recently summed up just how we feel about what we've accomplished so far and the potential for where we're heading. We showed off our new technology to a major financial services customer, and the representative proclaimed, "Man. This is like showing fire to a caveman."

DevOps and microservices architecture have tremendous momentum, but that momentum alone isn't enough to break free from the inertia of existing investments in hardware and software. That's why I believe the next big thing in this space is enabling technology that can seamlessly mesh your existing environment with cloud services and provide a single management platform for a software-defined data center (SDDC). We want to establish an open ecosystem that works effectively and efficiently to manage and support all aspects of a composable infrastructure.

What do you think? How can organizations embrace a composable infrastructure and effectively straddle the line between physical servers in a local data center and the new virtual/cloud paradigm at the same time?
