
Don't avoid cloud vendor lock-in. Here's why you should embrace it


Bernard Golden, CEO, Navica

You don’t have to look far into cloud computing before you find someone telling you to avoid lock-in. Typically, people offer this advice in the context of urging you to reduce your overall commitment to cloud services. As a general IT guideline, the advice makes sense. One has only to look at how proprietary software vendors leverage customers’ dependency to understand the vulnerability that dependency can cause.

But when it comes to the cloud, they're wrong.

Applying such general advice to cloud computing as it stands today is a big mistake. Far from avoiding cloud vendor lock-in, IT organizations should embrace it. How can a recommendation that makes sense in a proprietary software package environment be wrong when applied to a cloud computing environment? It all has to do with the changing nature of IT.


A break with the past

Until recently, IT focused primarily on internal process automation: running email, ERP, CRM, and so on. Essentially, IT kept the lights on. Standard software packages ruled, and companies used them to operate commodity processes. Company executives looked to IT and saw a cost center, one they could squeeze to reduce total spend.

Regarding this traditional approach to IT, Nicholas Carr asked his famous question: "Does IT Matter?" His answer was no. Cost center IT adds nothing to a company’s competitive stance. But that was then.

This is now.

Today, we live in a “software is eating the world” environment. Company executives look to IT for new, IT-saturated products and services to help compete in a rapidly changing business environment. Today, it’s not enough for IT to run standard packages at the lowest possible cost. As Carr astutely noted, that just makes you look like everyone else and removes any source of competitive advantage.

Succeeding in this world requires stepping away from standard IT offerings and crafting bespoke solutions. You must reimagine the way your company delivers products and services, how those products and services operate, and even how customers interact with your company’s offerings. In short, IT groups must move beyond standard software packages, create sophisticated aggregations of software components, and customize those to deliver exactly what the company needs to provide a differentiated product to its customers.


Cloud as competitive advantage

Here's an example of this approach: A products company delivers a hardware device that constantly streams operational data to the company. It performs real-time analytics on the data to determine how each customer is interacting with the product. The analytics may trigger new behavior by the device operated by that specific customer. It may trigger a preventive maintenance part replacement to avoid product downtime. Or the analytics may be used by the company’s product management group to determine what features the next generation of the product should have.

So what you have here is a set of software components that enable:

  • Streaming event capture and support for highly variable data rates

  • Real-time analysis of that data

  • On-the-fly notification to maintenance groups of new work orders

  • Later analysis of batch data to determine usage patterns and failure modes

What does it require to put together an app such as this?

  • An Internet of Things front end to capture events. That front end needs to have device security, message protocol support, API management, message throttling, and integration to the event management process. And, by the way, it needs to dynamically scale to manage erratic workloads. You have lots of choices about how to implement IoT functionality.

  • Event management software. This component captures the message, pulls out the data, and forwards it to real-time analytics and the data warehouse system. The event management software needs to dynamically scale as well. Apache Kafka is a good choice for this.

  • Real-time analytics. Apache Spark is a common choice for this. It’s powerful and delivers lots of analytical capability.

  • A data warehouse. You can use something such as Hadoop for offline analytics or one of the columnar databases for this purpose.

  • Visualization software to enable data slicing and dicing. Consider Tableau or an open-source alternative.

  • Queue or ESB software to send messages to a maintenance scheduling app. RabbitMQ is a common solution for this need.
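The flow among these components can be sketched in miniature. The following Python sketch uses in-process stand-ins (a `queue.Queue` for Kafka and RabbitMQ, a list for the warehouse) rather than the real systems, and the temperature-based alert rule is a hypothetical example of the kind of real-time analytic that would trigger a maintenance work order:

```python
import json
import queue

# In-process stand-ins for the real components (Kafka, Spark, the
# warehouse, RabbitMQ) -- illustrative only, not production systems.
event_bus = queue.Queue()          # stands in for Kafka
maintenance_queue = queue.Queue()  # stands in for RabbitMQ
warehouse = []                     # stands in for Hadoop / a columnar store

def capture_event(raw_message: str) -> None:
    """IoT front end: parse the device message and publish it."""
    event = json.loads(raw_message)
    event_bus.put(event)

def process_events() -> None:
    """Event management + real-time analytics: route each event."""
    while not event_bus.empty():
        event = event_bus.get()
        warehouse.append(event)          # retain for later batch analysis
        if event["temperature"] > 90:    # hypothetical alert rule
            maintenance_queue.put(
                {"device": event["device_id"], "action": "inspect"}
            )

capture_event('{"device_id": "d-17", "temperature": 95}')
capture_event('{"device_id": "d-42", "temperature": 70}')
process_events()
print(len(warehouse))                     # 2 events stored for batch analysis
print(maintenance_queue.get()["device"])  # d-17 triggered a work order
```

In the real architecture, each of those stand-ins is a separate distributed system with its own cluster, configuration, and failure modes, which is precisely the operational burden discussed below.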

That’s six different systems, each requiring installation, configuration, management, redundancy, and patching. There’s tremendous value in this application, but it comes at a cost of significant, ongoing investment.

To run these new “software is eating the world” applications, IT organizations have two choices:

  • Install, manage, operate, and upgrade all the components themselves, or

  • Look to a cloud provider that offers them in an “as-a-service” fashion and concentrate on running the application that sits on top of those services.

Focus on apps, not infrastructure

Most IT organizations struggle to reliably operate standardized packages. Trying to operate the kinds of packages used for next-generation applications requires far more IT capability. The ability to install complex components securely, to quickly apply patches and upgrades that ensure functionality, and to manage at scale, ensuring availability and performance, is well beyond the typical IT organization.

Many IT staff and executives convince themselves that they’re different. They put together a prototype of an application made up of many open-source components. Then they run a low level of events or transactions through it, and everything works just fine. So from that they conclude they can put the system into production and handle production loads.

But every day brings new issues. The application load was bigger than expected, so the system ran out of capacity. One of the components needed to be upgraded, and its new API version required a new data format. An old server failed, and because there wasn’t enough capacity to make the system fully redundant, the application went down for five hours until an old, disused server was pressed into service. But it ran slower, so the system couldn’t process the transactions fast enough. You get the idea.
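The upgrade problem in particular, where a component's new API version requires a new data format, is the kind of breakage a self-managed stack must absorb in application code. A minimal sketch of version-tolerant parsing, with hypothetical field names and schema versions, looks like this:

```python
def parse_event(event: dict) -> dict:
    """Normalize events across two hypothetical schema versions."""
    if event.get("schema") == 2:
        # v2 nests the measurement under a "payload" object
        return {"device": event["device_id"],
                "value": event["payload"]["reading"]}
    # v1 kept the measurement at the top level
    return {"device": event["device_id"], "value": event["reading"]}

print(parse_event({"schema": 1, "device_id": "d-1", "reading": 7}))
print(parse_event({"schema": 2, "device_id": "d-1",
                   "payload": {"reading": 9}}))
```

Every such shim is more code to write, test, and maintain; a managed service typically absorbs that migration for you.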

These organizations never draw the right conclusion: that running these kinds of services is hard. Really hard. And they don’t have enough skilled staff and can’t get enough funding to run them at the level needed to operate at “eating the world” levels. They will never succeed at running the infrastructure for the kinds of applications their companies need in order to compete in today’s IT-infused world.

Innovative cloud services drive next-gen apps

The right choice is to find a provider that can implement these products as a service. Providers can hire the talent, provide enough infrastructure, and ensure redundancy and scalability. Your job then is to focus on implementing the right functionality on top of the services the provider offers. Of course, that raises the specter of lock-in.

Some IT organizations attempt to avoid lock-in by insisting that they only use fungible cloud services. They convince themselves that if they just use a provider’s virtual machine or object storage service, they’ll protect themselves from the lock-in bogeyman. But that approach precludes the use of the more sophisticated cloud services, bringing them right back to the self-managed dilemma I talked about above. They still need to install, manage, and so on. The only difference is that they’re now using a provider’s virtual machines rather than their own servers. It’s a little bit better than a fully self-managed solution, but it falls far short of what’s needed to get the job done.

The only way to really deliver next-gen applications is to take advantage of a provider’s higher-level services—a.k.a., embrace lock-in.

The choice for IT organizations is stark: stick with the old model of application operation, which is clearly inadequate for the new generation of applications, or recognize that delivering what company executives demand requires committing to a particular cloud provider.

That’s why you’re seeing companies such as Capital One and GE make an all-in commitment to public cloud computing. They recognize that their core competence is not running next-generation IT infrastructures. Even if it were, focusing on that diffuses their efforts to deliver modern applications.

So the next time someone counsels you to avoid getting locked into one vendor's proprietary cloud computing offering, recognize it for what it is: advice that keeps you bound to the last generation of IT. Are you ready to lock in?
