
Three edge computing challenges for developers

Walt Noffsinger, Vice President of Product

Many organizations want to migrate more application logic to the edge, with an eye toward improving performance, security, and cost-efficiency. A typical cloud deployment involves a simple calculation: determine which single cloud location will deliver the best performance to the most users, then connect the code repository and automate build and deployment through CI/CD.

But what happens when you add hundreds of edge endpoints to the mix, with different microservices served from different edge locations at different times? How do you decide which edge endpoints your code should run on at any given time? More importantly, how do you manage the constant orchestration across these nodes when the underlying infrastructure is a heterogeneous mix from many different providers?

It is worth diving deeper into this topic from a developer perspective. The question is relatively straightforward: Is it possible to enjoy all the benefits of the edge without significantly changing how you develop and deploy applications?

The development challenge can be distilled down to three areas: code portability, application lifecycle management, and familiar tooling as it relates to the edge.

Code portability

In an ideal state, code portability means development and deployment work the same way across different ecosystems. But workloads at the edge vary across organizations and applications. Examples include:

  • Micro APIs—Hosting small, targeted APIs at the edge, such as search or full-featured data query, to allow faster query response while lowering costs
  • Headless applications—Decoupling the presentation layer from the back end to create custom experiences, increase performance, or improve operational efficiency
  • Full application hosting—Hosting databases alongside applications at the edge and then syncing across distributed endpoints

This can almost be viewed as a hierarchy, or progression, in edge computing: the clear trend is to move more of the actual computing as close to the user as possible. But as developers adopt edge computing for modern applications, edge platforms and infrastructure will need to support and facilitate the portability of different runtime environments.

It’s important to recognize that, while choosing between private cloud, public cloud, and the edge may seem like architectural decisions, these are not mutually exclusive. Centralized computing could be reserved for storage or compute-intensive workloads, for example, while edge is used to exploit data or promote performance at the source. Seen through a developer lens, this means application runtimes must be portable across the edge-cloud continuum.

Achieve portability with containers

How do we get there? Containerization is the key to portability, but it still requires careful planning and decision making. After all, portability and compatibility are not the same thing: portability is a business problem, while compatibility is a technical problem. Consider widely used runtime environments such as:

  • Node.js, used by many businesses, large and small, to create applications that execute JavaScript code outside a web browser
  • Java Runtime Environment, a prerequisite for running Java programs
  • .NET Framework, required for Windows .NET applications
  • Cygwin, a POSIX-compatible environment that allows many applications written for Linux and other Unix-like systems to be built and run on Windows

Developers need to be able to run applications in dedicated runtime environments with their programming language of choice; they can't be expected to refactor the codebase to fit a rigid, pre-defined framework dictated by an infrastructure provider.
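To make this concrete, here is a minimal, hypothetical container image definition for a Node.js service. The same image can then run unchanged on a developer laptop, in a public cloud, or on an edge node; the file names and port are illustrative, not from the article:

```dockerfile
# Hedged sketch: package a Node.js runtime environment once, then run it
# anywhere a container runtime exists. File names and port are illustrative.
FROM node:20-alpine
WORKDIR /app
# Install only production dependencies for a smaller, faster-to-ship image.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
COPY server.js ./
EXPOSE 8080
CMD ["node", "server.js"]
```

Because the runtime is baked into the image rather than assumed to exist on the host, the application is no longer coupled to any one provider's pre-defined framework.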

Moreover, the issues of portability, compatibility, and interoperability don't apply just to the question of private cloud versus public cloud versus edge. They are also important considerations across the edge continuum as developers adopt global, federated networks featuring multiple vendors to improve application availability and operational efficiency while avoiding vendor lock-in.

Simply stated, multi-cloud and edge platforms must support containerized code portability while offering the flexibility required to adapt to different architectures, frameworks, and programming languages.

Application lifecycle management

In addition to code portability, another edge challenge for developers is how to easily manage their application lifecycle systems and processes. In the DevOps lifecycle, developers are typically focused on the plan/code/build/test portion of the process.

With a single developer or small team overseeing a small, centrally managed codebase, this is fairly straightforward. However, when an application is broken up into hundreds of microservices managed across teams, the complexity grows. Add a diverse mix of deployment models within the application architecture, and lifecycle management becomes exponentially more complex, slowing development cycles.

In fact, the additional complexity of pushing code to a distributed edge—and maintaining code cohesion across that distributed application delivery plane at all times—is often the primary factor holding teams back from accelerated edge adoption.

Many teams and organizations have turned to management solutions such as GitOps and CI/CD workflows for their containerized environments. When it comes to distributed edge deployments, these approaches are usually required to avoid increased team overhead.
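A GitOps workflow of this kind is typically driven by a declarative manifest that tells an agent to keep a cluster in sync with a Git repository. The sketch below uses Argo CD's Application resource as one popular example (the article does not name a specific tool); the repository URL, paths, and namespaces are placeholders:

```yaml
# Hedged sketch: an Argo CD Application that keeps an edge cluster in sync
# with a Git repository. Repo URL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-search-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/team/edge-services.git
    targetRevision: main
    path: deploy/search-api
  destination:
    server: https://kubernetes.default.svc
    namespace: search
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift on the cluster
```

Because the desired state lives in Git, rolling the same service out to hundreds of edge clusters becomes a matter of replicating this manifest rather than hand-managing each deployment.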

Familiar tooling

The third challenge for edge deployment is tooling. If developers plan for code portability and application lifecycle management but are forced to adopt entirely different tools and processes for edge deployment, it creates a significant barrier. Efficient edge deployment requires that the overall process—including tooling—is the same as, or similar to, cloud or centralized on-premises deployments.

GitOps, a way of implementing continuous delivery for cloud-native applications, helps. It focuses on a developer-centric experience for operating infrastructure, using tools developers already know, including Git and CD tooling. These GitOps and CI/CD tool sets offer critical support as developers move more services to the edge, improving application management, integration, and deployment consistency.

Beyond more general cloud-native tooling, as Kubernetes adoption continues to grow, Kubernetes-native tooling is becoming a stronger requirement for application developers. Kubernetes-native technologies generally work with the Kubernetes command-line interface kubectl, can be installed on the cluster with the popular package manager Helm, and can be seamlessly integrated with Kubernetes features such as role-based access control, service accounts, audit logs, and so on.
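The Kubernetes-native integration points mentioned above are themselves declarative. As a hypothetical example, granting a CI/CD service account read-only access to pods in one namespace uses the same role-based access control machinery on every cluster, central or edge (all names below are placeholders):

```yaml
# Hedged sketch: a minimal Kubernetes Role plus binding that lets a CI/CD
# service account read pods in one namespace. All names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: edge-apps
rules:
  - apiGroups: [""]           # "" selects the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-pod-reader
  namespace: edge-apps
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: edge-apps
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because every conformant cluster understands these resources, the same access policy can be applied with kubectl or packaged with Helm across the whole edge fleet.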

Consistency is key

So, yes, it is possible to enjoy all the benefits of the edge while continuing to develop and deploy applications in familiar ways. The key to accelerating edge computing adoption is making the experience of programming at the edge as familiar as possible to developers, explicitly drawing on concepts from cloud deployment to do so.

But the added complexity of a distributed edge deployment introduces new challenges to achieving consistency across these experiences.

For example, emerging solutions should allow developers to leverage code portability and use simple, familiar lifecycle management processes and tools at the distributed edge, while offering GitOps-based workflows, Kubernetes-native tooling, CI/CD pipeline integration, RESTful APIs, automated SSL/TLS integration, and a complete edge observability suite.

This should, in turn, give developers the cost and performance benefits they're looking for, without the need to master new skills such as distributed network management.
