
Event-driven computing: A best practice for microservice architecture

Andy Pernsteiner, Field Product Engineer, Igneous Systems

Today’s leading-edge applications offer dynamic, adaptive capabilities, and that requires you as a developer to use increasingly dexterous tools and supporting infrastructure, including microservices. You might be asked to build data-centric apps that automatically index documents (as in Google Drive), perform facial recognition on photos, or run sentiment analysis on video and audio newscasts.

All of these applications leverage data in new ways. In some cases, the decoration and tagging of data with intelligent metadata have become more important than the data itself. To keep up with continuously evolving needs and expectations, enterprise developers across industries are shifting away from traditional application architectures in favor of more fluid architectures for building data-centric applications.

Here are several ways that microservices, connected via event-driven techniques, can help you replace the capabilities of older monolithic systems with more flexible, easier-to-maintain code.

 

The old challenges of monolithic systems

Key elements that enable the new paradigm are found within tool chains as well as in the underlying infrastructure or platform. Applications are moving away from monolithic paradigms, in which a single application is responsible for all aspects of a workflow. While effective for many legacy use cases, monolithic applications have challenges with:

  • Scalability. In many cases, monolithic applications are designed to run on a single, powerful system. Increasing the application’s speed or capacity requires forklifting onto newer, faster hardware, which takes significant planning and consideration.
  • Reliability & Availability. Faults or bugs within a monolithic application can take the entire application offline.  Additionally, updating the application typically requires downtime in order to restart services.
  • Agility. Monolithic code bases become increasingly complex as features are added, and release cycles are usually measured in periods of 6-12 months or more.

How are these challenges being met? To build applications with dynamic, ever-changing capabilities, architectures should be composed of smaller chunks of code, which is why event-driven computing and microservices are gaining in popularity. The relationship between the two is simple: microservices should be designed so that they notify one another of changes through events.

Microservices are the way forward: Automation and decentralization

As you know, microservices break traditionally structured applications into manageable pieces that can be developed and maintained independently. Because these smaller components are more lightweight, the codebase for each can be significantly simpler to understand, leading to a more agile development cycle.

Additionally, microservices are often decoupled, allowing for updates with little to no downtime, as the other components can continue running.

Event-driven computing: Triggering adaptation

Event-driven computing is hardly a new idea; people in the database world have used database triggers for years. The concept is simple: whenever you add, change, or delete data, an event is triggered to perform a variety of functions. What's new is the proliferation of these types of events and triggers in applications outside of the traditional RDBMS.  
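To make the trigger concept concrete, here is a minimal sketch using SQLite from Python's standard library; the table, trigger, and column names are purely illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE events (doc_id INTEGER, action TEXT,
                         at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);

    -- The trigger fires on every insert and records an event row,
    -- which a downstream process could poll or consume.
    CREATE TRIGGER on_document_insert AFTER INSERT ON documents
    BEGIN
        INSERT INTO events (doc_id, action) VALUES (NEW.id, 'created');
    END;
""")

conn.execute("INSERT INTO documents (name) VALUES ('report.pdf')")
print(conn.execute("SELECT * FROM events").fetchall())
# e.g. [(1, 'created', '2024-01-01 00:00:00')]
```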

Cloud and open source to the rescue

Public cloud vendors have taken notice of this proliferation, and they offer the fundamental building blocks required for microservices-based applications. AWS Lambda, Azure Functions, and Google Cloud Functions all provide robust, easy-to-use, scalable infrastructure for microservices.
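As a sketch of how small such a service can be, an AWS Lambda microservice in Python reduces to a single handler function; the event contents printed here are a generic placeholder, since the payload shape depends on the triggering service:

```python
import json

def handler(event, context):
    """Entry point that AWS Lambda invokes for each incoming event."""
    # 'event' is a dict whose shape depends on the triggering service.
    print("Received event:", json.dumps(event))
    return {"statusCode": 200, "body": "processed"}
```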

These services also handle the generation of events by various components within their respective ecosystems. Amazon S3, AWS's object storage offering, lets its buckets (logical containers of objects) be configured to trigger AWS Lambda functions whenever objects are created or deleted. Microsoft Azure Blob triggers can invoke Azure Functions, and Google Cloud Storage similarly offers Object Change Notification.
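For instance, a Lambda function triggered by S3 receives a payload listing the affected objects; a minimal handler that walks those records might look like this, with the actual processing left as a placeholder:

```python
def handler(event, context):
    # S3 delivers one or more records per invocation, each describing
    # a single object-level event such as ObjectCreated:Put.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"{record['eventName']}: s3://{bucket}/{key}")
        # ...index, transform, or replicate the object here...
```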

In the open source world, Minio offers bucket event notifications. Additionally, NoSQL systems such as Cassandra (triggers) and HBase (coprocessors) give developers the same functionality for key-value applications. On-premises commercial options for ‘event-producing’ infrastructure have historically been hard to find, but offerings from vendors such as Igneous Systems and MapR give developers tools for next-generation applications.
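As one open-source sketch, the Minio Python client can subscribe to bucket events directly; the endpoint, credentials, and bucket name below are placeholders:

```python
from minio import Minio  # pip install minio

client = Minio("play.min.io", access_key="...", secret_key="...")

# Blocks and yields a notification record for each matching event.
with client.listen_bucket_notification(
    "my-bucket", events=["s3:ObjectCreated:*", "s3:ObjectRemoved:*"]
) as events:
    for event in events:
        for record in event["Records"]:
            print(record["eventName"], record["s3"]["object"]["key"])
```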

Integration with messaging systems such as Apache Kafka, AWS SQS, and Azure Queue provides the mechanism for feeding those events into a rich ecosystem of decoupled microservices, allowing powerful, dynamic, data-driven pipelines to be built. As new data arrives, it can be automatically indexed, transformed, and replicated. In addition, notifications can automatically be sent to systems that display dashboards for real-time monitoring and decision making.
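Here is a sketch of one such decoupled consumer using the kafka-python client; the topic name, consumer group, and message format are assumptions made for illustration:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Each microservice subscribes independently; Kafka retains the event
# stream, so indexers, transformers, and replicators stay decoupled.
consumer = KafkaConsumer(
    "object-events",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    group_id="metadata-indexer",          # one consumer group per service
    value_deserializer=lambda raw: json.loads(raw),
)

for message in consumer:
    event = message.value
    print("Indexing metadata for", event.get("key"))
    # ...write to a search index, update a dashboard feed, etc....
```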

A Google Drive example

Consider an example based on Google Drive, where each newly uploaded file generates an event that is passed off to multiple microservices, each responsible for a different function (a minimal dispatch sketch follows the list):

  • Index metadata, enabling user-friendly search.
  • Index full document text (when applicable), enhancing search.
  • OCR images containing text.
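A minimal dispatcher for this fan-out might look like the following sketch; the three handlers are hypothetical stand-ins for real indexing, text-extraction, and OCR microservices:

```python
def index_metadata(upload):
    print("Indexing metadata:", upload["name"], upload["owner"])

def index_full_text(upload):
    if upload["type"] == "document":
        print("Extracting and indexing text:", upload["name"])

def ocr_image(upload):
    if upload["type"] == "image":
        print("Running OCR on:", upload["name"])

# Each handler is independent; in production, each would be its own
# microservice subscribed to the same upload-event stream.
HANDLERS = [index_metadata, index_full_text, ocr_image]

def on_upload(upload):
    for handle in HANDLERS:
        handle(upload)

on_upload({"name": "scan.png", "owner": "andy", "type": "image"})
```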

In this scenario, the event-driven object store kicks off all the resulting actions, while multiple decoupled microservices allow for rich processing and decoration of metadata, without impacting object store performance.  These same principles can be applied to facial recognition, as well as the analysis of audio to perform functions like transcription and sentiment analysis.

Why is it important for events to be generated by the underlying platform? Applications require guarantees that whenever a file, object, or record is committed, there will be an event notification whose contents are 100% accurate. Unlike alternative methods, which can be both inefficient and prone to edge cases, the underlying storage platform can reliably inform the application that the data and its associated metadata have been successfully written, and what that metadata was.

Consider the two alternatives: 1) writing that logic into ingestion code, application writes, or a proxy, or 2) relying on fragile techniques such as log scraping (as is the case with MongoDB and most traditional filesystems). The former is not readily portable, and the latter can break with even the most subtle changes by the platform vendor. By letting the underlying infrastructure handle this heavy lifting, you can focus on the key business logic of your applications.

Are you shouldering too much of the burden?

Many developers are well aware of the shift that is occurring toward event-driven computing and microservices architecture. What is often less well understood is that the platform or infrastructure components upon which these technologies are deployed must be capable of generating events and publishing them using open, common APIs. Developers should not settle for legacy systems that put the burden on them to build this functionality.

 
