Beyond microservices: Why you should avoid local optimization

Kaimar Karu, Partner, Strategic Advisory, Mindbridge

Interest in microservices has been steadily growing since 2014, and examples from pioneers such as Netflix have helped countless organizations start replacing their monolith-based application architecture with a loosely coupled, much more resilient one.

Because microservices are now widely accepted as a good architectural model, you sometimes hear the question: Why wasn't this done to begin with? Why build a monolith when the maintenance of microservices is clearly so much easier? In a somewhat similar way, the Chinese emperor Hui of Jin, during a 4th-century food shortage, asked in wonderment why the starving peasants didn't just eat meat if there wasn't enough rice for everybody. Understanding the context is often a useful thing.

A typical firmware package for a modern inkjet printer is about 50MB. That's about 35 floppy disks' worth of files: roughly a one-minute download over low-speed broadband today, but about four hours on a 28.8Kbps modem connection. Put this in the context of continuous integration (CI) and continuous delivery (CD), and imagine downloading that package every two weeks, after each completed sprint, back in 1995.
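To make the comparison concrete, here is a quick back-of-the-envelope sketch in Python. The 8Mbps "low-speed broadband" figure is an illustrative assumption, and a real modem transfer would take even longer once protocol overhead is added:

```python
# Rough download-time arithmetic for a 50MB firmware package.
# The broadband speed is an illustrative assumption, not a measurement.

PACKAGE_MB = 50
FLOPPY_MB = 1.44  # capacity of a 3.5" high-density floppy disk

print(f"Floppy disks needed: {PACKAGE_MB / FLOPPY_MB:.0f}")  # ~35

for name, kbps in {"28.8Kbps modem": 28.8, "8Mbps broadband": 8_000}.items():
    # MB -> kilobits, divided by link speed in kilobits/second, then to minutes
    minutes = PACKAGE_MB * 8 * 1024 / kbps / 60
    print(f"{name}: {minutes:.1f} min")

# 28.8Kbps modem: ~237 min (about four hours, before protocol overhead)
# 8Mbps broadband: ~0.9 min
```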

Things have changed. The Facebook app uses about 1.5MB of data per minute. Instant connectivity and near-unlimited data usage are now expected as the norm. But not everything that is possible today was possible 10, 15, or 20 years ago. Most systems from those times are not built to leverage what is possible today, be it CI/CD or microservices. They were optimized for what was possible at the time, and clearly, some rethinking is required.

Here's one approach to deciding which technologies (microservices and so on) to adopt, and which to avoid.

Technology adoption should track customer value

The worldwide dossier of IT projects is filled with well-intentioned failures, where the beauty of the technical solution has been put above what matters: customer value. Whatever the new technology, we are told that this is the solution to all our biggest pains, and that unless we buy the platinum package, our competitors will take over and we will be out of a job in no time.

Just as customers didn't care which database platform or server hardware was used for business applications in 1995 (except, of course, when the budget for next year's maintenance costs was discussed), customers today don't really care whether their software provider uses serverless computing or relies on an "old school" private data center. They don't care whether the application layer is written in Python or Java, or whether the database is relational or not.

What the customer does care about is the outcomes the technology enables and, by proxy, the technology their partner supports. It makes a lot of sense to assess every new technology from this angle: does adopting a specific technology, with the new capabilities it provides, truly help to increase customer value? This is an important litmus test when assessing improvement opportunities, which, in most cases, come with a price tag.

This approach does not require organizations to set up investment committees that argue about the benefits and disadvantages of a particular opportunity until the ship has sailed. Being agile and moving quickly does help, and often provides a significant advantage over the competition.

What this approach does help with is testing the "unless you do this, you will perish" claims and, especially when it comes to internal IT teams, explaining the value of an opportunity in terms of organizational value, rather than technical coolness or, worse, the current FUD-fueled noise level.

Understand the "why" of technologies

It is also important to ensure you have understood the "why" of the specific technology and the mindset it requires for success. Blindly following trends leads to cargo-culting, and this does not benefit anyone.

Customers also expect the outcomes to be facilitated: they don't want to be the ones who have to connect the dots between all the components of a solution, or to learn the specifics of each component and the integrations needed to make it work. They expect their partner to do that and to deliver the result as a holistic solution, as a service. That is much more than just code or clean architecture.

This holistic view allows the service provider to avoid local optimizations: investments in technology that may well be relevant (and in many cases foolish not to adopt) but that are not an answer to the challenge at hand. In the context of DevOps, for example, a significant part of the potential improvement sits on the software development side.

Go with the flow

The improved velocity, or capacity, of the development team is irrelevant if there are bottlenecks farther down the workstream. It's the flow that matters: if the challenge is actually on the IT Operations side, improvements upstream will not result in improved customer value.
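A toy model makes the point. The stage names and throughput numbers below are made up for illustration; the mechanic is what matters:

```python
# End-to-end flow is capped by the slowest stage of the pipeline, so a local
# optimization upstream changes nothing for the customer. All numbers here
# are illustrative assumptions.

def bottleneck(stages: dict[str, float]) -> tuple[str, float]:
    """Return the slowest stage and the flow (features/week) it lets through."""
    slowest = min(stages, key=stages.get)
    return slowest, stages[slowest]

stages = {
    "development": 30.0,  # features/week the dev team can complete
    "testing": 12.0,
    "deployment": 5.0,    # manual change management on the operations side
}

print(bottleneck(stages))     # ('deployment', 5.0)

stages["development"] = 60.0  # double development velocity...
print(bottleneck(stages))     # ...and customers still get 5 features/week
```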

IT Operations is much more than infrastructure management, and much more than can be automated with serverless computing. From planning to development to performance in production, IT Operations has the skills and the experience to improve the flow, and it now has much better technology available to do that. If you work in software development and your experience with IT Operations is limited to slow change management and painful code deployments, I strongly suggest checking out Charity Majors' blog post on IT Operations.

For those interested in serverless computing, my recommendation is to head over to the AWS Lambda website. But before ordering the immediate decommissioning of your data center and moving everything to AWS, take a moment to think about your customers. Do you know which item on your vast list of improvement opportunities is most important to them today?
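For a sense of scale, this is roughly what the unit of deployment looks like in Lambda's Python runtime: a single handler function invoked per event, with no server to manage. The event field and response shape here are illustrative assumptions, not a prescription:

```python
import json

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes for each incoming event."""
    # 'name' is a hypothetical event field used only for this illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,  # response shape commonly used behind API Gateway
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```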
