How to write killer DevOps automation workflows

Prioritizing automation is a crucial step toward becoming a DevOps organization. There are some pretty compelling reasons to use a workflow engine for your DevOps automation. Besides giving you a centralized place to create all your automations, workflows can serve as the glue between processes and systems while allowing you to easily track their progress. If you choose your workflow tool carefully, you can also benefit from advanced features such as logging, fault tolerance, and out-of-the-box integration with other systems.

Whether you use a graphical or textual authoring tool, there are some basic principles of workflow authoring that you should think about every time you sit down to create a flow. We asked all the workflow geeks we know to share their processes, tips, suggestions, and best practices for writing killer workflows. Here’s what they came up with.

Plan your workflows

The most important point when it comes to planning your flow is that you actually do it. This is not the time to show how much of a free spirit you are. Save that and your bellbottoms for your next '60s theme party. Having a good plan before you start will end up saving you a lot of time and heartache in the long run.

Our experts differ in exactly how they go about their planning, each choosing a medium that works best for them. Some suggest drawing out the flow to visualize its logic, while others prefer sketching it in textual form. Quite a few people recommended the swim-lane concept as a way to identify which systems are involved and who is doing what.

Either way, a broad overview will get you started on assembling your flow from the top down. Don’t be too eager to fill in the blanks right away. Use placeholder steps as you gradually build up your flow’s interconnecting parts.  
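One way to sketch that top-down approach, assuming a Python-style authoring tool (every step name here is hypothetical):

```python
# Top-down skeleton of a release flow (all step names are hypothetical).
# Placeholders raise until they're filled in, so unfinished branches fail
# loudly instead of silently doing nothing.

def checkout_source():
    raise NotImplementedError("placeholder: fetch source from version control")

def run_tests():
    raise NotImplementedError("placeholder: execute the test suite")

def deploy():
    raise NotImplementedError("placeholder: push the build to the target environment")

def release_flow():
    # The overall shape is wired up first; the details come later.
    for step in (checkout_source, run_tests, deploy):
        step()
```

Running `release_flow()` at this stage fails loudly at the first placeholder, which is exactly the point: the structure exists before any step does.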

Create focused workflows

Now that you’ve looked at the big picture, it’s time to start the actual authoring. Most of the advice we received for this stage of the process leads to one idea: A big flow is a bad flow. Big flows are hard to create, hard to read, hard to test, hard to maintain, and hard to reuse. People like easy, so why not give it to them? Break a large flow into smaller, more manageable units, which can be flows themselves. Each unit should have a single, well-focused purpose.

Obviously, creating a small flow is easier than tackling a behemoth. It’s also quite clear that trying to wrap your head around a flow with a hundred steps is a lot harder than figuring out what a flow with five steps does.

Narrowing the scope of each of a flow’s components makes each one easier to test and easier to fix when something goes wrong or needs updating. Smaller, less complex units can be tested independently. And because a subflow is decoupled from the rest of your flow, it is easier to update when requirements change, since it isn’t intertwined with unrelated content.

All that is important, but probably the best reason to use smaller units is that they are more readily reused. A flow that does one thing and does it well is much more likely to be reused than a flow that does what you want but also happens to include a bunch of stuff you don’t need. Reuse obviously saves time, but it also means you don't have to write new tests, and it gives you greater confidence in the flow.

Sometimes it pays to think about what several steps have in common and extract that functionality into its own unit for reuse. Let’s take a simple example. Suppose you were creating steps for an HTTP GET, POST, PUT, and DELETE. Start by creating a generic HTTP call and then build the more specific ones on top of that.
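Here’s a minimal sketch of that idea in Python, assuming a flow engine where steps are plain functions (all names are hypothetical); the verb-specific steps become thin wrappers over one generic call:

```python
from functools import partial

def http_call(method, url, headers=None, body=None):
    """Generic HTTP step: every verb-specific step builds on this one."""
    # A real step would hand this request off to the engine's HTTP client;
    # here we just return the prepared request for illustration.
    return {
        "method": method,
        "url": url,
        "headers": dict(headers or {}),
        "body": body,
    }

# GET, POST, PUT, and DELETE are one-liners on top of the generic call.
http_get = partial(http_call, "GET")
http_post = partial(http_call, "POST")
http_put = partial(http_call, "PUT")
http_delete = partial(http_call, "DELETE")
```

Because every verb funnels through `http_call`, a fix to header handling or logging lands in exactly one place.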

While we’re on the topic of reuse, another great tip we got is to always keep flexibility in mind. You might be creating a subflow for a specific use case, but try to think of how that flow can be used in other situations as well. Take the HTTP call, for example. Although you might not need it for your specific use case, it’s a good idea to let the user define what the valid responses are for the call. That way, if someone else wants to reuse your unit in a slightly different context, it will be flexible enough to handle the differences.
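A sketch of that kind of flexibility, again with hypothetical names: rather than hard-coding 200 as the only valid status, the step accepts the set of statuses the caller considers a success:

```python
def check_response(status, ok_statuses=frozenset({200})):
    """Decide whether an HTTP status counts as success.

    The default suits the common case, but a reuser who expects
    201 Created or 204 No Content can pass their own set instead
    of forking the step.
    """
    return "success" if status in ok_statuses else "failure"
```

For example, `check_response(201, ok_statuses={200, 201})` lets a caller treat a Created response as success without touching the step itself.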

Make readable workflows

We touched upon this point in the previous section, but it’s worth repeating. Your flows should be easily understandable to others. Other than keeping the size manageable, this means documenting everything that isn’t self-evident. Here it’s best to err on the side of caution. What’s self-evident to you might be indecipherable to someone else. A good rule to go by is that someone should be able to run your flows after reading the documentation without looking at the actual content of the flow. And don’t forget the famous saying that an example is worth a thousand words.

But who are we kidding? Not everyone likes to read documentation. Sometimes people need to tinker with the content itself. So it pays to use short, consistent, self-descriptive naming throughout your flows. Default values are a good idea that can save time and effort. However, don’t be tempted to overuse them. There’s a good chance that a port number will be 8080, but it’s highly unlikely that a username will actually be john_doe. The wrong value is often worse than no value at all.
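As a quick illustration (hypothetical function): a port can safely carry a default, while a username should be required so a wrong guess never slips through:

```python
def connect(host, username, port=8080):
    """Port 8080 is a reasonable default; a default username would not be.

    A wrong default is often worse than none, so the username must be
    supplied explicitly.
    """
    if not username:
        raise ValueError("username is required; a guessed default would mislead")
    return f"{username}@{host}:{port}"
```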

Use your parallel capabilities

Does your workflow engine support parallel capabilities? Good; use them. Every time you have an iterative task, ask yourself if it can be done in parallel instead. Same work, less time, happy users.
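A minimal Python sketch of the idea, using a thread pool in place of whatever parallel construct your engine provides (`restart_service` is a hypothetical stand-in for a per-host task):

```python
from concurrent.futures import ThreadPoolExecutor

def restart_service(host):
    # Stand-in for the real per-host work (SSH call, API request, etc.).
    return f"{host}: restarted"

def restart_all(hosts):
    # Same work as a sequential for-loop, but the iterations run side by side.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(restart_service, hosts))
```

`pool.map` preserves input order, so the results line up with the host list just as a sequential loop’s would.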

Poll, don’t sleep

We’ve all been there. You need to wait for an event to finish before moving on, so you just quickly throw in a sleep task. And then it stops working. So you lengthen the sleep time. And then it stops working. So you lengthen it again. And then it stops working. So you finally use a polling mechanism. Why didn’t you just do that in the first place? We didn’t either.
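The polling version is barely more code than the sleep hack. A sketch in Python, assuming the event can be checked through some `condition()` callable:

```python
import time

def wait_for(condition, timeout=30.0, interval=0.5):
    """Poll until condition() is true instead of guessing at a sleep length.

    Returns True as soon as the event happens, False if the timeout expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

The flow moves on the moment the event occurs, and a timeout gives you a clean failure path instead of an ever-growing sleep.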

Keep humans in the loop

The point of writing flows is to automate stuff. Wouldn’t it be nice if everything could be automated? I guess that depends on your feelings about robots taking over the planet. In the meantime, it helps to realize that not every process can be automated. Consider human intervention, when needed, as part of the orchestration. You know that guy in the office down the hall making the big bucks? Let him earn them.
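One way to model that (a hypothetical interface): treat the human decision as just another step, where `ask` is whatever channel reaches a person, such as opening a ticket or posting a chat prompt:

```python
def approval_gate(change, ask):
    """A manual step modeled as part of the flow.

    `ask` is whatever channel reaches a human (ticket, chat prompt,
    email) and returns True or False once they answer.
    """
    return "approved" if ask(change) else "rejected"
```

In a test, `ask` can be a stub; in production it blocks on, or polls for, the human’s answer, and the rest of the flow stays unchanged.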

Prevent errors and make debugging easier

Errors and debugging are a part of life. We all understand that. Still, it’s a good idea to do what you can to minimize errors and make your debugging as easy as possible.

One idea we received to stop errors dead in their tracks is to start your flow with an “initialization” step that sanitizes all parameters before they’re used. Along the same lines, it’s always a good idea to make sure you’ve remembered to handle all possible null values that might arise.
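A sketch of such an initialization step in Python (the parameter names are made up for illustration): it trims, converts, and rejects nulls up front so later steps can trust their inputs:

```python
def init_step(params):
    """First step of the flow: normalize and validate every input.

    Failing here, with a clear message, beats a null value sneaking
    into step forty-seven.
    """
    clean = {}
    for key in ("host", "username"):
        value = params.get(key)
        if value is None or not str(value).strip():
            raise ValueError(f"missing required parameter: {key!r}")
        clean[key] = str(value).strip()
    clean["port"] = int(params.get("port", 8080))
    return clean
```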

Results such as “success” and “failure” are simple and easy to understand, unless you’re trying to figure out what actually went wrong. Not all failures are created equal. Consider using a more robust set of results that will impart a bit more information as to what went wrong. Names such as “invalid_credentials” and “connect_failure” just might end up being the keys to your “success.”
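For instance (the status mapping here is hypothetical), an enumeration of named results tells the caller why a step failed, not just that it did:

```python
from enum import Enum

class StepResult(Enum):
    SUCCESS = "success"
    INVALID_CREDENTIALS = "invalid_credentials"
    CONNECT_FAILURE = "connect_failure"

def classify_login(status):
    # Map a raw HTTP status to a result that says *why* the step failed.
    if status == 200:
        return StepResult.SUCCESS
    if status in (401, 403):
        return StepResult.INVALID_CREDENTIALS
    return StepResult.CONNECT_FAILURE
```

A flow branching on `StepResult.INVALID_CREDENTIALS` can prompt for new credentials, while `CONNECT_FAILURE` might trigger a retry; a bare "failure" supports neither.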

We’re going to go out on a limb here and guess that if you’re looking for ideas on how to improve your workflow authoring, you already know how to author a workflow. Well then, how about authoring some workflows to test your other workflows? A little effort now will save you a lot of time later.
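A toy illustration in Python, with `deploy_flow` standing in for a real flow: the test flow exercises it against known inputs and reports a single pass/fail result:

```python
def deploy_flow(env):
    # Stand-in for a real flow; returns a result the way a flow output would.
    return "success" if env in ("staging", "production") else "failure"

def smoke_test_flow():
    """A flow whose only job is to exercise another flow and report on it."""
    results = {env: deploy_flow(env) for env in ("staging", "bogus")}
    expected = {"staging": "success", "bogus": "failure"}
    return "pass" if results == expected else "fail"
```

Schedule the test flow like any other, and it will catch a regression in the flow it wraps before your users do.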

Engage the community

Remember, you’re not alone. That is to say, you might be alone, but there’s probably someone somewhere trying to do what you’re trying to do. Find them. And if you’re the first to automate something, let people know what you’ve done.

How about you? Have any tips for writing flows? Let us know.

