How to put ChatOps to work in your organization

ChatOps has many benefits: The practice improves transparency, helps build culture for distributed teams, and offers exciting potential for DevOps automation.

But how should you put ChatOps to work within an organization? Here is a practical, step-by-step guide to rolling out ChatOps, drawing on best practices and case studies from the field.


1. Choose a chat platform

If you haven't already done so, first select a chat platform. With Hipchat retired, other chat clients such as Slack, Flowdock, Basecamp, and open-source Rocket.Chat will do the job. Newer options such as Mattermost have been constructed from a ChatOps perspective and offer more out of the box.

To reap the benefits of ChatOps, a chat app should persist conversations across web, mobile, and desktop clients. Other features, such as video chat, one-on-one chat, push notifications, drag-and-drop file sharing, typing indicators, and multiple-account support, are helpful additions.

2. Organize teams into the right channels

Before you begin integrating a chatbot and adding scripts with advanced functionalities, consider how to organize your team and plugins. Many teams choose Slack because of its wide app-extension marketplace, but also because of the ability to easily administer chat room permissions.

Slack makes it easy to quarantine groups and limit what each can do. Creating project- and team-specific channels will assist in access control later on, increasing security from the get-go. (For the jargon-loving folks, consider this ChatSecOps.)

3. Choose a chatbot

ChatOps is based on the idea of easy, quick interaction with a chatbot. The bot accepts plain-English commands and initiates actions in background apps, using API connections to drive cloud operations and other business functions.

Bots such as Hubot, Errbot, Lita, and other frameworks mentioned here can all be used for ChatOps, and offer adapters for most chat platforms. Each chatbot is unique, with varying support from the organization behind it, built-in extensions, and diverse community scripts.

Because the bot is the basis for much that happens in ChatOps, a dependable foundation is crucial.
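The core loop described above can be sketched in a few lines: a bot matches a plain-English message against a pattern and hands the captured arguments to a backend call. The `ToyBot` class and `deployService` function below are hypothetical stand-ins, not part of any real chatbot framework.

```javascript
// Minimal sketch of the ChatOps command loop: match a plain-English
// message against registered patterns, then invoke a handler that
// stands in for a real ops API call.
class ToyBot {
  constructor() { this.listeners = []; }
  // Register a pattern with a handler, conceptually like Hubot's robot.respond().
  respond(pattern, handler) { this.listeners.push({ pattern, handler }); }
  // Dispatch an incoming message to the first matching handler.
  receive(text) {
    for (const { pattern, handler } of this.listeners) {
      const match = text.match(pattern);
      if (match) return handler(match);
    }
    return "no matching command";
  }
}

// Hypothetical backend call; a real script would hit a cloud API here.
function deployService(env) {
  return `deploy started: ${env}`;
}

const bot = new ToyBot();
bot.respond(/deploy (\w+)/, (match) => deployService(match[1]));

console.log(bot.receive("deploy staging")); // prints "deploy started: staging"
```

A production bot adds authentication, adapters for the chat platform, and real API clients on top of this pattern, but the match-then-dispatch core is the same.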

4. Grab some useful community scripts and plugins

To truly leverage ChatOps for IT control, the chatbot will need access to your systems, deployment triggers, event scaling, continuous integration platforms, source code management, and more. This requires a lot of up-front work to assemble, but thankfully there are many community-provided plugins and integrations to reduce the overhead.

For example, check out the hundreds of scripts available for Hubot or Lita. Hubot, a chatbot developed at GitHub, works with many chat apps and has many community-generated adapters for your favorite ops tools.

There are plenty of scripts ready for instant use, whether they are for interacting with a Jenkins CI server, announcing successful Travis CI builds, or manipulating Rackspace load balancers.

Whichever chatbot you choose, there will be a learning curve. Eric Sigler of OpenAI aptly calls this stage "the long game of learning-by-demonstration."

5. Write and deploy your own scripts

At this stage in the process, the goal is to integrate the bot into your daily workflows. Find repetitive tasks to automate, and iterate until everything (within reason) is in ChatOps.

Hubot offers a guide for writing custom scripts that interpret commands from CoffeeScript or JavaScript files. Similarly, Lita's guide explains how to create custom handlers to perform HTTP requests and return appropriate content to your chat environment.
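A custom Hubot-style script is just an exported function that registers responders on the robot. The sketch below shows the shape of such a script in JavaScript; the "build" command is illustrative, not a real plugin, and the tiny `runScript` harness stands in for Hubot itself so the script can be exercised directly.

```javascript
// A Hubot-style custom script: the exported function receives a robot
// and registers a responder. res.match holds the regex capture groups,
// and res.send posts a message back to the channel.
const script = (robot) => {
  robot.respond(/build (\S+)/i, (res) => {
    const job = res.match[1];
    // A real script would call the CI server's HTTP API here.
    res.send(`queued a build for ${job}`);
  });
};

// Tiny stand-in for Hubot so the script can be exercised without the
// framework. Real Hubot also handles bot-name prefixes and adapters.
function runScript(script, text) {
  let reply = null;
  const robot = {
    respond(pattern, handler) {
      const match = text.match(pattern);
      if (match) handler({ match, send: (msg) => { reply = msg; } });
    },
  };
  script(robot);
  return reply;
}

console.log(runScript(script, "build web-frontend"));
// prints "queued a build for web-frontend"
```

Keeping each script focused on one command like this makes it easy to load, test, and permission scripts individually later on.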

Using custom scripting, ChatOps architects can integrate their chatbot with any cloud app that offers an API back end. This opens an exciting door for developers to be creative in how they utilize and act upon data. Expect plenty of trial and error, though; ChatOps veterans recommend starting small.

6. Make the cultural shift, and get everyone on board

Next, create a culture that buys into ChatOps. For ChatOps to be effective, everyone has to be on board. As the chatbot is gradually integrated into operational routines, demonstrate the rewards and actively communicate the benefits.

Transparency and team collaboration are major benefits of ChatOps. With everyone in the same tool, teams can better help one another. Onboarding is easier, and errors get resolved faster. Many of these bots have plugins for hilarious memes and funny posts—great motivators to continue ChatOps.

Sigler added that teams you never expected to consider ChatOps will start using it. And if you want, ChatOps can democratize access to advanced functionality across the board.

7. Mature and secure your ChatOps setup

You may soon find the chatbot becoming a critical service in your operational arsenal—so much so that it needs to be running to perform your daily work. With great ChatOps power comes great ChatOps responsibility.

Managing access to restricted data is certainly not a novel problem for enterprises. As your chatbot's extensibility grows, you may want to institute fine-grained control over which scripts are loaded and how. This is where access control comes in. Box handled this issue by developing a custom middleware layer for Hubot.

Authorization, permission settings, and two-factor authentication will help lock down chatbot commands to those with the correct privileges. Some implementers find that Errbot and Cog are more security- and compliance-aware from the start, while Hubot tends to bolt security on later.
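Command-level authorization, in the spirit of the middleware approach described above, can be as simple as checking the caller against a role table before running anything. The role list and command names below are hypothetical; a real setup would pull roles from an identity provider and hook the check into the chatbot's middleware layer rather than a standalone function.

```javascript
// Hypothetical role table mapping chat users to permitted commands.
// In production this would come from LDAP, SSO, or a secrets-backed store.
const ROLES = {
  alice: ["deploy", "restart"],
  bob: ["status"],
};

// Gate a command: only users whose role list contains the command may
// trigger it; everyone else gets a refusal instead of an action.
function authorize(user, command) {
  const allowed = ROLES[user] || [];
  return allowed.includes(command)
    ? `running ${command} for ${user}`
    : `denied: ${user} lacks permission for ${command}`;
}

console.log(authorize("alice", "restart")); // prints "running restart for alice"
console.log(authorize("bob", "restart"));   // prints "denied: bob lacks permission for restart"
```

Denying by default, as this sketch does for unknown users, is the safer posture for commands that can restart systems or read sensitive logs.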

For larger teams, security is especially important. You don’t want someone without privileges to restart a system or query sensitive logs. To bolster ChatOps security, Jen Andre, co-founder and CEO of Komand, added that "you really, really need to tie it down. Restriction is key."

Case Study No. 1: HelloSign

Nicholas Whittier, DevOps manager at HelloSign, described the company's foray into ChatOps. It first orchestrated read access to its Amazon cloud data, and then deployed to nonproduction environments. Eventually, it was using ChatOps to control all deployments.

Throughout its ChatOps journey, the team slowly added functionality until the chatbot became a critical utility—so much so that the team now receives alerts when a bot function is down. For security, the team wrote custom authentication middleware for Hubot. Ultimately, to aid access control, the HelloSign team settled on one bot per environment and custom script-loading per chatbot:

“You need to have some strict chat room quarantining in place, custom script-loading per chatbot, different repos/releases per environment, or some combination therein.”

Case Study No. 2: Flowdock

Flowdock turned to ChatOps to improve the visibility of its application development pipeline. It wanted to initiate all Flowdock deployments by chatting with Hubot.

The firm ended up orchestrating deploys with Hubot-deploy, a script that provides Hubot commands to initiate deployments, configured with a file that tells Hubot about the possible deployment options.
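The configuration file referred to above might look something like the fragment below. This is an illustrative sketch only: the field names and values are assumptions based on hubot-deploy's general approach of declaring apps, repositories, and target environments, so consult the plugin's README for the actual schema.

```json
{
  "my-app": {
    "provider": "heaven",
    "repository": "my-org/my-app",
    "environments": ["staging", "production"]
  }
}
```

With an app declared this way, a teammate can trigger a deployment from chat with a command along the lines of "hubot deploy my-app to staging," and everyone in the channel sees both the request and the result.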

The team created deployments using the GitHub deployments API, and it leverages GitHub webhooks as well. With some additional configuration, a Heaven server then performed the actual deployments.

You too can do this

HelloSign, Flowdock, Box, and StackStorm are examples of companies that have gradually expanded a ChatOps process to improve automation and accelerate continuous delivery.

Uniting discussions related to software deployment into one thread has greatly increased organization and transparency for a number of teams. With the proper cultural mindset and access control in place, a ChatOps implementation can bring power to the people while retaining security.