

Jenkins performance: How to avoid pitfalls, diagnose issues—and scale

Owen Mehegan, Senior Developer Support Engineer, CloudBees
Ryan Smith, Senior Developer Support Engineer, CloudBees

We've been using Jenkins, the open-source automation server, for more than 10 years, and we've seen how it has helped teams build and deliver thousands of apps quickly and efficiently.

But as great as Jenkins is, no tool is infallible. Implementations sometimes run into performance issues that slow builds, put projects on hold, and force teams to scramble in troubleshooting mode.

Fortunately, there are steps you can take to smooth out, if not totally eliminate, performance issues at each stage of your project. If you make configuration changes early on, before issues crop up, your Jenkins workloads will perform better, for a longer period of time.

You can monitor workload performance against certain criteria so that you're alerted when you may be outgrowing your resource allocations. And even if you're not a Java expert, there are intuitive tools you can use to diagnose issues. Ultimately, if you outgrow one master, there are easy ways to divide your workload between two or more new masters.

We'll be going into this in depth during our upcoming presentation at DevOps World / Jenkins World. In the meantime, here are a few thoughts about how to avoid Jenkins pitfalls, diagnose issues in your implementation—and scale for future growth.

A closer look at Jenkins

There are several reasons why Jenkins sometimes runs into problems. For one thing, it doesn't ship with any explicit settings configured for memory usage or garbage collection. Historically, the thinking was that developers or administrators would want to tune it for their environment or workload, but in practice that often doesn't happen.

Also, the load that Jenkins can handle depends on factors beyond just the number of build jobs you run. It can get dragged down if you keep a lot of build history stored in the tool, if you do a high number of builds per day, or if you integrate with many external services.

You might also face issues with plugin overload. To its credit, Jenkins has a mind-boggling 1,700 plugins available. That library gives Jenkins much of its power and flexibility, but because it is community-driven, plugin quality varies, so keep performance in mind before adding a plugin to your installation.

Do a proper configuration

To mitigate these issues, there are several steps you can take at the configuration stage. One is to make sure you're rotating your build history or discarding your old builds. That helps reduce Jenkins' memory footprint, which improves performance. We recommend keeping 30 to 60 days of build history, but you should configure this on a per-job basis.
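In a Declarative Pipeline, build rotation can be set per job with the buildDiscarder option. Here's a minimal sketch; the 30-day and 100-build limits are illustrative values, not a universal recommendation:

```groovy
pipeline {
    agent any
    options {
        // Keep roughly a month of history, with a cap on total
        // stored builds as a backstop for very busy jobs.
        buildDiscarder(logRotator(daysToKeepStr: '30', numToKeepStr: '100'))
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}
```

For freestyle jobs, the same setting is available in the job configuration UI under "Discard old builds."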

Other steps to follow? Use SSD storage. It's fast, reliable, and far more affordable than it used to be. Fast storage makes a big difference with Jenkins.

Use the G1 garbage collector to manage Jenkins' memory, rather than Java 8's default, ParallelGC. And set the size of the "heap" (the maximum amount of memory available to Jenkins) at 4GB to start, to give yourself room to grow over time.
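On a Java 8 master, those two settings translate into JVM options. A starting point might look like this; the GC-logging flags are our addition, included because the logs pay off later when you're diagnosing problems:

```shell
# Illustrative starting JVM options for a Jenkins master on Java 8:
# a 4GB heap (min == max avoids resize pauses), the G1 collector in
# place of the Java 8 default ParallelGC, and GC logging for later
# diagnosis. Adjust paths and sizes for your environment.
JAVA_OPTS="-Xms4g -Xmx4g -XX:+UseG1GC \
 -Xloggc:/var/log/jenkins/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
export JAVA_OPTS
```

How these options reach the JVM depends on how you run Jenkins (init script, systemd unit, or container entrypoint).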

You can find all of our specific Jenkins initial tuning best practices here.

Diagnosing problems

Diagnosing performance issues is a multi-step process. Initially, you should look at things such as kernel "dmesg" logs, which tell you if your disk has errors or if your memory is failing.
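One quick way to triage is to save the dmesg output and scan it for hardware-failure patterns. A sketch with a hypothetical helper, scan_kernel_log (the name and patterns are ours), run here against canned sample lines so it works anywhere; on a real master you'd feed it live dmesg output:

```shell
# scan_kernel_log: count lines in a saved kernel log that match common
# disk, memory, and hardware-error patterns. Helper name is ours.
scan_kernel_log() {
    grep -ciE 'i/o error|out of memory|hardware error|mce:' "$1"
}

# In practice: dmesg -T > /tmp/dmesg.txt && scan_kernel_log /tmp/dmesg.txt
# Canned sample input so the sketch runs anywhere:
cat > /tmp/dmesg_sample.txt <<'EOF'
[Mon Aug  5 10:01:02 2019] sd 0:0:0:0: [sda] I/O error, dev sda, sector 12345
[Mon Aug  5 10:01:05 2019] usb 1-1: new high-speed USB device
[Mon Aug  5 10:02:10 2019] Out of memory: Kill process 4242 (java)
EOF
scan_kernel_log /tmp/dmesg_sample.txt   # prints 2
```

A nonzero count tells you to investigate the matching lines before blaming Jenkins itself.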

Also, check CPU usage to see if it's high (but keep in mind that this can be a false indicator). Jenkins doesn't normally need a lot of processing power to work, but memory and storage performance issues can make your CPU load spike as well.

If you don't see something obvious, such as a hardware issue, look at how much memory Java has allocated from the system for Jenkins, and how much of that it is actually using. Look at the garbage collection logs to see if there's a memory leak.

There are tools that can run a performance analysis of those logs to determine where your potential leaks are. If there's no leak, allocate more memory to the Jenkins master.
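A rough heuristic you can apply before reaching for a full analyzer: if the heap occupancy left after each collection keeps climbing, something is retaining memory. Given one post-GC heap reading (in MB) per line, extracted from your GC log however you like, a sketch (the helper name and the 1.5x threshold are our arbitrary choices):

```shell
# leak_check: compare the first and last post-GC heap readings in a
# file; a "floor" that grows by 50% or more over the window suggests
# retained memory rather than normal churn.
leak_check() {
    awk 'NR==1 {first=$1} {last=$1}
         END { if (last > first * 1.5) print "possible leak"; else print "stable" }' "$1"
}

# Sample readings: the post-GC floor climbs from 600MB to 1100MB.
printf '600\n610\n640\n980\n1100\n' > /tmp/heap_after_gc.txt
leak_check /tmp/heap_after_gc.txt   # prints "possible leak"
```

A rising floor points at a leak; a flat floor with frequent collections points at an undersized heap instead.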

You can also analyze thread dumps that show everything Jenkins is doing at a given time. Some threads handle specific jobs; others perform housekeeping tasks or run the user interface. Blocked threads might indicate a current, or future, performance issue.
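Thread dumps can be captured with jstack against the Jenkins process and grepped for blocked threads. A sketch using a canned two-thread sample so it runs anywhere (blocked_count is our name, and the sample lines only imitate real jstack output):

```shell
# blocked_count: count threads reported as BLOCKED in a saved dump
# (captured with: jstack <jenkins-pid> > dump.txt). Helper name is ours.
blocked_count() {
    grep -c 'java.lang.Thread.State: BLOCKED' "$1"
}

# Canned sample imitating jstack output, so the sketch runs anywhere:
cat > /tmp/dump_sample.txt <<'EOF'
"Handling GET /job/foo" #51 daemon prio=5
   java.lang.Thread.State: BLOCKED (on object monitor)
"Timer-0" #12 daemon prio=5
   java.lang.Thread.State: TIMED_WAITING (parking)
EOF
blocked_count /tmp/dump_sample.txt   # prints 1
```

One blocked thread in one dump is noise; the same threads blocked across several dumps taken seconds apart is a signal worth chasing.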

When and how to scale out

At this point, if you've found the culprit, which is usually a hardware or configuration problem, congratulations! Now you can fix it and move on. But if you're still facing performance issues, it might be that you've outgrown your initial setup.

When is it time to rework your Jenkins architecture? That depends. Jenkins is driven by many factors beyond the actual number of builds you run, so our rule of thumb is to allocate no more than 16GB of memory to Jenkins. After that, Java's garbage collection doesn't perform as well, and performance suffers. And if your workload starts to need that much memory, it's time to split up the configuration and assign jobs to a second Jenkins master. You can always divide up the workloads by team, with every team getting its own master.

As for how to actually scale out your workloads, the actions you take depend on what version of Jenkins you're using. If you're using open-source Jenkins, you must scale it manually.

In that case, when you spin up a new master and copy jobs over, make sure they run correctly. Jenkins' new configuration-as-code feature makes it easy to create repeatable, identical environments with the same versions of everything.
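With the Configuration as Code plugin, a master's settings live in a YAML file that can be applied identically to each new master you stand up. A minimal illustrative fragment; the values are placeholders, not recommendations:

```yaml
jenkins:
  systemMessage: "Team A build master - managed by configuration-as-code"
  numExecutors: 0   # run builds on agents, not on the master itself
```

Keeping that file in version control gives you the same repeatability for master configuration that pipelines-as-code gives you for jobs.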

You can also find commercial tools (such as ours) that can make it easier to manage your Jenkins masters. These can do the heavy lifting when you move jobs to new masters, figuring out which plugins are required and doing the actual installations. You can also get free, verified distributions of Jenkins, as well as commercial support for it. That gives small teams or individuals the same kind of backing that enterprise users enjoy.

Think before deploying

Jenkins is a great tool—when configured correctly. So think about how you're going to deploy it before moving forward, how you'll diagnose any issues, and what steps you'll take to scale it as demands grow.

With proper administration, Jenkins works well; it gives you the resources you need to deliver the best possible software.

Want to know more? Come to our presentation at DevOps World | Jenkins World, which runs August 12-15 in San Francisco. TechBeacon readers receive a 20% discount when they register with code DWJW20.
