
Should you use AI to make decisions about your software team?

Anders Wallgren, CTO, Electric Cloud

New technologies such as artificial intelligence (AI) are changing the way software organizations measure development performance and allocate resources. But there is an open question as to whether you should use AI to measure developers' performance and make HR-related decisions. 

AI was one of the hottest topics of 2018. The reality is, however, that we are just scratching the surface of what AI holds for the future. In 2019, AI and machine learning (ML) will make businesses of all sizes and verticals smarter, faster, and more agile.

Recently, I sat down with Stephen Wu, a shareholder at Silicon Valley Law Group, and Peter Gillespie, a partner at Laner Muchin, to talk about one of the newest ways AI is being deployed: as a way to intelligently forecast risk and measure the development performance of software organizations.

Among the questions AI can help address: 

  • What is contributing to the risk in a particular software release?
  • How much of that risk is contributed by the code itself and how much is due to the developers?
  • Where are teams performing well, and where can they make up ground?
  • What skills does a specific developer excel at, and where does she need more training?

Used this way, AI can give software companies a new level of insight into their software delivery pipeline's performance.

But should you use an AI system to judge the performance of people? We discussed the ethics behind this new use of technology and what it holds for the future of performance management and software development.

Here's what your team needs to understand about using AI for HR-related decisions on software teams.

The goal: AI-driven insights into the release process

When technologists originally set out to create software delivery-related AI technology, the primary goal was to provide AI-driven insights into all of the factors affecting the success of a release. The idea was to help organizations deliver more value and get more efficiency from their software pipelines.

It works by applying ML to the mountains of data collected from key parts of the DevOps toolchain to identify the hidden patterns that predict the success or failure of builds, tests, deployments, or overall releases.
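As a rough sketch of that idea (not a description of any particular product), here is what training a failure-prediction model on historical build records could look like. The features, synthetic data, and model choice below are all illustrative assumptions:

```python
# A minimal, hypothetical sketch: train a classifier on historical build
# records to predict failures. Features, data, and model are illustrative
# assumptions, not any vendor's actual implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_builds = 1000

# Hypothetical per-build features: lines changed, files touched,
# number of committers, and test-coverage delta (%).
X = np.column_stack([
    rng.integers(1, 2000, n_builds),
    rng.integers(1, 50, n_builds),
    rng.integers(1, 10, n_builds),
    rng.normal(0.0, 2.0, n_builds),
])

# Synthetic labels: bigger, more scattered changes fail more often.
risk = 0.002 * X[:, 0] + 0.05 * X[:, 1] - 0.5 * X[:, 3]
y = (risk + rng.normal(0, 1, n_builds) > 3.0).astype(int)  # 1 = failed build

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("holdout accuracy:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_)
```

The feature importances are the interesting output here: they suggest which factors contribute most to failures, which is the raw material for the risk questions listed earlier.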

Envision a platform that helps organizations assign development talent and teams to specific projects by using thorough, accurate data indicating how well they are likely to complete that work. Imagine if you could take a project and pick the perfect resource to assign to it every time. That could be a huge value for your business.

But when customers, prospects, and industry analysts began asking if this data could be used for punitive or disciplinary reasons, I realized that this subject needed a frank and open discussion.

Let's start by talking more about the goal to provide a means to analyze a process. Just as an auto manufacturer uses a production line to assemble a car, software companies have production lines for assembling a software product.

Instead of rivets and fasteners, they're building with code, and instead of mechanics and technicians, they have developers and engineers. But in both cases, there is much you can improve simply by measuring and understanding the production line's performance.

As with anything you want to measure, the metrics you choose are extremely important. You want to make sure you choose metrics that are not easily gamed. More importantly, you want metrics that are hyper-relevant to your company's performance.
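As a toy illustration of that point (all teams and numbers below are hypothetical), compare an easily gamed activity metric, raw commit count, with an outcome metric such as change failure rate:

```python
# Toy illustration, all names and numbers hypothetical: an easily gamed
# activity metric (raw commit count) next to an outcome-focused one
# (change failure rate = failed deployments / total deployments).

# Each record: (team, commits, deployments, failed_deployments)
records = [
    ("team_a", 120, 30, 2),
    ("team_b", 45, 28, 1),
    ("team_c", 300, 10, 5),  # high activity, yet deployments fail often
]

for team, commits, deploys, failures in records:
    rate = failures / deploys if deploys else 0.0
    print(f"{team}: commits={commits}, change failure rate={rate:.1%}")
```

Commit counts can be inflated by splitting work into trivial changes; the failure rate is tied to an outcome the business actually cares about.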

The problems with using AI to measure people's success

When you're talking about using AI to measure the performance of humans, your metrics need to be even more fine-tuned—which is one of the ways Laner Muchin's Gillespie, Silicon Valley Law Group's Wu, and I believe that this technology can be deployed ethically.

By choosing metrics that are not easily manipulated and that are relevant to your business processes, you can remove much of the ambiguity that would cause concern about using machine-based measurement.

One of the points brought up during our discussion was distinguishing between AI and data collection. From our perspective, data is just data—bits stored on a disk somewhere collecting dust.

Historically (and prior to AI), the way to gain insights into that data was to put it into a spreadsheet and sort it until something popped out. Today, we deploy a method of pattern recognition that can find interesting correlations and help to deduce insights.
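For instance, a first pass at that kind of pattern recognition can be a single call over tabular pipeline data rather than a manual spreadsheet sort; the factors and records below are made up for illustration:

```python
# Made-up pipeline records for illustration: correlate each factor with
# the release outcome in one call instead of sorting a spreadsheet.
import pandas as pd

df = pd.DataFrame({
    "lines_changed":  [150, 900, 40, 2200, 310, 75, 1800, 60],
    "review_hours":   [4.0, 1.0, 6.0, 0.5, 3.0, 5.0, 1.0, 7.0],
    "tests_added":    [12, 2, 9, 0, 6, 10, 1, 14],
    "release_failed": [0, 1, 0, 1, 0, 0, 1, 0],
})

# Pearson correlation against the failure flag surfaces candidate
# patterns; a real system would follow up with proper ML models.
print(df.corr(numeric_only=True)["release_failed"].sort_values())
```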

Legal hurdles

Gillespie pointed out the potential legal challenges a company could run into when leveraging these types of insights regarding people's performance. From his perspective, one of the challenges of human resource employment decisions is the need to prove that the decision to let an employee go was made for objective reasons. From that perspective, he sees AI as being a great thing.

When all employees are objectively being measured by the same system with the same metrics, it becomes much easier to have an apples-to-apples (and legally supportable) comparison. The challenge, however, is that when you have to go in front of a judge, the concept of "the computer said to do it, so I did it" might not get you very far.

You are back to relying on human understanding, and you must take into consideration that the judge probably does not know what AI is or how it works.

And that brings up another potential challenge: There is almost no precedent for this type of litigation, or cases dealing with the impacts of AI.

Another potential legal and HR hurdle for this type of AI is that it is difficult to account for mitigating factors. For example, if an employee must work from home because he has asthma and the air quality in the office isn't good enough, how can your company account for the lag time he experiences when accessing the network?

Focus your analysis on resources

At the end of the day, the best use case for this sort of intelligence is not to make employment decisions, but rather to make resource decisions. By leveraging AI to understand which teams and individuals are strongest at a given task or type of project, you can better understand not only how to assign projects, but also where your organization has weaknesses and where your team could benefit from more training.
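Here is a minimal sketch of what such a resource-focused analysis could look like, assuming you keep per-team outcome history (every team name and record below is hypothetical):

```python
# Hedged sketch of a resource decision, with hypothetical teams and
# records: score each team's historical success rate per project type,
# then suggest the strongest match for a new project.
from collections import defaultdict

# Each record: (team, project_type, succeeded)
history = [
    ("platform", "api", True), ("platform", "api", True),
    ("platform", "ui", False), ("web", "ui", True),
    ("web", "ui", True), ("web", "api", False),
]

stats = defaultdict(lambda: [0, 0])  # (team, type) -> [successes, total]
for team, ptype, ok in history:
    stats[(team, ptype)][0] += int(ok)
    stats[(team, ptype)][1] += 1

def best_team(project_type):
    scores = {team: s / n for (team, p), (s, n) in stats.items()
              if p == project_type}
    return max(scores, key=scores.get), scores

team, scores = best_team("ui")
print(f"suggested team for 'ui' work: {team}; scores: {scores}")
```

Low scores in a category read as training opportunities rather than grounds for discipline, which keeps the analysis on the resource side of the line drawn above.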

This technology, even in its fledgling state, holds a great deal of promise for increasing pipeline efficiencies and helping organizations to predictably deliver better software products when the business demands it, with the analytics and insight to measure, track, and improve their results.
