6 strategies for building AI-based software

Jecky Toledo R&D Director, Micro Focus, Functional Testing

Developing software that incorporates artificial intelligence (AI) can be unpredictable, and you need a unique set of knowledge and skills to code, test, and make sense of the data. What's more, tuning the system can take time, and the decisions AI-based software makes can sometimes be difficult to explain.

My organization specializes in developing software test automation tools that help users develop tests that run on different platforms, such as desktop computers and mobile devices. We wanted to make it even easier to write and run these tests, without having to customize them for each platform.

Our research led us to adopt natural-language processing (NLP), which allows users of our software to describe a test in plain English, and computer vision with optical character recognition (OCR) to identify the objects on a screen.
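To make the NLP idea concrete, here is a minimal sketch of the kind of translation such a layer performs: mapping a plain-English step to a platform-neutral action. The patterns, action names, and functions are hypothetical illustrations, not the product's actual API.

```python
import re

# Illustrative patterns for two common test steps. A real NLP layer would use
# a trained model rather than regular expressions; this only shows the mapping.
STEP_PATTERNS = [
    (re.compile(r'click (?:the )?"(?P<target>[^"]+)" button', re.I),
     lambda m: ("click", m.group("target"))),
    (re.compile(r'type "(?P<text>[^"]+)" into (?:the )?"(?P<target>[^"]+)" field', re.I),
     lambda m: ("type", m.group("target"), m.group("text"))),
]

def parse_step(step: str):
    """Return a platform-neutral action tuple for a plain-English step."""
    for pattern, build in STEP_PATTERNS:
        match = pattern.search(step)
        if match:
            return build(match)
    raise ValueError(f"Unrecognized step: {step!r}")

print(parse_step('Click the "Login" button'))            # ('click', 'Login')
print(parse_step('Type "admin" into the "User" field'))  # ('type', 'User', 'admin')
```

Because the output is platform-neutral, the same English step can drive a desktop or a mobile driver without rewriting the test.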

Here are the lessons we learned that you can apply as you incorporate AI concepts into your products.

Make data an integral part of planning

An artificial neural network (ANN) is a layered structure of algorithms designed to use data to make intelligent decisions without human intervention. We incorporated an ANN into our system, fed it hundreds of thousands of data samples, and let it learn to make informed decisions.

In a system that’s heavily based on data, planning is essential. We had to address:

  • What data we needed to train the model
  • How to acquire, clean, and classify that data
  • How to obtain additional data from customers

This required expanding the role of the product management team, which traditionally focuses on the features and capabilities of the product, to include overseeing the data-related aspects of the system. That included defining the scope of the data, the acceptance criteria for the data, and how data was to be used within our AI models.
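One way to make data acceptance criteria actionable is to encode them as automated checks that run before a sample enters the training set. This sketch assumes a simple labeled-image dataset; the field names, label set, and thresholds are hypothetical, not the criteria our team actually used.

```python
# Acceptance criteria expressed as code: a sample is admitted only if it has
# the required fields and its label falls within the defined data scope.
REQUIRED_FIELDS = {"image_path", "label"}
ALLOWED_LABELS = {"button", "text_field", "checkbox"}

def accept_sample(sample: dict) -> bool:
    """Return True only if the sample meets the acceptance criteria."""
    if not REQUIRED_FIELDS <= sample.keys():
        return False                      # missing a required field
    if sample["label"] not in ALLOWED_LABELS:
        return False                      # label outside the defined scope
    return True

raw = [
    {"image_path": "a.png", "label": "button"},
    {"image_path": "b.png", "label": "slider"},    # out of scope
    {"label": "checkbox"},                          # missing image_path
]
clean = [s for s in raw if accept_sample(s)]
print(len(clean))  # 1
```

Checks like these give product managers a concrete artifact to own: when the data scope changes, the criteria change in one place.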

Lesson learned: Data must be front and center of everything your team does, and your product managers must become familiar with the AI techniques your team is using in order to ensure consistency and reliable outcomes.

Decouple the AI model from your product

Developing and tuning an AI model can take a long time. If your application is tied closely to the model, you can only progress at the speed of the model’s development.

The AI model should be decoupled from the rest of the system and treated as a separate pipeline. This allows each piece of the system to progress at its own pace, and you can apply updates to the AI model independently. This has two key benefits:

  • You can develop and test your main product independently of the model, giving you fast feedback on product features unrelated to the AI portion of the product. Likewise, you can continue developing and training the AI model without being impeded by unrelated issues, such as a code change to the main product that breaks the build and holds everyone up until it’s resolved.
  • You can release your main product and the AI model at different cadences. This is particularly significant for users of our on-premises product, since they can install the product once and apply subsequent updates to the AI model without going through an extensive upgrade process. Given that the nature of AI models is to continuously learn, adapt, and improve, this is an important capability that allows our users to stay on the cutting edge of AI without having to wait for updates to the entire product.

Designing the system so that the AI model can be developed and deployed separately is a crucial capability that you should tackle early. Our release timeline now consists of two parallel tracks: one for the product, and one for AI model updates.
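A minimal sketch of this decoupling, under the assumption that the product resolves the model through a small manifest on disk: a model update rewrites only the manifest and the model file, and the product binary never changes. The directory layout and manifest format are illustrative, not our actual deployment mechanism.

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the product's model directory (a temp dir for this sketch).
MODEL_DIR = Path(tempfile.mkdtemp())

def deploy_model(version: str, weights: bytes) -> None:
    """Install a new model version and point the manifest at it."""
    (MODEL_DIR / f"model-{version}.bin").write_bytes(weights)
    (MODEL_DIR / "manifest.json").write_text(json.dumps({"current": version}))

def load_current_model() -> bytes:
    """Product side: load whichever model version the manifest names."""
    version = json.loads((MODEL_DIR / "manifest.json").read_text())["current"]
    return (MODEL_DIR / f"model-{version}.bin").read_bytes()

deploy_model("1.0", b"weights-v1")
deploy_model("1.1", b"weights-v2")   # a model update, with no product release
print(load_current_model())          # b'weights-v2'
```

Because the product only depends on the manifest contract, either side can ship on its own cadence.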

Create cross-functional, multi-disciplinary teams

After we decoupled the AI model from the main product, our teams could develop and test it independently. But we also needed to test the system as a whole, with all of the components deployed and working together. For effective end-to-end testing, you need expertise in both AI and software testing.

We created cross-functional teams that included software engineers, data scientists, data analysts, testers, architects, and the product manager. This gave us the best of both worlds—we have experts in designing and developing AI models working alongside our software engineering and software testing specialists. In this way we can leverage the knowledge and experience of the entire team to develop, test, and deliver each component independently, as well as test the entire system holistically.

This approach has helped to cross-pollinate specialized knowledge across the team, so that our developers and testers have come to understand AI better, and our AI experts have learned to become better developers and testers.

Understand that explaining results in an AI system can be challenging

We like to think of our deep learning system as a black box that knows how to think and make decisions, but sometimes it makes decisions we weren’t expecting. When a regular software system does something unexpected, you can debug it. It might take time, but you’ll figure it out. But in an AI system, it's almost impossible to determine the combinations and sequences of data and logic that led to a decision.

Lesson learned: The most efficient way to influence a model’s decisions is through supervised trial and error, coupled with guidance from the AI experts who understand how the model works, and who can guide the learning and tuning process toward more accurate results.

Expect longer cycle times when building the product

Traditional software products compile quickly—even large enterprise software products complete a build in no more than a few hours.

AI models are different. Training a neural net first involves gathering data samples, and then cleaning and tagging the data, which can take days, depending on the quantity and quality of the data you need. Only then can you start the training process, which can take several days for each training cycle. In our case, it takes about three days on a machine with a powerful processor to train just one model.
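The stages above can be sketched as an explicit pipeline, which makes the long feedback loop visible. The functions and data here are placeholders; in practice, each stage can take days rather than milliseconds.

```python
# Each stage of the training cycle as a separate step. In reality, gathering
# and tagging involve human effort, and training runs for days per cycle.
def gather(samples):
    """Collect raw data samples (slow: depends on sources and volume)."""
    return samples

def clean(samples):
    """Drop malformed or unusable entries."""
    return [s for s in samples if s is not None]

def tag(samples):
    """Attach labels (normally human annotation work)."""
    return [(s, f"label-{i}") for i, s in enumerate(samples)]

def train(dataset):
    """Train the model (days per cycle on dedicated hardware)."""
    return {"model": "trained", "examples": len(dataset)}

model = train(tag(clean(gather(["a", None, "b"]))))
print(model)  # {'model': 'trained', 'examples': 2}
```

Seeing the cycle laid out this way makes it clear why the model needs its own pipeline: a single iteration costs days, not the minutes a product build takes.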

This is a major motivation for splitting the AI model out from the rest of the product, reducing dependencies and making it a separate and independent pipeline, as discussed above.

Retrain your AI models with customer data

It’s impossible to achieve 100% accuracy and zero defects in an autonomous, continuously improving, self-learning system. We train our AI models extensively in our own labs, but when the model is exposed to the customers’ environment, it has to make decisions about something it may not have seen before. The most effective way to tune the system so that it makes the best decisions is to augment the model's training data with the customer's data.

We work with our customers to improve the accuracy of our systems in their environments by obtaining their approval to use their data to retrain and optimize our models. This helps the model make better decisions, and creates better outcomes for our customers.
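A hedged sketch of that augmentation step, assuming a simple labeled dataset and a per-customer consent flag; the names and shapes are illustrative, not our actual data model. The key point is that customer data enters the training set only after explicit approval.

```python
# Lab-collected training data (placeholder samples).
lab_data = [("sample1", "button"), ("sample2", "text_field")]

customers = [
    {"name": "acme", "consented": True,
     "data": [("acme1", "checkbox")]},
    {"name": "globex", "consented": False,   # never used without approval
     "data": [("glx1", "button")]},
]

# Augment the lab training set with data from consenting customers only.
augmented = list(lab_data)
for customer in customers:
    if customer["consented"]:
        augmented.extend(customer["data"])

print(len(augmented))  # 3: lab data plus one consenting customer's samples
```

Retraining on the augmented set exposes the model to the object styles and layouts it will actually see in that customer's environment.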

Apply these strategies to your own AI development

The software industry is undergoing an AI revolution, and vendors are adding new AI capabilities to their products every day. My organization has made significant adjustments to the way we develop and deliver software, and we have restructured our teams to include experts in AI—something you'll need to do as well. We also work more closely than ever with our customers to learn from their environments and improve their outcomes.

AI development might be challenging, but it’s worthwhile. If you’re joining the AI revolution, make sure you apply these strategies to your software development process to get the most out of it for your team, your product, and your users.
