Tricentis Applies Visual AI to Improve Application Testing


Dave Colwell, vice president of AI and machine learning at Tricentis, said Tosca 14 automatically recognizes visual user interface elements and controls across any form factor to determine the most appropriate tests to run as an application is being developed. That approach enables testing to be shifted further left in the development process as successive iterations of an application are produced, he said.

While AI has been incorporated into other testing tools, Colwell said Tricentis is applying the same types of machine learning algorithms that enable self-driving cars to detect objects to the testing of controls included on an application screen.
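At its simplest, locating a known control inside a screenshot can be framed as an object-detection or template-matching problem. The sketch below is a hypothetical illustration of that idea using naive template matching over NumPy arrays; it is not Tricentis's actual method, and the function and variable names are assumptions for the example.

```python
import numpy as np

def find_control(screenshot: np.ndarray, template: np.ndarray):
    """Return the (row, col) position where `template` best matches
    `screenshot`, by minimizing the sum of squared differences
    (naive template matching over grayscale arrays)."""
    sh, sw = screenshot.shape
    th, tw = template.shape
    best_score, best_pos = None, None
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            window = screenshot[r:r + th, c:c + tw]
            score = np.sum((window - template) ** 2)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Synthetic 20x20 grayscale "screenshot" with a 4x4 bright
# "button" placed at row 5, column 9.
screen = np.zeros((20, 20))
button = np.ones((4, 4))
screen[5:9, 9:13] = button

print(find_control(screen, button))  # -> (5, 9)
```

Production visual AI relies on learned models that generalize across fonts, themes and resolutions rather than exact pixel matching, but the underlying task of locating a control in a rendered screen is the same.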

Colwell said it’s now only a matter of time before most developers routinely expect testing capabilities based on visual AI technologies to be included in testing tools. Rather than repeatedly encountering the same design issues, developers will be able to use visual AI to much more quickly craft applications that end users will actually employ.

Given the large number of applications that many end users haven’t embraced because of user experience issues, there’s clearly a base of developers who could use some advice from AI tools that have been trained to identify common problems affecting previous generations of applications. Tricentis is now committing not only to providing testing tools but also to curating the content required to train AI models.

It may take a while for developers to fully trust those recommendations, but the number of potential issues that could be averted earlier in the application development life cycle has significant implications for the productivity of DevOps teams. In theory, at least, the number of applications successfully moving through the build process should increase. The amount of testing required after an application is created should also decline.

Less clear is what effect shifting the testing of an application’s visual elements might have on dedicated application testing teams. Arguably, IT organizations should be able to devote more resources to building applications versus testing them.

Whatever the outcome, AI is not going away. Providers of testing platforms are now effectively engaged in an algorithms arms race. In fact, it’s becoming increasingly apparent AI will be applied with greater frequency across the entire application development and deployment life cycle. It may take a while for IT organizations to adjust to that new reality, but the fact remains that organizations that don’t incorporate AI into their DevOps processes will be at a distinct disadvantage compared to those that do. Members of DevOps teams would be well-advised to start assessing what manual processes today are most likely to be automated by algorithms tomorrow.

In the meantime, the overall quality of the user experience should also soon improve—assuming, of course, the AI models being created to test applications are indeed being trained properly.