The digital tester

The challenge of delivering high-quality code is as old as programming itself, and as applications evolve to meet complex business needs, consume and process far more data than ever before, and leverage all the advances that new and more powerful technology has to offer, the challenge has only grown. Traditionally, validation is left in the hands of manual testers who work through screen navigation and/or backend database queries to confirm the code provides the expected behaviour. The past few years, however, have seen an accelerated shift from this traditional way of testing to automation, which has itself evolved well beyond record and playback into a complex, multi-faceted solution.

A bit of history

Automated testing started with unit testing and simple User Interface (UI) record-and-playback tools that mimicked the user's clicks and keystrokes, with limited flexibility to test functionality. Then Selenium and WebDriver frameworks revolutionized the discipline by enabling write-once, run-anywhere test scripts.
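
As a rough illustration of that write-once, run-anywhere idea, here is a minimal Selenium sketch: the same test body runs unchanged against any browser driver. The URL and element locators are placeholders, not taken from any real application.

```python
# One test body, reusable across every browser driver Selenium supports.
# The URL and locators below are illustrative placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By


def run_login_smoke_test(driver):
    """Write once: the same steps run on any browser."""
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("test_user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title


# Run anywhere: Chrome, Firefox, and so on, with no script changes.
for make_driver in (webdriver.Chrome, webdriver.Firefox):
    driver = make_driver()
    try:
        run_login_smoke_test(driver)
    finally:
        driver.quit()
```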

However, as technology innovation pressed forward, automation shortcomings surfaced: legacy patterns added complexity to the code, application changes broke frameworks in ways that could not be overcome, and the effectiveness of automation eroded under ever-shorter release cycles.

Enter Robotic Process Automation (RPA) combined with Artificial Intelligence (AI). These two complementary technologies are the new disruptors taking software quality to a whole new level, enabling more effective testing of the most complex, multi-system scenarios, along with the ability to analyze test results and generate recommendations for where manual effort should be focused. Automation is no longer limited to testing the UI or databases; it now covers a variety of areas, including interfaces and output validation, and even extends beyond testing into the development space.

Usage

In the early years of automation, the goal was to automate the regression test suite based on manual test cases. That is no longer the case. Automation capabilities have expanded to include test data creation, output validation, test data management, and quality risk analysis.

RPA tools have evolved from requiring a developer with deep programming skills to encode expected behaviours to low/no-code solutions that a semi-technical business analyst or tester can manage. The new solutions enable a faster “idea to implementation” cycle with fewer delays and less rework, since the person who will use the tool is the person programming it.

Test Data Creation

One of the biggest challenges in testing is the creation of “real-world” test data: the volume, variety, and combinations required to cover the multitude of application scenarios. Couple this with the now-standard need to test with data from external systems, and the challenge increases manyfold. For example, claims data may require a make, model, and matching VIN for a customer with three accidents of a specific type drawn from a sample DMV record, each at a unique accident location. RPA and AI can help address these challenges. RPA can automate the data creation process by looking up data from multiple external systems, adding a randomization step, and placing the results into a standard set of records. AI algorithms can then consolidate and adjust the data to provide a variety that truly exercises the system, instead of replaying the same “top 40” records from years past, which can prevent the identification of defects.
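
As a minimal sketch of what such RPA-driven data creation might look like, the following generates randomized claims records along the lines of the example above. The field names, value lists, and output file are illustrative assumptions, with hard-coded lists standing in for the external-system lookups.

```python
# Sketch of automated test data creation: look up reference data (stand-in
# lists here), add a randomization step, and emit a standard record set.
import csv
import random

# Stand-ins for lookups against external systems (e.g., a DMV extract).
VEHICLES = [("Toyota", "Camry"), ("Ford", "F-150"), ("Honda", "Civic")]
ACCIDENT_TYPES = ["rear-end", "side-impact", "single-vehicle"]


def random_vin():
    # Real VINs are 17 characters and exclude I, O, and Q.
    alphabet = "ABCDEFGHJKLMNPRSTUVWXYZ0123456789"
    return "".join(random.choice(alphabet) for _ in range(17))


def build_claim_record(customer_id):
    make, model = random.choice(VEHICLES)
    return {
        "customer_id": customer_id,
        "make": make,
        "model": model,
        "vin": random_vin(),
        # Three prior accidents of a specific type, per the scenario above.
        "prior_accidents": 3,
        "accident_type": random.choice(ACCIDENT_TYPES),
        "accident_location": f"{random.randint(100, 9999)} Main St",
    }


with open("claims_test_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=build_claim_record(0).keys())
    writer.writeheader()
    for i in range(100):
        writer.writerow(build_claim_record(i))
```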

Testing efforts

At the core of automation frameworks remains the goal of replicating testing efforts: entering data and validating elements on the screens. But as automation capabilities have grown, so has the reach of those testing efforts. Automation can now perform detailed, intricate testing of interfaces between systems by replicating the responses from the other system, achieving more comprehensive validation across the UI, interfaces, and the database.
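
To make the idea concrete, here is a small sketch of testing an interface by replicating the other system's response, using Python's unittest.mock in place of a live downstream service. The rate_quote function and its endpoint are hypothetical examples, not a real API.

```python
# Replicate a downstream system's response so the interface can be tested
# in isolation; the rating service and endpoint shape are hypothetical.
import json
import urllib.request
from unittest.mock import patch


def rate_quote(policy_id):
    """Calls a (hypothetical) external rating service over HTTP."""
    with urllib.request.urlopen(f"https://rating.internal/quote/{policy_id}") as resp:
        return json.load(resp)["premium"]


class FakeResponse:
    """Stands in for the other system's HTTP response."""

    def __enter__(self):
        return self

    def __exit__(self, *args):
        return False

    def read(self):
        return b'{"premium": 1250.00}'


def test_rate_quote_uses_external_premium():
    # Replace the live call with the replicated response.
    with patch("urllib.request.urlopen", return_value=FakeResponse()):
        assert rate_quote("POL-123") == 1250.00


test_rate_quote_uses_external_premium()
```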

The days of fragile test scripts requiring constant maintenance have been replaced by frameworks and componentized scripting. Instead of one monolithic end-to-end script, scenarios are defined as components that test each step's variability. For example, a screen may be a component that uses framework-based functions to perform various validations, such as a lookup validation. Instead of hard-coding script data, the script draws all input parameters from an external data source such as a spreadsheet or database. Componentizing the script allows the automation to be adjusted easily as the screen is updated or changed.
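
A brief sketch of this componentized, data-driven style might look like the following, where each screen is a component function and all inputs come from an external CSV. The driver.fill/driver.read wrapper, screen names, and field names are hypothetical.

```python
# Componentized, data-driven scripting: one small function per screen,
# composed into scenarios, with inputs from an external data source.
import csv


def applicant_screen(driver, row):
    """Component for the applicant screen; reusable across scenarios."""
    driver.fill("first_name", row["first_name"])  # hypothetical wrapper API
    driver.fill("last_name", row["last_name"])


def vehicle_screen(driver, row):
    """Component for the vehicle screen, including a lookup validation."""
    driver.fill("vin", row["vin"])
    assert driver.read("decoded_model") == row["expected_model"]


SCENARIO = [applicant_screen, vehicle_screen]  # compose components per scenario


def run_scenario(driver, data_file="policy_inputs.csv"):
    # All input parameters come from the external file, not the script.
    with open(data_file, newline="") as f:
        for row in csv.DictReader(f):
            for component in SCENARIO:
                component(driver, row)
```

When a screen changes, only its component is touched; the scenario list and data file stay intact.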

Silo Systems

Every system receives data from other systems earlier in the business process chain and/or sends data downstream for other systems to review and export. Testing across these systems is generally limited due to the complexity of moving data between test environments and limited knowledge of the other systems' capabilities. Through RPA, it is possible to script the process to transfer data, kick off batch jobs, and test external systems from the same set of scripts that tested the newly developed system. RPA is not limited to screen testing; it can also perform lookups on report output to verify that the data entered on system 1 as “A” is displayed on system 2's report as “1”.
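
One possible shape for such a cross-system script is sketched below: push the data, kick off the batch job, then confirm the translated value on system 2's report. The batch command, report path, and code mapping are all placeholders for the real integration points.

```python
# Cross-system verification sketch: trigger the transfer batch job, then
# confirm the code entered upstream appears translated on the downstream
# report. Commands and paths are hypothetical placeholders.
import subprocess
import time

CODE_MAP = {"A": "1", "B": "2"}  # expected system-1 -> system-2 translation


def transfer_and_verify(record_id, entered_code):
    # Kick off the (hypothetical) extract/load batch job between systems.
    subprocess.run(["./run_batch.sh", "--record", record_id], check=True)
    time.sleep(30)  # crude wait for the downstream job; polling is better

    # Look up system 2's report output for the translated code.
    with open(f"/reports/system2/{record_id}.txt") as report:
        expected = CODE_MAP[entered_code]
        assert expected in report.read(), (
            f"record {record_id}: expected '{expected}' on system 2's report"
        )
```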

Output validation

RPA combined with AI can analyze unstructured data, such as social media postings, documents, and even speech patterns, to determine intent. One of the most tedious and often under-tested portions of core policy/billing/claims systems is output validation. A large amount of data and many variables must be tested across declarations pages, claim letters, and billing invoices. An RPA framework can read the output pages of text and images and confirm that the output matches the expected results. Once the base set of test cases has been built, thousands of tests can be generated to confirm every variation of a form, which provides a significant lift, especially on countrywide forms with a large number of text variables.
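
As one way this could be implemented, the sketch below uses OCR (pytesseract, one common choice) to pull text from a generated document image and confirm the expected declaration values appear. The file name and expected values are illustrative.

```python
# Output validation sketch: OCR the generated document image and check
# that every expected value appears. Values and file name are examples.
from PIL import Image
import pytesseract

EXPECTED = {
    "policy_number": "POL-2021-00042",
    "premium": "$1,250.00",
    "effective_date": "01/01/2021",
}


def validate_declarations_page(image_path):
    # Extract the page text from the rendered output image.
    text = pytesseract.image_to_string(Image.open(image_path))
    missing = [field for field, value in EXPECTED.items() if value not in text]
    assert not missing, f"values not found on output: {missing}"


validate_declarations_page("declarations_page.png")
```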

Spot Bots

With low-code capabilities, testers can create spot bots to bridge their manual testing efforts. For example, suppose a tester is working on a new policy administration system, and the first five application pages have already been proven to meet requirements through prior manual testing. The tester could create a spot bot that enters the data for those pages, enabling the tester to test the balance of the application process. The spot bot is not a formalized testing effort but rather an aid that lets the tester complete the testing effort more quickly.
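
A spot bot of this kind could be as simple as the sketch below, which replays data for the five already-verified pages and then hands control back to the tester. The locators and data file are assumptions for illustration.

```python
# Spot bot sketch: a throwaway helper (not a formal test) that fast-forwards
# through the verified first pages so manual testing can start at page six.
import csv
from selenium.webdriver.common.by import By


def fast_forward(driver, data_file="page_data.csv"):
    with open(data_file, newline="") as f:
        pages = list(csv.DictReader(f))[:5]  # only the five verified pages
    for page in pages:
        for field, value in page.items():
            driver.find_element(By.NAME, field).send_keys(value)
        driver.find_element(By.ID, "next").click()
    # Control returns to the tester here for manual testing of page 6+.
```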

Edge cases

Automation testing augmented by RPA and AI has proven to be a powerful addition to the testing competency. While most efforts to date have focused on improving the speed and accuracy of manual testing, many new and creative applications of automation are just over the horizon.

Exploratory testing

Conventional test cases focus on the known and expected behaviours of the system for a given scenario. To truly test the system and ensure production quality, exploratory testing must also be performed: using realistic real-world behaviours, taking actions outside of reasonable expectations, and trying completely random entries. Usually this is done towards the end of a testing cycle by the test team for a controlled duration, since the team quickly falls into repeating the same behaviours. With AI, the paradigm has shifted: through user screen tracking and heat-map development, AI can learn what real-world users are doing on the system and which areas they focus on. Based on this information, AI can direct where to focus, and the types of testing and the errors seen by end-users can be turned into test cases for exploratory testing. Through this type of testing, the implementation team can quickly find areas of weakness and process breakdowns that could cause downtime or customer dissatisfaction.
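
A toy sketch of heat-map-guided exploration follows: action selection is weighted toward the screens real users hit most, with a small share reserved for purely random probing. The usage weights are hypothetical stand-ins for AI-derived screen-tracking data.

```python
# Heat-map-guided exploratory testing sketch: mostly follow observed user
# behaviour, occasionally probe uniformly at random. Weights are examples.
import random

# Hypothetical heat map: relative user activity per screen area.
HEAT_MAP = {"quote_entry": 55, "payment": 25, "document_upload": 15, "help": 5}


def pick_target(explore_ratio=0.2):
    """Weight exploration by real usage; keep some truly random entries."""
    if random.random() < explore_ratio:
        return random.choice(list(HEAT_MAP))
    screens, weights = zip(*HEAT_MAP.items())
    return random.choices(screens, weights=weights)[0]


for _ in range(10):
    print("exploring:", pick_target())
```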

System Continuity

Misspellings, grammar issues, and other UI problems cast the development team's hard work overcoming incredibly complex technical issues in a bad light. A user will often say, “If they can't spell the word correctly, how can I trust the rest of the system works?” AI augmented with Natural Language Processing (NLP) can help identify possible UI issues. The AI can scan through all of the text, error messages, and other communications to ensure the same tone, wording, and style are used throughout the application. It can also run spell checking, verify proper word usage, and check consistency of messaging throughout each form.
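
A first step toward such a pass could be a simple terminology-consistency check like the sketch below; a production version would layer a spell checker or language model on top. The style rules and UI strings are illustrative assumptions.

```python
# Consistency-pass sketch over UI strings: flag variants that break the
# application's preferred terminology. Rules and strings are examples.
import re

# Preferred term -> variants that should be flagged.
STYLE_RULES = {"sign in": ["log in", "login"], "cancelled": ["canceled"]}

UI_STRINGS = [
    "Please log in to continue.",
    "Sign in failed. Try again.",
    "Your policy was canceled.",
]


def check_consistency(strings):
    issues = []
    for s in strings:
        for preferred, variants in STYLE_RULES.items():
            for v in variants:
                if re.search(rf"\b{re.escape(v)}\b", s, re.IGNORECASE):
                    issues.append(f"'{s}': use '{preferred}' instead of '{v}'")
    return issues


for issue in check_consistency(UI_STRINGS):
    print(issue)
```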

AI developed Test Cases

Using AI's ability to train on data and recognize patterns, it is possible to build test cases for similar system functionality in a completely automated fashion. When rolling out a policy administration system, states are generally grouped into like categories: the Midwest is one group, NY/NJ/PA another, and so on. AI can learn from the testing team's manual and automation efforts for one of these states, such as KS from the Midwest grouping. Applying this learning to another state, such as Nebraska, becomes straightforward, as each state has only minor variations, if any, on the screens and output apart from rating. If the AI finds a difference in system behaviour, it can flag it for a tester or BA to review and direct how to test the difference.
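
One simplified way to frame this transfer is sketched below: compare each screen's field inventory for the new state against the already-tested state, reuse tests where they match, and flag differences for review. The screen definitions are hypothetical.

```python
# State-to-state test transfer sketch: reuse tests where screens match the
# already-tested state; flag variations for a tester/BA. Data is illustrative.
TESTED_KS = {"quote": {"name", "vin", "garaging_zip"},
             "rating": {"territory", "tier"}}
NEW_NE = {"quote": {"name", "vin", "garaging_zip"},
          "rating": {"territory", "tier", "farm_use"}}


def transfer_tests(tested, new_state):
    reuse, review = [], []
    for screen, fields in new_state.items():
        if fields == tested.get(screen):
            reuse.append(screen)  # behaviour matches; reuse tests as-is
        else:
            diff = fields ^ tested.get(screen, set())
            review.append((screen, diff))  # flag the variation for review
    return reuse, review


reuse, review = transfer_tests(TESTED_KS, NEW_NE)
print("reuse tests for:", reuse)
print("needs tester/BA review:", review)
```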

Self-Healing Code

With the growth of AI, automation frameworks have begun the move to self-healing (auto-healing) test scripts. When there are changes in screens or system behaviours, the tester works with the code to make the necessary updates. As more updates are made, AI can identify recurring patterns, begin to recognize the updates, and start making suggestions based on prior experience. This enables testers to focus their efforts on new features and functions rather than on maintenance of the existing automation.
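
In locator terms, self-healing often looks something like the following sketch: try the recorded selector, fall back to alternates when the screen has changed, and log the successful fallback as a suggested script update. A real implementation would rank fallbacks using learned patterns; the candidate list here is an assumption.

```python
# Self-healing locator sketch: primary selector first, then fallbacks,
# recording any successful fallback as a proposed script update.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

suggestions = []  # accumulated "update your script" proposals


def find_with_healing(driver, name, locators):
    for i, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if i > 0:  # a fallback worked: propose updating the script
                suggestions.append(
                    f"{name}: replace {locators[0]} with {(by, value)}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no locator matched for {name}")


# Usage (hypothetical locators): primary first, then healing candidates.
# submit = find_with_healing(driver, "submit_button", [
#     (By.ID, "submit"),
#     (By.NAME, "submit"),
#     (By.XPATH, "//button[text()='Submit']"),
# ])
```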

Augmented Quality Risk Analytics

Senior executives regularly ask how many test cases we are running, and the higher the number of test cases, the more comfortable they feel that there has been sufficient testing. Through techniques like pairwise testing, more coverage is achieved with fewer, more focused test cases built on logical, proven variable combinations rather than perceived coverage. Now, AI can be used to analyze risk areas in the codebase, using a blend of test coverage results from different kinds of tests (unit, functional, performance) against the changes to the code. Redundant tests can also be called out and eliminated from the test suite, allowing for shorter test executions.
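
As an illustration of the pairwise technique mentioned above, the sketch below greedily selects combinations until every two-parameter value pair is covered, yielding far fewer cases than the full Cartesian product. The parameters are illustrative.

```python
# Greedy pairwise generator: keep picking the candidate combination that
# covers the most still-needed value pairs until every pair of parameter
# values appears in at least one test case. Parameter values are examples.
from itertools import combinations, product

PARAMS = {
    "state": ["KS", "NE", "NY"],
    "payment": ["monthly", "annual"],
    "channel": ["web", "agent"],
}


def pairs_of(case):
    """All two-parameter value pairs exercised by one test case."""
    return {((a, case[a]), (b, case[b]))
            for a, b in combinations(sorted(case), 2)}


def pairwise_cases(params):
    names = list(params)
    candidates = [dict(zip(names, vals)) for vals in product(*params.values())]
    needed = set().union(*(pairs_of(c) for c in candidates))  # pairs to cover
    chosen = []
    while needed:
        best = max(candidates, key=lambda c: len(pairs_of(c) & needed))
        chosen.append(best)
        needed -= pairs_of(best)
    return chosen


cases = pairwise_cases(PARAMS)
print(f"{len(cases)} pairwise cases instead of "
      f"{len(list(product(*PARAMS.values())))} exhaustive ones")
```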

Defect classification can also be improved by AI learning from and analyzing large numbers of test runs to quickly recognize whether an error was caused by the application, the environment, or outdated automation. Defects are analyzed by severity based on usage of the system and on the impact drawn from the behaviours AI learned from the users.
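
A toy stand-in for that kind of triage is sketched below, using keyword rules where a real system would learn the patterns from thousands of runs. The rules and messages are illustrative assumptions.

```python
# Defect triage sketch: classify a failure as an application, environment,
# or automation issue from its error signature. Rules are illustrative.
TRIAGE_RULES = {
    "environment": ["connection refused", "timeout", "dns"],
    "automation": ["nosuchelement", "stale element", "locator"],
}


def triage(error_message):
    msg = error_message.lower()
    for cause, signatures in TRIAGE_RULES.items():
        if any(s in msg for s in signatures):
            return cause
    return "application"  # default: likely a genuine product defect


print(triage("NoSuchElementException: locator #submit not found"))  # automation
print(triage("HTTP 500 on premium calculation"))                    # application
```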

Production Validation

Applications continue to get more sophisticated, using data from numerous systems via microservices and APIs. The test becomes the ability to monitor production at a micro level and provide meaningful alerts to the production support teams so they can quickly resolve issues that could affect production uptime. Through RPA, small controlled tests and monitoring can be performed that verify service levels without creating production records.
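
Such a production probe might be as small as the following sketch: exercise a read-only endpoint, check the service level, and alert support without creating production records. The endpoint URL and SLA threshold are placeholders.

```python
# Synthetic production probe sketch: read-only check against a (hypothetical)
# health endpoint, alerting on breaches without touching production data.
import time
import urllib.request

SLA_SECONDS = 2.0
HEALTH_URL = "https://policy.internal/api/health"  # placeholder endpoint


def probe():
    start = time.monotonic()
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=SLA_SECONDS) as resp:
            ok = resp.status == 200
    except Exception as exc:
        return f"ALERT: service unreachable ({exc})"
    elapsed = time.monotonic() - start
    if not ok or elapsed > SLA_SECONDS:
        return f"ALERT: degraded service ({elapsed:.2f}s)"
    return f"OK ({elapsed:.2f}s)"


print(probe())
```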

Harnessing Automation Testing

Testing using manual test scripts has quickly become obsolete, replaced by augmenting technology that enables faster, more efficient, and higher-quality output. Testing can now run around the clock, with a single team performing manual testing during the day and automated testing overnight. Soon, RPA and AI will enable testers to be even more laser-focused, identifying deviations from expected system behaviour in minutes instead of days. How will you harness automation testing to improve your success?
