Test Automation Environment Management

Test automation can’t succeed unless the right environments are available and managed regularly. Our study indicates that 20% of the total bugs found in a release are categorized as environment issues, which in turn impacts the test cycle and increases project cost. Test automation environment management requires the upkeep of correct hardware, the right OS with patches, and dependent/related software at the correct versions. Test environment management falls into two categories based on the type of activity being performed, with various activities in each category.

  1.  Development machines for developing automation scripts. Automation scripts are developed and maintained here by automation engineers.
  2.  Execution environment for automation script execution. The automation test suite is identified and deployed in the execution environment by automation testers.

Development Machines: These machines are used by individual automation engineers for building and maintaining automation scripts. The following environment activities are performed on these machines.

  1. Initial hardware provisioning and future upgrades to meet automation requirements
  2. Operating system and regular patch upgrades.
  3. Tool/framework IDE installation and upgrades to newer versions as required
  4. Automation software installation and upgrades to newer versions as required

In the case of open-source software, development machines are used for developing and maintaining the test automation framework as well. Depending on the automation tool/framework, the relevant software needs to be deployed. For an open-source framework built with Python+Selenium, the latest Python version and Selenium drivers need to be deployed along with an IDE for building the scripts. In the case of a commercial tool, the relevant tool IDE is installed (Tosca Commander, UFT IDE, Worksoft Certify, etc.). The software versions need to be updated frequently. The required hardware configuration and OS version are identified up front to support building and maintaining the scripts. Any upgrade to a new version requires a re-look at the hardware configuration as well. The OS must be kept current with patches and updated to the latest version supported by the tool/framework software. Development machines require managing hardware, OS, and dependent/related software, and the automation engineer is expected to have the right privileges to manage his/her environment.
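For a Python+Selenium setup like the one described above, a small self-check script can catch version drift on a development machine before it causes failed runs. This is a minimal sketch; the minimum versions used here are illustrative assumptions, not project requirements.

```python
import sys
from importlib.metadata import PackageNotFoundError, version

# Hypothetical minimum versions; adjust to your project's actual requirements.
MIN_PYTHON = (3, 9)
REQUIRED_PACKAGES = {"selenium": (4, 0)}

def check_environment():
    """Return a list of problems found on this development machine."""
    problems = []
    if sys.version_info[:2] < MIN_PYTHON:
        problems.append(f"Python {MIN_PYTHON[0]}.{MIN_PYTHON[1]}+ required")
    for pkg, minimum in REQUIRED_PACKAGES.items():
        try:
            installed = tuple(int(p) for p in version(pkg).split(".")[:2])
            if installed < minimum:
                problems.append(f"{pkg} {minimum[0]}.{minimum[1]}+ required")
        except PackageNotFoundError:
            problems.append(f"{pkg} is not installed")
    return problems
```

Running such a check at the start of a scripting session gives each engineer a quick confirmation that the machine still matches the agreed environment baseline.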

Execution Environment: The execution environment is used for executing the automation scripts. This environment is like a production system for the testing team. The test team has read-only access to ensure that the environment is not compromised by installing additional software. The execution environment is normally owned by the infra/environment team in the organization. The following environment activities are performed on execution machines.

  1. Initial hardware provisioning and future upgrades to meet new requirements
  2. Operating system and patch upgrades
  3. Runtime software installation (no IDE to be installed). For example, in the case of .Net, only the .Net runtime is installed.
  4. Optional server and database installation, and regular backups, in the case of commercial tools.

The hardware capacity and configuration are identified based on the number of scripts to be executed per day. The scripts can be executed in parallel and/or in a distributed fashion to achieve cycle time reduction. On the software side, in the case of open source using Python+Selenium, a Selenium Grid needs to be set up and the latest Selenium drivers updated regularly. License management is not required, but the right plugins are needed for integrating with software configuration management, the CI/CD pipeline, test management, the test data management platform, third-party cloud device providers, etc. Commercial tools have dependencies as well, but these are often bundled with the executable. For example, Tosca requires the .Net runtime environment, which is installed before deploying the software. The Tosca server, database, and DEX are set up and maintained regularly as part of environment management. Optionally, a license server is also deployed and managed for commercial tools.
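Sizing capacity from the number of scripts per day can be sketched as a back-of-the-envelope calculation. Every number below is an illustrative assumption; plug in your own measured script durations, execution window, and per-machine parallelism.

```python
import math

# Rough capacity sizing for execution hardware (all inputs are assumptions).
def machines_needed(scripts_per_day, avg_minutes_per_script,
                    window_hours=10, parallel_per_machine=4):
    """Machines required to finish the daily suite inside the execution window."""
    total_minutes = scripts_per_day * avg_minutes_per_script
    minutes_per_machine = window_hours * 60 * parallel_per_machine
    return math.ceil(total_minutes / minutes_per_machine)

# Example: 1200 scripts of ~5 minutes in a 10-hour nightly window,
# 4 parallel threads per machine -> 3 machines.
```

Estimates like this are a starting point for provisioning; actual sizing should be refined with measured run times and retry overhead.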

Test automation environments will not meet enterprise needs unless integrated with enterprise platforms such as CI/CD, software configuration management, test management, and test data management. The right plugins and adapters must be installed on the development and execution environments for integrating with these platforms.

Cycle Time Reduction with Test Execution Environments

Test automation is often started in projects to reduce the project schedule, increase quality, and give faster feedback to developers about their code. Cycle time reduction is the main reason companies go for test automation. Most companies start test automation but fail to achieve cycle time reduction in the project schedule for various reasons, and then feel compelled to fall back to manual testing. The test execution environment plays an important role in achieving cycle time reduction. This blog takes you through the important requirements, issues, and solutions around the test execution environment, and finally through the different types of execution environments available for different types of applications.

Test automation should have two types of environments
1. Development environment – for building and maintaining scripts
2. Execution environment – for executing scripts on remote machines. Most organizations are missing this piece in their overall test automation strategy.

The following are the reasons for not achieving cycle time reduction in spite of test automation:
1. Scripts can be executed only in attended mode, during office hours
2. Absence of a proper test automation execution strategy

The execution environment plays a crucial role in reducing the project schedule, as scripts are executed round the clock on remote machines. Scripts can be executed on remote machines provided the criteria below are met:

1. Scripts run unattended
2. Scripts are atomic in nature
3. Test data is provisioned on demand during execution
4. The framework and scripts are not tied to any machine; everything is driven through configuration parameters
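The fourth criterion can be made concrete with a small configuration loader. This is a sketch under assumptions: the file name, keys, and the `TA_` environment-variable prefix are all hypothetical, chosen only to show how defaults, a config file, and environment overrides can keep scripts machine-agnostic.

```python
import json
import os

# Illustrative defaults; real projects would define their own keys.
DEFAULTS = {"base_url": "https://app.example.test", "browser": "chrome", "timeout_s": "30"}

def load_config(path="run_config.json"):
    """Defaults < config file < environment variables.

    Because every machine-specific value comes from configuration,
    the same script can run on any development or execution machine.
    """
    cfg = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            cfg.update(json.load(f))
    for key in list(cfg):
        override = os.environ.get("TA_" + key.upper())
        if override is not None:
            cfg[key] = override
    return cfg
```

With this layering, a CI pipeline can retarget the whole suite at a different environment by setting a few variables, without editing any script.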

There are three types of execution environments available, depending on the application technology. It is suggested to have the execution environment at the organization or department level to get the benefit of a shared environment and keep the environment cost low. This also supports organization sustainability goals.

1. Distributed execution – Scripts for all types of applications (web, API, desktop, etc.) can be executed. In this setup, one script is executed at a time on one machine, sequentially, with the automation suite distributed across multiple machines. Thick client application suites can be executed in a distributed environment only. Desktop applications such as WPF, Java Swing, and mainframe applications are examples of thick clients that can be executed in a distributed setup. The application has to be installed on the remote machine before execution starts; the installation strategy can be just-in-time or part of the deployment procedure. Spinning execution machines up and down on demand requires meticulous strategy and planning.
2. Parallel execution – Scripts are executed in parallel with multiple threads on one machine, with execution spanning multiple remote machines. This execution environment is best suited where application installation is not required on the remote machine. You can also spin the environment up and down using containers on demand. Web and API executions can be performed in this setup. This drastically reduces the execution time of the automation suite.
3. Mixed execution – Scripts for different types of applications such as web, mobile, and desktop can be executed on this type of enterprise execution environment. A few machines have to be dedicated to thick client applications for distributed execution, but these machines can be used for parallel execution when they are idle.
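The parallel model above can be sketched in a few lines. This is a toy illustration, not a framework: `run_script` is a stand-in for a real runner driving a browser or API client, and the script names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a real test runner; a real implementation would execute
# the automated test (browser/API) and return its outcome.
def run_script(name):
    return name, "PASS"

def run_suite(script_names, workers=4):
    """Fan independent, atomic scripts out over worker threads and
    collect a per-script result map."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_script, script_names))
```

This only works because the scripts are atomic and machine-agnostic (the criteria listed earlier); scripts with shared state or ordering dependencies cannot be parallelized this way.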

You can also consider third-party cloud device providers for test execution environments. This comes with a license cost, and bear in mind it works for non-thick-client applications only. This arrangement allows you to concentrate on the core work of building and maintaining frameworks and scripts.

A good test automation strategy, combined with automation best practices for building the framework and scripts, will help achieve cycle time reduction with the test execution environment.

Please feel free to provide your comments to enrich the blog.

Building SDET Organization

I have come across organizations that wanted to start their test automation journey to reap the benefits of schedule reduction and quality improvement, but are struggling with how and where to start. There are two options for organizations to choose from before they start the journey.

Organizations can implement their test automation in two ways – scriptless/codeless, or script/code based.
For scriptless – Organizations use scriptless frameworks and tools for automating their applications. Tricentis Tosca, AccelQ, and Worksoft Certify are a few scriptless automation tools. Some organizations use open-source or commercial tools to build scriptless frameworks that take care of their automation needs.
For code based – Organizations reskill their workforce to learn programming, framework building, and root cause analysis (RCA) for problem identification with the selected technology and programming language. You can call this workforce transformation towards SDETs.

This blog concentrates on the approaches to be followed for building a strong SDET organization, and should serve as a roadmap for doing so.

1. Root Cause Analysis (RCA) – One of the basic skills needed for an SDET workforce is RCA. The RCA of a bug/issue has now moved from the dev team to the testing team. Earlier, the test team would log the bug with a description and the dev team would do the thorough analysis to identify its root cause. With SDET, the tester is expected to provide the RCA of the bug while logging it. Hence, strongly technical resources are needed for SDET organizations.

2. Integrated Development Environment (IDE) – The SDET organization should use the same IDE as the other cross-functional teams, especially the development team. This is one of the unwritten requirements. If the project uses Microsoft technology, Visual Studio is used; for Java, it can be IntelliJ or Eclipse. This helps the entire team speak the same language during delivery – bug reporting, RCA, results, etc.

3. Automation tool selection – The organization has to be careful to select the right tool for its needs, aligned to SDET and overall organization goals. Selecting a tool that has its own IDE and uses a different technology for automation makes communication difficult. Selecting MF UFT One with VBScript when the system under test uses Java will not help the resource become an SDET; UFT Developer/LeanFT with Java is the right choice. Another example: using Java with Selenium for an ASP.Net application is a complete mismatch; one should instead use Selenium with C#/VB.Net in the Visual Studio IDE.

4. Technical testing – Currently, organizational testing is based on business process and requirement validation. When organizations adopt test automation for validation, it is important to also consider technical testing of the application. For a web application, you should consider local storage/cookies, URL encoding, I18N, L10N, authentication, etc. For an API, understanding its specification is important. SDETs should work closely with the design team to understand the high/low-level technical implementation details and bring them into scope for technical testing.

5. Architectural layers – With organizations adopting modern application architectures, the workforce should be capable of validating the business processes implemented beneath the UI layer, such as API, DB, MQ, etc.

6. Project to products – Organizations are moving from project-based to product-based structures so that services can be delivered efficiently and effectively to customers/partners using the latest technologies. Hence, it is important for SDETs to possess skills such as unit testing, TDD, CI/CD, and defining and using code quality standards.

7. Define the organization structure – It is important to define the various roles and responsibilities required to take care of enterprise automation needs, such as enterprise automation architect, SDET developer, SDET data engineer, SDET environment engineer, and SDET software configurator. An SDET organization with these different roles is required to manage frameworks/tools and scripts.

8. Enterprise platform knowledge – Automation can’t work in isolation. It has to be well integrated with various enterprise platforms such as DevOps, test management, test data management (TDM), software configuration management, and enterprise service virtualization. Hence, SDETs should possess working knowledge of these platforms and work with these teams to understand integration requirements.

9. Shift left – SDETs should understand shift-left activities in this space, such as participating in, understanding, and reviewing requirements, design, and code. At the same time, SDET artifacts such as scripts and strategy should be reviewed and signed off by other engineering team members. SDETs should involve the business in the testing phase to bring quality upfront, and should participate in the build phase to shift activities left.

10. Cloud – With cloud becoming pervasive, the SDET organization should understand the various cloud services and their usage in applications. Having SDETs certified in the relevant cloud will help accelerate building quality scripts.

I have attempted to put together this draft on how to set up a strong SDET organization by building knowledge and skill in the above areas. I will continue to update this blog based on your feedback and new areas that emerge as the industry evolves.

Chaos Test Automation Engineering

Chaos engineering is about infusing faults into the application. Faults can be infused in various areas such as hardware, network, and build, but this article talks about injecting faults in the functional testing area using automation. Chaos engineering for test automation is expected to improve application availability and helps developers take a proactive approach to correcting issues; it helps identify issues before they become outages. Chaos engineering requires breaking the application, and hence requires an approach to identify faults and attack the system with them. Chaos functional test automation falls into the two areas below:

  1. Technical Fault Infusion
  2. Business Process Fault Infusion

Technical Fault Infusion:

This type of fault infusion targets the technology aspects of the application. Below are some of the areas, but the list is not exhaustive.

Attack/Experiment: Run time environment
Description: Application code requires a runtime environment such as JRE, .Net, etc.
How: Change the version of the runtime environment (higher or lower) and watch the application behavior

Attack/Experiment: Dependent software
Description: Applications may use third-party components or software to meet requirements
How: Change the version of, or remove, the dependent software component to check its impact on application functionality

Attack/Experiment: Configuration
Description: Every application has configurations used at runtime to provide its functionality
How: Delete configuration files to observe whether the system operates with default values; remove or change configuration values to see the impact on application functionality

Attack/Experiment: Integration/Interface
Description: Communication between modules and other software applications, implemented using APIs and data
How: Change interface values such as mappings, definitions, and parameters

Attack/Experiment: Access
Description: Securing the application
How: Revoke, lower, or raise access to the application

Attack/Experiment: Bring down one tier
Description: The application consists of database, application, business, and front-end layers
How: Bring down the database or another layer and observe the behavior of the application. Which functionalities remain available?
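The configuration attack from the list above can be automated as a small, reversible experiment. This is a sketch under assumptions: a JSON configuration file and a `run_checks` observation callback are hypothetical stand-ins for the real application and its checks.

```python
import json
import shutil

def inject_config_fault(config_path, key, run_checks):
    """Drop `key` from a JSON config, run the observation callback,
    and always restore the original file afterwards."""
    backup = config_path + ".bak"
    shutil.copy(config_path, backup)
    try:
        with open(config_path) as f:
            cfg = json.load(f)
        cfg.pop(key, None)  # the fault: this value is gone at runtime
        with open(config_path, "w") as f:
            json.dump(cfg, f)
        # Observe: does the application fall back to sensible defaults?
        return run_checks()
    finally:
        shutil.move(backup, config_path)  # undo the experiment
```

The backup-and-restore in `finally` matters: a chaos experiment that leaves the fault behind turns the test environment itself into an outage.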

Business process fault infusion:

In an organization, there are several business processes that meet the organization’s goals. These business processes work with each other through a seamless flow of information. Each business process should work independently and be testable on its own, but should also have a good handshake with the other business processes. For example: a user places an order, which involves the order placement process. Once the order is placed successfully, the order fulfillment process in the backend fulfills the order. Now infuse a fault by bringing down the order fulfillment process – the user should still be able to place the order, and once the order fulfillment process is back up, the pending orders should get processed.
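The order example can be modeled as a toy system to show what the experiment asserts. This is an illustration only; the class and its behavior are hypothetical, not a real order platform.

```python
from collections import deque

class OrderSystem:
    """Toy model: placement keeps accepting orders while fulfilment is
    down, and fulfilment drains the backlog once restored."""

    def __init__(self):
        self.pending = deque()
        self.fulfilled = []
        self.fulfilment_up = True  # flip to False to inject the fault

    def place_order(self, order):
        self.pending.append(order)  # placement never waits on fulfilment
        return "accepted"

    def run_fulfilment(self):
        if not self.fulfilment_up:
            return 0  # process is down; the backlog simply accumulates
        count = 0
        while self.pending:
            self.fulfilled.append(self.pending.popleft())
            count += 1
        return count
```

The chaos assertion is exactly the two properties in the prose: orders are accepted while fulfilment is down, and the backlog is processed once it comes back up.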

It is always suggested to run fault experiments against the application in a separate environment.

Cloud Test Automation – Migration Strategy

Organizations have started adopting a cloud-first approach to achieve their digital transformation. Cloud helps reduce cost and the burden of maintaining IT infrastructure, and provides a pay-per-use model. In order to migrate applications, all applications in the enterprise go through the 6R disposition strategies illustrated in the table below; each application is evaluated against the six widely used dispositions, a.k.a. the 6R migration strategies. Testing is always in a dilemma about what the approach and strategy for test automation should be. This blog provides an automation strategy aligned to the 6Rs, with a detailed approach to reduce complexity.

Below are each R’s definition, criteria, automation approach, and an example for each disposition.

Disposition: Retain
Definition: Retain in the data center/on-premise
Criteria: Applications running on AS/400, mainframes, and older technologies
Automation Approach: Use existing automation scripts; otherwise develop automation scripts
Example: Applications using AS/400, mainframe, and older technologies

Disposition: Rehost
Definition: Lift and shift for like-for-like migration. No code changes except minor configuration changes such as IP addresses
Criteria: Platform, application OS, DB, and middleware will operate without any changes, hence no code changes are required
Automation Approach: Reuse the existing automation suite if available; otherwise develop automation scripts
Example: The application and its components such as OS, DB, and middleware are currently running on a cloud-supported version

Disposition: Re-platform
Definition: Lift, tinker, and shift, with a few enhancements, without changing the core architecture and design of the application and code
Criteria: Platform and technology types and versions of OS, DB, and middleware will operate with changes/upgrades; cost optimization, standardization, and cloud-vendor-provided technology
Automation Approach: Reuse the existing automation suite if available; otherwise develop automation scripts
Example: Unix to Linux; WebSphere to Tomcat; application software version upgrade

Disposition: Re-factor/Re-architect
Definition: Changing the core architecture and code; using cloud-native features; new features/enhancements can’t be applied in the current environment
Criteria: Business needs for modernization; use of cloud-vendor-provided technology; versions of OS, DB, and middleware
Automation Approach: Reuse and update the existing automation suite if available; otherwise develop automation scripts
Example: Code changes, configuration changes; DB platform (Oracle to SQL Server)

Disposition: Repurchase/Replace
Definition: Moving to a different product
Criteria: Cost optimization; availability of SaaS-enabled products
Automation Approach: Develop automation scripts
Example: CRM to Salesforce; HR system to Workday; ERP to SAP; CMS to Drupal

Disposition: Retire/Remove
Definition: The application is not useful and not providing business value. These applications can be turned off, but data archival might be required
Criteria: Usefulness of the application; retiring in 12-24 months; part of a client legacy program shutdown
Automation Approach: Archive existing automation scripts. No automation is required
Example: Business processes moving to new applications; applications becoming obsolete due to older technologies

The automation strategy for retain and retire will not change. Applications falling into these categories will continue to use the existing automation strategy.

Disposition-wise testing strategy:

1. REHOST
Before rehosting:
Check availability of manual test cases.
Check availability of Smoke and regression automated suites
Check availability of performance test scripts
Document application integration/interface points
Baseline performance metrics
Identify and document test data requirements for each application such as sub-setting, masking, synthetic data creation etc as required. Validate whether existing test data can be reused
During rehosting:
Optimize test case suites to remove obsolete and duplicate test cases
Develop automation scripts if not available for application. Use existing scripts if available for the application
Develop performance testing scripts if not available for application. Use existing scripts if available for the application.
Set up/reuse test data automation for the application and provide test data to scripts
After rehosting:
Perform smoke testing suite initially.
When smoke testing is successfully executed, perform regression suite
Ensure that relevant access and integration points with other systems are working as expected by executing E2E automation scripts
Perform performance testing and note down the metrics.
Share the results with stake holders
Use manual test cases for scenarios where automation is not possible
2. REPLATFORM
Before replatforming:
Check availability of manual test cases
Check availability of Smoke and regression automated suites
Check availability of performance test scripts
Document application integration/interface points
Check availability E2E regression test cases for interface testing
Baseline performance metrics
Identify and document test data requirements for each application such as sub-setting, masking, synthetic data creation etc as required. Validate whether existing test data can be reused.
Identify impact of REPLATFORM on functionality/business flow of application
During Replatform:
Optimize test case suites to remove obsolete and duplicate test cases
Develop/update automation scripts for impacted areas where functionality/business flow is changed due to upgrades/changes to underlying technologies/platform. Use existing scripts if available and applicable for the application.
Set up test data automation for application and provide test data to scripts
Develop performance testing scripts for impacted areas where functionality/business flow is changed. Use existing scripts if available for the application.
After Replatform:
Perform smoke testing suite initially.
Execute updated functionality/business flow scenarios when smoke testing is successfully executed
Perform regression suite
Ensure that relevant access and integration points with other systems are working as expected
Perform E2E regression suite for validating application interfaces
Perform performance testing and note down the metrics. Share the results with stake holders
Use manual test cases for scenarios where automation is not possible
3. REARCHITECT
Before rearchitecting:
Check availability of manual test cases and its applicability to rearchitected application
Identify and document application changes due to architectural changes
Check the availability of Smoke and regression automated suites and its applicability to rearchitected application
Check availability of performance test scripts
Document application integration/interface points
Check availability of E2E regression test cases for interface testing
Baseline performance metrics
Identify and document test data requirements for each application such as sub-setting, masking, synthetic data creation etc as required
During Rearchitect:
Optimize test case suites to remove obsolete and duplicate test cases
Develop automation scripts for changed functionality due to change in architecture of application.
Use existing scripts for the application where they can be reused.
Set up test data automation for application and provide test data to scripts
Develop performance testing scripts.
Update performance scripts if functionality/business flow is changed
After Rearchitect:
Perform smoke testing suite initially.
Execute updated functionality/business flow scenarios when smoke testing is successfully executed
Perform regression suite
Perform E2E regression suite for validating application interfaces
Ensure that relevant access and integration points with other systems are working as expected
Perform performance testing and note down the metrics. Share the results with stake holders
Use manual test cases for scenarios where automation is not possible
4. REPLACE
Before replacement:
Check availability of business/application scenarios and extent of its applicability to replaced application
Document Smoke and regression test cases
Identify and document applications flows/scenarios
Document performance test scenarios
Document application integration/interface points
Define performance metrics
Identify and document test data requirements for each application such as sub-setting, masking, synthetic data creation etc as required
During Replace:
Baseline and develop manual test case suite
Develop automation scripts for smoke and regression suites.
Develop E2E automation scripts for interface/integration validation
Set up test data automation for application and provide test data to scripts
Develop performance testing scripts.
After replace:
Execute smoke test suite initially
Perform regression suite
Perform E2E regression suite for validating application interfaces/integration points
Ensure that relevant access and integration points with other systems are working as expected
Perform performance testing and note down the metrics. Share the results with stake holders
Use manual test cases for scenarios where automation is not possible
5. RETIRE/REMOVE
No effort is applicable. Existing manual and automation scripts can be archived to meet legal and auditing requirements.

Reporting, Power BI, and Excel-based applications are not candidates for test automation; applications with embedded charts, images, etc. can’t be automated using automation tools. Applications with high complexity, high usage, and business priority can be prioritized for automation and will go through the above disposition criteria.

Automation Strategy: For REHOST, RE-PLATFORM, and RE-FACTOR/RE-ARCHITECT applications, existing automation scripts will be used to a large extent, and test scripts will be updated/developed as required.
For REPLACE applications, new automation scripts will be developed.
The automation approach will validate the following areas irrespective of disposition, depending on the type of application:
1. Compatibility Testing:
Browser: can be accessed from different browsers – IE, Edge, Chrome, Firefox, etc.
Devices: can be accessed from different types of devices – desktop, laptop, tablet, mobile, etc.
Interoperability/OS: can be accessed from different OSs – Windows, iOS, Linux
Interface: external and existing users are able to access the new cloud interface
2. Integration and Interface Validation as part of Digital Decoupling:
The cloud application connects to other cloud, on-premise, and external applications in synchronous and asynchronous ways.
2.1 Synchronous integration:
Microservices/API: Compatibility with cloud vendor API
Validate Payload accuracy
Response Management
Authentication and Authorization
Protocols, message format (SOAP, REST, XML, JSON)
2.2 Asynchronous and Near real time asynchronous with and without batch jobs integration:
Validate application behavior before and after job run without manual intervention
Validate inbound and outbound integrations – protocols, message format (JMS, MQ, CSV), technologies and industry specific protocols (Swift, Fix, ISO 20022, ISO 8583)
3. Data Migration and Integration:
Multi-Cloud – One cloud provider to another cloud provider
Hybrid – On-premise to Cloud and Vice Versa
4. Multi-Tenancy: ensure that application instances work in a shared environment.
5. Data Loss:
There is a lot involved in moving data to the cloud – backup, compression, transfer, storage, corruption, and missing data. Our approach validates integrations, transformations, and migrations to ensure that data lands in the target system.
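For the synchronous-integration checks above, one recurring task is validating payload accuracy before asserting on business values. A minimal sketch follows; the field names and types are hypothetical, standing in for whatever contract the migrated API exposes.

```python
# Hypothetical contract for an order payload; replace with the real schema.
EXPECTED_FIELDS = {"order_id": str, "amount": (int, float), "currency": str}

def validate_payload(payload):
    """Return a list of schema problems; an empty list means the
    payload's shape matches the expected contract."""
    problems = []
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"wrong type for field: {field}")
    return problems
```

Shape checks like this catch the typical migration regressions (renamed fields, numbers serialized as strings) before deeper end-to-end assertions run.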

This blog is meant to give you a head start for test automation in cloud migration instead of building from scratch. I will keep enriching this blog as new techniques emerge and with your valuable comments.

Test Automation – Auto healing

The one constant in the world is change, and the software industry is not immune to it; it adapts to changes taking place in society and consumer behavior. In IT, adapting to change means new requirements and/or changes to existing requirements for an application. This change has a direct impact on test automation as well. In fact, one of the main challenges in test automation is script maintenance: organizations spend around 40-50% of their time maintaining automation scripts. This heavy maintenance makes organizations think twice before starting the automation journey. The change broadly impacts three areas in the application. This blog outlines my thoughts on these areas and the self-healing techniques to address them.

1. Change of controls on the screen
2. Change of Test data
3. Change of functional flow – Navigation of screens.

Sl. No. 1 above can be further categorized into the following areas:

1.1 Change of technical information about controls – ID, name, etc. changing with every build and every page load
1.2 Adding new control(s) to the screen
1.3 Removing control(s) from the screen
1.4 Change in the type of a control

Organizations are trying to address script maintenance using self-healing techniques. Self-healing is at an early stage but has huge potential in the test automation area. It is currently restricted to web UIs only, but has the potential to expand to API, desktop UI, DB, and other areas. Current tools and techniques address 1.1 only, using control identification implemented with rule-based and conditioning techniques. Let us discuss a few solutions available to implement self-healing.

1.1. Change of technical information about controls

Each control on the UI is identified using a unique identifier. Name, control type, automationId, classname, etc. are used to identify controls in a desktop application; id, name, XPath, link text, partial link text, CSS, class, tag, etc. are used for web applications. During identification, controls are captured using all identifiers, with one identifier as primary and the rest as secondary fallbacks, and stored in object storage. During execution, the script starts identifying using the primary identifier. If the control identification has changed, the framework falls back to identifying the control using the other identifiers and self-heals the script. At the same time, the framework updates the object storage with the new primary identifier, which will be used in subsequent runs. Changes to the other identifiers can also be updated in object storage, and after each run and proper analysis, the object storage can be refreshed.
For web applications, controls are identified using XPath, both as an absolute path and a relative path built from the above identifiers, and stored in object storage. During execution, the relative path is used for locating the control. If the control is not available via the primary identifier, the fallback identifiers are used; the script is healed accordingly and the XPaths are updated. If the relative path with primary and secondary identifiers fails to identify the control, the absolute path is used to check for the control using the primary and secondary identifiers. The script is self-healed and execution continues. The relative path is updated and stored in object storage after the execution for subsequent runs.
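The fallback-and-promote idea above can be sketched in a few lines. This is an illustration under assumptions: `find` stands in for a real driver lookup (such as Selenium's `find_element`), and the locators are (strategy, value) pairs ordered primary-first, as they would be read from object storage.

```python
def find_with_healing(find, locators):
    """Try each locator in order; promote the one that works to primary.

    Returns (element, healed_locators) so the caller can write the
    healed ordering back to object storage for subsequent runs.
    """
    for i, (strategy, value) in enumerate(locators):
        element = find(strategy, value)
        if element is not None:
            # Self-heal: the working locator becomes primary.
            healed = [locators[i]] + locators[:i] + locators[i + 1:]
            return element, healed
    raise LookupError("control not found with any stored locator")
```

Returning the healed ordering rather than mutating global state keeps the healing decision auditable: the engineer can review which locators were promoted after each run before committing the updated object storage.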

1.2 Adding new controls to the screen

Enhancements to existing requirements and new requirements may necessitate adding new controls to the UI, which may require updating the existing scripts to interact with them. During execution, the script identifies the additional controls and adds them to object storage. Depending on the type of control and its attributes, the required test data is generated: the label tag and attributes such as placeholder and value are used to generate it. After execution, the automation engineer can add the new code.
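The runtime detection of new controls can be sketched as follows; the control dictionaries and the object-storage list are illustrative assumptions, not a specific tool's API.

```python
# Sketch of detecting controls newly added to a screen: compare the controls
# found at runtime against object storage and record the new ones for the
# automation engineer to script afterwards. The dict shapes are assumptions.

def detect_new_controls(object_storage, controls_on_screen):
    """Return controls present on screen but missing from object storage,
    and add them to object storage so test data can be generated for them."""
    known_ids = {control["id"] for control in object_storage}
    new_controls = [c for c in controls_on_screen if c["id"] not in known_ids]
    object_storage.extend(new_controls)  # persist for subsequent runs
    return new_controls
```

After the run, the returned list doubles as the report of controls for which new code is needed.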

1.3 Remove control(s) from the screen

A change in requirement(s) may also require dropping controls from the UI. This requires the code that interacts with the dropped control(s) to be disabled and made ineffective. During execution, if the script is unable to find a control, it continues executing the rest of the script and reports the redundant code afterwards. At the end of the execution, the automation engineer can remove the impacted code so that subsequent executions run smoothly.

1.4 Change in type of control

Sometimes requirements will require changing the type of a control. During execution the script will fail because the required control is not available, for example a textbox changed to a dropdown along with its properties. In this case, the framework needs to self-heal the script using techniques 1.3 and 1.2.

Sometimes requirements will require adding new control(s) and removing control(s) from a screen simultaneously. The framework should combine 1.2 and 1.3 to address this challenge.

2. Change of Test data

When a requirement changes, the required test data also changes. The scripts need to self-heal by generating the test data required to enter into or select from each control. The label tag added for accessibility provides good information about the control's requirements, and the required test data can be generated accordingly. The placeholder and value attributes of the control are also helpful in determining what data the control needs.
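A minimal sketch of attribute-driven test data generation, assuming hypothetical control metadata (type, label, placeholder) extracted from the DOM; the generation rules here are illustrative, not a specific tool's behaviour.

```python
# Sketch of generating test data from a control's type and accessibility
# hints (label, placeholder, value), as described above. The control dict
# and the generation rules are illustrative assumptions.
import random
import string

def generate_test_data(control):
    """Derive an input value from a control's type and hint attributes."""
    ctype = control.get("type", "text")
    if ctype == "email":
        return "user@example.com"
    if ctype == "number":
        return str(random.randint(1, 100))
    if ctype == "date":
        return "2024-01-15"
    # Fall back to the placeholder/label hint, else a random string.
    hint = control.get("placeholder") or control.get("label")
    if hint and "name" in hint.lower():
        return "Test User"
    return "".join(random.choices(string.ascii_letters, k=8))

print(generate_test_data({"type": "email"}))                       # user@example.com
print(generate_test_data({"type": "text", "label": "Full Name"}))  # Test User
```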

3. Change of flow – Navigation of screens.

A change in requirement sometimes changes the flow of steps across screens. A change in the order of steps within a given screen may not affect the script, but a change in the flow across screens will, requiring code snippets to be rearranged. The framework should therefore be able to group controls screen-wise so that the flow can be reorganized accordingly. During execution, the script looks for the screen first and then the controls to interact with. When the script encounters a different screen, it looks up the current application screen in object storage to continue the execution. Once execution is completed, the automation engineer can correct the flow in the script.

The industry is currently addressing 1.1 only, and the techniques and implementations specified above may differ slightly in practice. However, self-healing has huge potential in the test automation area, considering the DevOps requirement of unattended automation scripts. I will continue to update this blog as new techniques become available.

Web Automation Framework Capabilities

Automation in all areas is the buzzword in the industry, and in IT, testing is no exception. Test automation has become a primary objective for all organizations to achieve the goals of speed to market, cost reduction, test coverage and product quality. Organizations have started looking into various tools to build automation frameworks that validate their business processes. In the internet era, web applications predominate, so it is important to validate them. Organizations often lack a definition of what web automation framework should be built to support enterprise web framework needs. This blog shares the capabilities that a web framework should have, irrespective of whether open-source or commercial tools are chosen.

S.N. Web Controls / Capabilities Applicable to your landscape? Comments
1 Button
2 Link
3 Textbox Including password controls
4 Dropdown
5 Checkbox
6 Radio button
7 Date control Selection of particular date based on input data value
9 List control Single selection
10 textarea control Locating the textarea. Retrieving & populating the text
11 Moving Slider & Range selection (HTML5) Moving slider based on element properties and input data values
12 email control – (HTML5) A control that has type=email
13 Auto Focus – (HTML5)
14 Images / Audios / Videos / GIF’s (HTML5) Handling of controls for Images / Audios / Videos / GIF’s and retrieving expected values
15 CSS Style (font type/size/style/color) Identification of CSS-style controls and retrieving the font type, font size, font style, font color etc.
16 Conditional Wait Wait for certain time (parameterized) for an element to be visible / clickable with polling interval of XXX time
17 Web windows Identification of number of windows available on a web page and switching to desired window
18 iFrames Identification of number of iFrames available on a web page and switching to desired iFrame
19 Auto-complete/Auto-suggest Handling of Auto-Complete / Autosuggest controls
20 Calendar (HTML5)
21 Web-Table/Grid – Sorting Sorting of values in Web Tables / Grid
22 Multiselect/Combo box : Advanced Advanced List/Combo box with checkboxes
23 Hover Actions Mouse hover action to desired element
24 Browser capabilities/options Parameterize browser capabilities/options to be used while launching the web driver
25 Windows Dialog box Handling of windows dialog boxes (e.g. to provide credentials, to upload/download files etc.).
26 Hotkeys Perform keypress events (e.g. Enter, Tab, Down Arrow etc.).
27 Keyboard Shortcuts Perform keypress events for shortcuts like Ctrl + C, Ctrl + A, Ctrl + Shift etc.
28 Selecting text from a list function
29 Charts / Graphs Handling of controls like Charts, Graphs etc. and retrieve expected values
30 Map Handling on map embedded within a webpage (e.g. searching for location, moving location pointer, zoom-in/zoom-out etc.).
31 Tooltips: Advanced Mouse hover action to desired element to visible tooltip and capturing the tooltip text
32 Screenshots Capturing screenshots at runtime and attaching in report
33 Store value and use in subsequent steps Storing the input values at runtime and using in subsequent steps/scripts
34 Web Table Web Tables actions – Able to identify values in particular row/column
35 Identification of controls with ID, Name, Xpath, linktext and partiallinktext Identification of elements by directly passing its id, name, xpath etc.
36 Identifying controls with dynamic xpath with parameterization Need to handle with String concatenation at script level.
37 Popup Capabilities (confirm/alert/input) Support available for confirm, reject etc.
38 Get Attribute and Tag name Getting attribute and tag values of the elements
39 Select Submenu Select submenu option after hovering over parent menu option when the object exists in the DOM
40 View Toggle Control Setting On / Off the toggle control based on existing condition or input data value
41 Inline Edit (Ajax) Setting text to elements having capability of inline editing
42 JavaScript Executors Support of Javascript Executor methods
43 Verifying text Exact string, substring, contains, ignore case, startswith, endswith etc.
44 Verifying text data type Verify data for its type, such as string, number, date etc.
45 Verifying element state Verify element properties like clickable, visible, enabled etc.
46 Verifying element attribute Verify element attributes like type, value, name, id, label etc.
47 Verifying element with stored value Verify element properties like clickable, visible, enabled etc. with respect to stored values
48 Verifying element attribute with stored value Verify element attributes like type, value, name, id, label etc. with respect to stored values
49 Passing application data to other layers Capturing application data at runtime and storing it in variables/files that can be used by other application layers
50 Reusable test scripts Able to use existing scripts in other scripts
51 Common test data retrieval Retrieving the test data to provide as inputs to scripts. Test data is stored in Excel files, databases etc.
52 Test execution report Consolidated and individual test script reports
53 Context menus A menu that appears upon user interaction, such as a right-click, offering a limited set of choices available in the current state, or context, of the selected item
54 Logging Provide the capability for logging as scripts traverse through various steps
55 Common exception handler Handling of common exceptions and reporting error messages/logs in the report/log file. Take appropriate action in the event of failure
56 Execution when screen locked To achieve unattended test suite execution
57 File Upload and Download Handling of file upload and download functionality
58 File handling after upload/download Handling of files after performing upload/download operations
59 Cross Browser Support Support for multiple browsers like Chrome, Firefox, IE, Safari, Opera, etc
60 Audio/Video controls Handling of Audio/Video controls that are embedded in web pages.
61 Color Picker (HTML5) Picking of expected color.
62 Encryption/Decryption Ability to encrypt any text and decrypt at run time
63 Property file handling Handling of Property file to manage configuration settings
64 Drag and drop
65 Closing browser windows Ability to close opened window(s)
66 AngularJS Ability to support AngularJS applications
67 Gizmox Ability to support Visual WebGui .NET
68 Oracle ADF Ability to support dynamic page rendering
69 Embedded controls ActiveX and Applets
70 Adobe Able to support adobe applications
71 Integration with test management tool Able to execute and store status in TM
72 Integration with version control system Able to store scripts in version control systems
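As one illustration, the conditional wait in row 16 amounts to a parameterized polling loop; with Selenium this is what WebDriverWait provides, but the idea can be sketched in plain Python:

```python
# Sketch of a conditional wait with a parameterized timeout and polling
# interval (row 16). With Selenium, condition() would check visibility or
# clickability of an element; here it is any callable returning truthy.
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll condition() until it returns a truthy value or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

A framework would typically expose the timeout and polling interval as configuration so scripts can tune them per environment.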

This list is not exhaustive. There may be organization- and LOB-specific web requirements, such as embedded and smart controls in applications. You should consider adding these special requirements and using them along with the common framework functionality. Certain capabilities, such as AngularJS or Gizmox support, may not be applicable to your landscape; you can omit them while designing and building the framework. In my next blog, I will share API capabilities. Meanwhile, please share your feedback to enrich these web application framework capabilities.

Enterprise Test Automation Requirements and Capabilities

In my earlier blogs, I talked about best practices and capabilities to be considered while developing test automation frameworks. This blog talks about leading practices and requirements to consider for test automation at the enterprise level. At the organization level, test automation is often started as a pilot with one or two applications in silos in one of the LOBs. After successful pilot(s), test automation is rolled out at the enterprise level to realize the business benefits calculated in hefty Excel sheets. After a few weeks or months of the journey, leadership starts sensing that automation is not delivering the intended business value and benefits, and starts receiving feedback from the ground during reviews that it is very difficult to automate and meet automation targets for various reasons. On scrutiny, it is realized that automation was not considered holistically at the organization level. The following practices and capabilities should help test leadership take informed decisions for a successful automation journey.

 

Leading Practices and Capabilities Description
Reusability: Reusability is at the core of test automation. The automation approach should bring reusability in every area – frameworks, scripts, test data, environments and people. For example, the organization should be able to use a framework across applications in the enterprise: a framework developed for web applications should be reused across departments/LOBs rather than developing multiple frameworks for the web automation requirement. Similarly, people can be moved across applications seamlessly, as per demand, without retraining.
Autonomous: Unattended test automation execution with no baby-sitting required, facilitating fully automated script execution in lights-out mode in a DevSecOps environment. In the event of a failure, intelligent exception-handling capability should allow continuing with the next script execution, and pipelines should be configured to stop the build from being promoted to the next environment in case of failures.
Anti-autonomous: The build is validated every day during agile development in a DevOps environment. In the event of a failure, execution should be stopped and the build failure notified along with the issue so that the development team can fix it. This gives faster feedback to the relevant stakeholders.
End to End Enterprise test automation: Enterprise automation should support flows spanning various technologies and modern architectural layers in the enterprise by supporting a varied application landscape – Web, API, DB, ETL, Integration, Mainframes, Desktop, Mobile, FTP, File Operations, COTS (Guidewire, Salesforce, Pega, PeopleSoft, Workday) etc.
Auto healing: Change is constant in the world, and software is no exception to this rule. To meet the ever-changing demands of the market and bring products to market quickly, automation should be able to recognize changes to the underlying build and auto-heal scripts, thus eliminating costly maintenance efforts.
Batch job tests: Able to develop and execute post-job test cases without changes to test data and scripts, with no manual intervention needed during execution. It should be possible to link pre- and post-job test scripts for seamless batch job test execution.
Execution Environment: There should be a centralized self-service execution environment to share infrastructure and software, which should lower the total cost of ownership. This environment should have parallel and distributed script execution capability.
Enterprise Eco System Integration: Test automation can't work in isolation in the enterprise. It should integrate seamlessly with enterprise ecosystems such as DevOps, Software Configuration Management (SCM), Test Management, Virtualization and Test Data Management.
Test Data: Automation can't be successful without providing the required test data to scripts. The AUT is provided a subset of production data using the enterprise test data management system. Automation scripts should be provided with the required test data inputs on demand during each execution. Test data, once identified and developed, should be reused for each execution without human intervention, and maintained when the corresponding requirement changes. A proper test data process should be in place that takes care of the varied technology landscape and application requirements.
Reference Applications: The organization should build or identify reference applications to be used for developing required capabilities in automation frameworks. Not having reference applications makes it difficult for automation developers to build and validate the required capabilities, resulting in brittle frameworks.

I hope the above guidance helps organizations embarking on their journey to start automation on the right foot. For organizations where automation is already prevalent, it should help to course-correct and realize the full benefits. Please do share your experience and feedback to enrich this blog.

Limitations of Selenium

Selenium WebDriver has become the de facto open-source automation tool for testing web applications. People talk about the great things and advantages that Selenium offers in test automation, and indeed Selenium is a great open-source tool for web automation. In this blog, I would like to share some limitations of Selenium that I came across while building solutions and capabilities into frameworks using Selenium. I hope this helps you take an informed decision while suggesting/selecting Selenium for your automation scope of work. I have put the limitations in various buckets for easy reference.

Category Limitations
General Points No support for HTML5 controls such as canvas and embedded objects like SVG.
Windows-based popup/OS based popup.
File upload and download features
No direct support for shadow root element. It can be done using workaround with Javascript executor.
bitmap comparison
No out-of-the-box support for OTP submission. It can be done by understanding the existing application design.
reCAPTCHA testing
Bar code reading
Finding elements is only possible with id, CSS, xpath, name, partial link text etc. Elements cannot be found using an image (image-based search in a website).
Mouse-over and drag-and-drop functionality do not work consistently.
Window authentication dialog box
Flakiness in test results
Web Technologies No out of box capability for web applications developed using open JavaScript framework like AngularJS, ReactJS.
No support for Silverlight, Adobe and flex web applications.
Can’t identify Microsoft ActiveX, Java applets, flash elements and embedded objects.
Other technologies Doesn’t support
Windows based applications
Desktop applications build using Java/Microsoft and other technologies.
API testing
FTP testing
Debugging Selenium offers no built-in debugging capability. During script development, in the event of a failure the automation engineer must correct the issue and rerun the script, launching the browser from the beginning; execution can't resume from the point of failure to reduce script development time.
Design No built-in object repository
No built-in test data provision capability to store test inputs to scripts
Selenium doesn't provide a full-featured IDE; automation engineers must use Eclipse, Visual Studio or IntelliJ IDEA for building the scripts.
A slight change in locator breaks the test easily. No in-built self-recovery mechanism is available with selenium to correct the test automatically.
Reporting Doesn't have built-in reporting capability. One must rely on frameworks like JUnit and TestNG for test reports.
Doesn't possess out-of-the-box capability to integrate with test management tools.
No out of box capability to integrate with DevOps tools
image and audio/ video testing Doesn’t support testing of image based applications and has limited support for audio/video controls
Dependency It's an open-source tool, so in case of any technical issues one must rely on the Selenium community forums for resolution
Selenium requires knowledge of a programming language to write scripts that interact with the web browser
A framework has to be developed before automation scripts for a web application can be built
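For the shadow-root workaround mentioned under General Points, a JavaScript executor call can return the shadowRoot of a host element. A sketch, assuming a Selenium WebDriver (or any object exposing an execute_script method):

```python
# Sketch of the JavaScript-executor workaround for shadow DOM: since there is
# no direct shadow-root support, ask the browser for the shadowRoot of a host
# element. The driver is assumed to be a Selenium WebDriver; it is only
# passed through here, so any object with execute_script works.

def expand_shadow_root(driver, host_element):
    """Return the shadow root of host_element via a JavaScript executor."""
    return driver.execute_script(
        "return arguments[0].shadowRoot", host_element
    )
```

Elements inside the shadow tree can then be located relative to the returned root rather than the page document.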

Now that you are aware of the limitations of Selenium, you can build scripts accordingly and recommend Selenium considering the automation scope and application technology landscape. Also, please feel free to share your comments to enrich the blog.

Web API Automation Framework Capabilities

 

Test automation has become a primary objective for all organizations to achieve the goals of speed to market, cost reduction, test coverage and product quality. In continuation of my earlier blog on web automation capabilities, this blog shares the capabilities that a web API framework should have, irrespective of whether open-source or commercial tools are chosen, as organizations often lack a definition of what API automation framework should be built to support enterprise web API needs. As organizations start embracing QE (quality engineering), validating web services quickly enables shifting quality left and taking products to market rapidly. Hence you will see more and more organizations adopting APIs to expose their business processes for consumption by various clients.

Sl. No. API Capabilities Applicable to your landscape? Comments
1 Capability to connect with web services End point using service URL Ability to connect with endpoint to post request to the service URL
2 Building Request Run-Time Build Request Runtime with Parametrized Tag values
3 Ability to add parameter to request
4 Ability to add header to request
5 Ability to add input to request body
6 Ability to set SSL certificate to request Identity type of certificate
7 Provide capability to make API call in proxy environment Ability to support a proxy while making a service call to the endpoint
8 Ability to submit the request to endpoint
9 support for certificate handling
10 Test Data Retrieval Ability to read test data from external sources (excel, database etc)
11 Test Data Retrieval from Specific Path Support centralized path (ALM/Shared drive) for storing and accessing datasheet
12 E2E API Testing Response tag values can be mapped to input request for another service supporting E2E Automation
13 REST Support Support for all Methods type GET, POST, PUT, DELETE
14 SOAP Support Support the capability to communicate with SOAP Services
15 MQ Support
16 Support protocol in Web API Handling HTTP & HTTPS protocols
17 Environment parameters configuration (UAT, QA, DIT etc) Ability to run same script for multiple environments with different test data set
18 Support for Multiple Application layers support end to end automation with flows spanning across layers in the application (UI->API->DB->FTP)
19 DB connection to validate values in Web API
20 Request Encoding Support Encryption of request for PII data. Encode request parameters
21 Response Decoding Decoding response data and parameters
22 Reporting Capability Consolidated and Individual Reporting. Ability to capture details of service interaction steps in the report
23 Common Error handling for continuous unattended execution Ability to capture errors and continue executing the remaining steps/flows
24 Authorization on Service testing Basic, NTLM, oAuth, Kerberos
25 Parameterization of attribute values Ability to add attribute to request and take attribute parameters and values in request.
26 Parameterization of header values Ability to add header to request and take header parameters and values in request.
27 Response Validation for Status code Ability to verify response for status code
28 Validate response type Ability to validate response type – XML/JSON/text/image
29 Validate response header Ability to verify headers in response
30 Validate response parameter Ability to verify parameters in response
31 Validate count of parameters Ability to verify response parameter count
32 DataType Validation Verify the data type (date, text, number etc.) in the response
33 XPATH and JPATH assertion Ability to check if the particular field is present using XPATH/JPATH.
34 Verify content-type of response
35 Reusable test scripts Able to use existing scripts in other scripts
36 Validate response in XML format Ability to compare the actual response XML with the expected XML, including tags and parameters in the XML
37 Validate response in JSON format Ability to compare the actual response JSON with the expected JSON, including name and value pairs in the JSON
38 Verifying plain text of response Validation of plain-text responses
39 Dynamic parameter passing to XML Ability to pass values to XML and replace parameter placeholders
40 Dynamic parameter passing to JSON Ability to pass values to JSON and replace parameter placeholders
41 Integration with ALM Ability to execute scripts from ALM – offline and real time – with status updated
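Several of the response-validation capabilities above boil down to checking a response's status code, content type and body fields. A minimal sketch, where the response dict is an illustrative stand-in for whatever HTTP client the framework wraps:

```python
# Sketch of consolidated response validation (status code, content type,
# parameter values). Collecting failures instead of stopping at the first
# one supports unattended execution. The response dict shape is an
# illustrative assumption, not a specific client's API.
import json

def validate_response(response, expected_status=200,
                      expected_type="application/json", expected_fields=None):
    """Return a list of validation failures; an empty list means a pass."""
    errors = []
    if response["status"] != expected_status:
        errors.append(f"status {response['status']} != {expected_status}")
    if expected_type not in response["headers"].get("Content-Type", ""):
        errors.append("unexpected content type")
    body = json.loads(response["body"])
    for field, expected in (expected_fields or {}).items():
        if body.get(field) != expected:
            errors.append(f"{field}: {body.get(field)!r} != {expected!r}")
    return errors
```

The reporting layer can then attach the collected errors to the consolidated report while execution moves on to the next step.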

This list is not exhaustive. There may be organization- and LOB-specific API requirements; you should consider adding these special requirements and using them along with the common framework functionality.