POSTED : October 24, 2017
BY : Jason Jobe

I was recently invited to speak about ProKarma’s Hands Free Test Pipeline for T-Mobile at the STARWEST Software Testing Conference, one of the longest-running and most respected software testing and quality assurance conferences. We’ve found this conference to be the birthplace of some of the best ideas in the industry. Taking part helps us advance the industry as a whole and bring innovative ideas back into our client work.

After showcasing the solution, I was lucky enough to connect with Paul Merrill, Principal Software Engineer in Test and Founder of Beaufort Fairmont Automated Testing Services. My discussion with Paul challenged a few remarks I made about automation and the usefulness of the testing pyramid as a standard model.

We came to the conclusion that there is no one-size-fits-all model for software development and testing. Paul shared several examples where it would make more sense to use an ice cream cone model (more UI tests, fewer unit tests) or a diamond model (fewer UI tests, more integration tests, fewer unit tests). This reaffirmed something most in the industry already know – that software development is a case-by-case practice.

That’s why it’s important to keep an open mind when deciding what to automate in your development process. Gather all of the facts, then use your own experience to guide the approach you take. It’s normal to have a go-to approach, but don’t let bias cloud your judgment.

There are, however, several groups of guidelines, organized by level of testing across the tech stack, that can help you identify best practices for a particular project. Weighed against your product, development model, organizational maturity, tools and technology, they can help you determine the “right” set for you.

Fundamentals

| Item | Definition | Unit | Integration | UI |
|---|---|:---:|:---:|:---:|
| Deterministic | Tests that clearly pass or fail; subjective tests aren’t ideal automation candidates. | x | x | x |
| Repeated | Tests that occur often. | x | x | x |
| Time | Tests that take a long time to perform manually. |  | x | x |
| Precision | Tests that are subject to human error and/or require high precision. |  | x | x |
| Complexity | Tests that have many steps or complex steps to complete properly. |  | x | x |
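To make the “Deterministic” criterion concrete, here is a minimal sketch in Python (the function and test names are hypothetical): a deterministic check asserts an exact, repeatable outcome, while a subjective check like “does the page look right?” has no clear pass/fail condition and is a poor automation candidate.

```python
# Hypothetical example: a deterministic check is a good automation
# candidate because it has an unambiguous, repeatable pass/fail outcome.

def apply_discount(price: float, percent: float) -> float:
    """Toy business rule used to illustrate a deterministic test."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_is_deterministic():
    # Same inputs always yield the same output: the test clearly passes or fails.
    assert apply_discount(100.0, 15) == 85.0
    assert apply_discount(19.99, 0) == 19.99

# By contrast, "is the checkout page visually appealing?" offers no
# objective assertion to write, so it should remain a manual review task.

test_apply_discount_is_deterministic()
print("deterministic checks passed")
```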

Priority

| Item | Definition | Unit | Integration | UI |
|---|---|:---:|:---:|:---:|
| Impact | Failure severity, data loss, etc. | x | x | x |
| Likelihood | Higher possibility of failure and more exposure to risk make automation more important. | x | x | x |
| Data | Tests that require many data combinations or large quantities of data. | x | x | x |
| Importance | Tests that are critical to the business; “Key Business Scenarios.” | x | x | x |
| Fixes | Defects that are difficult or impossible to fix vs. easily and quickly fixed. | x | x | x |
| Conditional | Tests that simulate conditions that are costly or impossible to recreate manually, e.g. performance/load testing. |  | x | x |
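The “Data” row above is often the strongest argument for automation: input combinations that would be tedious to cover manually cost almost nothing in a data-driven test. A minimal sketch (the validation rule here is hypothetical):

```python
# Hypothetical data-driven test: adding coverage is just adding rows.

def is_valid_password(pw: str) -> bool:
    """Hypothetical rule: 8+ characters, at least one digit and one letter."""
    return (len(pw) >= 8
            and any(c.isdigit() for c in pw)
            and any(c.isalpha() for c in pw))

# Each case is (input, expected result).
cases = [
    ("abc12345", True),
    ("short1",   False),  # too short
    ("12345678", False),  # no letter
    ("abcdefgh", False),  # no digit
]

for pw, expected in cases:
    assert is_valid_password(pw) is expected, f"failed for {pw!r}"
print(f"{len(cases)} data-driven cases passed")
```

The same pattern scales to hundreds of combinations (e.g. with `itertools.product` or a parameterized test framework) at essentially no extra execution cost.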

Return on Investment (ROI)

| Item | Definition | Unit | Integration | UI |
|---|---|:---:|:---:|:---:|
| Configurable | Automation can be reused by changing data, with no code change. | x | x | x |
| Stability | Code and environment stability. | x | x | x |
| Test time | Automation development time vs. manual execution time; project time remaining. | x | x | x |
| Maintenance | Cost of automation maintenance. | x | x | x |
| Dependencies | Number of components that need to work together in a specific way. |  | x | x |
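One way to put the three groups of criteria to work is a rough prioritization score for automation candidates. The sketch below uses entirely hypothetical weights and criteria names; in practice a team would calibrate these against its own product, maturity and tooling.

```python
# Hypothetical scoring sketch: rate each candidate test 0-2 on a few of
# the criteria above, then rank by weighted total. Weights are illustrative.
WEIGHTS = {
    "deterministic": 3,  # fundamentals: must clearly pass or fail
    "repeated": 2,       # fundamentals: runs often
    "impact": 2,         # priority: failure severity
    "stability": 2,      # ROI: unstable code churns automation
    "maintenance": -1,   # ROI: higher maintenance cost lowers the score
}

def score(candidate: dict) -> int:
    """Weighted sum of a candidate's 0-2 ratings; missing keys count as 0."""
    return sum(WEIGHTS[k] * candidate.get(k, 0) for k in WEIGHTS)

candidates = {
    "login happy path": {"deterministic": 2, "repeated": 2, "impact": 2,
                         "stability": 2, "maintenance": 1},
    "visual layout review": {"deterministic": 0, "repeated": 1, "impact": 1,
                             "stability": 1, "maintenance": 2},
}

ranked = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
for name in ranked:
    print(name, score(candidates[name]))
```

A deterministic, frequently run, high-impact test rises to the top of the list, while a subjective, high-maintenance check ranks low, matching the intuition the tables encode.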

These aren’t checklists that can be applied to every situation, and there is no single right answer, but these lists can serve as a starting point for your automation process. Realizing the full potential of automation can have substantial benefits for your organization, and having the right partner to lead the way can help you achieve integration, efficiency and effectiveness faster. Learn more about ProKarma’s Process Automation and Transformation practice.