Post-mortems, yearly reports, metrics, KPIs, predictive analysis, graphs, %%%%. Once again: big reporting for big managers!
Here are my five cents, which will hopefully help you survive the nightmare of inventing all that stuff.
Key metrics for showing test automation's contribution and value:
1. Coverage - a higher % means a bigger scope and more influence. It can be measured:
- against manual test cases
- against functionality/use cases
- against source code
- against the API
2. Test Automation ROI (the notorious ROI - I want to present my approach to calculating it a little later). It shows how automation saves money/time. By the way, at some point you can show the trend of your ROI instead of the current or projected ROI. For me, seeing the dynamics is far better and cleaner than static numbers. You may have achieved only 90% ROI, but if you show me steady progress over time, I will value that higher than a 150% ROI that was finally reached after 5 years. A more advanced way to show your dynamics/trend is to lay two graphs on one plot: projected ROI dynamics vs. real ROI dynamics.
3. Defect Discovery Rate (number of issues found vs. number of automated tests). I find this metric more representative if you correlate it with the same metric applied to manual testing. That way you expose the productivity of the automation team against the manual team in terms of revealed defects.
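The ROI-trend idea above can be sketched in a few lines. This is a minimal illustration, assuming ROI per period = (savings - investment) / investment; all the quarterly figures below are made-up example values, not data from any real project.

```python
# Minimal sketch of tracking ROI as a trend rather than a single static number.
# Assumption: ROI = (savings - investment) / investment, per reporting period.

def roi(savings, investment):
    """ROI for one period, as a fraction (0.5 == 50%)."""
    return (savings - investment) / investment

# Hypothetical quarterly numbers: cumulative savings vs. cumulative cost.
projected = [roi(s, c) for s, c in [(60, 100), (120, 110), (190, 120), (280, 130)]]
actual    = [roi(s, c) for s, c in [(50, 100), (115, 115), (200, 125), (290, 135)]]

# The per-period delta is the "dynamics" - often more telling than the latest value.
actual_trend = [round(b - a, 2) for a, b in zip(actual, actual[1:])]

print(projected)      # projected ROI per quarter
print(actual)         # real ROI per quarter
print(actual_trend)   # quarter-over-quarter improvement
```

Plotting `projected` and `actual` on the same chart gives exactly the two-graphs-on-one-plot view described above.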
In the example below you can find more project-wise metrics and KPIs, as taken from Gredy:
| Metric | Value |
| --- | --- |
| Avg. daily success rate | 68.60% |
| Number of test cases in suite | 27 |
| Number of all test executions | 778 |
| Stable ("Ready") executions vs. all | 99.23% |

The dashboard also lists the 5 best builds/days (by success %), the 5 worst builds/days (by success %), and the 5 most frequently failing tests (total failed times).
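Numbers like those in the dashboard above are easy to derive yourself from raw execution results. Here is a minimal sketch, assuming executions are available as `(build, test_name, passed)` tuples; the record format and all sample data are illustrative, not taken from Gredy or any specific tool.

```python
# Sketch: deriving dashboard-style metrics from raw test-execution records.
# Assumed record format: (build_id, test_name, passed) - illustrative only.
from collections import Counter

executions = [
    ("build-1", "test_login",  True),
    ("build-1", "test_search", False),
    ("build-2", "test_login",  True),
    ("build-2", "test_search", True),
    ("build-2", "test_pay",    False),
]

total = len(executions)
passed = sum(1 for _, _, ok in executions if ok)
success_rate = round(100 * passed / total, 2)  # overall success %

# Success % per build - sort this to pick the 5 best/worst builds.
per_build = {}
for build, _, ok in executions:
    p, t = per_build.get(build, (0, 0))
    per_build[build] = (p + ok, t + 1)
build_rates = {b: round(100 * p / t, 2) for b, (p, t) in per_build.items()}

# Most frequently failing tests (total failed times).
fail_counts = Counter(name for _, name, ok in executions if not ok)

print(success_rate)
print(build_rates)
print(fail_counts.most_common(5))
```

With real data, sorting `build_rates` by value gives the "5 best/worst builds" lists, and `fail_counts.most_common(5)` gives the most frequent offenders directly.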