summary: 'To speed up delivery of software using CD pipelines, performance testing needs to be continually performed, with the correct bounds, to add value to the software development process'
image: 'shift-left-banner.png'

This has the benefits of:

* Reduced cost to fix
* Reduced test & deploy cycles

Functional issues are typically easier to fix than performance issues because they involve specific, reproducible errors in the software's behavior; therefore, performance testing should be "Shifted-Left" in the same way that functional testing has been.

'''

Not only do we want to include performance tests earlier in the dev/release cycle, we also want to ensure that the full suite of performance tests (or any proxy performance testsfootnote:[Citation needed]) captures performance regressions before multiple release cycles have occurred.

* *Code Repository Bots* allow performance engineers to initiate *Upstream Performance Tests* against open Pull Requests, returning comparative performance data to the workflow that the engineer uses in their day-to-day job.
* *Integrated Performance Threshold* tests provide automated gating of acceptable levels of performance
* *Continual Performance Testing* allows for analyzing trends over time, as well as scale, soak, and chaos-type testing, asynchronously from the CI/CD build pipeline
* *Automated Regression Detection* provides automated tooling for detecting a catastrophic performance regression related to a single commit, or gradual creep in performance degradation over time

Continual analysis is performed by experienced engineers, but the process does not require manual intervention with each release.

Engineers are free to focus on implementing features and not worry about performance regressions. When regressions are detected, the information they need to identify the root cause is readily available in a suitable format.

== Code Repository Bots

Code Repository Bots initiate performance tests against PRs. Their purpose is to allow engineers to make a decision on whether or not to merge a PR. The results need to be actionable by engineers. Profiling data should also be provided to allow engineers to understand what their changes are doing.

Receive report & analysis of the impact of changes on key performance metrics

Allow automated capture of profiling data of the system under load.
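
Such a bot boils down to running the same performance tests against the base branch and the PR branch and reporting the difference. A minimal sketch of that comparison step follows; the benchmark script name (`run-benchmarks.sh`) and the JSON metric format are assumptions for illustration, not a real API.

[source,python]
----
import json
import subprocess


def run_benchmarks(git_ref: str) -> dict:
    """Check out a ref and run a benchmark script that prints its
    metrics as JSON, e.g. {"requests_per_sec": 1234.5, "p99_ms": 87.0}."""
    subprocess.run(["git", "checkout", git_ref], check=True)
    result = subprocess.run(["./run-benchmarks.sh"],
                            check=True, capture_output=True, text=True)
    return json.loads(result.stdout)


def compare(base: dict, pr: dict) -> str:
    """Build the comparative summary a bot would post back on the PR."""
    rows = []
    for metric, base_value in base.items():
        pr_value = pr[metric]
        change = (pr_value - base_value) / base_value * 100
        rows.append(f"{metric}: {base_value:.1f} -> {pr_value:.1f} ({change:+.1f}%)")
    return "\n".join(rows)


if __name__ == "__main__":
    print(compare(run_benchmarks("origin/main"), run_benchmarks("HEAD")))
----
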
== Integrated Performance Thresholds

The aim of Integrated Performance Threshold tests is to determine whether a release meets acceptable levels of performance with respect to customer expectations, not to capture changes over time. The results need to be automatically calculated and should provide a boolean Pass/Fail result.

* Pass/Fail criteria - as with functional tests, performance should be either acceptable or not acceptable
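
A minimal sketch of such a gate, runnable as a CI/CD pipeline stage, is shown below; the metric names and threshold values are illustrative assumptions, and the non-zero exit code is what fails the build.

[source,python]
----
import json
import sys

# Acceptable levels with respect to customer expectations (assumed values).
THRESHOLDS = {
    "p99_latency_ms": {"max": 250.0},
    "requests_per_sec": {"min": 1000.0},
    "error_rate_pct": {"max": 0.1},
}


def evaluate(metrics: dict) -> bool:
    """Return True only if every metric is within its acceptable bounds."""
    passed = True
    for name, bounds in THRESHOLDS.items():
        value = metrics[name]
        ok = bounds.get("min", float("-inf")) <= value <= bounds.get("max", float("inf"))
        print(f"{'PASS' if ok else 'FAIL'}: {name}={value}")
        passed = passed and ok
    return passed


if __name__ == "__main__":
    with open(sys.argv[1]) as f:  # e.g. results.json published by the test run
        metrics = json.load(f)
    sys.exit(0 if evaluate(metrics) else 1)  # boolean Pass/Fail, gating the pipeline
----
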
== Continual Performance Testing

The aim of Continual Performance Testing is to perform larger-scale performance workloads that can take time to run.

These tests can include:

* Trend analysis over time
* Scale testing
* Soak testing
* Chaos testing
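
Because these runs happen asynchronously and accumulate history, they support the trend analysis that per-release gates miss. A minimal sketch, assuming one throughput figure is recorded per run and with illustrative window size and tolerances:

[source,python]
----
from statistics import mean


def analyze(history: list[float], window: int = 5,
            step_tol: float = 0.10, creep_tol: float = 0.05) -> list[str]:
    """history holds one throughput figure per scheduled run, oldest first."""
    findings = []
    recent = mean(history[-window:])   # baseline from the most recent runs
    older = mean(history[:window])     # baseline from the earliest runs
    if history[-1] < recent * (1 - step_tol):
        findings.append("sudden regression in latest run")
    if recent < older * (1 - creep_tol):
        findings.append("gradual performance creep over time")
    return findings


# A slow decline that no single release-to-release comparison would flag.
runs = [1000, 995, 985, 975, 965, 940, 930, 920, 910, 900]
print(analyze(runs))  # ['gradual performance creep over time']
----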

Other tools that can help product teams with performance-related issues are:

* *Performance Bisect*: perform an automated bisect on the source repository, running performance test(s) each time to automatically identify the code merge that introduced the performance regression (see the sketch after this list)
* *Automated profiling analysis*: AI/ML models to automatically spot performance issues in profiling data
* *Proxy Metrics*: System metrics captured during functional testing that provide an early indication that a performance/scale issue will manifest at runtime
* *Automatic tuning of service configuration*: Using Hyper-Parameter Optimizationfootnote:[https://github.com/kruize/hpo] to automatically tune the configuration space of a service, optimizing performance for a given target environment/workload
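
For the *Performance Bisect* idea above, `git bisect run` already does most of the work: it treats a zero exit status as "good" and non-zero as "bad", so a small driver that runs the performance test and compares the result against a known-good baseline is enough for git to walk the history to the offending merge. A minimal sketch, with the test script name, baseline figure, and tolerance as illustrative assumptions:

[source,python]
----
import subprocess
import sys

BASELINE_RPS = 1000.0  # throughput of the last known-good build (assumed)
TOLERANCE = 0.10       # allow 10% noise before declaring a commit "bad"


def measure() -> float:
    """Run the (hypothetical) performance test; assume it prints requests/sec."""
    out = subprocess.run(["./run-perf-test.sh"],
                         check=True, capture_output=True, text=True).stdout
    return float(out.strip())


if __name__ == "__main__":
    # Usage: git bisect start <bad> <good> && git bisect run python3 bisect_perf.py
    sys.exit(0 if measure() >= BASELINE_RPS * (1 - TOLERANCE) else 1)
----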