Commit 6e46cc7: Update shift left post
1 parent 0380406
File tree: 1 file changed (+11, -10)

content/post/shift-left-on-performance/index.adoc
@@ -1,6 +1,6 @@
 ---
 title: "Shift Left on Performance"
-date: 2023-01-10T00:00:00Z
+date: 2025-03-04T00:00:00Z
 categories: ['performance', 'methodology', 'CI/CD']
 summary: 'To speed up delivery of software using CD pipelines, performance testing needs to be continually performed, with the correct bounds to add value to the software development process'
 image: 'shift-left-banner.png'
@@ -58,7 +58,7 @@ This has the benefits of;
 * Reduce cost to fix
 * Reduce test & deploy cycles

-Performance issues are generally more difficult to fix that functional testsfootnote:[Citation Needed], performance testing should be "Shifted-Left" in the same way that functional testing has been
+Functional issues are typically easier to fix than performance issues because they involve specific, reproducible errors in the software's behavior; therefore, performance testing should be "Shifted-Left" in the same way that functional testing has been

 '''

@@ -68,12 +68,16 @@ In the traditional Waterfall model for software development, shift left means pu

 image::shift-left-waterfall.jpeg[]

+source: https://insights.sei.cmu.edu/blog/four-types-of-shift-left-testing/
+
 ==== In the Agile world

 For continually delivered services, "shifting left" includes an additional dimension;

 image::shift-left-agile.jpeg[Agile Shift Left,,,float="right"]

+source: https://insights.sei.cmu.edu/blog/four-types-of-shift-left-testing/
+
 Not only do we want to include performance tests earlier in the dev/release cycle, we also want to ensure that the full suite of performance tests (or any proxy performance tests footnote:[Citation needed]) captures performance regressions before multiple release cycles have occurred.

 == Risks in the managed service world
@@ -97,19 +101,18 @@ image::shift-workflow.png[Agile Shift Left,,,float="right"]

 In a "Shifted-left" model;

-* Repository Bots allow performance engineers to initiate *Upstream Performance Tests* against open Pull Requests
+* *Code Repository Bots* allow performance engineers to initiate *Upstream Performance Tests* against open Pull Requests, returning comparative performance data to the workflow that the engineer uses in their day-to-day job.
 * *Integrated Performance Threshold* tests provide automated gating of acceptable levels of performance
 * *Continual Performance Testing* allows for analyzing trends over time, scaling, soak and chaos type testing, asynchronously from the CI/CD build pipeline
 * *Automated Regression Detection* provides automated tooling for detecting catastrophic performance regressions related to a single commit, or creeping performance degradation over time

 Continual analysis is performed by experienced engineers, but the process does not require manual intervention with each release.

-Engineers are free to focus on implementing features and not worry about regressions. When regressions are detected, the information they need to identify the root cause is readily available, in a suitable format.
+Engineers are free to focus on implementing features and not worry about performance regressions. When regressions are detected, the information they need to identify the root cause is readily available, in a suitable format.

-== Repository Bots
+== Code Repository Bots

-INFORMATION::
-Repository Bots initiate performance tests against PR's. Their purpose is to allow engineers to make a decision on whether to merge a PR or not. The results need to be actionable by engineers. Profiling data should also be provide to allow engineers to understand what their changes are doing
+Code Repository Bots initiate performance tests against PRs. Their purpose is to allow engineers to decide whether to merge a PR. The results need to be actionable by engineers. Profiling data should also be provided to allow engineers to understand what their changes are doing.

 Receive report & analysis of impact of changes to key performance metrics

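The comparison step such a bot performs against an open PR could be sketched roughly as follows. This is a minimal illustration, not the post's actual implementation; the metric names, the 5% flag threshold, and the lower-is-better assumption for every metric are all illustrative.

```python
# Hypothetical sketch of a repository bot's comparison step: take benchmark
# results from the base branch and the PR branch, compute the relative change
# per metric, and render a markdown table suitable for a PR comment.
# Naively assumes lower-is-better for every metric, for simplicity.

def compare_benchmarks(base: dict, pr: dict, flag_pct: float = 5.0) -> str:
    rows = ["| metric | base | PR | change |", "|---|---|---|---|"]
    for metric, base_val in base.items():
        pr_val = pr[metric]
        change = (pr_val - base_val) / base_val * 100
        marker = " :warning:" if change > flag_pct else ""
        rows.append(f"| {metric} | {base_val} | {pr_val} | {change:+.1f}%{marker} |")
    return "\n".join(rows)

comment = compare_benchmarks(
    base={"p99_latency_ms": 120.0, "throughput_rps": 950.0},
    pr={"p99_latency_ms": 132.0, "throughput_rps": 940.0},
)
print(comment)  # the p99 latency row is flagged at +10.0%
```

In a real bot the rendered table would be posted back to the Pull Request so the result lands in the engineer's day-to-day workflow rather than in a separate tool.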
@@ -123,7 +126,6 @@ Allow automated capture of profiling data of system under load, allowing enginee

 == Integrated Performance Thresholds

-INFORMATION::
 The aim of Integrated Performance Tests is to determine whether a release meets acceptable levels of performance with respect to customer expectations, not to capture changes over time. The results need to be automatically calculated and should provide a boolean Pass/Fail result.

 * Pass/Fail criteria - the same as functional tests, the performance should either be acceptable, or not-acceptable
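The boolean gate described above can be sketched in a few lines; the metric names and threshold values here are assumptions for illustration, not figures from the post.

```python
# Illustrative sketch of an integrated performance threshold gate: every
# measured metric must be within its limit for the release to pass, giving
# the CI/CD pipeline a single boolean Pass/Fail result.

THRESHOLDS = {
    "p99_latency_ms": 200.0,   # must not exceed (assumed limit)
    "error_rate_pct": 0.1,     # must not exceed (assumed limit)
}

def performance_gate(measured: dict) -> bool:
    """Return True only if every measured metric is within its threshold."""
    return all(measured[m] <= limit for m, limit in THRESHOLDS.items())

assert performance_gate({"p99_latency_ms": 150.0, "error_rate_pct": 0.05}) is True
assert performance_gate({"p99_latency_ms": 250.0, "error_rate_pct": 0.05}) is False
```

Keeping the gate to a hard boolean, exactly like a functional test, is what allows it to block a pipeline automatically without a performance engineer in the loop.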
@@ -136,7 +138,6 @@ The aim of Integrated Performance Tests is to determine whether a release meets

 == Continual Performance Testing

-INFORMATION::
 The aim of Continual Performance Testing is to perform larger-scale performance workloads that can take time to run.

 These tests can include;
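One output of continual testing is a time series of results, against which slow degradation ("creep") can be detected. A minimal sketch, where the window size and 10% tolerance are illustrative assumptions:

```python
# Hedged sketch of creep detection over a series of nightly benchmark
# results: compare the mean of the most recent window against the long-run
# baseline and flag when it drifts beyond a tolerance.

from statistics import mean

def detect_creep(history: list[float], window: int = 5, tolerance: float = 0.10) -> bool:
    """Return True if the recent window regressed beyond tolerance vs baseline."""
    if len(history) <= window:
        return False
    baseline = mean(history[:-window])        # older runs
    recent = mean(history[-window:])          # latest runs
    return (recent - baseline) / baseline > tolerance

# Latency creeping up over successive releases (lower is better):
nightly_p99 = [100, 101, 99, 100, 102, 108, 112, 115, 118, 121]
print(detect_creep(nightly_p99))  # prints True: the drift exceeds 10%
```

Real tooling would typically use a proper change-point detection method rather than a fixed window, but the principle of comparing against an accumulated baseline is the same.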
@@ -172,4 +173,4 @@ Other tools that can help product teams with performance related issues are;
 * *Performance Bisect*: perform an automated bisect on the source repository, running performance test(s) each time to automatically identify the code merge that introduced the performance regression
 * *Automated profiling analysis*: AI/ML models to automatically spot performance issues in profiling data
 * *Proxy Metrics*: System metrics captured during functional testing that will provide an indication that a performance/scale issue will manifest at runtime
-* *Automatic tuning of service configuration*: Using Hyper-parameter optimization to automatically tune configuration space of a service to optimize the performance for a given target environment/workload
+* *Automatic tuning of service configuration*: Using Hyper-Parameter Optimizationfootnote:[https://github.com/kruize/hpo] to automatically tune the configuration space of a service to optimize the performance for a given target environment/workload
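The Performance Bisect idea from the list above follows the same logic as `git bisect`: binary-search the commit range, benchmarking at each midpoint. A sketch under the usual bisect assumption that the regression persists once introduced; `run_benchmark` is a hypothetical stand-in for checking out a commit and running the real test.

```python
# Sketch of an automated performance bisect: binary-search a range of
# commits, running a benchmark at each midpoint, to find the first commit
# whose result exceeds the regression threshold. Assumes results are
# "good" before the offending commit and "bad" from it onward.

def performance_bisect(commits: list[str], run_benchmark, threshold_ms: float) -> str:
    """Return the first commit whose benchmark exceeds threshold_ms."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if run_benchmark(commits[mid]) > threshold_ms:
            hi = mid          # regression is at mid or earlier
        else:
            lo = mid + 1      # regression is after mid
    return commits[lo]

# Toy benchmark: commits d..g regressed from ~100ms to ~150ms.
timings = {"a": 100, "b": 101, "c": 99, "d": 150, "e": 151, "f": 149, "g": 152}
first_bad = performance_bisect(list(timings), timings.get, threshold_ms=120)
print(first_bad)  # prints d
```

The binary search needs only O(log n) benchmark runs, which matters when each run takes minutes or hours.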
