The test team as an enemy of development? And how can this be avoided?
A Scrum team should include all the skills necessary to develop a user story, so that it can deliver a potentially shippable product increment with each sprint.
In traditional organizations, however, I keep encountering a fundamental mistrust of integrating testers into Scrum teams. Instead, a separate test team is kept, which is then responsible for regression tests, load and performance tests, and test automation. The rationale for this kind of organization is the so-called independence of the testers.
I have several problems with this view. Scrum makes the team fully responsible for the results. Establishing an "independent" test team assumes that the Scrum team does not live up to its responsibilities and would turn a blind eye to defects in the product increment.
Another danger of the independent test team is that the testers become mere defect reporters who are not involved in fixing the problems they find.
In the Scrum sense, we prefer problem solvers. The tester in the Scrum team, like every developer, is responsible for delivering a flawless product increment and will make every effort, on uncovering a defect, to fix it or have it fixed. Another advantage of having the tester in the team is that automated tests can be developed in step with the implementation of the user stories.
The Problem:
The procedure described already shows part of the problem: the lead time for a new Product Backlog Item grows to several sprints: one sprint of implementation plus one sprint of deferred testing (plus possibly another sprint of bug fixing, if the fix is considered less important and is no longer possible without breaking the current sprint's commitment). This creates further problems: does one need two Definitions of Done? When does the PO accept the item? Does he accept it twice? How much buffer does the development team need to keep free to fix the returned bugs? Not to mention the context switching that becomes necessary. Why not pull testers and developers together and try to prevent mistakes instead of finding them with a one-sprint offset?
How can this situation be changed?
automated-testing manual-testing test-management test-design scrum
asked Apr 19 at 21:35
– Mornon
The point of having someone else test your code is that they will try things you have not. When you fix the problems they could find that you and your team could not, the quality of the software improves.
– John
2 days ago
7 Answers
Get a good Scrum Master who can convince the organization that Scrum teams should not depend on other teams to deliver shippable software. That dependency is an impediment he or she should resolve.
Traditional organizations want the benefits of Scrum without changing their ways. Even for great coaches, this can be a process of years. Don't give up. Be bluntly honest with management about these ScrumBut mini-waterfalls (e.g. testing after the sprint). Good Scrum leadership should work on fixing it. I think your thinking is spot on, but trust still has to be earned. See if you can find one team who dares to help prove that your thinking works. Maybe ask the dev and test teams for 2-3 sprints to experiment with your ideas.
The rationale for this type of organization is the so-called
independence of the testers.
The counter-argument is that with an independent test team, development teams can take shortcuts to make their sprints, because the testers will find their mistakes. This leads to dev-test ping-pong and lower quality, because the test team is also under release pressure and will skip low-risk tests in favor of high-risk areas. The result is a slower release cycle and overall lower quality.
Scrum Testers should create a quality culture in the Scrum team, coaching team members to understand how to produce a well-tested increment at the end of the Sprint.
Load and performance testing could be a separate Product Backlog Item to improve performance, although automating this type of testing in build pipelines is becoming more and more common.
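As a sketch of what such an automated performance check in a pipeline might look like: a latency budget asserted as part of the test suite, so a regression fails the build instead of waiting for a separate load-test phase. The `check_latency` helper and the budget value here are hypothetical, standing in for whatever operation the team actually needs to guard.

```python
import time

def check_latency(operation, budget_seconds=0.5):
    """Run `operation` once and fail the build if it exceeds the budget."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_seconds, (
        f"latency {elapsed:.3f}s exceeds budget of {budget_seconds}s"
    )
    return elapsed

# Guard a stand-in for an expensive call; in a real pipeline this would
# wrap an API request or a critical code path.
check_latency(lambda: sum(range(100_000)), budget_seconds=0.5)
```

A real setup would run this repeatedly and track percentiles, but even a single budgeted assertion in CI moves performance testing into the sprint instead of after it.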
This answers the question well, so I hope it's OK that I tack on a small detail. The change you're pointing out is based on the lean principle of building quality in rather than checking for it afterward. For an org that is really concerned about separation of responsibilities, you can actually keep that separation at first, but have both dev and test in the team working in the same sprint, with the test expert working hand in hand with developers to build quality in. In short order the org usually sees the lack of value in the extra hierarchical separation, but it might be an OK bridge.
– Daniel
Apr 19 at 23:08
"concerned about separation of responsibilities" I think those orgs should read the Agile principles. Scrum does not touch on them, but "self-organizing teams and trust them to get the job done" goes a long way. There is only one team, the Scrum team, and it should contain developers and testers, not separate teams working on the same sprint. Scrum team members could be part of a virtual knowledge team to align practices and research their discipline, so a testing team could exist as a community to improve quality. Is management concerned, or is it mainly the testers who are concerned?
– Niels van Reijmersdal
2 days ago
This is a fair point
– Daniel
2 days ago
One interesting aspect I find in these discussions is that modern development ideologies argue for forced separation of concerns when it comes to the benefits of micro-services, yet the development process is all to be handled by one component, the Scrum team. More to the question: I often find a mix rather helpful, where the team has some internal QA people who do in-development testing but also coordinate with external testers for final release testing or production bug-report verification. That way you can have your cake and eat it too.
– Frank Hopkins
yesterday
I've tested in the sprint+1 system under the SAFe framework. The framework does not specify this, but it lends itself to doing it for organizations coming from waterfall.
My suggestion is:
stop it
Your questions of
Does one need two Definitions of Done? When does the PO accept the item? Does he accept it twice? How much buffer does the development team need to keep free to fix the returned bugs? Not to mention the context switch that becomes necessary. Pull testers and developers together and try to avoid mistakes instead of finding them (with a one-sprint offset)? etc.
plus ones that I would add such as:
How to keep the code bases branched correctly? How to keep code in sync with environments? How to manage deployment through multiple environments and tests and processes? How to record the bugs?
When I find I am writing the words above I pause and go back to:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
particularly
- Individuals and interactions over processes and tools
Testing in sprint+1 introduces a whole lot of process instead of individuals doing the work now and talking to each other. This sort of setup will inevitably lead to "the test team as an enemy of development", and that is exactly what you have found.
You need to keep stressing the importance of changing this. It is an investment. It will slow development down this week... and speed it up in X months. Leadership for the long-term view is needed, and it can come from any self-empowered person in the organization.
If you cannot change the setup I recommend the following actions:
- Write failing tests first (BDD)
- Pay equitably for automation engineers
- Communicate the benefit of testing to developers
- Work on relationships between application and automation engineers
- Embed automation engineers within the application development teams
- Truly empower automation engineers to 'pull the cord' and say no, don't deploy
- Talk openly about second class citizen syndrome for testers and how to avoid it
- Ensure social events - lunches, parties, lunch and learns, etc. include both parties
- Refer to folks as application and automation engineers instead of 'devs and testers'
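The first item, "write failing tests first", can be sketched with plain assertions: the test is written from the acceptance criteria before the feature exists, fails first (red), and then drives the implementation (green). The `volume_discount` function below is hypothetical, standing in for whatever the user story actually delivers.

```python
def volume_discount(quantity):
    """Spec (hypothetical story): 10% off for 10+ items, 20% off for 100+."""
    if quantity >= 100:
        return 0.20
    if quantity >= 10:
        return 0.10
    return 0.0

def test_discount_tiers():
    # Written before the implementation: given a quantity,
    # then the discount matches the acceptance criteria.
    assert volume_discount(5) == 0.0
    assert volume_discount(10) == 0.10
    assert volume_discount(150) == 0.20

test_discount_tiers()
```

Because the tests and the code land in the same sprint, the tester's knowledge shapes the implementation instead of arriving one sprint late as a bug report.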
Trust your feelings, Luke.
Seriously, the scenario you described is an anti-pattern. They simply are not ready to let go of waterfall. They are probably concerned about the panic among test leads and managers and testers themselves when they realize that their services as they currently exist are no longer needed.
But this is really a binary thing. Either you test continuously (all dev team members) and are agile, or you hand it off and remain waterfall. Which do they want?
The cynic in me feels that if developers could be relied on to detect & fix their bugs as they go along then separate testing teams would never have been invented in the first place.
Much zealotry in this thread...
Things change. Why was waterfall invented in the first place? Because the software was not meant to be soft: after release it had to work, because it was extremely expensive to fix, certainly if you shipped in the form of electronic chips. We used to have command-and-control managers who would tell people how to work; now we have facilitators who enable smart people to achieve more. Once it made sense to boss everyone around. It is not zealotry; I think it is embracing change.
– Niels van Reijmersdal
yesterday
@NielsvanReijmersdal The same still applies to many software projects, just for some it is relatively "cheap" to fix and bugs don't have a high risk to generate huge costs in themselves. But with respect to this answer, a scrum team doesn't necessarily have to just consist of developers!
– Frank Hopkins
6 hours ago
@FrankHopkins Agreed, teams should contain as many disciplines as needed to deliver a working product, but hopefully not separate teams that create ping-pong or wait dependencies. I do think the software projects where change is relatively cheap far outnumber those where we build software that could kill people. Everything should be seen within its own context, and yes, in some cases separate quality teams might make perfect sense. I do think the original question is set more in the traditional-vs-agile mindset than in niche software with special needs.
– Niels van Reijmersdal
5 hours ago
@NielsvanReijmersdal Well, it's not only life-and-death software where bugs, even short-lived ones, can have a significant cost; that can be a good reason for stricter quality control, where a separate quality check of any version before use in production could make sense. For instance, anything with users and monetary transactions, i.e. a lot of online gaming and gambling, would fall under this. Whether or not that makes sense typically depends on the general company strategy.
– Frank Hopkins
4 hours ago
@NielsvanReijmersdal That being said, I don't think there's a contradiction between using scrum and having such an additional quality gate. You can still continuously deploy to test environments where you do those quality tests and you can establish a process to use a fast lane if necessary. There are certainly more services where fast delivery is possible than in the past and thus more services where such a gate is not needed or can be very light-weight (e.g. in-team testing). I just feel, we shouldn't make the mistake to rigidly over-apply one particular model of the cool new stuff for every
– Frank Hopkins
3 hours ago
If you cannot finish a PBI in one sprint, it is too big and needs to be split up into smaller parts.
Finishing a PBI means it has to be shippable and conform to your DoD, which normally means it has to be tested.
There is absolutely no reason why a team couldn't build and test in the same sprint. If you have a good DoR, a tester can start preparing tests as the code is being built, and executing those tests can be done in a matter of minutes - with plenty of time left to fix any found defects.
Regression tests, load tests, integration tests are maintained in the same sprint as where an increment is delivered.
If you cannot finish coding on time to allow for testing within the sprint, your PBI is simply too big and the team should consider thinking about smaller increments.
A separate testing team should not be able to find actual defects: why did your team deliver something that did not conform to specifications - that seems to be the most essential part of your DoD?
If new specifications pop up during post-sprint testing, they are new wishes and can be handled like any other change request. If actual deviations from the original specs are found, your PBI should not have been considered finished. In order to check whether your team's work matches the requirements, you need to... test! Which means that even with a separate test team, you still need testers in your own Scrum team.
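The idea that a post-sprint team "should not be able to find actual defects" can be made concrete: part of the Definition of Done can be executable acceptance checks that mirror the specification line by line. The `parse_order` function and its spec below are hypothetical stand-ins for the story's deliverable.

```python
def parse_order(raw):
    """Parse 'sku:quantity' per a hypothetical spec:
    the sku must be non-empty and the quantity at least 1."""
    sku, sep, qty = raw.partition(":")
    if not sep or not sku:
        raise ValueError(f"invalid order line: {raw!r}")
    quantity = int(qty)
    if quantity < 1:
        raise ValueError(f"invalid quantity in: {raw!r}")
    return sku, quantity

# DoD checks: if these faithfully mirror the spec, a separate post-sprint
# team should find no deviations, only genuinely new requirements.
assert parse_order("ABC-1:3") == ("ABC-1", 3)
```

Any deviation a later team does find then points to a gap in these checks, i.e. a DoD problem, rather than to a need for a permanently separate test phase.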
If you cannot have your way, try to go for a compromise that satisfies both stakeholders, you (and perhaps your team*) and the corporation management.
I actually often find a mix rather helpful, where the team has some internal QA people that do in-development testing, but that also coordinate with external testers for final release testing or production bug report verification etc.
That way you can have your cake and eat it too.
- Your team is responsible and can use in-team QA expertise to help the developers track down bugs,
- but it uses existing company resources to stem bigger tasks.
- And that way release/production testing can be set up in a way that it also forces you to be able to hand your product/new features over to external partners ensuring your documentation is up to date. Basically making sure you provide proper interface documentation for whatever you build and having people with an independent mind-set re-check your product.
Btw., you should get it out of your head that any other party being involved with your product automatically takes away your ownership of, or responsibility for, your product. You don't do security pen-testing in-team, or consider pen testers your enemy, either (I hope!). Having external testing (on top of internal testing) is just an acknowledgement that you might not see everything: you are so entrenched in your product and how it is supposed to work that you might overlook problems someone just looking at the surface would find. Whether that quality level is required and how it is best implemented (automated vs. manual, etc.) are details, but in principle you shouldn't feel attacked because corporate wants to (also) have external testers take a look at your product.
*I'd be careful with assuming your vision for how the team is supposed to work in scrum is supported by the team without clarifying with them.
How about making one member of the programming team work as tester (or quality improver) for a month or two, and then rotate them out for another member of the team (as mentioned in another comment already)? Unlike 'normal' testers, they will not only understand better what to look for (on a code level, for instance), but also be able to help automate the testing - requiring debug and coding standards, employing automated testing which catches errors during software development, and so on.
The normal testers continue as usual and can use some of the tools developed by the programmers in their work. They will also be needed to evaluate how effective the internal testing was, i.e. whether the changes are kept or discarded and whether the rotating programmers get longer or shorter terms, or more or less power to change things. This is the real kicker, as it ensures that only those unpopular changes which actually show positive results are pushed through. Someone with good scores can even retire old quality procedures they think are no longer necessary under the new measures, keeping the process streamlined.
Once your quality goes up, a member of the testing team can rotate in every second turn. By then, people will be used to the interference, and this may add more ideas, which, depending how easy they are to set up, can be done directly or by one of the next programmers testing things.
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
Get a good Scrum Master who can convince the organization that Scrum teams should not be depended on other teams to deliver shippable software. It is an impediment he/she should resolve.
Traditional Organisations want the benefits of Scrum without changing their ways. Even for great coaches, this could be a process of years. Don't give up. Be bluntly honest about these ScrumBut Mini Waterfalls (eg testing after the Sprint) to management. Good Scrum leadership should work on fixing it. I think your thinking is spot on, but trust has still to be earned. See if you can find one team who dares to help prove that your thinking works. Maybe ask dev and test team for 2-3 Sprints to Experiment with your ideas.
The rationale for this type of organization is the so-called
independence of the testers.
The counter-argument is that having an independent test team is that development teams can take shortcuts to make their Sprints because the testers will find their mistakes. Leading to dev-test ping-pong and lower quality because the test team is also under pressure to release and will skip low risks tests over high-risk area's. Leading to a slower release cycle and overall lower quality.
Scrum Testers should create a quality culture in the Scrum team, coaching team members to understand how to produce a well-tested increment at the end of the Sprint.
Load and performance testing could be a separate Product Backlog Item to improve performance. Although automating this type of testing in build pipelines is becoming more and more common.
This answers the question well, so I hope it's ok that I tack on a small detail. The change that you're pointing out is based in the lean principle of building quality in rather than checking for it afterward. For an org that is really concerned about separation of responsibilities, you can actually keep this at first, but you have both dev and test in that team working in the same sprint with the test expert working hand in hand to help developers build quality in. In short order the org usually sees the lack of value in the extra hierarchical separation, but it might be an ok bridge.
– Daniel
Apr 19 at 23:08
"concerned about separation of responsibilities" I think those orgs should read the Agile principles. Scrum does not touch on them, but "self-organizing teams and trust them to get the job done." goes a long way. There is only one team, the Scrum team, it should contain developers and testers. Not separate teams working on the same Sprint. Scrum team members could be part of a Virtual Knowledge team to align practises and research their discipline. So a testing team could exist as a community to improve quality. Is management concerned, or are it the testers that are concerned mainly?
– Niels van Reijmersdal
2 days ago
This is a fair point
– Daniel
2 days ago
One interesting aspect I find in these discussions is that modern development ideologies argue for forced separation of concerns when it comes to the benefits of micro-services, yet the development process shall all be handled by one component, the Scrum team. More to the question, I often find a mix rather helpful, where the team has some internal QA people who do in-development testing but also coordinate with external testers for final release testing, production bug report verification, etc. That way you can have your cake and eat it too.
– Frank Hopkins
yesterday
Get a good Scrum Master who can convince the organization that Scrum teams should not depend on other teams to deliver shippable software. It is an impediment he/she should resolve.
Traditional organisations want the benefits of Scrum without changing their ways. Even for great coaches, this can be a process of years. Don't give up. Be bluntly honest with management about these ScrumBut mini-waterfalls (e.g. testing after the Sprint). Good Scrum leadership should work on fixing it. I think your thinking is spot on, but trust still has to be earned. See if you can find one team who dares to help prove that your thinking works. Maybe ask the dev and test teams for 2-3 Sprints to experiment with your ideas.
edited Apr 19 at 22:15
answered Apr 19 at 21:43
Niels van Reijmersdal
I've tested in the sprint + 1 system under the SAFe framework. The framework does not specify this, but it lends itself to doing it for organizations coming from waterfall.
My suggestion is:
stop it
Your questions of
Does one need 2 Definitions of Done? When does the PO take the item? Does he take it off twice? How much buffer does the deployment team need to keep free to fix the returned bugs? Not to mention the context switch that becomes necessary. Why not pull testers and developers together and try to avoid mistakes instead of finding them (with a one-sprint offset)? Etc., etc.
plus ones that I would add such as:
How to keep the code bases branched correctly? How to keep code in sync with environments? How to manage deployment through multiple environments and tests and processes? How to record the bugs?
When I find I am writing the words above I pause and go back to:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
particularly
- Individuals and interactions over processes and tools
Testing in sprint+1 introduces a whole lot of process instead of individuals doing the work now and talking to each other. This sort of setup will inevitably lead to "the test team as an enemy of development", and that is exactly what you have found.
You need to keep stressing the importance of changing this. It is an investment. It will slow development down this week... and speed it up in X months. Leadership for the long-term view is needed and can come from any self-empowered person in the organization.
If you cannot change the setup I recommend the following actions:
- Write failing tests first (BDD)
- Pay equitably for automation engineers
- Communicate the benefit of testing to developers
- Work on relationships between application and automation engineers
- Embed automation engineers within the application development teams
- Truly empower automation engineers to 'pull the cord' and say no, don't deploy
- Talk openly about second class citizen syndrome for testers and how to avoid it
- Ensure social events - lunches, parties, lunch and learns, etc. include both parties
- Refer to folks as application and automation engineers instead of 'devs and testers'
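The first recommendation, writing a failing test before the code, can be sketched in a few lines. The `withdraw` function and its overdraft rule below are hypothetical examples, not taken from this thread: the tests state the expected behaviour first, and the implementation is then written only to make them pass.

```python
# BDD-style "failing test first": the scenarios are written before the code exists,
# so they fail against an empty stub and drive the implementation.

def test_withdraw_rejects_overdraft():
    # Given an account with 100, when withdrawing 150, then the balance is unchanged.
    assert withdraw(balance=100, amount=150) == 100

def test_withdraw_deducts_amount():
    # Given an account with 100, when withdrawing 40, then 60 remains.
    assert withdraw(balance=100, amount=40) == 60

# The implementation comes second, written only to make the tests above pass.
def withdraw(balance: int, amount: int) -> int:
    if amount > balance:
        return balance  # reject the overdraft, leave the balance unchanged
    return balance - amount
```

Whether the team runs these with pytest, unittest, or a BDD tool like behave matters less than the ordering: the expected behaviour is agreed and encoded before the code is written, which is what keeps testers and developers on the same side.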
edited 2 days ago
answered Apr 20 at 0:40
Michael Durrant
Trust your feelings, Luke.
Seriously, the scenario you described is an anti-pattern. They simply are not ready to let go of waterfall. They are probably concerned about the panic among test leads and managers and testers themselves when they realize that their services as they currently exist are no longer needed.
But this is really a binary thing. You either test continuously (all dev team members) and are agile, or you hand it off and remain waterfall. Which do they want?
answered 2 days ago
user3266268
The cynic in me feels that if developers could be relied on to detect & fix their bugs as they go along then separate testing teams would never have been invented in the first place.
Much zealotry in this thread...
Things change. Why was Waterfall invented in the first place? Because the software was not meant to be soft. After you released, it had to work, because it was extremely expensive to fix, certainly if you shipped in the form of electronic chips. We used to have command-and-control managers who would tell people how to work; now we have facilitators who enable smart people to achieve more. Once it made sense to boss everyone around. It is not zealotry, I think it is embracing change.
– Niels van Reijmersdal
yesterday
@NielsvanReijmersdal The same still applies to many software projects; it's just that for some it is relatively "cheap" to fix, and bugs don't carry a high risk of generating huge costs in themselves. But with respect to this answer, a scrum team doesn't necessarily have to consist of just developers!
– Frank Hopkins
6 hours ago
@FrankHopkins Agreed, teams should contain as many disciplines as needed to deliver a working product, but hopefully not separate teams that create ping-pong or wait dependencies. I do think that software projects where change is relatively cheap far outnumber those where we build software that could kill people. Everything should be seen within its own context, and yes, in some cases separate quality teams might make perfect sense. I do think the original question is set more in the traditional-vs-agile mindset than in niche software that has special needs.
– Niels van Reijmersdal
5 hours ago
@NielsvanReijmersdal Well, it's not only life-and-death software where bugs, even short-lived ones, can have a significant cost; that can be a good reason for stricter quality control, where a separate quality check of any version before use in production could make sense to satisfy that quality goal. For instance, anything that has users and monetary transactions, i.e. a lot of online gaming and gambling, would fall under this. Whether or not that makes sense typically depends on the general company strategy.
– Frank Hopkins
4 hours ago
@NielsvanReijmersdal That being said, I don't think there's a contradiction between using scrum and having such an additional quality gate. You can still continuously deploy to test environments where you do those quality tests and you can establish a process to use a fast lane if necessary. There are certainly more services where fast delivery is possible than in the past and thus more services where such a gate is not needed or can be very light-weight (e.g. in-team testing). I just feel, we shouldn't make the mistake to rigidly over-apply one particular model of the cool new stuff for every
– Frank Hopkins
3 hours ago
|
show 1 more comment
The cynic in me feels that if developers could be relied on to detect & fix their bugs as they go along then separate testing teams would never have been invented in the first place.
Much zealotry in this thread...
New contributor
Things change. Why was Waterfall invented in the first place? Because the software was not meant to be soft. After you released it had to work, because it was extremely expensive to fix. Certainly, if you shipped in the form of electronic chips. We used to have command and control managers who would tell people how to work, now we have facilitators who enable smart people to achieve more. Once it made sense to boss everyone around. It is not zealotry, I think it is embracing change.
– Niels van Reijmersdal
yesterday
@NielsvanReijmersdal The same still applies to many software projects, just for some it is relatively "cheap" to fix and bugs don't have a high risk to generate huge costs in themselves. But with respect to this answer, a scrum team doesn't necessarily have to just consist of developers!
– Frank Hopkins
6 hours ago
@FrankHopkins Agreed, teams should contain as many disciplines as needed to deliver a working product. But hopefully not separate teams that create ping-pong or wait dependencies. I do think that the software projects where change is relative cheap are in larger numbers than the we build software that could kill people. Everything should be seen within its own context, and yes in some cases separate quality teams might make perfect sense. I do think the original question is more set in the traditional vs agile mindset and not in niche software that has special needs.
– Niels van Reijmersdal
5 hours ago
@NielsvanReijmersdal Well, it's not only live-and-death software where bugs, even for a short while can have a significant cost that can be a good reason to have more strict quality control, where a separate quality check of any version before use in production could make sense to satisfy that quality goal. For instance, anything that has users and monetary transactions, ie a lot of online gaming and gambling would fall under this. Whether or not that makes sense is typically depending on the general company strategy.
– Frank Hopkins
4 hours ago
@NielsvanReijmersdal That being said, I don't think there's a contradiction between using scrum and having such an additional quality gate. You can still continuously deploy to test environments where you do those quality tests and you can establish a process to use a fast lane if necessary. There are certainly more services where fast delivery is possible than in the past and thus more services where such a gate is not needed or can be very light-weight (e.g. in-team testing). I just feel, we shouldn't make the mistake to rigidly over-apply one particular model of the cool new stuff for every
– Frank Hopkins
3 hours ago
The cynic in me feels that if developers could be relied on to detect & fix their bugs as they go along then separate testing teams would never have been invented in the first place.
Much zealotry in this thread...
answered 2 days ago
Hognoxious
311
Things change. Why was Waterfall invented in the first place? Because the software was not meant to be soft: after you released it, it had to work, because it was extremely expensive to fix, certainly if you shipped it in the form of electronic chips. We used to have command-and-control managers who would tell people how to work; now we have facilitators who enable smart people to achieve more. Once it made sense to boss everyone around. It is not zealotry; I think it is embracing change.
– Niels van Reijmersdal
yesterday
If you cannot finish a PBI in one sprint, it is too big and needs to be split up into smaller parts.
Finishing a PBI means it has to be shippable and conform to your DoD, which normally means it has to be tested.
There is absolutely no reason why a team couldn't build and test in the same sprint. If you have a good DoR, a tester can start preparing tests while the code is being built, and executing those tests can be done in a matter of minutes - with plenty of time left to fix any defects found.
Regression tests, load tests and integration tests are maintained in the same sprint in which an increment is delivered.
If you cannot finish coding in time to allow for testing within the sprint, your PBI is simply too big and the team should consider smaller increments.
A separate testing team should not be able to find actual defects: why would your team deliver something that did not conform to specifications? Conformance seems to be the most essential part of your DoD.
If new specifications pop up during post-sprint testing, they are new wishes and can be handled like any other change request. If actual deviations from the original specs are found, your PBI should not have been considered finished. In order to check whether your team's work matches the requirements, you need to... test! Which means that even with a separate test team, you still need testers in your own scrum team.
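To make "a tester can start preparing tests while the code is being built" concrete, here is a minimal sketch (assuming a Python codebase): acceptance checks drafted straight from a PBI's acceptance criteria before the feature is done. All names (`apply_discount`, the pricing rules) are hypothetical, and a stub implementation is included only so the file runs on its own.

```python
# Acceptance checks drafted from a PBI's acceptance criteria while the
# feature is still being coded. All names are hypothetical; the stub
# below stands in for the real implementation so the file is runnable.

def apply_discount(price: float, percent: float) -> float:
    """Stub of the feature under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_basic_discount():
    # Criterion 1: a 10% discount on 100.00 yields 90.00.
    assert apply_discount(100.0, 10) == 90.0

def test_zero_discount_is_identity():
    # Criterion 2: a 0% discount leaves the price unchanged.
    assert apply_discount(50.0, 0) == 50.0

def test_out_of_range_percent_rejected():
    # Criterion 3: discounts over 100% are invalid input.
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")

if __name__ == "__main__":
    test_basic_discount()
    test_zero_discount_is_identity()
    test_out_of_range_percent_rejected()
    print("all acceptance checks passed")
```

Once the real implementation replaces the stub, the same checks run unchanged (directly or under a test runner such as pytest), which is why executing them "in a matter of minutes" is realistic.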
answered yesterday
oerkelens
1112
If you cannot have your way, try to go for a compromise that satisfies both stakeholders: you (and perhaps your team*) and corporate management.
I actually often find a mix rather helpful, where the team has some internal QA people who do in-development testing, but who also coordinate with external testers for final release testing, production bug report verification, etc.
That way you can have your cake and eat it too.
- Your team is responsible and can use in-team QA expertise to help the developers track down bugs,
- but it uses existing company resources to handle bigger tasks.
- And release/production testing can be set up in a way that also forces you to be able to hand your product/new features over to external partners, ensuring your documentation is up to date - basically, making sure you provide proper interface documentation for whatever you build, and having people with an independent mindset re-check your product.
By the way, you should get it out of your head that any other party being involved with your product automatically takes away your ownership of or responsibility for it. You don't do security pen-testing in-team, or consider pen testers your enemy, either (I hope!). External testing (on top of internal testing) is just the acknowledgement that you might not see everything: you are so entrenched in your product and how it is supposed to work that you might overlook problems someone just looking at the surface would find. Whether that quality level is required, and how it is best implemented (automated vs. manual, etc.), are details; in principle you shouldn't feel attacked because corporate wants to (also) have external testers take a look at your product.
*I'd be careful about assuming that your vision of how the team should work in Scrum is shared by the team without checking with them.
answered yesterday
Frank Hopkins
1112
How about making one member of the programming team work as a tester (or quality improver) for a month or two, and then rotating them out for another member of the team (as mentioned in another comment already)? Unlike 'normal' testers, they will not only better understand what to look for (on a code level, for instance), but will also be able to help automate the testing - requiring debug and coding standards, employing automated testing that catches errors during software development, and so on.
The normal testers continue as usual and can use some of the tools developed by the programmers in their work. They will also be needed to evaluate how effective the internal testing was, i.e., whether the changes are kept or discarded and whether the rotating testers get longer or shorter terms, or more or less power to change things. This is the real kicker, as it ensures that only those unpopular changes that actually show positive results are pushed through. Someone with good scores can even retire old quality procedures if they think the new measures make them unnecessary - thus keeping the process streamlined.
Once your quality goes up, a member of the testing team can rotate in every second turn. By then, people will be used to the interference, and this may add more ideas which, depending on how easy they are to set up, can be implemented directly or by one of the next programmers testing things.
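As a sketch of the kind of tooling such a rotated developer-tester might contribute ("help automate the testing"), here is a tiny self-contained test harness, assuming a Python codebase: it discovers every `test_*` function in a module, runs it on each build, and reports failures. All names are illustrative only, not any team's actual setup.

```python
# A minimal sketch of automation a developer rotated into the tester role
# might build: discover and run every test_* callable in a module and
# report the number of failures, so checks can run on every build.
import traceback
import types

def run_tests(module: types.ModuleType) -> int:
    """Run all test_* callables in `module`; return the number of failures."""
    failures = 0
    for name in sorted(dir(module)):
        fn = getattr(module, name)
        if name.startswith("test_") and callable(fn):
            try:
                fn()
                print(f"PASS {name}")
            except Exception:
                failures += 1
                print(f"FAIL {name}")
                traceback.print_exc()
    return failures

if __name__ == "__main__":
    # Self-check with a throwaway module: one passing and one failing test.
    demo = types.ModuleType("demo")
    demo.test_passes = lambda: None
    def _fails():
        assert 1 + 1 == 3
    demo.test_fails = _fails
    print("failures:", run_tests(demo))
```

In practice a team would use an established runner (pytest, unittest) rather than hand-rolling this; the point is that the rotated developer knows the codebase well enough to wire such checks into the build.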
answered yesterday
Carl Dombrowski
1111
The point of having someone else test your code is that they will try things that you have not; when you fix the problems they found that you and your team could not, the quality of the software improves.
– John
2 days ago