Definition of Done
When I join a team, one of the most common exercises I help the team with is the Definition of Done (DoD). Wikipedia defines the Definition of Done (DoD) as follows:
“The exit-criteria to determine whether a product backlog item is complete. In many cases the DoD requires that all regression tests should be successful. The definition of “done” may vary from one Scrum team to another, but must be consistent within one team.”
The original page can be reached here.
I like to help the team come up with their own Definition of Done (DoD), but in the end I also like to challenge their DoD, making them think a bit ;)
Some time ago I wrote a blog post about what it means to have a Green Build; it can be found here. For me it is clear that a story is only Done when all the items on the DoD list are met and the build is Green. Below you have a picture describing what I mean. And of course the team is always trying to move items from the outside box into the Green Build Policy box.
Below I list 15 items that you might or might not want to include in your DoD. Some of these items are more challenging than others, but this is just a basic list of ideas to help you define your own DoD, a general guideline; feel free to pick and choose those that best apply to your own situation.
- X% of code coverage from unit tests
- Automated tests for the User Story are created and pass
- Acceptance criteria are met
- All regression testing of the product is passing
- System-level non-functional requirements (e.g. system-level performance) tested and passed
- Software is running on the server defined by the team (optimally in pre-production)
- Code reviewed by a peer
- Technical documentation updated
- User documentation updated
- User Documentation is localised
- Localisation for the story is done
- Localisation testing is done
- Marketing input is done
- Legal documents are done
- Beta Testing is done
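The coverage item at the top of the list is the easiest one to enforce mechanically. As a minimal sketch (the threshold and numbers are assumptions, not part of the list above; adapt it to your own coverage tool, e.g. coverage.py or JaCoCo), a CI step could fail the build when the agreed percentage is not met:

```python
# Hypothetical DoD gate for "X% of code coverage from unit tests".
# The 80% threshold is an example value a team might agree on.

def coverage_gate(covered_lines: int, total_lines: int, threshold: float = 80.0) -> bool:
    """Return True if coverage meets the team's agreed threshold."""
    if total_lines == 0:
        return True  # nothing to cover, nothing to fail
    percent = 100.0 * covered_lines / total_lines
    return percent >= threshold

# 850 of 1000 lines covered -> 85%, passes an 80% threshold
print(coverage_gate(850, 1000))  # True
# 700 of 1000 lines covered -> 70%, fails
print(coverage_gate(700, 1000))  # False
```

The point is not this particular function but the principle: any DoD item the team can express as a pass/fail check belongs inside the Green Build Policy box, because the build can then enforce it for you.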
If you liked this list and want to receive other tips, subscribe to the mailing list. I promise to deliver only high-quality content.
Hello Luis,
Good and excellent information on what needs to be taken care of when deciding on the DoD. Thank you very much for your detailed explanation and for sharing the knowledge.
Thanks and Regards
G.S.Ravinarayanan
Thanks for your nice comments :)
Looking at the given definition of DoD from Wikipedia, “The exit-criteria to determine whether a product backlog item is complete. In many cases the DoD requires that all regression tests should be successful…”, I find that something quite important is missing.
“All regression tests should be successful” is vague and differs from team to team. To be precise, and this is the way we should go about it, we need to base success on the success criteria attached to the user story (US).
Otherwise, success to one person means something else to another, and this is where the dilemma starts.
Without these precise criteria at hand, developers will assume their perspective on success and the customer will also do the same.
What do we have at the end? A functionality that is neither here nor there: the developer thought one thing and the customer another. From the developer’s perspective the US wasn’t detailed enough, and from the customer’s perspective development didn’t understand the business processes or needs.
I have come across this in many organisations, and it brings nothing but chaos to both groups.
Thanks for your comment.
Everything you said is correct, but that is exactly why the DoD and everything that happens within the team should be discussed and aligned within the team.
You will never have a global, standard DoD or acceptance criteria :) It’s based on the local context.
Cheers,
Luis
Hi! Great list, indeed.
Allow me to digress about something that I usually find to be a source of friction. Maybe you can help me make sense of it.
The thing is, I find it very hard to include these three topics together:
#1 All regression testing of the product is passing
#2 System-level non-functional requirements (e.g. system-level performance) tested and passed
#3 Software is running on the server defined by the team (optimally in pre-production)
If we include the first two, but have it running on a QA environment on a (rebased!) feature/story branch, we can check stories/features in isolation, and burn them down individually.
But when we test in a staging environment we typically find integration problems and people start questioning that the done stories weren’t actually done.
So the team feels like we need to include #3 in the DoD and, as a consequence, the process starts drifting to a “massive QA at the end” of the sprint … and the feeling that nothing is really done until everything is done and tested together.
After a couple of sprints with these three checks in the DoD and QA running on a staging environment, burndown charts start looking like a cliff in the very last couple of days. Now developers are hanging on to a bunch of pending stories, unable to get closure and move their focus to the next story during the sprint.
Ultimately, burndown is not reflecting reality, and the lack of focus/closure has a negative impact on actual velocity.
What are your thoughts on this?
All database changes are scripted
Multi-device ready & tested
Integration Guide is updated
Hi Luis! Nice examples. Mine look very similar. I worked once at a company where we had a strong focus on clean code. In order to keep the internal quality of the software high, we also added items like the following to our DoD:
* No major (or higher severity) issues detected by SonarQube
* No Checkstyle / FindBugs violations detected
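As a side note for readers less familiar with these tools: Checkstyle produces an XML report listing each violation with a severity, so a gate like the one Stefan describes can be as simple as counting error-severity entries and failing the build if any are found. The sketch below is a hypothetical illustration (not Stefan’s actual setup); the sample report is hand-written in the standard Checkstyle report shape:

```python
# Minimal sketch of a "no Checkstyle violations" DoD gate: parse the
# Checkstyle XML report and count entries at a given severity.
import xml.etree.ElementTree as ET

def count_violations(report_xml: str, severity: str = "error") -> int:
    """Count <error> entries of the given severity in a Checkstyle report."""
    root = ET.fromstring(report_xml)
    return sum(
        1
        for file_elem in root.iter("file")
        for err in file_elem.iter("error")
        if err.get("severity") == severity
    )

# Hand-written sample report in the usual Checkstyle XML shape.
sample = """<checkstyle version="8.0">
  <file name="src/Foo.java">
    <error line="3" severity="error" message="Missing Javadoc."/>
    <error line="9" severity="warning" message="Line too long."/>
  </file>
</checkstyle>"""

print(count_violations(sample))  # 1 error-severity violation -> gate fails
```

In practice the Maven/Gradle plugins for these tools can fail the build directly; the value of wiring this into the build, rather than the DoD checklist alone, is that nobody has to remember to check it.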
Very good Stefan :)
Thanks a lot for your input :)
Luis
Good examples Luis. For a team getting started with a D.o.D., initial resistance can often be overcome by starting the D.o.D. with a list of things the team already does. This list can then be expanded with a wishlist of things the team would like to do, and then, in true agile fashion, items can incrementally be moved from the wishlist to the D.o.D. over iterations. Start small!
Hi Willem,
Exactly what we did in the past, remember ;)
Luis
A “funny” observation I have made is that the general “Done” criteria and the specific “Acceptance Criteria” are not given much attention when you work on a story… Often these parts need to be requested by a test role or Scrum Master, or even a department manager, before they get sufficient focus right from the start!
Sometimes I have used Planning Poker estimation and kept asking questions about how we will be able to decide when the story is Done, but it is not something that comes naturally to most people…
…a list of “tricks” for making the “Done” criteria, both the general and the specific ones, come naturally to most team members could be an interesting addition to this list! :-)
Hi,
Thanks for your comment :) I did not understand the list of tricks :) Why not put them in the DoD? :)