Done is Done
The Agile Weekly Crew discuss what it means to be done with a story.
Derek Neighbors: Welcome to another Scrumcast. I’m Derek Neighbors.
Clayton Lengel‑Zigich: I’m Clayton Lengel‑Zigich.
Done Is Done
Derek: Today we want to talk about something that a lot of teams struggle with, and that’s the concept of “Done is done.” I guess to start off with, Clayton, what do we mean when we say “Done is done”?
Clayton: That’s a hard one to sum up in one phrase. There are a lot of things that go into it, but basically, if you want to go with more of a book answer, you want to deliver potentially shippable software. That’s an easy definition, but obviously you need to expand on it.
Where Do You Draw The Line In The Sand
Derek: There are a lot of exceptions depending on the size of your team and what your team’s function is. You might draw a different line in the sand of what is done. Maybe I’ve got a QA team that is entirely separate from my team, so “done is done” for my team would mean making sure that we’ve done A, B and C and handed it off to the QA team, and once we’ve handed it off to the QA team, it’s thereby done.
For today’s conversation about “Done is done,” let’s assume a single team is responsible for the entire chain all the way to deployment. What does it mean to be done in order to deploy? What we see a lot is a developer will say, “Oh, this is done.”
“Mrs. Product Owner, it’s out there. It’s totally ready to go. Go check it out.” The product owner goes to the website and says, “I don’t see how to…where do I get to this?” “Oh. Well, you have to enter this super magic URL to get there.” “OK.” They go in, they pull up some data, they press a button, and boom, the software blows up, and then there’s a defect.
Obviously it’s not done, but the developer thought it was done. Today let’s go over some of the things that hold developers back from giving product owners features that are actually done the first time.
Lack Of Automated Testing
Clayton: Take your less savvy, for lack of a better word, developer who isn’t doing any automated testing. In my experience, those people say, “I’m going to do some feature, and I’m going to spend a lot of time manually going through the entire process.” But one way that people go wrong with that is they only test the golden path, to use a phrase I’ve heard before.
I know that I need to fill in these fields, and I know that if I put the wrong kind of number in this field, it isn’t going to work, so I always put ten in that field. I know that I need to press the submit button, and I know that when I get to the next page, there’s a bunch of [indecipherable 03:18] , but there’s this little link down here. If I click on that, “OK, sweet, it’s done.”
They’re just lazy in that regard. They don’t think about how someone is actually going to use it. The same goes for the more savvy developer who is actually writing automated tests.
It’s really easy to do the same thing when you’re testing. You have different test cases, but you still follow the golden path for a lot of them. You don’t think to put in many crazy test cases, and maybe you shouldn’t; you don’t necessarily want to catch every single edge case. But it’s still easy to fall into that trap.
Also, you get a false sense of security that the feature is tested. Then when you actually deploy it and the product owner looks at it, they obviously don’t take the same steps that you did in your tests, and the software blows up.
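Clayton’s golden-path point can be sketched in code. This is a minimal illustration, not anyone’s real system: the quantity field, its validation rules, and the function names are all invented. A check that only asserts the happy case passes, while the inputs a product owner will actually try go unexamined, and each extra edge case costs only one line.

```python
# Hypothetical form field: quantity must be a whole number from 1 to 99.
def validate_quantity(raw: str) -> int:
    """Parse and range-check a quantity field; raise ValueError on bad input."""
    value = int(raw)  # raises ValueError for "", "abc", "1.5"
    if not 1 <= value <= 99:
        raise ValueError(f"quantity out of range: {value}")
    return value

def rejects(raw: str) -> bool:
    """True if validate_quantity raises ValueError for this input."""
    try:
        validate_quantity(raw)
        return False
    except ValueError:
        return True

# The golden path: the one case a lazy manual (or automated) check covers.
assert validate_quantity("10") == 10

# The cases a product owner will actually hit on the first click.
assert rejects("")      # empty field
assert rejects("abc")   # non-numeric input
assert rejects("0")     # below the allowed range
assert rejects("100")   # above the allowed range
```

The point is not exhaustive coverage; it is deliberately stepping off the one path you always walk.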
Works On My Machine
Derek: I definitely think that’s one of the biggest things, at least in web development, and I’d even say we see it a lot in mobile development: not actually deploying to target platforms and doing the testing there.
It’s the classic “works on my machine” syndrome: “Hey! This works in my environment. Everything is great.” You go blistering along, everything is great, you hand it off, and the product owner complains that it just doesn’t function at all. You scratch your head and say, “How the hell? I’ve worked on this for four hours, and I’ve not seen anything remotely close to what you’re talking about. You’re completely crazy.”
Then you go to their version of the operating system, their particular mobile device, or their web browser, and you go, “Oh, oh, oh. I forgot about that dependency.” That is probably one of the lowest‑hanging pieces of fruit for developers who want better “done is done”: make sure that you’re deploying to, and testing on, the actual target platform.
If it’s multiple platforms, as we often see with mobile, if you have to support multiple versions of the operating system or multiple versions of a browser, actually deploy to and test with each of those before you ask the product owner to do the same thing. Because invariably, they will pick the one platform that you did not look at to do their testing.
An Appropriate Dataset
Clayton: Going along with that would be an appropriate dataset. It’s really easy when you’re developing some feature, and you’ve got your two dummy users in your system, and everything’s kosher. Then you deploy it to the target platform, the staging server, or even production, with a realistic amount of data. All the functionality still works, but it’s unusable.
A big part of done is done is that from beginning to end, the whole feature needs to be usable in a reasonable way. Not something that requires tricks, excessive waiting, or all those kinds of things. You really have to be sensible in that regard.
Derek: That’s a big part of it. Sometimes automated testing even makes it worse, but the problem exists even without automated testing. That’s the whole sensible workflow chain: what does this feature look like from cradle to grave?
We get too hellbent on, “We’ve got these great regression tests, so when I go add this new piece of functionality, a feature that builds upon an existing feature, I’ve already tested the original feature. I’m testing the feature I’m building that piggybacks on top of it, so I don’t need to go walk through the visual workflow.”
In reality, the product owner gets to it, and they say, “I did this, and I did that, but there’s no way to get to this other thing short of having to go the long way around the fence.”
The other one I see a lot of the time is missing roles. It works great as an admin, but as a guest it doesn’t work, even though it’s supposed to, because you’ve built this funky thing on top. Making sure the UI and the UX are reasonable is a big part of making sure that things don’t come back.
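Derek’s missing-roles point can be made concrete with a tiny sketch. The role names and the permission table here are invented for illustration: a check run only as admin passes and tells you nothing about the roles nobody logged in as.

```python
# Hypothetical role-based permission table for a "view report" feature.
PERMISSIONS = {
    "admin": {"view_report", "edit_report"},
    "member": {"view_report"},
    "guest": set(),  # the role everyone forgets to walk through
}

def can_view_report(role: str) -> bool:
    """True if the given role is allowed to view the report."""
    return "view_report" in PERMISSIONS.get(role, set())

# Checking the feature only as admin passes and hides the gap...
assert can_view_report("admin")

# ...so walk the feature once per role the story actually mentions.
assert can_view_report("member")
assert not can_view_report("guest")
assert not can_view_report("anonymous")  # unknown roles get no access
```

One pass through the feature per role is cheap insurance against the “works as admin, broken as guest” surprise.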
UX Isn't My Fault
Clayton: A lot of that UI and UX stuff…a lot of developers put on their I’m‑a‑developer hat, and they don’t want to get into the front‑end or UX mentality. But even if you just read some basic stuff about UX or information architecture, or whatever those people call themselves these days, you’d find it’s really just a common‑sense thing.
I know common sense isn’t very common, but there’s a lot you can do without really having to exert much effort while you’re developing the feature. Most of those things, especially the UX stuff and the workflow, come from communication with the product owner and planning. The lack of that is probably one of the biggest things preventing developers from delivering something that is actually what the product owner wanted.
Derek: What I see a lot, especially with people who don’t have as much experience or don’t have confidence, is they’ll have a planning meeting with the product owner, and they’ll get some reasonably good wireframes; the UX and UI designer will get involved to do that.
They’ll go to start implementing the feature, and they’ll know the workflow is broken while they’re doing it. Instead of raising the red flag back up to the product owner, or to somebody else on the team who’s responsible for UX or UI, and saying, “This feels really clumsy. You’re asking me to select 100 items here, but if you try to select more than 5, it takes forever. Isn’t there a better way we can do it?”
A lot of times, developers shirk that responsibility, and say, “I’m not the designer, I’m not a UX guy. I’m just going to implement what was given to me, and what was discussed.” The first thing that happens is the product owner or the designer might even sign off and say, “Yeah, it looks great.” Then they give it to the first user who actually has to select 100 things out of that 1000, and goes, “This is the worst piece of software, I can’t use this.”
The developer instinct is, “I knew that.” If you knew that, you need to speak up. That’s a big part of it as well.
Clayton: More often than not, you probably run into situations where you don’t necessarily have wireframes or lots of well‑thought‑out interface elements, and things like that.
When you get into that mentality, especially when people feel like they’re crunched for time, they try to use the simplest possible solution. Going back to the web application example, if you don’t really have a whole lot of design elements in place, or a style guide for instance, and you’re just winging it, it’s really easy to crap out a bunch of stuff on the page that totally doesn’t make any sense. The submit button is this thin little thing that’s way off on the right‑hand side.
When developers use a site like that, when you tell them, “Go use this antiquated government website,” they have all these terrible things to say: “Oh, I’m so much better, and I would never do this.”
But when they’re crunched for time, or even when they’re not crunched for time, when they’re just trying to get the feature done, they say, “Hey sweet, I got the feature done. It totally works. I can click through it.” Even though for the product owner, or maybe a not‑so‑technical person, or especially a user, it’s like, “What’s this huge thing I’m staring at? I have no idea how to do anything. I don’t know how to start.”
Entry and Exit Points
Derek: Yeah. Another part of this is that sometimes we get so tunnel‑visioned on doing iterative development that we only think about the current iteration, the current feature set that we’re on. We forget those entry and exit points and some of the workflows along the way.
Maybe I’ve got some form of object that’s got some attributes on it. It goes through some form of process to do calculations, and somebody asks for a brand new piece of functionality. They say, “Hey, I need to be able to calculate this new thing. I need a new attribute, and based on that attribute, I need to calculate new values. After 30 days, you need to go check this other attribute and re‑tally something.”
So we go, and we add the attribute to the table. We update the calculation. We write this really fantastic test. We run all of our regression tests, and we say, “This is awesome. The feature’s implemented. We’re golden.”
Then the product owner says, “Let me go and check that.” And the first thing they say is, “Um, there’s no way to add the new value to the attribute.” We totally forgot: we inserted that in the database, and we inserted that in our test, without ever updating the screen for that object to allow for that attribute.
Sometimes it’s as simple as asking another person on the team who’s not part of that process, “Hey, can you just take a look at this and run through it really quick?” You find that kind of stuff right away, because the first thing they say is, “Where do I put that new value?”
Clayton: Right. Two questions that would be huge wins for most teams, when you’re discussing a feature with the product owner, are: “How does the user get here? Then after they do this thing, what happens next?”
You can imagine building some system that takes a report that someone generates. They type up some values in some text file or whatever, and they’re supposed to be able to put this report into the system. Then there’s some black box that happens, and it spits out some report.
People forget to ask, “Well, how do they get the report into the system in the first place?” because that’s the sort of thing developers miss. They develop the system, and they say, “Oh well, I’m going to use my shell script that I wrote to import this file and parse it and blah, blah, blah.”
Then they’ll tell the little old lady user who works part‑time, “Yeah, just use your shell script.” “What?” Then when things come out, it’s, “Oh, your report was generated.” “Where do I get it?” “Oh yeah, just SFTP into this thing, and find it directly with your username.”
It’s like, “Whoa, shouldn’t that get emailed or put in a public place or something?” Those kinds of things. “How do I get here? What’s the entry point?” And then, “What happens next after I use the feature?” Those are two very important questions.
Derek: A lot of these things can obviously be addressed during planning meetings. If you’re asking a lot of these questions during your planning meeting, you’re getting good quality wireframes, and you’re having those visual discussions about the entry and exit points and those workflows.
That avoids a lot of problems, which brings us to the last thing. At least here at Integrum, we have acceptance criteria, and when we walk through a story with the product owner during the planning meeting, we basically ask, “On what terms do we consider this complete?”
I think that acts as a checklist, at least from a functionality perspective, for both the product owner and the developer. The developer can say, “I really shouldn’t be telling the product owner this feature is complete until these things we agreed upon at the planning meeting are done.” It also allows the product owner to say, “Let me go through this checklist to make sure the developer did what we agreed upon.”
Though, I don’t think that’s enough. That’s a really good start, but I still think, as we talked about: Is the design acceptable? Is the UI acceptable? Do we have the proper entry and exit points? Do we have a sensible workflow? Have we tested it on production? Did we have somebody else on the team test it on production? Is it shippable? Is it deployable? There’s a lot to it.
I guess what I’m saying is: developers, don’t be so lazy on this point. A lot of the time we try to push all the responsibility back to a product owner or to a QA team. In 20 years in software, this has been a problem in every single company I’ve been with.
Even with a QA team…The QA team says, “Listen, assholes, you guys don’t ever test anything before you give it to us. What’s wrong with you?” I’ve heard product owners say, “Why do you keep asking me to check this out when it isn’t even remotely close to working?” How do we get to the point where, when we’ve got this checklist or this formula of “these are all the things we need to do,” we actually do it?
Clayton: That’s a hard thing to overcome, in the sense that until you see the value you get from it, it’s difficult to commit, because it is extra work. A lot of developers have this idea of, “That’s not my job. My job is to write code and implement the feature, and your job, QA team people, is to test it and make sure that it works.”
But look at it from some other aspect of your life. For instance, with my wife and me, say I point out, “We have all these dishes in the sink, and we need to put them in the dishwasher.” And I ask my wife, “Is that something you’ll be able to do while I go do this other thing?” We’re exchanging tasks.
She’s going to be pretty upset if every single dish I’ve put in the sink that week or that day has tons of food caked onto it. So there are certain things you have to do for the next person in line. And I know that the benefit I get from taking that extra step is totally worth it.
I know that if I follow the checklist, and I make it really easy for the product owner to sign things off, and I make it really easy for them to say, “I was able to go and demonstrate that this feature worked. It looks perfect. It’s just what we talked about,” that’s a huge win for me because now that that feature is actually done, I can move on to the next thing or I can complete some other story.
It’s not this idea of, “I’m going to do a whole bunch of work, and then tell you halfway through the iteration, ‘Go take a look at the 80 percent of the work that I’ve completed.’” Then you find out, “Oh well, this is broken, this is broken, this is broken,” and I have to go back and fix it.
If I have this confidence that when I say something is done, it actually is done, I don’t have to think about it anymore, I can just keep going. So getting people to understand that value, that’s a way that we can get people to start adopting the practice of following that checklist and really thinking about these things.
Derek: This is just a good area that almost every team can improve upon. Again, whether you have a QA team, whether you are the QA team, whether you rely on a product owner or a third party to do verification for you, this is something every team can inspect and adapt and really improve their quality level. It’s independent of code; it’s really a discipline‑based improvement, so we encourage you to check it out. We hope to see you next time.