Automated Testing: Everything That Can Be Automated, Should Be Automated

Episode 124

October 23, 2013

17:08

testing automation

The Agile Weekly Crew discuss automated testing and why everything that can be automated, should be automated.

Jade Meskill: Hello. Welcome to another episode of the Agile Weekly Podcast. I’m Jade Meskill.

Roy vandeWater: I’m Roy vandeWater.

Clayton Lengel‑Zigich: I’m Clayton Lengel‑Zigich.

Derek Neighbors: And I’m Derek Neighbors.

[laughter]

Automated Testing Struggling to Go Fast

Jade: Very nice. Derek, you had something you wanted to talk about. What was it?

Derek: Yeah. I’ve been doing work with a number of teams, and one thing that I’m starting to see emerge as a pattern is teams struggling to go fast. They struggle to work, particularly within the Scrum framework, a lot of the time. Not only are they siloed, with developers and QA who are separate, but they don’t have a lot of automation. This could be automation around deployment.

This could be automation around testing, so maybe they don’t have an automated testing suite. Or if they’ve got some automated testing suite, there’s still some manual regression happening. Even if they have a fully automated testing suite, they’re not running it automatically.

They’re not doing continuous integration, or it takes an enormously long time to build. What I’m finding is that it’s very, very hard for them to see what high performance looks like, because they can’t get past their current reality. The current reality is, “Hey, it takes five days to run the test suite. There’s no way we can have a one‑week iteration. That’s just impossible. It takes us one week just to test, once the developer’s done with a feature.”

Have you guys seen instances where maybe the current reality, or the lack of automation, makes it so teams just find…I had somebody specifically today tell me, “That’s really nice that you stand up there and talk about a 10-minute build. Frankly, there’s no way you’ve ever done that before, because I’ve worked for five different companies, and none of them have ever been able to do that. I find what you’re saying impossible to believe.”

Jade: [inaudible 02:06] we have a 10-second build?

Roy: Yeah. I agree. I’ve struggled so hard to get a build to take 10 minutes. I don’t believe it’s possible.

Derek: In their current reality, I understand that. If you’re a manual regression tester, and you’ve got a fairly complicated suite, and it’s taking two or three days to run a single test plan, I can understand how it feels impossible that you could, with any level of comfort, run a test suite in under 10 minutes and feel like you weren’t shipping crap.

Roy: We’re working with a team right now, but we’re not working with working microphones.

[laughter]

Letting Technology Get In The Way

Jade: That’s right. We’re working with a team right now where a single test takes two weeks of constant computation. A single test.

Clayton: I’ve seen that in a lot of instances where the technology gets in the way. If you’ve got a Java applet that loads an SWF, how do you automate testing of that content? That seems like this impossible thing. I think a lot of people say, “It is what it is, throw my hands up in the air. What else are we going to do but let those hourly regression people slave away at typing in commands and looking at the screen?”

Roy: On that idea of trying to test Flash, I think one of the first mistakes that immediately causes your test suite to get larger and take longer, regardless of whether it’s manual regression or automated testing, is writing the test case last, after everything is done.

Jade: Yeah.

Test Driven Development Helps Reduce Some Problems

Derek: When you write the test case first, you end up using solutions that are easier to test. You don’t run into those things.

Clayton: That’s one of my favorite things about doing, not even just TDD, but just test-first, where you have to take into account what’s easy to test and what isn’t. I’m working with a team that has a fairly large FitNesse test suite. You can totally tell the way they went about using FitNesse to write these acceptance, or higher-level, tests: because it was so difficult to test at the unit level, the only thing they could do was test the big black box of spaghetti code. The tests were all written after the fact.

If they had been trying to do more of a TDD approach, or even just writing unit tests first, it would have been impossible to write any tests against that design. If they had actually been dedicated to that, they wouldn’t have had that problem to solve in the first place. The reason automation isn’t more popular is that a lot of times teams get rewarded for effort. Jade and I were talking about this the other day. If you’re using a lot of effort, that probably means you’re dumb. You can be so smart and lazy that you do a lot of things automatically.

If you work in a system or an environment where you’re rewarded for staying up till five in the morning working on some stupid thing that should be automated, what incentive do you have to automate things? You’re going to get clapped for and patted on the back if you put a lot of effort into stupid things. Human-nature-wise, that seems to make sense. It’s a rational decision at that point.

Jade: Yup. If you’re putting in a lot of effort, it must be important.

Clayton: Yeah. Exactly.

Relying on Backstops Creates Bad Habits

Roy: The part that’s really difficult, especially when you test after the fact, is justifying the expense of writing a test. Because you have this working piece of software that your users could be using right now, why not just release it and not worry about the test?

Clayton: I’ve heard regression teams referred to as backstops, and people make a baseball metaphor: “The catcher is supposed to be there to catch the ball, and that should be fine, right?”

I think that’s what the developers think of themselves as: “We’re going to write something, and I’m going to look at it, and I’m pretty sure it works.” They’re kind of the catcher, but, “In case there’s some wild, crazy pitch of an edge case that gets past me, at least there’s a backstop.”

Pretty soon, they stop trying to catch the ball and everything just goes to the backstop. I think that’s what happens. Why would you spend the extra time to make sure no bugs, defects, or problems ever got to the regression team, when they’re always there?

You know they’re going to be there. If something goes wrong, wouldn’t you rather someone blame them than blame you? That’s always…

Roy: Then you get into that whole QA developer rivalry, too, where you hate them because they’re making you do more work. You don’t get to work on new stuff, because they keep uncovering the crap that you wrote earlier.

Testing vs Checking

Clayton: I really like the way that some people in the QA and testing community talk about testing versus checking. Checking is the stuff a computer can do: making sure that this algorithm works properly in these different cases, or whatever.

I really like that idea, where testing is more about heuristics and looking at, “How does the system function, and what do I expect to happen? What do people perceive, and is it consistent with the rest of the thing?”

All the stuff that you actually need a human brain for. Those are valuable things that actual people could be working on. Everything else really should just be automated. [inaudible 06:56] should be automated testing.

The $465 Million Deployment Mistake

Jade: You posted a really awesome article, I think at the beginning of this week, about a case where a company neglected to use automated deployment. In this case it was deployment instead of automated testing, but it’s another case where something was done repetitively and an opportunity for automation wasn’t taken. I think it ended up costing that company $465 million.

Clayton: Some trading company, right?

Jade: Right. We’ll attach the article to the description, but it was pretty crazy. They break down exactly what happened, and it ends up being, “We just didn’t automate something that should have been automated.” Then human error comes into play.

Business Rules So Complicated They Are Untestable Is a Smell

Derek: I see some funniness in that. One of the things that came up in some of the discussions today, too, was, “Well, one of the things that QA is really incredible for is that our business rules are so complex that nobody understands them. Literally, nobody actually understands the business rules.”

The great thing is, what we do is we have QA. What would happen is you would basically code the story as a developer, and as you code the story, my fantastic test plan is going to cover every edge case, so that I can actually tell you how you didn’t understand the business rule.

When I think about that, it sets up the fallacy that this stuff is so difficult that we’re going to have humans try to remember how it works. Like, “I’m a human. I know that doesn’t work.”

Clayton: That sounds like the exact opposite of how it should be.

Roy: Right. If it’s super complicated, shouldn’t you have the computer be checking that it’s the right thing? Then make it happen faster, so that I can get immediate feedback.

Derek: That’s what I was kind of saying: “The problem is that if I go and code this thing up and it takes me five minutes, and I hand it to you to run your test plan, and it takes you two days to give me feedback, that’s really irritating. What if I could run a 10-minute build, get that feedback immediately, and make an adjustment?”

That’s when it just blew up into, “It’s impossible that you could ever have a 10-minute build.” I think the second part was, “Great, if we really automate that stuff, what happens to us as a manual test team? We’re still going to have to do a bunch of stuff.”

I think, if we look at some of the best companies in the world that are really doing continuous deployment well, they’re not having manual testers test. They’re having real users test. When they deploy something, they deploy it to a small set of people or to a small set of systems, run tests on them and continue to get feedback, and continue to let things deploy as they get more and more feedback.

I think in order to be competitive, especially in the high-tech space, you’ve got to get to the point where your crap’s automated, man. I just can’t see hanging with the big dogs if you’re sitting there doing manual tests. I just don’t think it’s reasonable anymore.

Automated Testing Suites Are Liberating

Jade: There’s actually a ton of freedom and liberation in having those things automated, right?

Roy: Sure.

Jade: There’s no need to have human slaves that are doing that. I’ve been reading a lot of Buckminster Fuller, and he talks a lot about freeing the human race from being muscle reflex machines. That’s what computers should do for us. They should free us from having to worry about those types of details, and those repetitive motions that can be fully 100 percent automated.

Automated Testing Frees QA Staff to Do Meaningful Work

Roy: Many of those QA people are valuable people who could be contributing so much more than just menial tasks…

Jade: Right, than following a script and clicking buttons.

Derek: One of the things I kind of brought up is that in many instances the QA people have the largest amount of domain knowledge in the company. They could be front and center in helping to define the product and helping work with customers, because they’re the ones who understand the product most intimately.

Instead of putting them out there making the product better, because of their understanding of the product, we have them doing the menial labor of sweeping the shop floor every night, which is just ridiculous.

Where Do You Start?

Clayton: I think one thing maybe we’re glossing over a bit is, if you already have some big legacy kludge of a system that’s not very well tested, and maybe has a big manual test suite, what is the first step in fixing that problem? How do you even start automating things?

Derek: One of the things that I’ve been recommending comes from something that keeps coming up: at the beginning of the sprint, the testers don’t have a whole lot to do. QA doesn’t have a lot to do, because there is no code ready for them to test.

Normally what they’ll do is start to write their test plans, do the shell of their manual test plans. Then, at the end of the sprint, the developers don’t have anything to do, and this is one of the big complaints about Scrum on teams that work like this: “Hey, it really sucks. I waste my time because there are three days left in a two-week sprint where I’m not allowed to do anything, because I can’t bring new work in since there will be no time to test it. What do I do with my time?”

A lot of times I’ll see Scrum Masters say, “Well, what you should do is go help the testers by running manual tests.” What I’ve been recommending is that you should help the testers by helping them automate the tests they need to be writing, or you should go find the most complicated code, the code that causes you the most grief, and write tests that surround that code.

In that time when you really can’t go grab new code, because you won’t have time to test it, you should be helping the team move toward automation, starting to create that path and create those good habits. The other thing I tend to say is, at a bare minimum, start unit testing all new code you write.
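As an illustration of what Derek is describing, here is a minimal sketch in Python of a characterization, or “pinning,” test that surrounds a piece of complicated existing code. The invoice function, its inputs, and the pytest-style test are all made up for the example; the point is simply to capture what the code does today so that a later change that alters the behavior fails loudly.

```python
# A characterization ("pinning") test sketch: record what the gnarly code does
# *today*, so a refactor that changes behavior fails immediately.
from decimal import Decimal, ROUND_HALF_UP


def legacy_invoice_total(line_items, tax_rate):
    """Stand-in for the tangled production code nobody fully understands.
    In real life you would import the existing function, not define it here."""
    subtotal = sum(Decimal(str(item["qty"])) * Decimal(item["unit_price"])
                   for item in line_items)
    total = subtotal * (Decimal("1") + tax_rate)
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)


def test_invoice_total_matches_current_behavior():
    # Inputs copied from a real invoice the team already trusts.
    items = [
        {"qty": 3, "unit_price": "19.99"},
        {"qty": 1, "unit_price": "5.00"},
    ]
    # The expected value is whatever the system produces today.
    assert legacy_invoice_total(items, Decimal("0.08")) == Decimal("70.17")
```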

Clayton: Even that’s pretty hard. The one thing I always tell people if they want to go towards automation is it’s probably going to be 10 times harder than you think.

I think for a lot of people it’s kind of like, “OK, we have the manual tests, I can just automate those.” But then you start doing that, and if you want to make good choices, you end up getting to a point where you really do have to go back and re-examine a lot of the stuff.

It takes some skill to be able to go in and look at some legacy code base. Legacy [inaudible 13:14] two weeks ago, kind of thing. To be able to go in there and say, “Where can I add some tests?” or “How can I pin this code down, so that I can actually get it under test and be confident in that test suite?”

Jade: Yeah, but if you have humans doing testing by clicking around, those are some really easy places to start automating. There are lots of great tools out there to automate the clicking around and report exceptions for you.

The last place I was coaching at, we ran into this problem with a huge legacy system, and I started showing the regression testers how to use Selenium and some other tools. There are definitely a lot of challenges in just getting the environment working.

They were able to automate 80 percent of the everyday stuff, and that freed them up to make much better contributions.
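For a sense of what that kind of browser automation looks like, here is a minimal sketch using Python and Selenium WebDriver. The URL, element IDs, and credentials are invented for the example, and a real suite would live in a test runner rather than a bare script; it only shows the shape of replacing “click around and look at the screen” with a scripted check.

```python
# A scripted version of a manual "click through the login flow" check.
# Everything site-specific here (URL, element IDs, credentials) is hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # any WebDriver-supported browser works
try:
    driver.get("https://app.example.com/login")
    driver.find_element(By.ID, "username").send_keys("regression-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()

    # The kind of check a human used to do by eyeballing the screen.
    assert "Dashboard" in driver.title, "Login did not land on the dashboard"
    print("Login flow OK")
finally:
    driver.quit()
```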

Roy: Yeah, for all the people who have the excuse of “our environment is so specific that we can’t really test in an automated fashion,” I’m pretty much willing to guarantee that somebody else ran into the same problem, was sick of it, and came up with some way to deal with it.

It may not be pretty. We’ve seen crazy hacked-together solutions using all sorts of different technologies just to get something to work, but it still beats having human beings click through that stuff.

Automated Testing Enables Automated Deployment

Derek: Then I think the second place where I see a complete lack of automation causing a lot of teams pain is around deployment. The deployment process is so painful that every time someone deploys, they screw something up, and every time they screw it up, a bunch of new process comes out of it.

Now you’re not allowed to deploy, you have to fill out a form, and you have to go before a deployment board. Only certain people have access to the production environment. It just cascades until it takes two days to deploy a five-minute change. Are you guys seeing stuff like that anywhere?
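To make the contrast concrete, here is a minimal sketch of what scripting that kind of deploy might look like in Python. The host name, paths, and the use of pytest, rsync, ssh, and systemd are all assumptions for illustration, not a description of any particular team’s setup; the point is that every deploy runs the same steps, and a failed step stops the release.

```python
#!/usr/bin/env python3
"""A minimal, illustrative deploy script: run the tests, ship the code, restart.
Host, paths, and service name are hypothetical placeholders."""
import subprocess
import sys

HOST = "deploy@app-server.example.com"   # hypothetical target
RELEASE_DIR = "/srv/myapp"               # hypothetical install path

STEPS = [
    # Run the automated test suite first; abort the deploy if it fails.
    ["python", "-m", "pytest", "-q"],
    # Ship the code and restart the service; every deploy runs the same steps.
    ["rsync", "-az", "--delete", "./", f"{HOST}:{RELEASE_DIR}/"],
    ["ssh", HOST, "sudo systemctl restart myapp"],
]

for step in STEPS:
    print("->", " ".join(step))
    if subprocess.run(step).returncode != 0:
        sys.exit(f"Deploy aborted: step failed: {' '.join(step)}")
```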

Clayton: Yeah. I think there are a lot of rules that get put around deployment. It’s the same stuff, where you can’t automate it because of some silly technological problem. People don’t know how certain things could work, or there’s some fear around it: “We can’t deploy automatically, because if we do that, then we won’t get the release notes and user update information out in time.”

It’s like coordinating a bunch of different departments that are all doing things in kind of a dumb way. You could solve those problems, but people usually just avoid it: “We only deploy every so often, so I’d rather just do it manually.”

We’d rather pick one poor person that has to stay until nine o’clock to press the button than fix the problem. It’s easier doing that.

Derek: I think maybe some of that applies to manual testing as well: people don’t even know the tools exist. I think that’s part of it. It seems so scary, because “I’ve never seen it done before, so I don’t even know that tools exist to help make this stuff easier.”

Jade: Yeah. It does look like magic, if you’ve never seen it before.

Clayton: If you’re a Windows developer.

[laughter]

Jade: On that note that’s all the time we’ve got for today. Thanks for listening to our weekly podcast.
