9 comments on “End-To-End Testing Considered Harmful”

  1. Steve,
    Nice piece. I think that you and I may have been suffering from a “type 2 disagreement” 😉

    Type 1 disagreement – we disagree.
    Type 2 – we have been saying the same thing in different ways and so missed the fact that actually we agree 😉

    I agree entirely that e2e tests as you describe them are an anti-pattern. Actually, the only thing that I disagree with in this piece is that “CD says there should be a small number of e2e tests..” – I don’t think that CD advocates that anywhere. I confess that I have used the term “e2e test” to describe what I more generally call “Acceptance Tests”. These tests are e2e, but only in the context of the system under development – the system that the team owns. I don’t advocate, and never have, including systems that don’t belong to the team.

    As you describe, this is an inefficient, ineffective strategy. This approach gives only the illusion of rigour. It is actually much less precise in its ability to get the system-under-test into the state that you want it to be in for a given test, and less accurate in its ability to report on outputs and state.

    When I draw test pyramids I don’t include e2e tests at all – only unit, acceptance, and exploratory tests, plus sometimes things like performance and migration tests if I am making a specific point.

    As I said, nice piece, pleased to see that we do agree on this.

    • Hi Dave

      Thanks for commenting, obviously it’s great to get your views on this.

      I very carefully define my context of acceptance tests, end-to-end tests, and End-To-End Testing at the outset, as there are so many variants. For example, I disagreed with Mike Wacker’s Google article “Just Say No To More End-To-End Tests” http://googletesting.blogspot.com.au/2015/04/just-say-no-to-more-end-to-end-tests.html and agreed with Adrian Sutton’s LMAX article “Making End-To-End Tests Work” https://www.symphonious.net/2015/04/30/making-end-to-end-tests-work – particularly as I know from my own 2 years at LMAX how well they do acceptance testing. However, I found both articles misleading as they referred to acceptance tests as end-to-end tests. Adrian did refer to “end-to-end acceptance tests” later on, but in Growing Object-Oriented Software Nat Pryce and Steve Freeman point out that all automated tests should test end-to-end, from public entry points to exit points.

      In several places I have over-simplified automated testing for brevity, and Test Pyramid vs. Test Ice Cream Cone is one such example as they have multiple variants. I use Unit -> Acceptance -> End-To-End and Unit -> API -> GUI depending on client context.

      In the CD book, when describing the Smoke Test Your Deployments practice, you say “your smoke test should also check that any services your application depends on are up and running – such as a database, messaging bus, or external services”. So CD does recommend automated end-to-end tests, but in a specialised scenario, which I entirely agree with. Personally, I want a huge suite of automated unit and acceptance tests where I own the entire SUT, and then at release time I like to take a few of those acceptance tests and re-run them as smoke tests with real endpoints configured.
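
      To make that concrete, a release-time smoke test can be as simple as checking that each dependent service accepts a connection. Here is a minimal sketch, assuming a pytest suite; the hostnames, ports, and service names are purely illustrative, not from any real system:

      import socket

      import pytest

      # Hypothetical dependent services – substitute the real database,
      # message bus, and external service endpoints for your system.
      DEPENDENT_SERVICES = {
          "database": ("db.example.internal", 5432),
          "message-bus": ("broker.example.internal", 5672),
          "payments-api": ("payments.example.com", 443),
      }

      @pytest.mark.parametrize("name,endpoint", DEPENDENT_SERVICES.items())
      def test_dependent_service_is_reachable(name, endpoint):
          """Smoke test: each dependent service accepts a TCP connection."""
          host, port = endpoint
          try:
              with socket.create_connection((host, port), timeout=5):
                  pass
          except OSError as exc:
              pytest.fail(f"{name} at {host}:{port} is not reachable: {exc}")

      A few re-run acceptance tests plus reachability checks like these are usually enough to catch a mis-configured endpoint at release time, without dragging unowned services into every build.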

      One of the key points I try to make is that automated unit and acceptance tests free you up to do exploratory testing at build time and a few automated end-to-end tests at release time. Unfortunately, what I often see happen with clients is either:

      a) Automated functional end-to-end tests. When a company is charged for end-to-end test time with the supplier of an unowned dependent service, I often point out that they are paying the supplier to test the supplier’s code for them.
      b) Manual performance end-to-end tests. This makes me very sad, as the results are so non-deterministic and the setup/teardown so expensive that any performance data is highly suspect and rarely worth the effort. The Decomposition Fallacy seems prominent here, and there is often a lack of operational monitoring as well.

      Thanks again

      Steve

  2. Hi, thanks for the interesting article. It deserves a longer response than is possible here. I don’t disagree with a lot of what you say; however, are you aware that Service Virtualization addresses some of the issues you mention? It is a relatively new technique that I believe may go some distance towards addressing many of the problems you describe.

    For instance, where you say:

    “A small number of automated end-to-end tests should be used to validate core user journeys, but not at build time when unowned dependent services are unreliable and unrepresentative.”

    If you used Service Virtualization to represent those dependent services, I think you’d be able to say:

    “A relatively small number of automated end-to-end tests should be used to validate core user journeys. Using Service Virtualization, issues of non-determinism with dependent services that are unreliable or unrepresentative can be selectively removed from the tests.

    “If the whole end-to-end test suite is too cumbersome to run after every build, consider partitioning the tests into functional domains and running a relevant subset with, or soon after, the build. Aim to run the complete system end-to-end test suite every hour.”
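
    To make the partitioning idea concrete, one common approach (sketched here with pytest; the “payments” and “onboarding” domain markers are hypothetical) is to tag end-to-end tests by functional domain and select only the relevant subset per build:

    # pytest.ini – register the hypothetical domain markers:
    # [pytest]
    # markers =
    #     payments: end-to-end tests for the payments domain
    #     onboarding: end-to-end tests for the onboarding domain

    import pytest

    @pytest.mark.payments
    def test_card_payment_journey():
        """End-to-end journey exercising the payments domain."""
        ...

    @pytest.mark.onboarding
    def test_new_customer_signup_journey():
        """End-to-end journey exercising the onboarding domain."""
        ...

    # Run only the relevant subset after a build, and the full suite hourly:
    #   pytest -m payments
    #   pytest -m "payments or onboarding"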

    Service Virtualization is a fairly new method. Basically it relies on recording interactions with real services and APIs during a test record phase and then playing back those responses during tests. The more capable implementations can also manufacture or manipulate responses to deal with data issues.

    Think of it as stubbing and mocking for services; assuming you choose the right tool, it is achievable without much additional programming. The Wikipedia article on SV is pretty good: https://en.wikipedia.org/wiki/Service_virtualization
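
    To show the underlying idea rather than any particular tool’s API, here is a toy record-and-replay sketch in Python – this is not how Mirage, Hoverfly, or any other product is actually driven, just the concept of capturing real responses once and serving them back deterministically in tests:

    import json
    import urllib.request

    class RecordingStub:
        """Toy service-virtualization stub: capture once, replay many times."""

        def __init__(self, store_path="recordings.json"):
            self.store_path = store_path
            self.recordings = {}

        def record(self, url):
            # Capture phase: call the real service and store its response body.
            with urllib.request.urlopen(url, timeout=10) as response:
                self.recordings[url] = response.read().decode("utf-8")

        def save(self):
            with open(self.store_path, "w") as f:
                json.dump(self.recordings, f)

        def load(self):
            with open(self.store_path) as f:
                self.recordings = json.load(f)

        def replay(self, url):
            # Simulate phase: serve the stored response instead of a real call.
            return self.recordings[url]

    # Record once against the real dependency, then replay deterministically:
    #   stub = RecordingStub()
    #   stub.record("https://api.example.com/rates"); stub.save()
    #   ...later, in the test run...
    #   stub = RecordingStub(); stub.load()
    #   body = stub.replay("https://api.example.com/rates")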

    Here’s an article I’ve co-written on the subject that lists the commercial and open source offerings: https://www.specto.io/continuously-delivering-soa-using-service-virtualization/

    To declare my interest: I’m CTO of a startup with Open Source Service Virtualization tools called Mirage (for the big boys) and Hoverfly (for the rest).

    • Hi John

      Thank you for commenting. Yes, I’m aware of Service Virtualisation, and I can see some scenarios where it might help. However, using end-to-end tests means dealing with second- and/or third-parties that are free to disregard technology recommendations.

      Regards

      Steve

  3. The more I see these types of practices being adopted, the more I see quality getting worse, actually. The ambience of software, the usability, and the real quality regarding “does the thing actually work” are really hanging on human testing and judgement as the last line of defense.

    I see teams using full test automation and continuous testing and delivery, which is totally fine, but if the _customer_ becomes the last line of defense, you’ve *utterly failed*. Open betas where we expect the customer to be QA is an *open failure*.

    Frankly I’m sick of using crappy, poorly crafted software that was released because the automated testing passed. That’s software only a computer could love. I want software made for and tested by humans.

    • Hi Tim

      Thanks for commenting. When you say “these types of practices being adopted”, what are you referring to? Automated testing in general? Open betas?

      Full test automation, Continuous Testing, and Continuous Delivery are of course all good things, but customer exclusion is inimical to Continuous Delivery. As I mention in the article, exploratory testing is essential if a product is to be of a high standard – and focussed exploratory testing is predicated upon high-quality test automation if testers are to be freed from endless, repetitive regression testing.

      Continuous Delivery itself is a proxy for learning from customers faster and creating products that customers love. I covered that some time ago in http://www.alwaysagileconsulting.com/articles/what-is-continuous-delivery/

      Thanks

      Steve
